Friday, April 19, 2024

Comparative Reflections on the Acquisition of Language in Hearing and Deaf Children: A Case of Natural Learning of Mexican Sign Language - Juniper Publishers

Intellectual & Developmental Disabilities - Juniper Publishers


Abstract

Sign languages are visual and iconic languages used by Deaf communities worldwide. Deaf children develop sign language from linguistic input in the visuo-gestural modality, whereas hearing children receive linguistic input in the auditory-vocal modality. This paper presents a case study in Mexico in which deaf children naturally acquire Mexican Sign Language (LSM). Deaf children can access and develop sign language if the immediate context offers input in ways parallel to oral language development. Universal Grammar and biolinguistics support the natural process of language acquisition.

Keywords: Deaf children; Mexican sign language; Hearing; Natural learning

Introduction

About the naturalness of language acquisition

Systematic and purposeful studies have shown that sign languages are linguistically equal to oral languages [1] and equally natural in their acquisition. Studies of language acquisition have made significant progress thanks to the antecedent of Universal Grammar [2], from now on UG. The theory of Generative Grammar states that the child's early language acquisition is a process of grammatical induction: the child has an innate capacity to build his or her own linguistic experience from the language stimulation received. The child then experiences the grammatical rules and must induce the grammar internal to Universal Grammar [3].

Understanding grammatical induction in the prelinguistic stage

Different approaches over the years have enriched our understanding of grammatical induction in children. The biggest problem in the mid-70s was the lack of a clear notion of how grammatical induction operated in children. At the end of the 70s, the principles-and-parameters version of the Universal Grammar model led to a radical reformulation of language acquisition and its development. Under this reformulation, there is no specific rule for the child to induce in early language acquisition, because there is no specific language system for the child to internalise. In addition, as Khul [4] states, neuroscience studies have demonstrated that the child induces grammatical rules during the language acquisition process.

Understanding the induction of grammatical rules according to each language

The first stage of the Universal Grammar (UG) model accounted for variability between languages through principles and parameters. It was observed that principles following the same rule generated different results in different languages. This interlingual variation exists because the principles allow restricted variability between languages [5,6]. Under this conception, the particular grammar of a language is simply UG with the parameters set in a particular way for that language. This language-specific arrangement of the parameters is called the "Parametric Model".

The Parametric Model impacted the field of comparative syntax, as it established a theoretical language for understanding the constants across different languages, and it facilitated the understanding of the ranges of variation between them. Both the UG and Parametric Model frameworks help frame the objective of this article: adding support for the claim that sign languages, like all languages, show variation and natural induction of their grammatical rules in the prelinguistic stage of the deaf child.

Prelinguistic development in hearing and deaf children

The prelinguistic stage covers the time between birth and the moment a person begins to use words or signs meaningfully. It is a time when children steadily increase their ability to communicate with others, first using eye gaze, attention, and social-emotional affection, and then adding gestures and other nonverbal means of communication. This stage lays the foundation for later skills such as using words (or signs) and combining them into sentences to communicate [7].

Along these lines, parameter theory has made essential contributions to the analysis of the null subject in first linguistic productions. The null subject is the omission of the subject pronoun; the studies in [6,8,9] shed light on the variation in null subjects across languages. Furthermore, [8,9] studied the mechanism of early language production in children across various languages, including Spanish, English, and German, and reported findings on null subjects: in their first productions, children in all languages allow the null subject, even when the adult language no longer allows it.

For their part, the authors of [10] proposed two hypotheses regarding the analysis of the omitted subject in children's first productions:
I. The first is the hypothesis of parameters set at an early stage (Very Early Parameter Setting, VEPS). It proposes that the basic parameters of a language are set correctly very early, that is, at observable ages around 18 months.
II. The second is the hypothesis of early knowledge of inflection (Very Early Knowledge of Inflection, VEKI). It implies that the child, at the earliest stage, knows the grammatical or phonological properties of many critical inflectional elements of his or her language. According to the Royal Spanish Academy, an inflection is an elevation or attenuation of the voice, breaking it or passing from one tone to another; in the specific field of grammar, an inflection is an alteration that changes the root vowel or the ending to encode particular contents.

The theory of language development is closely related to VEPS and VEKI [10]. The VEPS theory helps show and confirm that the child learns the correct values of the parameters before displaying this learning in production. Rizzi [6], in agreement with the VEPS theory, proposes that the child quickly sets the correct value of the null-subject parameter. If VEPS is correct, children cannot be relying on negative information in their productions, so Hamburger and Wexler, based on Brown and Hanon [11], rule out "Negative Evidence in Language Acquisition". Negative evidence would help eliminate ungrammatical constructions by revealing what is not grammatical. VEPS, VEKI and the absence of negative evidence in language acquisition shed light on the biolinguistic background supporting prelinguistic development in all languages.

A second impact of VEPS concerns the nature of learning itself. Given that the child sets parameter values correctly before the one-word stage, the child must set the parameters without guidance, as in perceptual learning. According to VEPS theory, perceptual learning is the basis of linguistic parameter setting. Consequently, learning theory and the empirical properties of grammatical development converge on perceptual learning as the correct model of grammatical development.

Authors such as Rizzi [6] and Valian [9] support Wexler's theory (1973-1998); however, the most critical support for the argumentative logic of this theory (VEPS and VEKI) is Wexler's discovery of the Optional Infinitive (OI) stage in the development of children's grammar [12]. The Optional Infinitive (OI) stage is the period in which the child optionally produces infinitives, showing a higher proportion of null subjects with main verbs in the infinitive form.

Summarising, the Optional Infinitive stage results from the maturation of Universal Grammar, whose development then interacts with the particular characteristics of each language. Examples of optional-infinitive languages are Danish, German, English, French, and Irish, which are still under study; Italian, Spanish and Catalan are not optional-infinitive languages [13-15]. In addition, the literature reports differences between null-subject languages in the distribution of null subjects contingent on verbal inflection: null subjects frequently appear at rates of 70 to 95% with non-finite verbs and 15 to 30% with finite verbs. Various theories have been proposed to account for this result [16].

Wexler [8] also describes null subjects of finite verbs as a type of pragmatic error. Languages like English sometimes omit certain types of topics. Regarding this phenomenon, Chien and Wexler [17] explained that in some languages children's productions treat information that is not a substantial topic as a topic of great importance, and an important element is consequently omitted. Chien and Wexler [17] comment that this phenomenon is consistent with the general view that the child assumes that listeners know more than they actually do. For this reason, the child believes that some subjects constituting vital topics can be omitted.

From the above, in some languages the child makes a pragmatic error (natural and expected at his or her age), treating some topics that are not very dominant as if they were and, consequently, omitting them. As the Theory of Parameters of Universal Grammar by Chomsky [2] indicates, this pattern in the percentage of null subjects and the distribution of verbs appears only in some languages, since each language sets its respective parameters.

Consequently, the omission of the subject in first productions is a very stable and constant phenomenon in language development. Studies of other languages confirmed this account of subject omission.

On the other hand, Van Kampen pointed out that the child omits topics from a very early age. In this regard, Wexler [8] showed that children omit topics more frequently than adults would. Likewise, he showed that German children present the characteristics of the Optional Infinitive stage and produce non-finite verbs in final position. Based on these findings, Wexler [8] demonstrated that the characteristics of null subjects change depending on the language and concluded that null subjects are natural and expected in the infant Optional Infinitive stage.

Rizzi [6], supporting Wexler's [8] argument about the naturalness of null subjects in children's productions, added that in early linguistic productions, children tend to omit subjects even when the target language is not a null-subject language. That this phenomenon is stable and constant across languages does not mean that there is no variability between them.

Non-linguistic factors which impact language development

Recent cross-linguistic studies have revealed background factors in each child's language production, which can be biologically (internally) and environmentally (externally) determined. Among them, the effects of gender, birth order and maternal/paternal education level have been particularly well studied [18]. This last study suggested that lexical and word-combination ability in two-year-old children varied significantly with gender but not with the external factors. The authors concluded that internal factors may influence early language development more than external ones.

Biolinguistics for all languages

In all languages, a biolinguistic endowment is available for language development. Everyone is born with the capacity to develop and learn a language; language development is instinctive [19]. Biolinguistics postulates the existence of an innate mental structure that allows the production and comprehension of any statement in any natural language, enabling the process of language acquisition. It requires very little linguistic input to function properly and develops practically automatically [20,21]. In the following section, we discuss the case of deaf children and their similar path to early language production.

What provision is there for early language production in deaf children?

The results of the studies of Lillo-Martin and Henner [22] on the acquisition of word order in American Sign Language (ASL), Dutch Sign Language (NGT) and Brazilian Sign Language (Libras) are compatible with the theories and observations of spoken language acquisition. They indicate that the basic canonical word order is typically observed as soon as words are combined and that, in general, children acquiring languages with variability in word order quickly develop grammatical operations that alter word order for various information-structure purposes [23].

We have presented some comparative reflections on the prelinguistic stage of hearing and deaf children, finding that if a deaf child is exposed to sign language early, he or she shows the prelinguistic changes expected in oral languages at the same time. However, there are some differences in the prelinguistic acquisition of sign language, mainly due to the visuo-gestural modality, as discussed below.

Effects of the visuo-gestural modality of sign language on prelinguistic development

As mentioned above, children can perceive and develop sign language in ways that are quite parallel to spoken language development. However, some modality effects must also be considered. For example, the different physical development of the articulators for signing versus speech probably plays a role in the earlier appearance of first signs, as discussed above. No human being is born with the mental grammar of a particular language, but anyone can acquire the grammar of any natural language [5]. In our experience with deaf children in Mexico, they naturally acquire Mexican Sign Language (LSM), since sign language develops from linguistic input in the visuo-gestural modality; for hearing children, by contrast, the input is given in the auditory-vocal modality.

The iconicity of sign languages

Sign language is iconic, meaning it largely rests on culture-associated codes. Iconicity allows sign languages to be universally understood insofar as they are limited to concrete and pictorial concepts while developing several ideas simultaneously [24]. In visuo-gestural languages, signs are linguistic signs in which a visual image perceptible to the senses is associated with a mental image linked, in turn, to the former. Therefore, linguistic signs in these languages also distinguish two planes: the first is the signifier, a visual-kinesic image on the plane of expression associated with a mental idea; the second is the concept, on the plane of meaning. In LSM and other sign languages (Libras, LSC, etc.), lexical signs reproduce some aspect of the object or action they name; such signs are recognised as predominantly iconic [25].

Sign languages are natural languages developed in Deaf communities, with the same linguistic status as spoken languages [26]. In Mexican Sign Language, the signer's use of space is part of the grammar, as is the iconicity used to acquire and express abstract concepts. One of the most used syntactic structures has the general form Object-Subject-Verb (OSV). LSM has different grammatical structures; here we present the most frequent among those available. It is important to note the nature of sign language as a three-dimensional language, situated in physical space, in which messages are built from the most general ideas down to specific details (Figure 1) (Table 1).

Next are the keys to constructing the syntactic structure of LSM:
Time: When?
Place: Where?
Subject: Who?
Object: Which?
Verb: What is the action, or what happens/happened?
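As a hypothetical illustration (glosses only, not actual LSM signs), a simple sentence ordered by these keys could be rendered as YESTERDAY PARK BOY BALL PLAY, that is, "Yesterday, in the park, the boy played with a ball".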
The following section describes the work developed in Mexico.

The learning of Mexican Sign Language (LSM) by children without prior linguistic development of LSM: an experience of natural learning in Mexico

The learning space for deaf users in the Central Library of the State of Hidalgo, "Ricardo Garibay," has provided linguistic input in LSM to deaf children and their families. This program has benefited around fifty hearing families with a deaf member between 3 and 4 years old. For sixteen years, linguistic input has been offered through lexical, syntactic and pragmatic approaches so that children can develop meanings and a grammar by themselves [27].

This experience is unique in the country, as LSM is acquired naturally and gradually. Deaf linguistic models guide this learning, so intercultural interaction frames the whole process of learning LSM.

One example of the activities in the learning room for deaf users can be appreciated in Figure 2. Families share one common objective: communication with their children. In Mexico, families' journey to communicate with their deaf children is often challenging and complex. Families arrive at the learning space for deaf users at the "Sala de Silentes, Biblioteca Central del Estado de Hidalgo Ricardo Garibay" usually because they are looking for support in their deaf children's learning of writing. They did not find this support in health institutions, which adopt a rehabilitative medical view of deafness, framed as "a solution to deafness". Health institutions bear some responsibility for this one-sided view, as they frequently recommend keeping deaf children away from signing. With this information in mind, families hope to find a place that rehabilitates their children into orality.

Against this background, when families arrive at the learning room, they assume their deaf children will receive speech tutoring. After a few weeks, they are usually disappointed and quit [28,29]. As with oral languages, they expect their children, when using hearing aids, to develop oral language. As a result, deaf people are frequently deprived of their natural language in the first years of life.

Why does the learning room support learning in the community?

Learning in the community makes acquiring LSM more natural and fluid. The learning room for deaf users favours coexistence between equals (deaf-deaf) at an early age: a group of deaf people grow up together, sharing experiences, friendships and signs. In the same learning room, there is a common bond among deaf children, youth and adults who act as linguistic models of LSM. This friendship among equals builds an identity as Deaf users of Mexican Sign Language. They identify with LSM and gain confidence to express themselves visuo-gesturally every day [30,31]. Children gradually acquire LSM while expressing heartfelt admiration for the deaf youth and adults who master it, and they express positive emotions about attending the service. Although they have not yet mastered the lexical signs of LSM, spontaneous configurations arise to express their ideas as they appropriate their language.

Back and forward in LSM learning

One problem is persistence in acquiring LSM. A couple of months after beginning to interact in the learning group, it is common for families to quit. They usually return with their deaf children after a few years; however, the children have lost precious early years of access to comprehension and language. Although they resume this approach to LSM later, there are evident differences in language proficiency: deaf children who acquire LSM at an earlier age reach greater signing speed and better comprehension of messages than children whose access to LSM learning is discontinuous.

Conclusion

Sign languages are languages as complete and integral as oral ones. The prelinguistic acquisition of deaf children proceeds through the visual channel: sign languages are visual languages and allow deaf children to access them naturally. Deaf children must be immersed early, together with their deaf counterparts. The Theory of Parameters of Universal Grammar by Chomsky [2] sheds light on how each language sets its parameters. In addition, some languages allow a pragmatic error whereby children at an early age frequently omit subjects.

In all languages, a biolinguistic endowment is available for language development. Everyone is born with the capacity to develop and learn a language, and language development is instinctive [19]; linguistic input allows deaf children to develop comprehension and sign language just as hearing children develop oral language. Timely language stimulation allows this faculty to develop at an early age, enabling children to produce spontaneously and to recognise the grammatical rules of any language they are exposed to, whether oral or visual.




Thursday, April 18, 2024

An Evidence Based Review of Vitamin D in COVID-19 Severity and Mortality - Juniper Publishers

 Complementary Medicine & Alternative Healthcare - Juniper Publishers


Abstract

Introduction: This evidence-based review aims to explore the association between vitamin D status and the severity and mortality of COVID-19, providing insights for healthcare professionals and policymakers in managing the disease.

Methods: A systematic review process was conducted to identify relevant studies on vitamin D and COVID-19 using electronic databases and specific search terms. Thirteen studies were selected and analyzed, including quantitative research at levels III, IV, and V.

Results: The analysis revealed a significant association between vitamin D deficiency and increased severity and mortality of COVID-19. Vitamin D levels were found to be inversely related to the severity of the disease, with deficiency acting as an independent predictor of COVID-19-related mortality. Studies demonstrated a higher prevalence of vitamin D deficiency among hospitalized COVID-19 patients. Bolus doses of vitamin D supplementation were associated with improved clinical outcomes and lower mortality rates in COVID-19 patients.

Conclusion: The evidence suggests that maintaining adequate vitamin D levels may have a protective effect against the severity and mortality of COVID-19. Vitamin D supplementation, in combination with safe sun exposure education, could be a cost-effective and safe measure to mitigate the impact of the SARS-CoV-2 pandemic. However, further interventional studies are needed to evaluate the efficacy and optimal dosing regimens of vitamin D supplementation in COVID-19 patients.

Keywords: COVID-19; SARS-CoV-2; vitamin D; Severity; Mortality; Supplementation; Evidence-based practice

Introduction

Coronavirus disease 2019 (COVID-19), declared a global pandemic by the World Health Organization (WHO), presents a significant challenge to healthcare systems worldwide (WHO, 2020) [1]. In January 2022, COVID-19 ranked among the top four leading causes of death for all age groups, with older adults being particularly vulnerable [2]. Hospitalizations and mortality rates are significantly higher in adults over 65 years of age compared to those under 65 [3,4].

Between January 2020 and July 2022, there were over 562 million confirmed cases of COVID-19, resulting in approximately 6.3 million deaths worldwide (WHO, 2022). The economic impact of COVID-19 is staggering, estimated at over $16 trillion, which accounts for approximately 90% of the annual gross domestic product (GDP) of the United States [5].

Vitamin D, a hormone produced by both the kidneys and the skin, plays a crucial role in regulating blood calcium concentration and impacting the immune system. It is known by various names, including calcitriol, ergocalciferol, calcidiol, and cholecalciferol.

The two widely available pharmacologic preparations are cholecalciferol (D3) and ergocalciferol (D2). More recently, vitamin D has shown antiviral effects and plays a crucial role in the immune system [6-8]. It is being investigated for its potential in mitigating infections, enhancing immune responses, and suppressing the cytokine storm [9-11]. Vitamin D deficiency has been linked to increased susceptibility to viral infections. Research has not demonstrated a strong association between vitamin D levels and the prevention of COVID-19 infection [12-15]. However, there is growing interest in exploring the potential role of vitamin D in relation to the severity of COVID-19 disease.

At the time of submission, COVID-19 has tragically resulted in the loss of over 6 million lives globally (WHO, 2022). Despite this significant impact, there remains limited knowledge about potential protective factors against the disease. Notably, advanced age and underlying chronic medical conditions, especially chronic pulmonary and cardiac diseases, have emerged as prominent predisposing factors for severe COVID-19 development and subsequent mortality [16,17]. This comprehensive literature appraisal aims to investigate potential associations between vitamin D status and disease severity and survival in COVID-19 patients. By analyzing the available evidence, this analysis provides a recommendation while considering the balance of benefit, harm, and cost.

Methods

This systematic review, conducted in collaboration with a faculty advisor and university librarian, examines the relationship between vitamin D and COVID-19, focusing on severity and mortality outcomes. The review process involved comprehensive searches of electronic databases, including PubMed, using key terms such as Vitamin D, Vitamin D Level, Vitamin D Deficiency, Covid, Covid-19, and Coronavirus. Inclusion criteria were limited to English-language articles published between 2020 and 2022, excluding research proposals and protocols. A total of 20 articles were retrieved; after reviewing titles and abstracts, 13 relevant studies were selected for appraisal using the Johns Hopkins Appraisal Tool. The levels of evidence were graded using the Johns Hopkins Level of Evidence table. Eleven of the 13 articles are non-experimental level III research, the highest level of evidence available to date.

Literature Review

Vitamin D levels have been found to be notably depleted among the aging population, a group with heightened vulnerability to COVID-19 [15]. Further evidence highlights the prevalence of vitamin D deficiency among hospitalized COVID-19 patients, with 59% of admitted individuals presenting vitamin D insufficiency. Vitamin D deficiency upon admission has shown a significant association with COVID-19 severity and mortality, even after adjusting for factors such as age, gender, and comorbidity.

There may be a level-dependent association between vitamin D and COVID-19 severity. A retrospective multicentric study of 212 patients [7] found that critical COVID-19 cases had the lowest levels of vitamin D, whereas mild cases had the highest. Similar results were found when stratifying COVID-19 patients by vitamin D level, and two additional studies reported weaker correlations between vitamin D levels and COVID-19 cases and mortality. Finally, a small body of evidence supports the use of bolus doses of vitamin D3 supplementation administered during or shortly before the onset of COVID-19: the study in [9] (2020) and Karahan and Katkat (2021) [5] both demonstrated a lower incidence of COVID-19 infection and improvement in COVID-19 severity with bolus-dosed vitamin D.

A few studies failed to demonstrate a positive effect of vitamin D on COVID-19. These studies were small, conducted in younger and healthier populations, and primarily examined the relationship between vitamin D level and COVID-19 infection rates rather than the correlation between vitamin D level and COVID-19 severity or mortality (Table 1).

Discussion

In response to the profound burden imposed by the COVID-19 pandemic, and given the potential to mitigate severe disease outcomes by exploring protective factors, numerous researchers have established a compelling association between vitamin D deficiency and the severity of COVID-19 [18-21]. While research has failed to demonstrate that vitamin D prevents COVID-19 infection, a moderate amount of research establishes that optimal vitamin D levels are associated with less severe cases of COVID-19 and, conversely, that low vitamin D levels are associated with more severe cases. Furthermore, bolus dosing of vitamin D3 may provide some protection against severe infection, particularly in at-risk populations [22,23].

Implications for Practice

Nurse practitioners manage patients in primary care who are at risk of COVID-19, and staying up to date with the current evidence is crucial to supporting clinical practice. The evidence appraised in this review is all non-experimental research. Observational and cohort studies provide valuable insights into potential associations, and the majority of the reviewed studies indicate a positive correlation between vitamin D levels and COVID-19 outcomes. Moreover, vitamin D supplementation is generally considered safe when administered within the recommended dosage range. Although further experimental research is needed to establish a causal relationship, considering the low risk profile of vitamin D supplementation, the authors recommend that nurse practitioners consider prescribing vitamin D supplementation to reduce the severity of COVID-19 infections, monitoring blood levels to achieve optimal circulating levels of 75 to 100 nmol/L. As healthcare leaders, nurse practitioners have the responsibility to actively seek opportunities to educate both patients and colleagues. By doing so, we can drive practice change and promote the adoption of evidence-based approaches. Disseminating evidence-based practices is vital to improving patient outcomes and ultimately enhancing the overall quality of care.
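Since laboratories report 25(OH)D in either nmol/L or ng/mL, a quick conversion helps when checking results against the 75 to 100 nmol/L target quoted above. The following minimal Python sketch assumes the standard conversion factor for 25-hydroxyvitamin D (1 ng/mL ≈ 2.5 nmol/L); the function names are illustrative, not from any clinical guideline.

```python
# Sketch: convert a 25(OH)D lab value and check it against the
# 75-100 nmol/L target cited in the text. The factor 2.5 is the
# standard approximation for 25-hydroxyvitamin D (1 ng/mL ~ 2.5 nmol/L).
def ng_ml_to_nmol_l(ng_ml: float) -> float:
    return ng_ml * 2.5

def in_target_range(nmol_l: float, low: float = 75.0, high: float = 100.0) -> bool:
    return low <= nmol_l <= high

level = ng_ml_to_nmol_l(32.0)          # a lab report of 32 ng/mL
print(level, in_target_range(level))   # 80.0 True -> within the target range
```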



Wednesday, April 17, 2024

The Use of Enterprise Information Systems and IoT Technologies for Traceability in Food Supply Chain: A Study in Greek Food Companies - Juniper Publishers

 Annals of Reviews and Research - Juniper Publishers



Abstract

Traceability in food supply chain is an issue of growing importance and is directly associated with safety and quality of products and consumers. The emergence of Industry 4.0 and Internet of Things created new opportunities and challenges for companies in the food sector, for ensuring authenticity, safety, and quality. This paper attempts to analyze and highlight the necessity of IoT technologies use in the food supply chain. Moreover, through empirical research, this paper explores the use of enterprise information systems and IoT technologies in a sample of 53 Greek food companies. The results show that, at the moment, companies are still using older and, in many cases, manual systems and they have not moved forward to adopt new technologies such as Qr codes, Wireless Sensor Networks, Blockchain, DNA barcoding etc. However, their traceability systems are connected to the ERP system and provide some basic information, although this information is not shared with partners in the supply chain. The outcomes of our study can help managers to discover the potential of new technologies and enterprise systems on food traceability, along with their challenges and the current state of their implementation. Also, this study can provide the basis for other research efforts that may analyze in more detail factors that hinder the wide adoption of IoT technologies in the food supply chain and also the required workforce skills for the successful implementation and use of these technologies.

Keywords: Traceability; Enterprise Information Systems; IoT; Food safety; Food Supply Chain

Abbreviations: IoT: Internet of Things; RFID: Radio frequency identification; GPS: Global positioning system; WSN: Wireless sensor network; EPC: Electronic product code

Introduction


Food traceability is a contentious term, since it has a variety of meanings and definitions depending on the supply chain industry sector and the viewpoints of the various stakeholders and users Souali [1]. In general, traceability is understood as the ability to track the movement of a food product and its ingredients backward and forward through the supply chain Souali [1]. During the last decade, the credibility of the food industry's quality assurance systems has been seriously questioned due to serious food-related incidents, eroding consumers' trust in food safety and quality. At the same time, the spread of international food trade and economic globalization has expanded the size and complexity of food supply chains Kher et al. [2]. Food supply chains are complex due to the large number of participants, the different types of record-keeping methods and tools, from modern ERPs to manual systems, the difficulties in predicting supply, variances in quality and other issues Rejeb et al. [3]. In light of these circumstances, it is essential to set up a reliable traceability system in order to decrease the production and distribution of unsafe and poor-quality food Aung & Chang [4]. For this reason, monitoring is required at every stage of the food supply chain to guarantee the consistency and accuracy of food traceability Houghton et al. [5].

Technologies for food traceability

In the era of Industry 4.0, the transition of businesses from traditional ways of traceability to the application of Internet of Things (IoT) technologies that allow end-to-end visibility is considered essential. The goal of IoT is to connect different and disparate smart devices without requiring human intervention Jagtap et al. [6]. According to Ding [7], Song et al. [8] and Mehannaoui [9], technologies used in IoT-based food traceability can be classified into the following categories: 1) identification and monitoring technologies (barcodes, QR codes, radio frequency identification, wireless sensor networks etc.), 2) communication technologies (proximity, wireless personal area network, wireless local area network, wireless metropolitan area network and wireless wide area network) and 3) data management technologies (big data, cloud computing, data mining, blockchain etc.).

Each of the above-mentioned technologies, with the advantages and disadvantages mentioned in the literature, can play a significant role in food traceability efforts across the supply chain. For example, barcodes have the advantages of simplicity and lower reader costs, but they have limited data storage capacity, must be read by specific devices, and their use is time-consuming and error-prone Ayalew et al. [10]. In contrast to conventional one-dimensional barcodes, QR codes can store more data and can be read by a variety of devices Aho [11]. The use of QR codes is now quite widespread in the food supply chain, not only by companies but also by consumers, who can access information about a product by scanning the QR code with their smartphone Qian et al. [12]. However, a disadvantage of the QR code is that once generated it cannot be altered Aho [11]. A type of contactless (wireless) data communication technology is radio frequency identification (RFID), which is used to track products using wireless sensor networks and the global positioning system (GPS) Alfian et al. [13]. The use of RFID improves the accuracy of tracking data and leads to increased supply chain visibility, particularly in the food sector Zelbst et al. [14], although it still has high implementation and operation costs and issues such as protection of data privacy and security Nguyen [15].
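As a minimal sketch of how a producer might encode batch information into a printable QR label, the snippet below uses the third-party Python package qrcode (installable with pip install "qrcode[pil]"); the record fields are hypothetical, not drawn from the study.

```python
# Encode an illustrative batch traceability record as a QR code image.
import json

import qrcode  # third-party: pip install "qrcode[pil]"

record = {
    "batch_id": "B-2024-0417",        # hypothetical batch identifier
    "product": "olive oil, 1 L",
    "producer": "EXAMPLE-FOODS-GR",
    "packed": "2024-04-17T09:30:00Z",
}

img = qrcode.make(json.dumps(record))  # build the QR image in memory
img.save("batch_label.png")            # printable label for the package
```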

A modern variant of barcode technology is DNA barcoding, a labeling method based on DNA sequences, with initial applications in livestock and agriculture for tracking the origin and quality of food products Yu et al. [16]. The use of DNA barcoding can reduce the toxicological and microbiological risks related to consuming food and food-related products Dawan & Ahn [17]. An important limitation of this technology is that it requires a reference database of known DNA sequences for comparison: while this database is constantly being updated, it may not contain all possible species, especially those that are rare or have limited commercial use, which limits the utility of DNA barcoding in identifying specific food products. Processing complex mixtures can also be challenging, because some food products, like processed foods or dishes with multiple ingredients, may contain DNA from different species, making it difficult to accurately identify the product's composition using DNA barcoding Raclariu [18].
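To make the reference-database limitation concrete, here is a toy Python sketch; the sequences and the database are invented for illustration, and real DNA barcoding matches much longer marker sequences against curated references.

```python
# Toy illustration of DNA barcoding's reference-database dependency:
# a sample is identified only if its fragment is already catalogued.
REFERENCE_DB = {  # invented barcode fragments -> species
    "ATGGCACTATTC": "Gadus morhua (Atlantic cod)",
    "ATGGTTCTTACC": "Thunnus albacares (yellowfin tuna)",
}

def identify(fragment: str) -> str:
    return REFERENCE_DB.get(fragment, "unknown (not in reference database)")

print(identify("ATGGCACTATTC"))  # catalogued -> identified
print(identify("ATGGCCCTATTC"))  # rare/uncatalogued species -> unknown
```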

A wireless sensor network (WSN) is a network of numerous small sensor nodes, mobile or stationary, linked to a central node or gateway and communicating with each other wirelessly Jaiswal & Anand [19]. These sensors can gather a variety of information, including temperature, humidity, location and other environmental factors that affect the quality and safety of food products Mehannaoui & Mouss [9]. WSNs offer a dependable and economical way to track and monitor food products from farm to fork, which can improve the transparency, accuracy and accountability of the food supply chain and contribute to reducing food waste Mehannaoui [20]. However, WSNs have limited range, making it difficult to monitor food products throughout the entire supply chain, and they rely on battery-powered sensors, which may have a short lifespan.

Cloud computing is a model for providing on-demand ICT infrastructure and services over the internet, delivered through several deployment and service models Nanos [21]. Through cloud computing, food businesses can operate and maintain systems and applications in a more efficient and cost-effective way. Moreover, cloud computing enables participants in the food supply chain to share information and collaborate with business partners, which eventually leads to increased visibility across the supply chain Srivastava & Wood [22].

Finally, the use of blockchain in the food supply chain can significantly increase accuracy, efficiency and visibility through a decentralized series of time-stamped, validated and verified blocks that provides immutable data to all parties Devan [23], including consumers, who can trace products back to their sources of origin and verify their authenticity Awan et al. [24].
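The minimal Python sketch below illustrates the time-stamped, hash-chained record structure behind this immutability claim. It is a single-process toy under simplifying assumptions (no distribution, no consensus protocol), and the event fields are invented for illustration.

```python
# Toy hash chain: each block stores the previous block's hash, so any
# later edit to a block breaks every link that follows it.
import hashlib
import json
import time

def make_block(data: dict, prev_hash: str) -> dict:
    block = {"timestamp": time.time(), "data": data, "prev_hash": prev_hash}
    payload = json.dumps(block, sort_keys=True).encode()
    block["hash"] = hashlib.sha256(payload).hexdigest()
    return block

chain = [make_block({"event": "harvest", "farm": "FARM-01"}, "0" * 64)]
chain.append(make_block({"event": "processing", "plant": "PLANT-07"}, chain[-1]["hash"]))
chain.append(make_block({"event": "retail", "store": "STORE-12"}, chain[-1]["hash"]))

# Integrity check: each block must still point at its predecessor's hash.
for prev, cur in zip(chain, chain[1:]):
    assert cur["prev_hash"] == prev["hash"], "chain broken: record was altered"
```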

Background studies

Henson [25] in their research provided an in-depth insight into the implementation of product traceability systems in the Canadian dairy processing sector. They investigated the factors that lead to the adoption of product traceability, the nature and level of traceability (measured with variables such as depth, precision and breadth), the costs and benefits associated with adoption, and the constraints in implementing product traceability, according to firm size, product type and markets served. The findings showed that product traceability is relatively widespread and only a small percentage of survey participants did not use a system for product traceability; however, the systems' level of sophistication is not very high. Although the systems typically allow traceability to the level of at least one day's production, through to retail distribution, and back to single or at least groups of milk producers, the majority were manual rather than computer-based. The study also identified three categories of enabling factors for implementing product traceability: market drivers, product recall and legal requirements. The main problems experienced in implementing product traceability were the need for staff training (managerial, production, supervisory and administrative staff) and the need for cooperation with customers and suppliers regarding the flow of information. The main expenses associated with implementing a product traceability system are costs related to auditing, inspection and laboratory testing, procedures that are usually outsourced to external partners.

Preziosi [26] in his dissertation assessed the impact of traceability on Italian poultry supply chain firms through an index that measured the level of traceability based on variables such as breadth, depth, access, technology and precision. Additionally, the survey investigated the benefits of traceability in three main areas: food safety management, potential process improvements and product differentiation. The perceived barriers and challenges of traceability implementation were also examined, such as required changes in existing methods and ICT tools and techniques, as well as cost and changes in customer and partner relationship management. The results showed that most companies were at a medium or deep level of traceability. These two groups differed significantly in terms of difficulties in implementing traceability, whereas there was no discernible difference in relation to costs and benefits. In general, firms with a lower level of traceability consider that all factors (benefits, costs, and implementation challenges of traceability) have a greater impact on their operations than firms with deep traceability. Finally, firms with lower traceability reported more significant challenges in implementing the European Commission Regulation No. 1169/2011 on food labeling European Commission [27]. Kalogianni [28] conducted a survey in the food industry, aiming at the evaluation of traceability systems' application and at the analysis of motivating factors for implementing a traceability system, as well as the various problems that arise and the proposed solutions.

According to the results, almost half of the companies used both manual (handwritten) and electronic (computer-based) systems. Also, firms did not keep traceability information about each individual product, but about the batch of products produced each day. As far as the main drivers for the implementation of traceability systems are concerned, the survey indicated the need for accurate recalls and pressure from competitors and partners in the supply chain, especially from other countries. The main problems that arise when implementing traceability systems are the reaction time in the event of a recall, the response time and transparency in supplier information, the retention time of manual traceability records and the traceability of bulk products. Tools such as electronic traceability application systems, the electronic product code (EPC) and RFID tags with sensors seem to rank first in the preferences of the food companies examined. Also, most firms suggested that legislation about traceability should be more specific and strict. Finally, it should be noted that most businesses are quite hesitant to send products to the shelf with more information, because they are concerned about the confidentiality of their business data.

Based on all the above studies, the aim of this paper is to explore and analyze the use of enterprise systems and technologies for traceability in the food supply chain through an empirical study in Greek companies. The main research questions are the following:
RQ1: Which enterprise systems and IoT technologies are used for traceability by Greek food companies?
RQ2: How accurate and deep is the traceability system in capturing and tracking multiple attributes of products?
RQ3: What is the level of accessibility of the traceability system for different stakeholders within the supply chain?
RQ4: How deep is the traceability system in terms of allowing trace-back and trace-forward analysis in case of quality issues or recalls?

Materials and Methods

The research method chosen was quantitative research through an online survey of food companies in northern Greece. The population of the survey was companies in the food industry sector that are based in northern Greece and are either food producers or wholesalers. To locate these companies, an initial search was made on the website infood.gr, a complete guide to food and beverage suppliers in Greece. Moreover, searches were performed in resources such as the Association of Greek Food Producing Companies, in government portals such as enterprisegreece.gov.gr, and finally on Google and LinkedIn. An initial list of 160 companies was created. Every company was contacted first by phone, informing managers about the aim and scope of the research, and then an email was sent containing the link to the electronic questionnaire and instructions for the survey. A total of 53 companies participated in the survey, forming the sample of our research. Responses were collected through a Google form. After the end of the survey, the data was coded and analyzed with IBM SPSS Statistics 25.0.

Results

Initially, the participants were asked about the traceability system applied in their company (handwritten, electronic or both). Those who answered electronic or both were then asked whether the system is connected to an ERP (Enterprise Resource Planning) system, as well as which technology they use to record, track and identify products. According to the results presented in Table 1, most of the companies replied that they use both manual and electronic systems, while 35.85% use an exclusively electronic traceability system. In all companies using an electronic traceability system, this system is connected to an ERP (see Table 2); missing values refer to the companies that apply a manual traceability system. The respondents were then asked about the type of technology (barcode, QR code, RFID, DNA barcoding, wireless sensor networks and blockchain) used in their company to record, track and identify products. The participants could select one or more answers, e.g. barcode; barcode and QR code; DNA barcoding, blockchain and wireless sensor networks; etc. None of the companies uses blockchain, while the vast majority (79.50%) uses barcodes. Some companies (12.80%) use barcodes and QR codes, while a limited number (7.70%) use barcodes, QR codes and wireless sensor networks (Table 3).

To determine how accurate and comprehensive the applied traceability system is, companies were asked how they define the batch, what information their traceability system provides and which data is captured when a raw material or a product arrives at their premises. As presented in Table 4, half of the respondents (50.90%) define the batch as "the set of products produced under the same conditions", while 43.40% consider it "the total production in one day". Concerning the type of information provided by the traceability system, 52.80% of the companies can find information on a specific batch, while 43.40% can find information on both a specific batch and a specific product (Table 5). Regarding the type of data collected when a raw material or a product arrives at the company, respondents had to choose between the date and time of entry, supplier, origin, quantity, packaging details, temperature and storage conditions, quality and safety certifications, and combinations of these. As shown in Table 6, the largest group of companies (35.80%) collects all the listed data, while 32.10% of the sample collects only the date and time of entry, supplier details, origin, quantity, temperature and storage conditions, and quality and safety certifications.

In the following section of our questionnaire, the level of accessibility of the traceability system for different stakeholders within the supply chain was investigated. First, companies were asked whether they use a common online database to share real-time information in the event of a food safety threat. According to the results in Table 7, most of the companies (41.50%) do not use such a system, while 32.10% maintain a common online database to share real-time information with the competent public authorities in the event of a food safety threat. The participants were then asked whether their companies use a common traceability system with their suppliers in order to exchange information. As presented in Table 8, the vast majority of companies (84.90%) do not use such a system. Concerning the depth of the traceability system, in terms of allowing trace-back and trace-forward analysis in case of quality issues or recalls, the results showed that 94.30% of the companies trace the origin of raw materials (Table 9). At the same time, according to Table 10, half of the companies record all data until the arrival of the goods at the retailer.

Discussion

As far as the first research question (RQ1) is concerned, about the systems and technologies used for food product traceability, most companies apply a combination of manual and electronic systems, connected in most cases to an ERP system. The technologies used for traceability are mainly barcodes and QR codes; companies have not yet adopted current IoT technologies such as RFID, wireless sensor networks, blockchain, DNA barcoding etc. With the second research question (RQ2) we tried to identify the accuracy and the depth of the traceability system in identifying and tracking individual products or product batches throughout the supply chain. As far as accuracy is concerned, more than half of the companies define the batch as the set of products produced under the same conditions. This type of traceability system is generally more accurate in ensuring that products with identical production conditions are correctly identified and tracked, and it helps maintain consistency in quality and safety for products that undergo similar processes. Regarding the information provided by the traceability system, companies track information both at the batch level and for a specific product. As far as the depth (range) of the traceability system is concerned, the majority of food companies record the data required by European Commission Regulation No 668/2014 European Commission [29], which refers to the supplier, the origin and the quantity. A small percentage of businesses do not record either quantity or origin. Furthermore, many companies collect additional data beyond the legislative prerequisites, such as date and time of entry, packaging details, temperature and storage conditions, and quality and safety certifications.

The third research question (RQ3) regarded the level of accessibility of the traceability system for different participants in the food supply chain. The majority of companies do not use a common online database with supply chain partners, while a respectable percentage of companies use a common database to share information with competent public authorities. Similarly, most of the companies do not have a common traceability system to exchange information with suppliers. Finally, regarding tracing back or tracing forward to examine the depth of the traceability system, which is our fourth research question (RQ4), in most companies the traceability system allows them to track the origin of raw materials (trace-back). It should be noted, though, that half of the companies in our sample can record data up to the arrival of the product at the retailer, while the other half cannot (trace-forward).

Conclusion

From the literature review it is evident that traceability in the food supply chain is a very important aspect of growing importance. Technological developments are rapid and continuous, offering multiple and innovative solutions for industry and stakeholders that enhance the quality and safety of the final product. IoT technologies are among the most significant recent developments in the field of IT. A main advantage recorded in the literature concerning the use of IoT technologies is the empowerment of stakeholders, enabling them to control and manage connected equipment and to monitor food production flows in real time. In addition, the use of IoT and of proper enterprise systems enables the quick identification of quality issues and allows more efficient and accurate data collection and analysis, leading to improved decision-making and enhanced operational efficiency. However, some concerns regarding the adoption of IoT technologies are identified, such as connectivity issues, data security, the economic sustainability of all stakeholders and the willingness of consumers to pay more for food produced with greater transparency.

The empirical research conducted to identify the technologies used by Greek food companies and the capabilities of their traceability systems showed that the use of IoT technologies is not yet widespread. Companies still use older technologies, such as barcodes, together with manual and electronic systems. The traceability system of most companies is quite accurate in identifying batches of products, as they define the batch as the set of products produced under the same conditions. In addition, the traceability system applied by companies is quite detailed in capturing and tracking multiple attributes of products: apart from the legal requirements, it also records additional data at the entry of a product. However, most companies do not have a common online real-time information database, either with the competent authorities or with stakeholders, nor a system for exchanging information with suppliers. As far as the depth of the applied traceability system is concerned, most companies can track the origin of raw materials, while about half record all data until the product arrives at the retailer.

The limitations of our research mainly refer to the sample: a larger number of companies or a sample consisting of companies from all over the country might have produced different results. However, the outcomes of our study can still be useful and applicable, both for practitioners-managers in the food supply chain and for academics-researchers. Further research in the area may include analysis of the factors that hinder the wide adoption of IoT technologies in the food supply chain and on the skills of the workforce for the successful implementation and use of these technologies.


Tuesday, April 16, 2024

Design and Evaluation of Mobile-Based Logbook Application for Type 2 Diabetes Patients - Juniper Publishers

 Nursing & Health Care - Juniper Publishers


Abstract

Introduction: The paper-based logbook used to record daily diabetes data has many drawbacks, such as physical damage, being forgotten at physician appointments, and incomplete data analysis; therefore, it is not an effective tool for logging such data. On the other hand, given the high popularity of mobile phones among the population, the numerous capabilities of this tool can be used to help patients with diabetes. In this study, a mobile-based application for logging type 2 diabetes patients' data was designed, implemented, and then evaluated by patients and physicians.

Materials and Methods: Initially, the structural and content-based features of paper-based logbooks used in various diabetes associations were extracted, and four data elements covering blood glucose, food, medication, and physical activity were selected. The software was then designed using object-oriented analysis and programmed for the Android and iOS operating systems. Heuristic and adaptability tests were also performed to verify proper application performance. Then, a questionnaire designed to evaluate the user interface and functionality of the software was provided to 5 patients referred to the diabetes physician. Furthermore, five other patients were asked to record their data in the paper-based logbook in the traditional manner. Finally, another questionnaire was given to their treating physician (a diabetes specialist) to compare the treatment process between the two groups of patients.

Findings: The findings of the study reveal that the software satisfied patients’ needs by providing facilities such as blood glucose monitoring with automatic recording of entry time, food and carbohydrate intake tracking, physical activity logging with calorie counting for each activity, and a list of patient medications. The comparison between the modern and traditional groups also showed a more appropriate treatment process in the modern group.

Conclusion: Using a mobile-based application for type 2 diabetes provides the physician with the data necessary to make better decisions. It will also support the treatment process and, ultimately, the effective management of diabetes.

Keywords: Type 2 diabetes; diabetes logbook application; mobile health

Introduction

The personalization of the treatment process for patients with type 2 diabetes is now primarily done through the paper logbook. This personalization includes measuring and recording blood glucose, exercise activities, food, and medications taken during the day [1,2]. Given the characteristics of these paper logbooks, special attention must be paid to problems such as damage, loss, failure to bring the logbook to the physician appointment, and the potential absence of disease history. Many patients enter unrealistic data into the daily logbook just before the physician appointment [3]. Also, if a patient has a specific and variable glucose pattern, it is difficult for the physician to analyze [4]. Reviewing and comparing old and new data from these daily logbooks and prescribing treatment plans is likewise not performed efficiently by physicians [5].

Due to the high usage of mobile phones among community members - around 7 billion users worldwide [6,7] - people prefer to do many of their daily activities through mobile phones, which can fairly be described as an integral part of modern life. In general, the worldwide prevalence of mobile phone use has created the opportunity for a powerful platform providing individual health care for patient comfort [8-10]. Using mobile-based software to manage the treatment process of patients with type 2 diabetes, with the ability to record daily reports, can alleviate the difficulties of paper logbooks: electronic data recording, storage, and retrieval are performed more efficiently and with fewer errors. Patient data remain accessible to the user at all times, data loss is minimized, and the data become searchable for physicians. Moreover, by providing all the data elements the physician needs and by controlling the timing of each data entry to distinguish realistic from unrealistic data, a practical step is taken toward properly managing type 2 diabetes [11-14].

The primary purpose of this study was to design a mobile-based application, called DiaLog, for the daily report of type 2 diabetes patients, and then to evaluate this software among its users and a diabetes specialist.

Materials and Methods

Software Development

First of all, the structural and content-based features of existing diabetes logbooks were needed. For this purpose, paper-based diabetes logbooks from around the world were reviewed. The recommended data elements on blood glucose, food intake, medication, and physical activity [15] were presented to 10 diabetes practitioners as the most frequently used data elements in logbooks, and they decided to select all of these data elements for the structure of the electronic logbook. Moreover, mobile-based applications available for diabetes were downloaded and installed for both the Android and iOS operating systems. The extracted data, such as the presence of the mentioned data elements and software-specific features in 13 mobile applications, are summarized in Table 1. The critical features of the standard diabetes logbook in our mobile application comprise four data elements: blood glucose, food intake, patient medication, and physical activity. The exact time of recording each data element was also identified as an essential feature in designing the electronic logbook structure. Subsequently, the object-oriented conceptual model of the software was designed using the UML modeling language, and software programming was performed based on this model. Considering the high number of users of the Android and iOS operating systems, which together account for 99.6% of smartphones [16], the software produced in this study was generated for both platforms using the JavaScript programming language. Exploratory and compatibility tests were also performed on a variety of mobile phones to verify proper performance (Figure 1).

A software application called DiaLog (Diabetes Logbook) was then produced with the following main features (a minimal sketch of some of them follows the list):
i. Blood glucose recording with automatic registration of entry time to prevent unrealistic data from being entered
ii. High or low blood sugar alert
iii. Food intake record
iv. Automatic calculation of carbohydrates consumed during the day
v. Daily carbohydrate alert
vi. Picture list of patient medications with name, dose, and time of use
vii. Physical activity record during the day
viii. Automatic calculation and display of calories burned during each activity
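
The paper states that the app itself was written in JavaScript; purely as an illustration (in Python, and with hypothetical threshold values, since the paper does not state them), the sketch below shows the logic behind features i, ii, iv and v: automatic timestamping of glucose entries, high/low blood sugar alerts, and a daily carbohydrate total with an alert:

```python
from dataclasses import dataclass, field
from datetime import datetime, date

# Hypothetical alert thresholds; the paper does not state the values used.
LOW_GLUCOSE_MG_DL, HIGH_GLUCOSE_MG_DL = 70, 180
DAILY_CARB_LIMIT_G = 250  # hypothetical daily carbohydrate alert level

@dataclass
class GlucoseEntry:
    value_mg_dl: float
    # Feature i: the timestamp is captured automatically at entry time,
    # so back-dated or otherwise unrealistic records cannot be created.
    recorded_at: datetime = field(default_factory=datetime.now)

def glucose_alert(entry: GlucoseEntry) -> str | None:
    """Feature ii: high or low blood sugar alert."""
    if entry.value_mg_dl < LOW_GLUCOSE_MG_DL:
        return "low blood sugar"
    if entry.value_mg_dl > HIGH_GLUCOSE_MG_DL:
        return "high blood sugar"
    return None

def carb_alert(food_log: list[tuple[date, float]], day: date) -> str | None:
    """Features iv-v: total the day's carbohydrates (g) and alert over the limit."""
    total = sum(grams for d, grams in food_log if d == day)
    return f"carb limit exceeded: {total} g" if total > DAILY_CARB_LIMIT_G else None
```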

Software Evaluation

After producing the mobile application, two patient groups were selected. One group comprised 5 patients using the paper-based logbook, and the other group 5 patients using the DiaLog mobile application to record the daily report. These patients were asked to record their data for three months (the first group in the paper-based logbook and the second group in DiaLog).

Then, two structured questionnaires were designed, and their validity and reliability were confirmed. One of the questionnaires related to the evaluation of the user interface, the items in the software, the number of positive effects, and its continued use was provided to DiaLog users. Another questionnaire was designed to compare the treatment process between the traditional and modern groups, and the physician was asked to answer the questions.

Findings

The results obtained from the evaluation of software features by five users are shown in Table 2. The overall evaluation by the specialist physician, comparing the treatment process of the software users and the traditional patients, yielded the results shown in Table 3. Software evaluation results showed that 98% of DiaLog users were satisfied with the software interface. All of them stated that the features considered in the software were requisite ones, and some features, such as the automatic calculation of carbohydrates and calories burned, helped them control their blood sugar. The software’s perceived effects were also assessed in this evaluation: these include not forgetting medications after a few days of using the software, the habit of accurately recording blood sugar, and continually checking other software items such as nutrition and physical activity, with 90% reporting a positive response. It should be noted that 90% of patients stated that they would continue using this software, citing its simplicity and functionality. Findings from the questionnaire completed by the physician also showed that patients using the software had better control over their blood glucose in 80% of cases and always (100% of cases) had the needed data with them.

Discussion

The results showed that mobile application users provide more complete and higher-quality data than the traditional group. Consequently, blood glucose control among software users was better than in the traditional group. According to the specialist physician, the features considered in the software provide the data necessary to make decisions that are not available in the paper-based logbook. Forgetting to bring data when visiting a physician is another significant issue that software users have addressed through their mobile phones; in this study, by contrast, the traditional patients frequently forgot the paper-based logbook.

In this way, the mobile-based application with the aforementioned features can be a good substitute for the diabetes paper-based logbook to provide the maximum data needed for treatment. Moreover, such software can lead to effective care of the patient and their involvement in their treatment, which motivates them to continue the treatment process. One of the essential aspects of using a diabetes mobile-based application is to keep the patient aware of their blood glucose fluctuations at different times to have proper control over them. Awareness of consumed carbohydrates and calories burned will also help manage disease treatment better. The conflict of different medications and the lack of information of the specialist physicians about individual medications may interfere with the patient’s treatment process. In this mobile-based application, the patient can prevent such problems by registering the medications used in the software and presenting them to physicians.

The use of mobile-based technologies may at first glance be complex for the elderly or illiterate, but with proper training this problem can be resolved; educating family members or their nurses is the primary way to prevent such problems.

Conclusion

In general, the mobile-based diabetes application can be a good substitute for paper-based logbooks. Because the mobile phone is ever-present and rarely forgotten, it is an effective platform for providing essential data for patients with diabetes. Adequate data for proper decision-making by physicians, and the ability to perform a variety of analyses on them, can lead to practical results in the treatment process. Encouraging patients to keep collecting data is another essential feature of mobile-based applications that paper-based logbooks do not have: the patient can always access their data in the electronic logbook, record it, and share it with the doctor immediately. By expanding this process and encouraging more patients to use this kind of software, we can expect a favorable trend in chronic disease control, to which diabetes is no exception. Providing the right platform for the proper management of diabetes, and meeting the data requirements in a mobile-based application compatible with a wide range of mobile phones, were all considered in this research. We expect such software to help treat chronic diseases more and more in the future.

To Know More About JOJ Nursing & Health Care Please click on:

For more Open Access Journals in Juniper Publishers please click on: 

Monday, April 15, 2024

Examining the Relationship between Adult Attention Deficit and Emotional Intelligence: Exploration of fundamental Univariate and Multi-Variate Relationships - Juniper Publishers

Intellectual & Developmental Disabilities - Juniper Publishers


Abstract

This research study examines the associations between dimensions of emotional intelligence (EQ) and adult attention deficit (AAD) in order to provide a framework for future research. A total of 219 management students completed three measures of AAD and a multi-dimensional measure of EQ (Bar-on EQI). Product moment correlations were used to examine the univariate associations between dimensions of EQ and AAD, and multiple regression examined the simultaneous multivariate relationship. Both the global measure of EQ and all the sub-dimensions of EQ were significantly correlated with three established measures of AAD (College ADHD Response Evaluation, Brown AAD Scale and the DSM-IV items used to identify inattention), except for non-significant univariate relationships between Brown-AAD and both empathy and social responsibility. Self-regard, self-actualization, reality-testing and stress-tolerance displayed the strongest univariate correlations, while self-actualization, reality-testing, happiness and stress tolerance remained significant when a composite score of the standardized scores from the 3 measures of AAD was simultaneously regressed on all the dimensions of EQ. Further research is required to confirm the directionality of the associations, which will help to address the question of whether enhancing emotional competency can reduce AAD symptoms and associated performance challenges.

Keywords: Emotional intelligence; Adult attention deficit; Adult attention deficit hyperactivity disorder

Introduction

Much of the traditional research on intelligence and performance has focused on the role of cognitive intelligence (IQ), which is defined as the capacity to understand, learn, recall, think rationally and solve problems [1]. Gardner [2], expanded on the concept of cognitive intelligence by suggesting that intelligence encompasses both cognitive and personal (emotional) elements. The personal (emotional) component includes two general components referred to as intrapsychic and interpersonal skills. Salovey and Mayer [3], referred to these components as emotional intelligence (EQ). Goleman [4], popularized the concept and suggested that EQ might be a better predictor of individual performance than IQ in a wide range of situations [5]. Goleman [4], generally defines EQ as the capacity to recognize and manage emotions in oneself and others.

Emotional intelligence is typically measured using either an omnibus test (e.g. Schutte Test) or tests that explicitly measure various dimensions of the construct (e.g. Bar-on EQI). Bar-on [6], developed a measure of EQ that contains fifteen dimensions of EQ and is considered to be one of the more comprehensive measures. The total score is referred to as the global emotional intelligence quotient (EQI). Bar-on [7], defines emotional intelligence as an “array of non-cognitive capabilities, competencies, and skills that influence one’s ability to succeed in coping with environmental demands and pressures.” The Bar-on measure of EQ is comprised of five core elements each with their own sub-dimensions. The five core components are intrapersonal capacities, adaptability, general mood, interpersonal capacities and stress management. The intrapersonal component includes self-regard, emotional self-awareness, assertiveness, independence and self-actualization. The adaptability component includes reality-testing, flexibility and problem-solving, and the general mood component includes optimism and happiness. The interpersonal component includes empathy, social-responsibility and interpersonal-relationships, and the stress-management component includes stress- tolerance and impulse-control.

Research has shown that EQ has a positive impact on individual performance (Kelley & Caplan, 1993), team performance [8-10], and organizational performance [11]. Research has also shown a positive impact on employee satisfaction [12], customer service [13], customer satisfaction [14], sales success [15], leadership [16,17], organizational change [18] and conflict management [19]. Goleman [20], suggests that unlike IQ, EQ can change throughout a person’s life and is therefore a competency that should be targeted in training programs and interventions aimed at improving performance in the workplace.

The above research highlights the importance of identifying personal and organizational factors that influence the development of EQ. A recent study on brain activity found that low EQ is related to underarousal of the left-frontal cortex, a condition that is also associated with attention deficit disorder [21]. A recent Harvard Business Review paperback [22], on bringing the whole self to work included articles by Edward Hallowell, Herbert Benson, Daniel Goleman, and Manfred Kets de Vries on how increased emotional intelligence can help to overcome attention deficit problems. This suggests a link between emotional intelligence and adult attention deficit.

Adult attention deficit

A recent national survey found that 4.2 percent of US workers had adult attention deficit and hyperactivity disorder (ADHD), resulting in $19.5 billion in lost human capital per annum [23]. Lifespan research suggests that the majority of children with ADHD continue to experience symptoms as adults [24-29]. Prevalence estimates of ADHD among adults in the United States vary according to the measurement criteria used, with estimates ranging from less than 10 percent to as high as 70 percent [24,26,27,30]. A recent population screen of 966 adults in the United States suggests prevalence rates of 2.9 percent for narrowly defined ADHD and 16.4 percent using a broader definition [31]. Kessler et al., [23], conclude that adult attention deficit disorders are a common and costly problem within the US workforce.

The Diagnostic and Statistical Manual of Mental Disorders-Fourth Edition (DSM-IV) defines ADHD (attention deficit and hyperactivity disorder) as “a persistent pattern of inattention and/or hyperactivity-impulsivity that is more frequent and severe than is typically observed in individuals at a comparable level of development” [32]. A recent national survey by Harris Interactive (2004) found that the majority of adults with ADHD believed that the disorder had constrained them from achieving both short and long term goals. Research has confirmed that adults with ADHD attain lower occupational ranking, socioeconomic status and social class standing when compared with their peers [26,33]. Research by Biederman et al., [33], found that, on average, adults with ADHD have household incomes that are $10,791 lower for high school graduates and $4,334 lower for college graduates. Annual income loss for adults with ADHD in the United States is estimated at $77 billion, which is similar to income loss estimates for drug abuse ($58 billion) and alcohol abuse ($86 billion). Research has also established a link between ADHD and substance abuse [33].

A recent study using data from Fortune 200 companies found that absenteeism and medical costs for employees diagnosed with ADHD were 48 percent higher [34]. Adults with ADHD were also more likely to change jobs [35,36], engage in part time employment [33], and seek out jobs that don’t require concentration over long periods of time [37]. They also avoid jobs that require close supervision, repetitive tasks and sedentary performance conditions [26]. The disorder is also associated with higher accident rates and lower productivity [38,39]. Adults with ADHD are perceived by their employers as requiring more supervision and less able to complete assignments [40]. Research also suggests that team members with attention disorders have lower efficacy for working in teams [41]. Adults with ADHD have difficulty focusing on their problem behavior and without help will often fall into a chain of failures [42]. Barkley [40], suggests that depression, anxiety and diminished hopes of future success may help to develop and exacerbate the symptoms of adult ADHD. This suggests that without intervention, adults with attention disorders may find themselves trapped in a self-reinforcing and debilitating cycle between strengthening symptoms and ongoing failures.

ADHD may also be associated with positive behaviors like ingenuity, creativity and determination [26], which may explain why entrepreneurs appear to have relatively higher levels of the disorder [43]. In fast paced work environments, adults with ADHD may perform just as well, if not better, than non-ADHD employees [44]. Hartman [45], encourages a more encompassing view of adult workers with ADHD by suggesting that employers consider both the negative and positive behaviors associated with the condition.

Research on adult ADHD suggests that the hyperactivity/impulsivity component of the disorder may disappear or not exist [28,46], whereas the inattention component and related cognitive symptoms, referred to as adult attention deficit (AAD), are more likely to persist or develop [47]. Brown [48], suggests that measures of AAD should exclude hyperactivity/impulsivity due to the inconsistency with which these symptoms appear in adults with attention disorders. Brown [48] also suggests that strict reference to the symptoms of inattention may not capture all of the key symptoms. Brown [48], proposes five clusters of symptoms all of which seem to commonly occur among adults with AADs. The five symptom clusters include difficulties with activation, concentration, effort, managing emotional interference and accessing memory. This suggests that AAD, as opposed to ADHD, may be a more prevalent problem for adult workers and that some of the key symptoms associated with the disorder are not sufficiently represented within the inattention component of traditional measures of ADHD.

Researchers have also expressed concern about strictly treating attention deficit disorder as a categorical diagnosis, as opposed to a dimensional construct with varying levels of severity [49,50]. Categorical diagnosis promotes simplistic use and interpretation of the construct. In order to overcome this limitation, Brown [48], suggests that measures of attention deficit disorder need to capture varying levels of severity and provide cut scores that separate out clinical vs. non-clinical sub-groups. This research defines adult attention deficit (AAD) as a persistent pattern of inattention and related cognitive symptoms that occur with varying levels of severity. AAD creates additional challenges within the academic, work and social life of adults.

Although empirical research on the impact of attention disorders on organizational behavior is limited, research to date suggests that AAD is having a wide range of negative consequences in the workplace [23]. Research also suggests that employees with attention related disorders may excel on certain tasks and in certain work environments. This highlights the importance of identifying the particular competencies, tasks and work situations that are negatively affected by AAD.

Emotional intelligence and adult attention deficit

This research study examines the influence of AAD on the fifteen elements of EQ proposed by Bar-on [7]. The intrapersonal core component includes self-regard, emotional self-awareness, assertiveness, independence and self-actualization. Difficulties with activation, concentration, effort, managing emotional interference and use of short-term memory will undermine attempts at mastering key life tasks. Persistent difficulties with mastery will undermine general efficacy, self-regard and, eventually, assertiveness. Assertiveness in the face of persistent failures will produce escalating dissonance, which should ultimately lower expectations and proactive behavior if success is not available. The inability to concentrate, sustain effort and manage emotional interference will constrain emotional awareness and increase emotional dependence on others. All of the elements of AAD represent considerable barriers to realizing one’s full potential and living a meaningful and satisfying life; therefore, AAD represents a significant barrier to self-actualization.

H1: Self-regard will be negatively associated with adult attention deficit

H2: Emotional self-awareness will be negatively associated with adult attention deficit

H3: Assertiveness will be negatively associated with adult attention deficit

H4: Independence will be negatively associated with adult attention deficit

H5: Self-actualization will be negatively associated with adult attention deficit

The adaptability core component includes reality testing, flexibility and problem solving. The ability to assess the correspondence between what is experienced and what objectively exists will be constrained by difficulties with concentration, emotional interference and accessing short term memory. Difficulties with concentration, effort and emotional interference will limit the extent to which someone is able to adjust their emotions, thoughts and behavior to changing situations and conditions. The process of identifying, analyzing and taking actions to remove problems requires concentration and effort. This suggests that adults with AAD will have difficulty with problem solving.

H6: Reality testing will be negatively associated with adult attention deficit

H7: Flexibility will be negatively associated with adult attention deficit

H8: Problem solving will be negatively associated with adult attention deficit

The general mood core component includes optimism and happiness. The ability to look on the brighter side of life and remain hopeful in the face of adversity is difficult to do when afflicted with significant cognitive and emotional constraints. Difficulty achieving personally valued outcomes and states will limit enjoyment and satisfaction.

H9: Optimism will be negatively associated with adult attention deficit

H10: Happiness will be negatively associated with adult attention deficit

The interpersonal core component includes empathy, social responsibility and interpersonal relationships. Difficulties achieving a sense of personal value and efficacy should increase preoccupation with self. Heightened self-preoccupation, coupled with emotional interference and the inability to concentrate, will reduce the ability to empathize with others. All of the symptoms of AAD will constrain a person’s ability to be a cooperative, contributing and constructive member of a group. This does not mean that group members with AAD have anti-social intent; rather, ongoing difficulties and preoccupation with self make it difficult to display higher levels of social responsibility. Adults with AAD will have difficulty establishing and maintaining mutually satisfying relationships characterized by intimacy and the exchange of affection. Intimacy and the exchange of affection require attentiveness, effort and non-reactive expression of difficult emotions. AAD will limit such abilities, and in doing so, will undermine the process of establishing and maintaining healthy interpersonal relationships.

H11: Empathy will be negatively associated with adult attention deficit

H12: Social responsibility will be negatively associated with adult attention deficit

H13: Interpersonal relationships will be negatively associated with adult attention deficit

The stress management core component includes stress tolerance and impulse control. The ability to tolerate stress and control reactive behavior requires concentration, emotional control and effort. This suggests that adults with AAD will have difficulty with stress and controlling impulsive behavior.

H14: Stress tolerance will be negatively associated with adult attention deficit

H15: Impulse control will be negatively associated with adult attention deficit

All of the elements of EQ are potentially constrained by difficulties with activation, concentration, effort, managing emotional difficulties and memory. Therefore, total or global EQ should be negatively associated with AAD.

H16: Global emotional intelligence will be negatively associated with adult attention deficit

Methods

Subjects and procedures

The subjects were two hundred and nineteen university students enrolled in two business courses at public universities in the Northwestern United States. The subjects completed three measures of adult attention deficit and a multi-dimensional measure of EQ during the course of the semester. The hypotheses regarding associations between emotional intelligence and adult attention deficit were tested using Pearson product moment correlations.

Measures

Bar-on Emotional Intelligence Quotient (EQI). Emotional intelligence was measured using the Bar-on EQI [7]. The Bar-on EQI is a comprehensive instrument that measures fifteen conceptual components of emotional intelligence that are grouped into five core components. The following definitions of each of the fifteen conceptual components were taken from the professional manual accompanying the Bar-on EQI measure. The intrapersonal core component includes self-regard, emotional self-awareness, assertiveness, independence and self-actualization. Self-regard is defined as the ability to respect and accept oneself as basically good, and an example item is: “I’m happy with the type of person that I am.” Emotional self-awareness is defined as the ability to recognize one’s feelings, and an example item is: “It’s hard for me to describe my feelings.” Assertiveness is the ability to express feelings, beliefs, and thoughts, and defend one’s rights in a non-destructive way. An example item is: “It’s hard for me to say no when I want to.” Independence is the ability to be self-directed and self-controlled in one’s thinking and actions, and to be free of emotional dependency. An example item is: “I tend to cling to others.” Self-actualization is the ability to realize one’s potential capabilities, and an example item is: “I don’t have a good idea of what I want to do in life.”

The adaptability core component includes reality testing, flexibility and problem solving. Reality testing is the ability to assess the correspondence between what is experienced and what objectively exists. An example item is: “I tend to exaggerate.” Flexibility is the ability to adjust one’s emotions, thoughts and behavior to changing situations and conditions, and an example item is: “It’s easy for me to adjust to new conditions.” Problem solving is the ability to identify and define problems as well as to generate and implement potentially effective solutions. An example item is: “I generally get stuck when thinking about different ways of solving problems.”

The general mood core component includes optimism and happiness. Optimism is the ability to look at the brighter side of life and to maintain a positive attitude even in the face of adversity. An example item is: “I generally expect that things will turn out alright, despite setbacks from time to time.” Happiness is the ability to feel satisfied with one’s life, to enjoy oneself and others, and to have fun. An example item is: “I’m a fairly cheerful person.”

The interpersonal core component includes empathy, social responsibility and interpersonal relationships. Empathy is the ability to be aware of, to understand, and to appreciate the feelings of others, and an example item is: “I’m good at understanding the way other people feel.” Social responsibility is the ability to demonstrate oneself as a cooperative, contributing, and constructive member of one’s social group, and an example item is: “It doesn’t bother me to take advantage of other people, especially if they deserve it.” Interpersonal relationships is the ability to establish and maintain mutually satisfying relationships that are characterized by intimacy and by giving and receiving affection. An example item is: “I don’t keep in touch with friends.”

The stress management core component includes stress tolerance and impulse control. Stress tolerance is the ability to withstand adverse events and stressful situations without falling apart by actively and positively coping with the stress. An example item is: “I know how to keep calm in difficult situations.” Impulse control is the ability to resist or delay an impulse, drive, or temptation to act, and an example item is: “I tend to explode with anger easily.”

The measure contains one hundred and seventeen items that are answered using a five point scale (1=very seldom or not true of me, 2=seldom true of me, 3=sometimes true of me, 4=often true of me, 5=very often true of me or true of me).
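
The Bar-on EQI is a proprietary instrument with its own scoring procedure, which the paper does not reproduce. Purely as an illustration, the sketch below shows how 5-point Likert scales of this kind are typically scored, with negatively worded items (such as “I tend to cling to others”) reverse-keyed; all item names and responses here are hypothetical:

```python
def score_scale(responses: dict[str, int], items: list[str],
                reverse_keyed: set[str]) -> int:
    """Sum 5-point Likert responses (1-5), reverse-scoring negatively worded items.

    On a 1-5 scale, reverse scoring maps a response r to 6 - r.
    """
    total = 0
    for item in items:
        r = responses[item]
        total += (6 - r) if item in reverse_keyed else r
    return total

# Hypothetical two-item fragment of an independence-style scale.
responses = {"self_directed": 4, "cling_to_others": 2}
print(score_scale(responses, ["self_directed", "cling_to_others"],
                  reverse_keyed={"cling_to_others"}))  # 4 + (6 - 2) = 8
```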

Adult Attention Deficit (AAD): The Brown [48], attention deficit disorder scales were used to measure adult attention deficit. The instrument has been designed and tested for use with adults eighteen years and older. The forty self-report items on the Brown AAD scales are grouped into five clusters of conceptually related symptoms of AAD. Organizing and activating to work (cluster 1) measures difficulty in getting organized and started on tasks. An example item is: I am disorganized; I have excessive difficulty keeping track of plans, money, or time. Sustaining concentration (cluster 2) measures problems in sustaining attention while performing tasks. An example item is: I listen and try to pay attention (e.g., in a meeting, lecture, or conversation) but my mind often drifts; I miss out on desired information. Sustaining energy and effort (cluster 3) measures problems in keeping up consistent energy and effort while performing tasks. An example item is: I “run out of steam” and don’t follow through; my effort fades quickly. Managing affective interference (cluster 4) measures difficulty with moods and sensitivity to criticism. An example item is: I become irritated easily; I am “short-fused” with sudden outbursts of anger. Utilizing working memory and accessing recall (cluster 5) measures forgetfulness in daily routines and problems in recall of learned material. An example item is: I intend to do things but forget (e.g., turn off appliances, get things from store, return phone calls, keep appointments, pay bills, do assignments).

College ADHD Response Evaluation: In addition to the Brown measure of adult attention deficit, a measure of inattention was taken from the college adult attention deficit and hyperactivity response evaluation (CARE) [51]. The CARE measure also included a separate inattention scale containing the DSM-IV items used to identify the inattention component of ADHD. The CARE measure of inattention was developed to assist with the increasing number of requests for academic accommodations for college students with ADHD. It is the first measure of ADHD expressly designed for individuals at the university level. Twenty-one items were used to measure the inattention component of the CARE, and an example item is: “I notice important details on an assignment.” There are nine DSM-IV items used to identify the inattention component of ADHD, and an example item is: “I avoid, dislike or am reluctant to engage in tasks that require sustained mental effort (such as schoolwork or homework).” For both the CARE and the DSM-IV inattention scales, subjects rated their level of agreement with each item using a three point scale (0=disagree, 1=undecided, 2=agree).

Results

Descriptive statistics

Means, standard deviations, internal reliabilities and correlations appear in Table 1. All variable distributions are approximately normal and demonstrate reasonable variation across their respective scales. Cronbach alpha coefficients ranged from α=0.73 to α=0.90, suggesting good internal reliability. No univariate or bivariate outliers were considered problematic, and the product moment correlations revealed significant associations between the variables (Table 1).
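
The paper reports the alpha coefficients without showing the computation. For reference, a minimal sketch of how Cronbach’s alpha is computed from an item-response matrix, checked against simulated data (the simulation parameters are arbitrary):

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) response matrix:
    alpha = k/(k-1) * (1 - sum(item variances) / variance(total score))."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Quick check on simulated data: 8 noisy indicators of one latent trait,
# for 219 simulated respondents (matching the study's sample size).
rng = np.random.default_rng(0)
latent = rng.normal(size=(219, 1))
sim = latent + rng.normal(scale=1.0, size=(219, 8))
print(round(cronbach_alpha(sim), 2))
```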

Empirical tests of hypotheses

Unless stated otherwise, all hypothesized correlations were in the expected direction, and all reported statistical probabilities are based on two-tailed tests (α=0.05). Given the sample size and the use of two-tailed significance tests, correlations above 0.14 in absolute value are statistically significant.
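
The 0.14 threshold can be verified from the critical value of the t distribution with n − 2 = 217 degrees of freedom, using r_crit = t_crit / sqrt(t_crit² + df). A quick check, assuming SciPy is available:

```python
from scipy import stats

n = 219
df = n - 2
t_crit = stats.t.ppf(1 - 0.05 / 2, df)        # two-tailed, alpha = 0.05
r_crit = t_crit / (t_crit**2 + df) ** 0.5
print(round(t_crit, 3), round(r_crit, 3))      # ~1.971, ~0.133
```

So any correlation exceeding roughly 0.13–0.14 in absolute value is significant at the 0.05 level for this sample.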

Hypothesis 1: The correlations between self-regard and adult attention deficit were statistically significant (Brown r = -0.38, Care r = -0.33, DSM-IV r = -0.41). This provides support for the hypothesis that self-regard is negatively associated with adult attention deficit.

Hypothesis 2: The correlations between emotional self-awareness and adult attention deficit were statistically significant (Brown r = -0.33, Care r = -0.30, DSM-IV r = -0.34). This provides support for the hypothesis that emotional self-awareness is negatively associated with adult attention deficit.

Hypothesis 3: The correlations between assertiveness and adult attention deficit were statistically significant (Brown r = -0.36, Care r = -0.20, DSM-IV r = -0.33). This provides support for the hypothesis that assertiveness is negatively associated with adult attention deficit.

Hypothesis 4: The correlations between independence and adult attention deficit were statistically significant (Brown r = -0.34, Care r = -0.18, DSM-IV r = -0.28). This provides support for the hypothesis that independence is negatively associated with adult attention deficit.

Hypothesis 5: The correlations between self-actualization and adult attention deficit were statistically significant (Brown r = -0.39, Care r = -0.34, DSM-IV r = -0.40). This provides support for the hypothesis that self-actualization is negatively associated with adult attention deficit.

Hypothesis 6: The correlations between reality testing and adult attention deficit were statistically significant (Brown r = -0.42, Care r = -0.44, DSM-IV r = -0.44). This provides support for the hypothesis that reality testing is negatively associated with adult attention deficit.

Hypothesis 7: The correlations between flexibility and adult attention deficit were statistically significant (Brown r = -0.25, Care r = -0.15, DSM-IV r = -0.24). This provides support for the hypothesis that flexibility is negatively associated with adult attention deficit.

Hypothesis 8: The correlations between problem solving and adult attention deficit were statistically significant (Brown r = -0.20, Care r = -0.25, DSM-IV r = -0.19). This provides support for the hypothesis that problem solving is negatively associated with adult attention deficit.

Hypothesis 9: The correlations between optimism and adult attention deficit were statistically significant (Brown r = -0.29, Care r = -0.25, DSM-IV r = -0.33). This provides support for the hypothesis that optimism is negatively associated with adult attention deficit.

Hypothesis 10: The correlations between happiness and adult attention deficit were statistically significant (Brown r = -0.22, Care r = -0.22, DSM-IV r = -0.25). This provides support for the hypothesis that happiness is negatively associated with adult attention deficit.

Hypothesis 11: The correlations between empathy and adult attention deficit were mostly statistically significant (Brown r = -0.02, Care r = -0.19, DSM-IV r = -0.16) except for the correlation with Brown AAD which was non-significant. This provides some support for the hypothesis that empathy is negatively associated with adult attention deficit.

Hypothesis 12: The correlations between social responsibility and adult attention deficit were mostly statistically significant (Brown r = -0.04, Care r = -0.28, DSM-IV r = -0.24), except for the correlation with Brown AAD which was non-significant. This provides some support for the hypothesis that social responsibility is negatively associated with adult attention deficit.

Hypothesis 13: The correlations between interpersonal relationships and adult attention deficit were statistically significant (Brown r = -0.20, Care r = -0.21, DSM-IV r = -0.28). This provides support for the hypothesis that interpersonal relationships is negatively associated with adult attention deficit.

Hypothesis 14: The correlations between stress tolerance and adult attention deficit were statistically significant (Brown r = -0.41, p = 0.00; Care r = -0.31, p = 0.00; DSM-IV r = -0.40, p = 0.00). This provides support for the hypothesis that stress tolerance is negatively associated with adult attention deficit.

Hypothesis 15: The correlations between impulse control and adult attention deficit were statistically significant (Brown r = -0.22, Care r = -0.35, DSM-IV r = -0.25). This provides support for the hypothesis that impulse control is negatively associated with adult attention deficit.

Hypothesis 16: The correlations between total EQ and adult attention deficit were statistically significant (Brown r = -0.39, Care r = -0.39, DSM-IV r = -0.44). This provides support for the hypothesis that global emotional intelligence is negatively associated with adult attention deficit.

Multivariate Exploration

A multivariate exploration of the relationship between the dimensions of EQ and a composite measure of AAD, derived by adding the standardized scores from the three measures of AAD (Brown, CARE and DSM-IV), is contained in Table 2. After simultaneously entering all the dimensions of EQ into the regression, self-actualization (p=0.03), reality-testing (p=0.00), happiness (p=0.00) and stress tolerance (p=0.05) remained significant. This suggests that the EQ dimensions of self-actualization, reality-testing, happiness and stress tolerance demonstrate independent and significant associations with AAD [52-57].
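
The paper does not show the modeling code; the sketch below reproduces the described procedure under stated assumptions: z-standardize the three AAD measures, sum them into a composite, and regress the composite simultaneously on all fifteen EQ dimensions. The file name and column names are hypothetical.

```python
import pandas as pd
import statsmodels.api as sm
from scipy import stats

# Hypothetical data frame: one row per subject, columns for the three AAD
# measures and the fifteen Bar-on EQ dimensions (names are illustrative).
df = pd.read_csv("study_data.csv")
aad_cols = ["brown", "care", "dsm_inattention"]
eq_cols = ["self_regard", "self_awareness", "assertiveness", "independence",
           "self_actualization", "reality_testing", "flexibility",
           "problem_solving", "optimism", "happiness", "empathy",
           "social_responsibility", "interpersonal", "stress_tolerance",
           "impulse_control"]

# Composite AAD: sum of the standardized (z) scores of the three measures.
df["aad_composite"] = stats.zscore(df[aad_cols]).sum(axis=1)

# Simultaneous multiple regression of the composite on all EQ dimensions.
model = sm.OLS(df["aad_composite"], sm.add_constant(df[eq_cols])).fit()
print(model.summary())  # inspect which dimensions remain significant
```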

Discussion

The global measure of EQ (Bar-on EQI) is significantly negatively correlated with adult attention deficit. In general, this suggests that adult attention deficit is associated with a variety of emotional challenges ranging from assertiveness to self-actualization. The EQ dimensions of self-regard, self-actualization, reality-testing and stress-tolerance displayed the strongest univariate correlations, and all except self-regard (p=0.08) remained significant when simultaneously controlling for the influence of the other dimensions of EQ.

The results suggest that AAD is a potential constraint on the process of developing accurate representations of external situations (reality testing) and may evoke defensive, reality-distorting processes arising from experienced threats to self-efficacy. The significant association with poor stress tolerance supports the increased likelihood of performance disruption and reduction in self-efficacy. From a higher-order perspective, the results suggest that AAD is associated with experiential challenges, including difficulty attaining general happiness and achieving a sense of self-actualization. In general, AAD is associated both with lower-order challenges like situational assessment and stress management, and with experiential challenges like general happiness and self-actualization.

The directionality of the relationship between EQ and AAD requires further exploration to determine potential types of intervention and assistance. To the extent that EQ contributes to AAD, activities that emphasize improvement in emotional competency may assist in reducing or constraining the symptoms and performance challenges associated with AAD. To the extent that AAD contributes to EQ, addressing key dimensions of AAD, such as difficulty activating to work and sustaining attention on required tasks, may support significant emotional dynamics like stress tolerance and ultimately contribute to greater self-actualization and happiness.

To Know More About Global Journal of Intellectual & Developmental Disabilities Please click on:
https://juniperpublishers.com/gjidd/index.php

For more Open Access Journals in Juniper Publishers please click on: 
https://juniperpublishers.com/index.php
