An estimated 1 in 36 children are diagnosed with autism spectrum disorder (ASD), according to the CDC's Autism and Developmental Disabilities Monitoring (ADDM) Network. Many of these children develop adaptive coping mechanisms as they mature, making autism harder to identify in adulthood and leaving undiagnosed, untreated individuals still struggling. The goal of this project is to develop a web application that allows individuals to assess their ASD symptoms. Computer vision technologies, namely the GazeCloud API and OpenCV, were integrated for accurate eye-tracking, and the Flask framework was used throughout this work. Users are required to complete a 50-question autism screening test and then perform a series of tasks designed to test their social skills. One task involves navigating a well-known user interface, such as Facebook or Amazon, while another requires reading text passages. The proposed solution is characterized by two features: it uses a regular webcam for eye-tracking, removing the need for expensive equipment, and it includes a simple UI that makes it usable by anyone.
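The paper does not include an implementation listing, but a minimal sketch of how such a Flask application might wire the screening questionnaire to the eye-tracking task pages is shown below. All route names, template names, and the `QUESTIONS` placeholder are hypothetical, not the authors' actual code.

```python
# Minimal sketch of the screening-then-tasks flow in Flask
# (hypothetical routes and templates; not the authors' implementation).
from flask import Flask, render_template, request, session, redirect, url_for

app = Flask(__name__)
app.secret_key = "replace-with-a-real-secret"  # required for session storage

QUESTIONS = [...]  # the 50 screening questions would be loaded here

@app.route("/screening", methods=["GET", "POST"])
def screening():
    if request.method == "POST":
        # Store the questionnaire answers, then move on to the tasks.
        session["answers"] = request.form.to_dict()
        return redirect(url_for("tasks"))
    return render_template("screening.html", questions=QUESTIONS)

@app.route("/tasks")
def tasks():
    # Serves the pages (UI navigation, reading passages) that the
    # GazeCloud script tracks in the participant's browser.
    return render_template("tasks.html")

if __name__ == "__main__":
    app.run(debug=True)
```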
https://papers.ssrn.com/sol3/papers.cfm?abstract_id=5091179
GazeCloud API
GazeCloud is a free eye-tracking software module that can be easily embedded in web applications. Through a JavaScript file, it provides real-time gaze data collection and extraction.

Integration
The GazeCloud API is accessible via its official website, where an API key is generated for a specified domain. For this proposed work, we integrated GazeCloud's free edition on a semi-dedicated .com domain instead of a dedicated one.
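The paper does not specify how gaze samples reach the server. One plausible pattern, sketched below, is for the browser-side GazeCloud callback to POST each sample to a Flask endpoint. The endpoint name and the payload fields (`x`, `y`, `t`, `task`) are assumptions made for illustration.

```python
# Hypothetical Flask endpoint collecting gaze samples forwarded by the
# GazeCloud JavaScript callback running in the participant's browser.
import csv
from flask import Flask, request, jsonify

app = Flask(__name__)

@app.route("/gaze", methods=["POST"])
def collect_gaze():
    sample = request.get_json()
    # Assumed payload: gaze coordinates in document pixels, a timestamp,
    # and the task the participant was performing at the time.
    row = [sample.get("x"), sample.get("y"), sample.get("t"), sample.get("task")]
    with open("gaze_log.csv", "a", newline="") as f:
        csv.writer(f).writerow(row)
    return jsonify(status="ok")
```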
This paper presented a web application for autism spectrum disorder (ASD) detection using eye-tracking and machine learning, with models reaching a promising 83% accuracy on test sets; considerably more work is needed, however, before it is ready for real-life applications. The major limitation is the small size and homogeneity of readily available datasets. The feature set should also be enriched beyond gaze location and basic eye movements: adding features such as pupil dilation, blink rate, fixation duration, and saccade characteristics (amplitude, velocity, latency) could capture subtler ASD indicators. Exploring more advanced machine learning architectures, especially deep learning models such as RNNs and CNNs, is left as future work: RNNs can model the temporal dynamics of eye movements because they handle sequential data, while CNNs can find spatial patterns in the gaze data. To translate this research into clinical practice, the developed model should be integrated into a clinical decision support system (CDSS). A well-designed CDSS would seamlessly incorporate the eye-tracking tool into existing clinical workflows, providing clinicians with an interface for interpreting the model's output alongside other diagnostic assessments. Data privacy and security within the CDSS framework are crucial, and compliance with HIPAA and other relevant regulations is required.
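To make the proposed feature enrichment concrete, the sketch below derives two of the suggested features, fixation duration and saccade velocity, from a sequence of timestamped gaze samples. The simple dispersion threshold and the sample format are illustrative assumptions for this sketch, not part of the paper's method.

```python
# Illustrative extraction of fixation durations and saccade velocities
# from timestamped gaze samples (x, y in pixels, t in seconds).
# The fixed dispersion threshold is an assumption for the sketch.
import math

def extract_features(samples, dispersion_px=30.0):
    if not samples:
        return [], []
    fixation_durations, saccade_velocities = [], []
    start = 0  # index where the current fixation began
    for i in range(1, len(samples)):
        x0, y0, t0 = samples[start]
        x1, y1, t1 = samples[i]
        if math.hypot(x1 - x0, y1 - y0) > dispersion_px:
            # Gaze left the dispersion window: close the fixation and
            # treat the jump to the new point as a saccade.
            fixation_durations.append(samples[i - 1][2] - t0)
            prev_x, prev_y, prev_t = samples[i - 1]
            dt = t1 - prev_t
            if dt > 0:
                amplitude = math.hypot(x1 - prev_x, y1 - prev_y)
                saccade_velocities.append(amplitude / dt)  # px per second
            start = i
    # Close the final fixation at the end of the recording.
    fixation_durations.append(samples[-1][2] - samples[start][2])
    return fixation_durations, saccade_velocities
```

A production pipeline would more likely use an established event-detection algorithm (e.g., I-DT or I-VT) and report saccade amplitude in degrees of visual angle, but the sketch shows how these features follow directly from the raw gaze stream.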