10th International Conference on Data Mining and Applications (DMA 2024)

April 20 ~ 21, 2024, Melbourne, Australia

Accepted Papers


Automating Test Scripts for Android UI Testing

Anon, University of Waikato, New Zealand

ABSTRACT

The Espresso capture/replay testing tool for Android applications creates tests that are prone to fragility: when small changes occur to the user interface, tests break and cannot be rerun. To reduce the fragility inherent in Espresso tests and its impact, we take a model-driven development approach to test generation. Using interaction sequence models as the basis for generation, we are able to create test scripts that can be run in Android Studio identically to manually recorded tests. This process simplifies scripts compared with those generated by recording and reduces the time developers need to create and maintain the test suite, resulting in higher-quality testing and validation of Android user interfaces.

KEYWORDS

Capture/Replay Testing, Android, Model-driven development, User interfaces.


Root-Based 3D Reconstruction of Multiple Human Poses

Ayushya Rao, Sumer Raravikar, Tech Mahindra, Pune, Maharashtra, India

ABSTRACT

3D pose estimation and representation have gained significant attention in recent years due to their crucial roles in virtual reality, computer-aided design, and motion capture applications. In this paper, we will discuss the current state-of-the-art techniques in 3D pose estimation, which involve detecting the location of key body joints and estimating their orientations in 3D space. To address the representation of 3D poses in a virtual environment, we will focus on the conversion of the estimated 3D poses into formats suitable for incorporation into the virtual environment. This process may involve transforming the 3D pose coordinates into a different coordinate system or scaling the pose data to match the size and proportions of the virtual avatar. In the paper, we will present a novel pipeline to convert 2D images representing the poses to 3D humanoids in a virtual environment. Additionally, we will evaluate the performance of various techniques for 3D pose estimation and representation in terms of their accuracy, speed, and scalability. To demonstrate the effectiveness of our proposed techniques, we will present results comparing them to state-of-the-art methods in the field. Our goal in this paper is to summarize the main findings of our study and highlight the potential of our proposed techniques for advancing the field of 3D pose estimation and representation in virtual environments. By providing a comprehensive review and experimental evaluation of 3D pose estimation and representation techniques, this paper aims to serve as a valuable resource for researchers, developers, and practitioners in the fields of computer vision, AI, and virtual reality.
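The retargeting step described above (converting estimated 3D pose coordinates into a virtual avatar's coordinate system and proportions) can be sketched generically. The function, joint layout, and numbers below are illustrative assumptions, not the paper's pipeline:

```python
import numpy as np

def retarget(joints: np.ndarray, scale: float, world_offset: np.ndarray) -> np.ndarray:
    """Scale joint positions about the root joint, then translate into world space.

    Generic sketch of pose retargeting: scaling adapts the pose to the
    avatar's proportions, the offset places it in the virtual world frame.
    """
    root = joints[0]
    return (joints - root) * scale + root + world_offset

# Three toy joints (root, hip, head) in metres, in the estimator's frame.
joints = np.array([[0.0, 0.0, 0.0],
                   [0.0, 0.9, 0.0],
                   [0.0, 1.7, 0.0]])

# Move the pose to world position (5, 0, 2) and scale it 1.2x for a taller avatar.
avatar = retarget(joints, scale=1.2, world_offset=np.array([5.0, 0.0, 2.0]))
print(avatar)
```

A real pipeline would also handle rotations between coordinate conventions (e.g. Y-up vs Z-up); this sketch covers only the scale-and-translate case the abstract mentions.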

KEYWORDS

3D pose estimation, Virtual environment, Representation techniques, Performance evaluation, Virtual reality.


Privacy-preserving IoT Intrusion Detection: Challenges and Solutions in Implementing the CSAI-4-CPS Model

Hebert Silva1, 2 and Regina Moraes1, 3, 1Universidade Estadual de Campinas, Limeira, Brazil, 2National Industrial Training Service, SENAI, Sao Paulo, Brazil, 3University of Coimbra, CISUC, DEI, Coimbra, Portugal

ABSTRACT

The CSAI-4-CPS model, which was briefly described in previous work, leverages federated learning to collaboratively train machine learning models, providing accurate and up-to-date results while preserving data privacy. This approach is particularly beneficial in complex and dynamic Cyber-Physical Systems (CPS) environments where traditional centralized machine learning models may fall short. This paper presents and describes the expanded model with its particularities, and it also introduces the first validation of the proposal using an implemented framework. The expansion includes the real-time detection of new threats, the verification and validation of results at nodes benefiting from federated learning, false positive consideration, and comparing the results with and without the adoption of the model. The model was implemented in an IoT system because such systems often represent the most challenging scenarios in CPS cybersecurity. In most cases, IoT devices are part of a more complex CPS framework, where they are typically more vulnerable assets. The application of CSAI-4-CPS to predict malicious traffic in Internet of Things (IoT) networks appears promising. The results demonstrate that the model effectively detects intrusions within these datasets. By employing federated learning and a self-adaptive architecture, the model maintains its accuracy and relevance as new data emerges.
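The federated learning idea the model builds on can be illustrated with a minimal FedAvg-style sketch: each node trains on private local data and only model weights are shared and averaged. The data, node count, and least-squares task below are invented for illustration and are not the CSAI-4-CPS implementation:

```python
import numpy as np

# Minimal federated-averaging sketch (FedAvg-style). Generic illustration
# of collaborative training without sharing raw data; not the paper's model.

def local_update(weights: np.ndarray, X: np.ndarray, y: np.ndarray,
                 lr: float = 0.1, steps: int = 20) -> np.ndarray:
    """A few steps of local least-squares gradient descent on one node's data."""
    w = weights.copy()
    for _ in range(steps):
        grad = X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

rng = np.random.default_rng(1)
true_w = np.array([2.0, -1.0])

# Three "IoT nodes", each with private local data that never leaves the node.
nodes = []
for _ in range(3):
    X = rng.normal(size=(50, 2))
    y = X @ true_w + 0.01 * rng.normal(size=50)
    nodes.append((X, y))

global_w = np.zeros(2)
for _ in range(30):
    # Each node trains locally; only the weight vectors are shared and averaged.
    local_ws = [local_update(global_w, X, y) for X, y in nodes]
    global_w = np.mean(local_ws, axis=0)

print(np.round(global_w, 2))
```

The aggregated model approaches the weights that fit all nodes' data, even though no node ever exposed its raw samples, which is the privacy property the abstract relies on.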

KEYWORDS

Federated Learning, Data Privacy, Cybersecurity, CPS, IoT.


Combination of Industrial Engineering Principles Into the Art Industry: Optimization of Movie Production Planning by Genetic Algorithm

Ataollah Shahpanah1, Mostafa Shahpanah2, Syed Ahmad Helmi3, Azanizawati Ma’aram4, 1Faculty of Mechanical Engineering, Universiti Teknologi Malaysia, 2Accademia di Belle Arti, Rome, Italy, 3School of Engineering Education, Purdue University, 4Faculty of Mechanical Engineering, Universiti Teknologi Malaysia

ABSTRACT

Today, with the industrialization of all aspects of human life, industrial engineering offers a powerful route to productivity and growth. By applying its principles to the production and service systems that touch daily life, we can help meet human needs at the highest quality and the lowest possible cost to the manufacturer or service provider. One sector that, despite industrialization, has benefited least from industrial engineering is the art and entertainment industry. In this article, the production process of a short film, which was in the pre-production stage at AM Production in Italy, was chosen as a case study, and we attempted to optimize production by applying industrial engineering principles to the project's production planning. By simulating the film-making process with a genetic algorithm, finding the optimal production point, and comparing it with the project's current plan, we show that industrial engineering can reduce the production costs of artistic projects and increase profit. By applying these principles, we were able to reduce the cost of film production from approximately 60,000 dollars to nearly 40,000 dollars and offer a more regular plan that can even have a positive impact on the quality of the work. An important novelty of this article is implementing industrial engineering principles and optimizing the project without interfering with, changing, or disrupting its artistic approach.

KEYWORDS

Industrial Engineering; Production Planning; Supply Chain; Optimization; Genetic Algorithm; Art Industry; Production Management; Industrialization.


Evolution of the Research Topics of LIS in China: Based on Content Analysis of CSSCI Journal Papers From 2006 to 2021

Yaqing Wang1, Ruili Geng2 and Xinying An1, 1Institute of Medical Information, Chinese Academy of Medical Sciences, Beijing, China, 2School of Information Management, Zhengzhou University, Zhengzhou, China

ABSTRACT

Analyzing the evolution path of the discipline and capturing how quickly new research is produced is of great significance for understanding how rapidly developing technologies and increasingly complex social problems influence library and information research, as well as the logical mechanisms behind it. [Method/Process] This study uses a sampling method to select 11,965 library and information science journal papers indexed by CSSCI in 2006, 2012, 2018 and 2021 for content analysis, and analyzes the changes and development of research topics in library and information science. [Result/Conclusion] Pictorial practice has remained an important research theme in this field over the years. Sub-topics such as "public culture", "digital humanities", "emergencies" and "social media analysis" were almost absent in 2006 but took up a large proportion in 2018 and 2021, with "emergencies" the latest research hotspot. In general, changes in library and information research topics are logically related to new technology and the social background: traditional topics are declining, emerging topics are rising, and research topics that cross and merge with other disciplines are constantly emerging.

KEYWORDS

LIS; Research Theme; Discipline Development; Content Analysis.


The Impact of an AI Tool on Engineering at ANZ Bank: An Empirical Study on GitHub Copilot Within a Corporate Environment

Sayan Chatterjee, Ching Louis Liu, Gareth Rowland and Tim Hogarth, The Australia and New Zealand Banking Group Limited, Melbourne, Australia

ABSTRACT

The increasing popularity of AI, particularly Large Language Models (LLMs), has significantly impacted various domains, including Software Engineering. This study explores the integration of AI tools in software engineering practices within a large organization. We focus on ANZ Bank, which employs over 5000 engineers covering all aspects of the software development life cycle. This paper details an experiment conducted using GitHub Copilot, a notable AI tool, within a controlled environment to evaluate its effectiveness in real-world engineering tasks. Additionally, this paper shares initial findings on the productivity improvements observed after GitHub Copilot was adopted on a large scale, with about 1000 engineers using it. ANZ Bank's six-week experiment with GitHub Copilot included two weeks of preparation and four weeks of active testing. The study evaluated participant sentiment and the tool's impact on productivity, code quality, and security. Initially, participants used GitHub Copilot for proposed use-cases, with their feedback gathered through regular surveys. In the second phase, they were divided into Control and Copilot groups, each tackling the same Python challenges, and their experiences were again surveyed. Results showed a notable boost in productivity and code quality with GitHub Copilot, though its impact on code security remained inconclusive. Participant responses were overall positive, confirming GitHub Copilot's effectiveness in large-scale software engineering environments. Early data from 1000 engineers also indicated a significant increase in productivity and job satisfaction.

KEYWORDS

Copilot, GitHub, ANZ Bank, Code Suggestions, Code Debugging, Experiment, Software Engineering, AI


Relevance Theory and Interpretation of Reflexive Anaphora in VP-Ellipsis by Chinese-speaking Learners of English

Hong GuangYing, English Department, University of Colorado Denver, Denver, USA

ABSTRACT

The experiment reported in this study presented evidence that the relevance-theoretic comprehension procedure (Carston, 2000; Wilson, 2000) constrained the way second language (L2) learners process reflexive anaphora in VP-ellipsis. Rather than making comparisons among different interpretations, the L2 learners, following a path of least effort in computing cognitive effects, plowed ahead with the single most accessible “sloppy” interpretation. The finding suggests that there may be a relevance-based comprehension module (Sperber, 2000), which seems to be hardwired, domain-specific and informationally encapsulated.

KEYWORDS

Relevance Theory, Reflexive Anaphora, VP-Ellipsis, Strict-sloppy ambiguity


In-Context Learning for Scalable and Online Hallucination Detection in RAGs

Nicolò Cosimo Albanese, Amazon Web Services (AWS), Milan, Italy

ABSTRACT

This study presents a novel method for online, real-time, and sentence-level factuality evaluation in Retrieval Augmented Generation (RAG) systems. Our lightweight and accurate approach enables online application in the absence of a ground truth, utilizing in-context learning to assess Large Language Models' (LLMs) response reliability. We validate our method against existing frameworks, including prompt-based (RAGAS) and similarity-based (BERTScore) approaches. Our goal is to equip end users with tools for adopting trustworthy LLM-based solutions, grounded in proprietary documents and providing transparency on generated sentences without requiring model retraining or ground truth answers. This framework offers a simple and practical solution for evaluating RAG-generated response factuality, adaptable across domains for question answering and fact-checking applications.
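As a rough illustration of sentence-level factuality grading via in-context learning, a grader prompt might be assembled as below. The template, labels, and function name are assumptions for illustration, not the paper's actual prompt:

```python
# Sketch of building a sentence-level factuality-grading prompt for an LLM.
# The template and SUPPORTED/NOT_SUPPORTED labels are illustrative
# assumptions, not the paper's implementation.

def build_factuality_prompt(context: str, sentences: list) -> str:
    """Ask an LLM to grade each generated sentence against the retrieved context."""
    numbered = "\n".join(f"{i + 1}. {s}" for i, s in enumerate(sentences))
    return (
        "You are grading a RAG answer for factuality.\n"
        f"Context:\n{context}\n\n"
        "For each sentence below, answer SUPPORTED or NOT_SUPPORTED "
        "based only on the context.\n"
        f"Sentences:\n{numbered}\n"
    )

prompt = build_factuality_prompt(
    context="The warranty covers parts for two years.",
    sentences=["The warranty lasts two years.", "Labor is covered for life."],
)
print(prompt)
```

The returned prompt would then be sent to the LLM, whose per-sentence labels provide the ground-truth-free transparency the abstract describes; no model retraining is involved.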

KEYWORDS

Large Language Models, Hallucinations, Prompt Engineering, Generative AI, Responsible AI.


Cross-dialect Sentence Transformation: a Comparative Analysis of Language Models for Adapting Sentences to British English

Shashwat Mookherjee and Shruti Dutta, Indian Institute of Technology Madras, India

ABSTRACT

This study explores linguistic distinctions among American, Indian, and Irish English dialects and assesses various Large Language Models (LLMs) in their ability to generate British English translations from these dialects. Using cosine similarity analysis, the study measures the linguistic proximity between original British English translations and those produced by LLMs for each dialect. The findings reveal that Indian and Irish English translations maintain notably high similarity scores, suggesting strong linguistic alignment with British English. In contrast, American English exhibits slightly lower similarity, reflecting its distinct linguistic traits. Additionally, the choice of LLM significantly impacts translation quality, with Llama-2-70b consistently demonstrating superior performance. The study underscores the importance of selecting the right model for dialect translation, emphasizing the role of linguistic expertise and contextual understanding in achieving accurate translations.
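The cosine-similarity measurement described above can be sketched with plain NumPy. The toy vectors below stand in for sentence embeddings; the embedding model itself is an assumption, not reproduced here:

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine of the angle between two embedding vectors (1.0 = identical direction)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy stand-ins for sentence embeddings of a reference British English
# translation and two model outputs (real embeddings would come from an
# encoder model, which is assumed here).
reference = np.array([0.9, 0.1, 0.3])
model_a   = np.array([0.8, 0.2, 0.3])   # close to the reference
model_b   = np.array([0.1, 0.9, 0.1])   # far from the reference

print(round(cosine_similarity(reference, model_a), 3))
print(round(cosine_similarity(reference, model_b), 3))
```

Scores near 1.0 indicate strong alignment with the reference translation, which is how the study ranks each dialect's LLM output.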


Unraveling the Threads of Deskillization: Navigating the Intersection of AI, Human Agency, and Societal Values in the Contemporary Technological Landscape

Steffen Turnbull, Institute for AI Safety & Security, DLR German Aerospace Center, Sankt Augustin, Germany

ABSTRACT

This paper examines the multifaceted impact of artificial intelligence on human agency in contemporary society. It investigates how AI's integration has led to the mechanization of tasks, erosion of human competence, and complicity in deskilling. Drawing on parallels with Ivan Illich's critique of industrial institutions, the paper explores the historical patterns of technological growth and the prioritization of efficiency over human agency. It underscores the transformative potential of personalized interactions in countering the dehumanizing effects of automation. Furthermore, the paper offers a starting point for further research towards a practical guide for reclaiming and preserving human agency in the AI era, emphasizing mindful engagement with technology, resisting total automation, reimagining education, prioritizing human-centered design, fostering ethical AI frameworks, cultivating empathy, and encouraging informed participation. It calls for a balanced approach to AI that leverages technology to augment human abilities while safeguarding our values and humanity. In general, this paper provides an alternative but comprehensive exploration of AI's challenges and opportunities, inviting individuals and societies to navigate the evolving human-machine relationship while preserving the essence of human agency.

KEYWORDS

Artificial Intelligence, Society, Post-phenomenology, Agency, Human-machine Interactions.


Prediction of Gender and Handedness From Offline Handwriting Using Convolutional Neural Network With Canny Edge Detection

Donata D. Acula1,2, John Angelo C. Algarne1, Jasmine Joy D. Lam1, Lester John O. Quilaman1 and Leira Marie D. Teodoro1, 1Department of Computer Science, University of Santo Tomas, Philippines, 2Research Center for Natural and Applied Sciences, University of Santo Tomas, Philippines

ABSTRACT

Handwriting classification based on a writer's demographics, such as gender and handedness, has been an important discipline in the fields of forensic science and biometric security. Although there are already experts in forensic science called Forensic Document Examiners, their work can be affected by a lack of efficiency and the risk of human error. As there are only limited studies on the handwriting demographics problem using Convolutional Neural Networks (CNN), this research implemented a system that predicts the gender, handedness, and combined gender-and-handedness of offline handwritten images from the IAM Handwriting Database 3.0 using 2-Layer and 3-Layer CNN with Canny Edge Detection (CED). The researchers found that the base model 2L-CNN without CED had the best performance in the binary classes, gender and handedness, with an overall accuracy of 68.5% and 89.75%, respectively. On the other hand, 3L-CNN without CED had the best average accuracy of 51.36% in the combined gender-and-handedness class. It was observed that Canny Edge Detection is not an effective preprocessing technique in handwriting classification, as it worsened the performance of its counterpart without CED in most of the models.
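As a rough illustration of what edge-detection preprocessing does to a handwriting image, here is a gradient-threshold edge map in plain NumPy. This is only the Sobel-gradient stage of Canny (the full algorithm adds Gaussian smoothing, non-maximum suppression, and hysteresis thresholding) and is a simplified stand-in, not the study's actual pipeline:

```python
import numpy as np

def sobel_edge_map(img: np.ndarray, threshold: float = 0.5) -> np.ndarray:
    """Binary edge map from Sobel gradient magnitude.

    Simplified stand-in for Canny: it keeps only the gradient-magnitude
    stage and a single threshold.
    """
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    h, w = img.shape
    mag = np.zeros((h, w))
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            patch = img[y - 1:y + 2, x - 1:x + 2]
            gx = np.sum(kx * patch)   # horizontal intensity change
            gy = np.sum(ky * patch)   # vertical intensity change
            mag[y, x] = np.hypot(gx, gy)
    if mag.max() > 0:
        mag /= mag.max()              # normalize to [0, 1]
    return (mag > threshold).astype(np.uint8)

# A toy "stroke": a step from dark to light between columns 3 and 4.
img = np.zeros((8, 8))
img[:, 4:] = 1.0
edges = sobel_edge_map(img)
print(edges.sum())  # nonzero only along the stroke boundary
```

The edge map discards intensity information and keeps only stroke outlines, which is exactly the information loss the study found to hurt CNN accuracy.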

KEYWORDS

Neural Networks, Edge Detection, Offline Handwriting, Machine Learning, Deep Learning, Preprocessing.


Development of Human-robot Interaction System Based on Artificial Intelligence

Nguyen Minh Trieu and Nguyen Truong Thinh, Institute of Intelligent and Interactive Technology, University of Economics Ho Chi Minh City, Ho Chi Minh City, Vietnam

ABSTRACT

Nowadays, service robots are becoming more popular, so research on human-robot interaction systems plays an important role. This study presents an interactive system for a humanoid robot, aiming to enable natural communication with humans. The first step involves identifying the interactor's identity and emotions to facilitate subsequent communication. To achieve this, the VGG19 model is fine-tuned for emotion recognition, and a pre-trained model is used to identify the user's identity. The system recognizes the speaker's speech and categorizes it as either knowledge text or dialogue text. Text classification relies on the CNN-BiLSTM model, which achieves an impressive accuracy of 95.02%. Responses to knowledge text are generated by a large language model, while responses to dialogue text rely on a trained artificial intelligence model. The objective of this research is to create a humanoid robot model capable of effectively interacting with humans. The AI algorithms are combined with the movements of the head, the two arms, and the robot's base. In the experimental results, the user satisfaction level was found to be 78%, and the robot's awareness level reached 83%. Notably, the robot exhibited a minimal error rate of only 10%. In the emotion recognition experiment, the robot achieved an accuracy of 84.14%, with no conflicts arising during the experiment.

KEYWORDS

Robotics, Human-Robot Interaction, Artificial Intelligence, Computer Vision, Natural Language Processing, Humanoid Robot.


Predicting Diabetes With Machine Learning Analysis of Income and Health Factors

Fariba Jafari Horestani, M. Mehdi Owrang O, Department of Computer Science, American University, Washington, DC 20016

ABSTRACT

In this study, we delve into the intricate relationships between diabetes and a range of health indicators, with a particular focus on the newly added variable of income. Utilizing data from the 2015 Behavioral Risk Factor Surveillance System (BRFSS), we analyze the impact of various factors such as blood pressure, cholesterol, BMI, smoking habits, and more on the prevalence of diabetes. Our comprehensive analysis not only investigates each factor in isolation but also explores their interdependencies and collective influence on diabetes. A novel aspect of our research is the examination of income as a determinant of diabetes risk, which to the best of our knowledge has been relatively underexplored in previous studies. We employ statistical and machine learning techniques to unravel the complex interplay between socio-economic status and diabetes, providing new insights into how financial well-being influences health outcomes. Our research reveals a discernible trend where lower income brackets are associated with a higher incidence of diabetes. In analyzing a blend of 33 variables, including health factors and lifestyle choices, we identified that features such as high blood pressure, high cholesterol, cholesterol checks, income, and Body Mass Index (BMI) are of considerable significance. These elements stand out among the myriad of factors examined, suggesting that they play a pivotal role in the prevalence and management of diabetes.
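The idea of ranking risk factors by model weight can be sketched with a toy logistic regression. The data below is synthetic and the feature names are stand-ins, not the BRFSS variables or the study's actual models:

```python
import numpy as np

# Toy sketch: rank "risk factors" by logistic-regression weight magnitude.
# All data here is synthetic and illustrative; it is not the BRFSS dataset.

rng = np.random.default_rng(0)
n = 2000
# Synthetic standardized features standing in for BMI, high blood pressure, income.
X = rng.normal(size=(n, 3))
true_w = np.array([1.5, 1.0, -0.8])   # income lowers risk in this toy setup
p = 1 / (1 + np.exp(-(X @ true_w)))
y = (rng.random(n) < p).astype(float)

# Plain gradient-descent logistic regression (no external ML library).
w = np.zeros(3)
for _ in range(500):
    grad = X.T @ (1 / (1 + np.exp(-(X @ w))) - y) / n
    w -= 0.5 * grad

features = ["BMI", "high_blood_pressure", "income"]
ranking = sorted(zip(features, np.abs(w)), key=lambda t: -t[1])
print([name for name, _ in ranking])
```

Sorting by absolute weight recovers the planted importance ordering; on real survey data, sign and magnitude together indicate direction and strength of each factor's association with diabetes.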


Automating Software Regression Testing Through Log Analysis: A Statistical Comparison Approach Using NLP Techniques and Hypothesis Testing

Yassine Elhallaoui, Department of Computer Science and Mathematics, Liverpool John Moores University, UK

ABSTRACT

This study investigates the enhancement of software regression testing through the integration of Natural Language Processing (NLP) and advanced statistical techniques, including T-tests, F-tests, Z-tests, Chi-squared tests, ANOVA, and the Tukey-Kramer test. Utilizing topic modeling and OpenAI's GPT-3.5 Turbo Large Language Model (LLM), it analyzes Android system logs across three major versions to gain deeper insights into software performance disparities. Results highlight variations in error and warning rates across Android versions, supported by various statistical tests. Employing the Non-Negative Matrix Factorization (NMF) algorithm facilitates effective topic modeling and systematic interpretation of complex log data. Leveraging the GPT-3.5 Turbo LLM's NLP capabilities transforms raw log data into an accessible format, enhancing NLP's utility in regression testing. This integrated approach offers detailed insights into Android system behavior, emphasizing the significance of statistical methods in software testing. It demonstrates the synergy of NLP, particularly topic modeling and LLMs, with statistical analysis for more effective regression testing, suggesting implications for future research in software performance analytics and testing.
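The kind of statistical comparison described above can be illustrated with a minimal chi-squared test of independence on hypothetical per-version log counts. The counts below are invented; the study's real data and tooling are not reproduced here:

```python
# Minimal chi-squared test of independence on hypothetical log counts.
# The counts are invented for illustration; they are not the study's data.

def chi_squared(table: list) -> float:
    """Pearson chi-squared statistic for an observed contingency table."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    total = sum(row_totals)
    stat = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            expected = row_totals[i] * col_totals[j] / total
            stat += (observed - expected) ** 2 / expected
    return stat

# Rows: Android versions; columns: [error lines, warning lines, info lines].
observed = [
    [120, 300, 9580],   # version A
    [ 95, 280, 9625],   # version B
    [210, 410, 9380],   # version C
]
stat = chi_squared(observed)
print(round(stat, 2))
# With (3-1)*(3-1) = 4 degrees of freedom, the 5% critical value is about
# 9.49; a larger statistic suggests the error/warning mix differs by version.
```

In practice one would use a library routine (e.g. a chi-square contingency test from a statistics package) and inspect the p-value rather than compare against a tabulated critical value by hand.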

KEYWORDS

Software Regression Testing, Natural Language Processing (NLP), Large Language Model (LLM), Statistical Analysis (T-test; F-tests; Z-tests; Chi-squared Tests; ANOVA; Tukey-Kramer Test), Topic Modeling, Software.


Cybernetics and the Origin of Life; the Origin of Matter and Black Holes

Gihan Soliman, The Linnaean Society for Mineral Cybernetics, Leadhills, Scotland, ML12 1YA

ABSTRACT

Since abandoning Linnaeus' Kingdom Minerals as a living system, the philosophy of science has fragmented into specialities and knowledge domains that fail to communicate effectively with one another. Between living and so-called non-living systems, as well as social organisations, the value of Cybernetics as a reconciliatory medium towards a theory of everything has never been more significant. With each field keeping to its own jargon, biases and conflicting perspectives, cross-disciplinary science communication has become almost futile. Sciences claiming to present objective views, such as the famous E=mc^2, present reality in flat linear formulas, while living systems are five-dimensional, with spacetime and the position and role of the observer in realising a three-dimensional structure, let alone four-dimensional interactive designs. This paper postulates the origin of life and matter from a Cybernetic perspective, uniting the laws of physics and the Big Bang theory with String Theory. E=mc^2, as popularly presented, does not refer to the role of information in the inter-reversibility of energy and matter. Information, according to information theory, involves a message, a sender and a receiver. Overlooking the role of the observer is overlooking the role of information in system processes, presenting a flat snapshot of reality. This paper explores the origin of life, the conservation of matter, dark matter, and the fabric of spacetime while postulating a theory of everything from a Cybernetic perspective.

KEYWORDS

Cybernetics, reconciliation, origin of life, theory of everything, holistic, Linnaeus, observer, minerals.


Efficient Equality Test Technique Using Identity-based Encryption for Telemedicine Systems

Chenguang Wang and Huiyan Chen, Department of Cryptography and Technology, Beijing Electronic Science and Technology Institute, Beijing, China

ABSTRACT

Telemedicine systems play an important role in early HIV screening, but data privacy in the medical system has always been a challenging issue. Identity-based encryption schemes supporting equality tests have promising prospects for protecting private medical data while enabling early AIDS screening. There is already a large body of work on equality test schemes based on identity-based encryption, but these schemes suffer from problems such as low efficiency and a failure to support revocability and quantum resistance. In response to these shortcomings, this paper proposes a novel identity-based encryption scheme. It is the first equality test scheme that supports both revocability and quantum resistance, and it has higher memory and computational performance than other schemes.

KEYWORDS

Telemedicine Systems, Equality test, Identity Encryption.


Prüm II and the EPRIS Index in Europe: An Attempt to Balance People's Security and Privacy?

Ramona Cavalli, Department of Foreign Languages and Literatures, University of Verona, Verona, Italy

ABSTRACT

The need to protect European citizens from organized crime, which is increasingly globalized, has made it necessary to use artificial intelligence tools, including facial recognition. However, this need for security conflicts with the need to protect the personal data of the citizens themselves.

KEYWORDS

Prüm, EPRIS, digital, security, privacy.


A Novel Unconditionally Secure and Lightweight Bipartite Key Agreement Protocol

Jun Liu, University of North Dakota, United States of America

ABSTRACT

This paper introduces a new bipartite key agreement (2PKA) protocol which provides unconditional security and is lightweight. The unconditional security stems from the known impossibility of distinguishing a particular solution from all possible solutions of an underdetermined system of equations. This indistinguishability prevents an adversary from inferring the common secret key even with access to an unlimited amount of computing capability. The new 2PKA protocol is also lightweight because the calculation of a common secret key makes use only of simple modular arithmetic. This information-theoretic 2PKA scheme provides the desired features of Key Confirmation (KC), Session Key (SK) security, Known-Key (KK) security, protection of individual privacy, and a uniformly distributed value of the common key under a prime modulus.
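The indistinguishability argument can be illustrated concretely: a single linear equation in two unknowns modulo a prime is satisfied by many equally valid solutions, so observing the equation alone cannot pin down a secret. The enumeration below is a generic sketch of that fact, not the paper's protocol:

```python
# Why an underdetermined system hides a secret: one public linear equation
# a*x + b*y ≡ c (mod p) in two unknowns has p solutions, one for each
# choice of x. Generic illustration, not the paper's 2PKA protocol.

p = 101            # small prime modulus, for illustration only
a, b, c = 7, 11, 90  # public coefficients of the equation

# Enumerate every (x, y) pair satisfying the single equation.
solutions = [(x, y) for x in range(p) for y in range(p)
             if (a * x + b * y) % p == c]

print(len(solutions))  # one solution per choice of x, so p in total
```

Since b is invertible modulo the prime p, each x determines exactly one y, giving p candidate secrets that are all consistent with what the adversary sees; no amount of computing power can single one out.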