Anon, University of Waikato, New Zealand
The Espresso capture/replay testing tool for Android applications creates tests that are prone to fragility: when small changes are made to the user interface, tests break and cannot be rerun. To reduce the fragility inherent in Espresso tests, and its impact, we take a model-driven development approach to test generation. Using interaction sequence models as the basis for generation, we create test scripts that can be run in Android Studio identically to manually recorded tests. This process produces simpler scripts than recording does and reduces the time developers require to create and maintain the test suite, resulting in higher-quality testing and validation of Android user interfaces.
Capture/Replay Testing, Android, Model-driven development, User interfaces.
Ayushya Rao, Sumer Raravikar, Tech Mahindra, Pune, Maharashtra, India
3D pose estimation and representation have gained significant attention in recent years due to their crucial roles in virtual reality, computer-aided design, and motion capture applications. In this paper, we will discuss the current state-of-the-art techniques in 3D pose estimation, which involve detecting the location of key body joints and estimating their orientations in 3D space. To address the representation of 3D poses in a virtual environment, we will focus on the conversion of the estimated 3D poses into formats suitable for incorporation into the virtual environment. This process may involve transforming the 3D pose coordinates into a different coordinate system or scaling the pose data to match the size and proportions of the virtual avatar. In the paper, we will present a novel pipeline to convert 2D images representing the poses to 3D humanoids in a virtual environment. Additionally, we will evaluate the performance of various techniques for 3D pose estimation and representation in terms of their accuracy, speed, and scalability. To demonstrate the effectiveness of our proposed techniques, we will present results comparing them to state-of-the-art methods in the field. Our goal in this paper is to summarize the main findings of our study and highlight the potential of our proposed techniques for advancing the field of 3D pose estimation and representation in virtual environments. By providing a comprehensive review and experimental evaluation of 3D pose estimation and representation techniques, this paper aims to serve as a valuable resource for researchers, developers, and practitioners in the fields of computer vision, AI, and virtual reality.
3D pose estimation, Virtual environment, Representation techniques, Performance evaluation, Virtual reality.
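The coordinate-system conversion and avatar scaling described in the abstract above can be sketched as follows. This is a minimal illustration only: the function name, the y-down camera convention, and the joint-list format are assumptions for the sketch, not the paper's actual pipeline.

```python
# Hypothetical helper; conventions are illustrative assumptions,
# not the authors' implementation.

def retarget_pose(joints, avatar_height):
    """Map estimated 3D joints (camera coords, y-down) onto a
    virtual avatar (y-up), scaled to the avatar's height.
    joints: list of (x, y, z) tuples in metres."""
    # Flip the y axis: camera convention here is y-down, engines are y-up.
    flipped = [(x, -y, z) for (x, y, z) in joints]
    # Estimated subject height = vertical extent of the skeleton.
    ys = [y for (_, y, _) in flipped]
    scale = avatar_height / (max(ys) - min(ys))
    # Scale uniformly so proportions match the avatar.
    return [(x * scale, y * scale, z * scale) for (x, y, z) in flipped]
```

A uniform scale is used so the skeleton's proportions are preserved; per-limb retargeting would require a bone-length mapping instead.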
Hebert Silva1, 2 and Regina Moraes1, 3, 1Universidade Estadual de Campinas, Limeira, Brazil, 2National Industrial Training Service, SENAI, Sao Paulo, Brazil, 3University of Coimbra, CISUC, DEI, Coimbra, Portugal
The CSAI-4-CPS model, which was briefly described in previous work, leverages federated learning to collaboratively train machine learning models, providing accurate and up-to-date results while preserving data privacy. This approach is particularly beneficial in complex and dynamic Cyber-Physical Systems (CPS) environments where traditional centralized machine learning models may fall short. This paper presents and describes the expanded model with its particularities, and it also introduces the first validation of the proposal using an implemented framework. The expansion includes the real-time detection of new threats, the verification and validation of results at nodes benefiting from federated learning, false positive consideration, and comparing the results with and without the adoption of the model. The model was implemented in an IoT system because such systems often represent the most challenging scenarios in CPS cybersecurity. In most cases, IoT devices are part of a more complex CPS framework, where they are typically more vulnerable assets. The application of CSAI-4-CPS to predict malicious traffic in Internet of Things (IoT) networks appears promising. The results demonstrate that the model effectively detects intrusions within these datasets. By employing federated learning and a self-adaptive architecture, the model maintains its accuracy and relevance as new data emerges.
Federated Learning, Data Privacy, Cybersecurity, CPS, IoT.
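The collaborative training described in the abstract above relies on nodes sharing model parameters rather than raw data. As an illustration of the general mechanism, a FedAvg-style weighted aggregation can be sketched as below; the function name and flat parameter-vector format are assumptions for the sketch, and CSAI-4-CPS's actual aggregation is described in the paper itself.

```python
# Illustrative federated-averaging step: each node contributes only its
# trained parameters and sample count, never its raw (private) data.

def federated_average(client_weights, client_sizes):
    """Weighted average of per-node model parameters.
    client_weights: list of flat parameter lists, one per node.
    client_sizes: number of training samples at each node."""
    total = sum(client_sizes)
    n_params = len(client_weights[0])
    return [
        sum(w[i] * n for w, n in zip(client_weights, client_sizes)) / total
        for i in range(n_params)
    ]
```

Weighting by sample count keeps nodes with more data from being drowned out by smaller ones, while each node's traffic data stays local.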
Yaqing Wang1 and Ruili Geng2 and Xinying An1, 1Institute of Medical Information, Chinese Academy of Medical Sciences, Beijing, China, 2School of Information Management, Zhengzhou University, Zhengzhou, China
[Purpose/Significance] Exploring how the rapid development and application of new technologies and increasingly complex social problems influence library and information science research, and the logic behind that influence, is of great significance for analyzing the discipline's evolution path and capturing the pace at which new research is produced. [Method/Process] This study uses a sampling method to select 11,965 library and information science journal papers indexed by CSSCI in 2006, 2012, 2018 and 2021 for content analysis, and examines the changes and development of research topics in library and information science. [Result/Conclusion] Pictorial practice has been an important research theme in this field over the years. Sub-topics such as "public culture", "digital humanities", "emergencies" and "social media analysis" were almost absent in 2006 but accounted for a large proportion in 2018 and 2021; "emergencies" is the most recent hot research topic. In general, changes in library and information science research topics are logically related to new technologies and the social background: while traditional topics decline, emerging topics rise, and research topics that cross and merge with other disciplines are constantly emerging.
LIS, Research Theme, Discipline Development, Content Analysis.
Sayan Chatterjee1, Ching Louis Liu2, Gareth Rowland, Tim Hogarth, The Australia and New Zealand Banking Group Limited, Melbourne, Australia
The increasing popularity of AI, particularly Large Language Models (LLMs), has significantly impacted various domains, including Software Engineering. This study explores the integration of AI tools in software engineering practices within a large organization. We focus on ANZ Bank, which employs over 5000 engineers covering all aspects of the software development life cycle. This paper details an experiment conducted using GitHub Copilot, a notable AI tool, within a controlled environment to evaluate its effectiveness in real-world engineering tasks. Additionally, this paper shares initial findings on the productivity improvements observed after GitHub Copilot was adopted on a large scale, with about 1000 engineers using it. ANZ Bank's six-week experiment with GitHub Copilot included two weeks of preparation and four weeks of active testing. The study evaluated participant sentiment and the tool's impact on productivity, code quality, and security. Initially, participants used GitHub Copilot for proposed use-cases, with their feedback gathered through regular surveys. In the second phase, they were divided into Control and Copilot groups, each tackling the same Python challenges, and their experiences were again surveyed. Results showed a notable boost in productivity and code quality with GitHub Copilot, though its impact on code security remained inconclusive. Participant responses were overall positive, confirming GitHub Copilot's effectiveness in large-scale software engineering environments. Early data from 1000 engineers also indicated a significant increase in productivity and job satisfaction.
Copilot, GitHub, ANZ Bank, Code Suggestions, Code Debugging, Experiment, Software Engineering, AI
Hong GuangYing, English Department, University of Colorado Denver, Denver, USA
The experiment reported in this study presented evidence that the relevance-theoretic comprehension procedure (Carston, 2000; Wilson, 2000) constrains the way second language (L2) learners process reflexive anaphora in VP-ellipsis. Rather than making comparisons among different interpretations, the L2 learners, following a path of least effort in computing cognitive effects, plowed ahead with the single most accessible "sloppy" interpretation. The finding suggests that there may be a relevance-based comprehension module (Sperber, 2000), which seems to be hardwired, domain-specific and informationally encapsulated.
Relevance Theory, Reflexive Anaphora, VP-Ellipsis, Strict-sloppy ambiguity
Steffen Turnbull, Institute for AI Safety & Security, DLR German Aerospace Center, Sankt Augustin, Germany
This paper examines the multifaceted impact of artificial intelligence on human agency in contemporary society. It investigates how AI's integration has led to the mechanization of tasks, erosion of human competence, and complicity in deskilling. Drawing on parallels with Ivan Illich's critique of industrial institutions, the paper explores the historical patterns of technological growth and the prioritization of efficiency over human agency. It underscores the transformative potential of personalized interactions in countering the dehumanizing effects of automation. Furthermore, the paper offers a starting point for further research towards a practical guide for reclaiming and preserving human agency in the AI era, emphasizing mindful engagement with technology, resisting total automation, reimagining education, prioritizing human-centered design, fostering ethical AI frameworks, cultivating empathy, and encouraging informed participation. It calls for a balanced approach to AI that leverages technology to augment human abilities while safeguarding our values and humanity. In general, this paper provides an alternative but comprehensive exploration of AI's challenges and opportunities, inviting individuals and societies to navigate the evolving human-machine relationship while preserving the essence of human agency.
Artificial Intelligence, Society, Post-phenomenology, Agency, Human-machine Interactions.
Donata D. Acula1,2, John Angelo C. Algarne1, Jasmine Joy D. Lam1, Lester John O. Quilaman1 and Leira Marie D. Teodoro1, 1Department of Computer Science, University of Santo Tomas, Philippines, 2Research Center for Natural and Applied Sciences, University of Santo Tomas, Philippines
Handwriting classification based on a writer's demographics, such as gender and handedness, has been an important discipline in the fields of forensic science and biometric security. Although there are already experts in forensic science, called Forensic Document Examiners, their work can be affected by a lack of efficiency and the risk of human error. As there are only limited studies on the handwriter demographics problem using Convolutional Neural Networks (CNN), this research implemented a system that predicts the gender, handedness, and combined gender-and-handedness of offline handwritten images from the IAM Handwriting Database 3.0 using 2-Layer and 3-Layer CNNs with Canny Edge Detection (CED). The researchers found that the base model 2L-CNN without CED had the best performance in the binary classes, gender and handedness, with overall accuracies of 68.5% and 89.75%, respectively. On the other hand, 3L-CNN without CED had the best average accuracy, 51.36%, in the combined gender-and-handedness class. It was observed that Canny Edge Detection is not an effective preprocessing technique in handwriting classification, as it worsened the performance of its counterpart without CED in most of the models.
Neural Networks, Edge Detection, Offline Handwriting, Machine Learning, Deep Learning, Preprocessing.
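The edge-detection preprocessing evaluated in the abstract above can be illustrated with a simplified sketch. Full Canny adds Gaussian smoothing, non-maximum suppression, and hysteresis thresholding; the sketch below, a stand-in rather than the study's actual preprocessing, keeps only the Sobel gradient-magnitude step that Canny builds on.

```python
def sobel_edge_map(img, thresh=128):
    """Simplified edge detector: Sobel gradient magnitude + threshold.
    (Canny additionally smooths, thins edges via non-maximum suppression,
    and applies hysteresis; this sketch keeps only the gradient step.)
    img: 2D list of grayscale values in 0-255."""
    h, w = len(img), len(img[0])
    edges = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            # Horizontal and vertical Sobel responses at (x, y).
            gx = (img[y-1][x+1] + 2*img[y][x+1] + img[y+1][x+1]
                  - img[y-1][x-1] - 2*img[y][x-1] - img[y+1][x-1])
            gy = (img[y+1][x-1] + 2*img[y+1][x] + img[y+1][x+1]
                  - img[y-1][x-1] - 2*img[y-1][x] - img[y-1][x+1])
            mag = (gx * gx + gy * gy) ** 0.5
            edges[y][x] = 255 if mag >= thresh else 0
    return edges
```

The binary edge map discards stroke thickness and intensity, which is one plausible reason such preprocessing can hurt demographic classification: the discarded pressure and width cues may themselves be discriminative.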
Copyright © SIGL 2024