Arpad Kiss, GreenEyes Artificial Intelligence Services, LLC, Lewes, Delaware, USA
This research report provides a comprehensive analysis of Compact Composite Descriptors (CCDs) as a highly efficient alternative to deep learning embeddings for Content-Based Image Retrieval (CBIR) in resource-constrained environments. While Convolutional Neural Networks (CNNs) and Vision Transformers (ViTs) offer superior semantic performance, their computational overhead and storage requirements—often exceeding 8KB per image—limit their applicability in Edge AI and IoT scenarios. In contrast, engineered descriptors such as the Color and Edge Directivity Descriptor (CEDD), Fuzzy Color and Texture Histogram (FCTH), and Joint Composite Descriptor (JCD) utilize fuzzy inference systems to encode visual features into ultra-compact vectors ranging from 54 to 72 bytes. The study explores the algorithmic foundations of these descriptors, their implementation within the LIRE (Lucene Image Retrieval) framework, and benchmarks demonstrating their competitive retrieval accuracy against MPEG-7 standards. Finally, the report highlights the strategic utility of CCDs for privacy-preserving, low-bandwidth visual search on edge devices, proposing hybrid architectures that leverage the speed of fuzzy composites with the semantic power of neural re-ranking.
Computer Vision, Cloud Computing, Embedded Systems, Content-based Image Retrieval Systems
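The fuzzy-composite encoding the abstract describes can be illustrated with a toy sketch: pixels vote fuzzily into a small histogram, whose bins are then quantized to 3 bits each and bit-packed, which is how CEDD keeps its 144 bins down to 54 bytes. The bin centers, triangular memberships, and 8-bin layout below are illustrative assumptions, not LIRE's actual fuzzy inference system.

```python
# Toy sketch of a fuzzy-composite descriptor in the spirit of CEDD:
# each pixel spreads a fuzzy vote across histogram bins, and the bins
# are quantized to 3 bits and bit-packed into a compact byte string.

def fuzzy_memberships(value, centers, width=64.0):
    """Triangular membership of a scalar value to each bin center."""
    raw = [max(0.0, 1.0 - abs(value - c) / width) for c in centers]
    total = sum(raw) or 1.0
    return [r / total for r in raw]

def compact_descriptor(gray_pixels, centers=(0, 36, 73, 109, 146, 182, 219, 255)):
    # 1) Accumulate a fuzzy histogram: every pixel contributes to
    #    several neighbouring bins instead of exactly one.
    hist = [0.0] * len(centers)
    for p in gray_pixels:
        for i, m in enumerate(fuzzy_memberships(p, centers)):
            hist[i] += m
    # 2) Normalize by the peak bin and quantize each bin to 3 bits
    #    (values 0..7), mirroring CEDD's ultra-compact quantization.
    peak = max(hist) or 1.0
    quant = [min(7, int(round(7 * h / peak))) for h in hist]
    # 3) Bit-pack: 8 bins * 3 bits = 24 bits = 3 bytes.
    bits = 0
    for q in quant:
        bits = (bits << 3) | q
    return bits.to_bytes((3 * len(quant) + 7) // 8, "big")

desc = compact_descriptor([10, 12, 200, 210, 205, 35])
print(len(desc))  # 3 bytes for this toy 8-bin layout
```

At CEDD's real scale (144 bins, 3 bits each) the same packing step yields the 54-byte vectors cited above, which is why descriptor comparison and storage stay cheap on edge hardware.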
Awatef Balobaid and R.Y. Aburasain, Jazan University, KSA
This research proposes a new technique to predict and categorize student performance that will assist schools in improving outcomes. A regression-based technique estimates student performance, and a classification model groups students by performance level. The approach begins with a regression model that predicts student performance, then applies gradient descent iteratively to refine the model and generate better predictions. The model is cross-validated and retrained on the complete dataset to make it more accurate and useful across different circumstances. The system then organizes students by predicted performance using the regression model. To increase classification accuracy, further optimization determines the appropriate threshold for splitting performance groups. We assess the method's efficiency in terms of accuracy, response time, scalability, and resource utilization. The findings demonstrate that the new procedure is superior to existing ones. The strategy is robust, versatile, and cost-effective for educational organizations: it generates correct predictions 95% of the time, responds more rapidly, uses resources economically, and can be deployed at scale. It helps instructors understand how their students are doing so they may intervene early and make better decisions to support them. Data-based analysis can enhance educational results by leveraging the system's power and its ability to adapt to new data.
Adaptability, Classification, Data-driven, Educational institutions, Optimization, Performance prediction, Regression, Resource utilization, Scalability, Student outcomes
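The two-stage pipeline described above — gradient-descent regression followed by a tuned classification threshold — can be sketched minimally as follows. The toy feature (hours studied), learning rate, labels, and threshold sweep are illustrative assumptions, not the paper's actual setup.

```python
# Minimal sketch: fit a linear regression by gradient descent, then
# split students into performance groups with a tuned score threshold.

def fit_linear(xs, ys, lr=0.01, epochs=2000):
    """Gradient descent on mean squared error for y = w*x + b."""
    w, b, n = 0.0, 0.0, len(xs)
    for _ in range(epochs):
        grad_w = sum(2 * (w * x + b - y) * x for x, y in zip(xs, ys)) / n
        grad_b = sum(2 * (w * x + b - y) for x, y in zip(xs, ys)) / n
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

def classify(score, threshold):
    return "pass" if score >= threshold else "at-risk"

# Hours studied -> exam score (toy data).
hours = [1.0, 2.0, 3.0, 4.0, 5.0]
scores = [52.0, 55.0, 61.0, 64.0, 70.0]
w, b = fit_linear(hours, scores)

# The abstract's extra optimization step: sweep candidate cut-offs and
# keep the threshold that best separates a labeled validation split.
labels = ["at-risk", "at-risk", "pass", "pass", "pass"]
best = max(range(50, 75), key=lambda t: sum(
    classify(w * h + b, t) == lab for h, lab in zip(hours, labels)))
print(best, classify(w * 3.5 + b, best))
```

Cross-validation and retraining on the full dataset, as the abstract describes, would wrap `fit_linear` in the usual split/refit loop before the threshold sweep.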
Jean-Marie Kabasele Tenday, University ND Kasayi (UKA), Belgium
Traditional threat modelling techniques often focus on theoretical or system-specific threats without grounding them in empirical adversarial behaviour. Conversely, frameworks such as MITRE ATT&CK provide rich, intelligence-based taxonomies of real-world attacker tactics, techniques, and procedures (TTPs), but are rarely integrated into early software design phases. This paper proposes a methodology for linking misuse cases—UML-based representations of malicious system interactions—with MITRE ATT&CK techniques, enabling traceability between system-level threats and empirically observed attacks. The proposed framework enhances the relevance, completeness, and operational value of misuse case–based threat modelling. A structured mapping template and example implementation demonstrate how software architects can enrich their security design processes using ATT&CK-informed misuse cases.
Misuse case, Mitre ATT&CK, Threat Analysis, Threat Modelling, Cybersecurity, Secure Design
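The structured mapping template the abstract mentions boils down to a traceability table from misuse cases to ATT&CK technique IDs. A minimal sketch, in which the misuse cases and the chosen technique IDs (T1566 Phishing, T1110 Brute Force, T1078 Valid Accounts, T1539 Steal Web Session Cookie) are example assumptions rather than the paper's actual template contents:

```python
# Illustrative traceability mapping: each misuse case records the
# ATT&CK technique IDs it traces to, keeping design-level threats
# linked to empirically observed attacker behaviour.

MISUSE_CASE_MAP = {
    "MC-01 Steal credentials via fake login page": ["T1566", "T1078"],
    "MC-02 Guess weak administrator password":     ["T1110", "T1078"],
    "MC-03 Tamper with session token":             ["T1539"],
}

def coverage_report(case_map):
    """Summarize covered techniques and flag misuse cases left unmapped."""
    techniques = sorted({t for ids in case_map.values() for t in ids})
    unmapped = [mc for mc, ids in case_map.items() if not ids]
    return {"techniques": techniques, "unmapped_cases": unmapped}

report = coverage_report(MISUSE_CASE_MAP)
print(report["techniques"])  # ['T1078', 'T1110', 'T1539', 'T1566']
```

Flagging unmapped misuse cases is one concrete way such a template supports the completeness argument made above: any design-level threat without an empirical counterpart is surfaced for review.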
Burak Enes Beygog1 and Ahmet Burak Can2, 1Aselsan Inc., Ankara, Türkiye, 2Hacettepe University, Ankara, Türkiye
While containerization has significantly simplified web application deployment, it has simultaneously introduced security blind spots that traditional testing methodologies often fail to address. This study examines the effectiveness of Static Application Security Testing (SAST), Dynamic Application Security Testing (DAST), and container scanning tools in identifying vulnerabilities within containerized environments through empirical testing of five open-source tools against three vulnerable applications (DVWA, Juice Shop, and VulnerableApp). Results demonstrate that reliance on any single tool presents substantial risk, with individual tools failing to detect up to 91% of existing vulnerabilities, while each tool category exhibited distinct limitations. Trivy uniquely identified critical infrastructure and supply chain risks, whereas DAST tools including Nikto and OWASP ZAP proved essential for detecting runtime misconfigurations. Notably, authenticated scanning emerged as particularly impactful, enhancing vulnerability detection rates by over 1,400%, thereby underscoring the necessity of implementing a Defense-in-Depth security strategy. Through strategic orchestration of Trivy for infrastructure assessment, authenticated DAST for runtime analysis, and SonarQube for static code analysis, security teams can substantially reduce their vulnerability miss rate to approximately 32%, achieving comprehensive coverage across code, infrastructure, and runtime configuration layers.
Container Security, DevSecOps, Static Application Security Testing (SAST), Dynamic Application Security Testing (DAST), Software Composition Analysis (SCA), Supply Chain Security
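The layered-tooling argument above reduces to set union: each scanner finds a different slice of the ground-truth vulnerabilities, and an orchestration is judged by what the union still misses. The finding IDs and per-tool result sets below are made-up placeholders, not the study's actual data.

```python
# Sketch of combined-coverage accounting for a Defense-in-Depth stack.

def miss_rate(ground_truth, *tool_findings):
    """Fraction of known vulnerabilities missed by the combined tools."""
    found = set().union(*tool_findings) & ground_truth
    return 1.0 - len(found) / len(ground_truth)

ground_truth = {f"VULN-{i}" for i in range(1, 11)}        # 10 known issues
trivy     = {"VULN-1", "VULN-2"}                          # infra / supply chain
zap_auth  = {"VULN-3", "VULN-4", "VULN-5", "VULN-6"}      # authenticated DAST
sonarqube = {"VULN-6", "VULN-7"}                          # static analysis

print(round(miss_rate(ground_truth, trivy), 2))                      # 0.8
print(round(miss_rate(ground_truth, trivy, zap_auth, sonarqube), 2)) # 0.3
```

The placeholder numbers echo the abstract's finding in miniature: a single tool can miss most issues, while the orchestrated union leaves a much smaller residual miss rate.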
Binisa Giri1, Hashmath Fathima, Kelechi Nwachukwu2 and Kofi Nyarko, Department of Electrical and Computer Engineering, Morgan State University, Baltimore, USA
CyberShield is an automated, graph-augmented abusive language and interaction detection system designed to identify harmful content, including toxic interactions, hate speech, and general negative sentiment, that is prevalent on social media platforms. As part of integrating a robust sentiment component into the system, we evaluated four widely used sentiment analysis models: BERT, RoBERTa, VADER, and TextBlob, selected for their complementary strengths and methodological diversity. BERT and RoBERTa represent transformer architectures capable of capturing contextual meaning in noisy social media text. VADER provides a lexicon-based model optimized for informal online communication, offering a lightweight alternative to transformers. TextBlob is a traditional NLP baseline for benchmarking the improvements offered by more contemporary models. Together, this combination allows for a comprehensive comparison across model families, ensuring evidence-based model selection for the CyberShield project. These models were evaluated on a Kaggle dataset containing social media comments labeled with three sentiment classes (negative, positive, neutral) serving as the ground truth. Each model's performance was measured using confusion matrices, accuracy, macro F1, weighted F1, and per-class F1 scores. Our findings show that on an initial sample of 3,000 texts, the classical lexicon-based model (VADER) and the traditional NLP baseline (TextBlob) significantly outperformed the transformer-based models. TextBlob achieved the strongest results in this phase, underscoring the challenges of applying general pre-trained transformers to real-world sentiment classification without domain-specific fine-tuning. However, after expanding the dataset to 18,318 samples per sentiment class and rerunning the evaluation with the updated RoBERTa sentiment model, the performance trend shifted: the updated RoBERTa model demonstrated substantial improvement and outperformed the earlier transformer results.
Abusive Language Detection, Sentiment Analysis, Transformer Models, Lexicon-based models, Social Media Moderation, Performance Metrics
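To illustrate why lexicon-based models like VADER stay competitive on informal text without any training, here is a toy scorer in the same spirit: word valences are summed, with simple negation and intensifier handling. The tiny lexicon, modifier weights, and class cut-offs are illustrative assumptions, vastly smaller than VADER's actual lexicon and rule set.

```python
# Toy lexicon-based sentiment scorer (VADER-style, heavily simplified).

LEXICON = {"good": 1.9, "great": 3.1, "bad": -2.5, "awful": -3.4, "love": 3.2}
NEGATORS = {"not", "never", "no"}
INTENSIFIERS = {"very": 1.3, "really": 1.3, "so": 1.2}

def sentiment(text):
    tokens = text.lower().split()
    score = 0.0
    for i, tok in enumerate(tokens):
        valence = LEXICON.get(tok.strip("!.,?"))
        if valence is None:
            continue
        # Boost if the preceding token is an intensifier ("really good").
        if i > 0 and tokens[i - 1] in INTENSIFIERS:
            valence *= INTENSIFIERS[tokens[i - 1]]
        # Flip and dampen if a negator appears shortly before ("not good").
        if any(t in NEGATORS for t in tokens[max(0, i - 3):i]):
            valence = -0.7 * valence
        score += valence
    if score > 0.05:
        return "positive"
    if score < -0.05:
        return "negative"
    return "neutral"

print(sentiment("this is really good"))  # positive
print(sentiment("not good at all"))      # negative
print(sentiment("meeting at noon"))      # neutral
```

Because no fitting is involved, such a model runs the same on 3,000 or 18,318 samples, which is the flip side of the trend the abstract reports: transformers only pulled ahead once enough in-domain data supported an updated model.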