PhD Theses
About
- This is a curated and experimental site, offering a historical and visual index of all collected sources.
- The site does not host any of the files. It only provides an index and links to full text files.
- Full text of many sources is behind a paywall. You may need to obtain a subscription to read full text.
Latest Issue
2025
History of Ideas
This report analyzes trends and shifts in research and development, as reflected in article titles from various publications spanning from 2022 to 2025. The insights are presented chronologically, highlighting the evolution of key themes and emerging areas of focus within the specified period.
Analysis of Research Trends: 2022-2025
2022: Foundations of Scalability, Robustness, and Human-Centric AI
The year 2022 laid significant groundwork, emphasizing core advancements in AI/Machine Learning, coupled with a strong emerging focus on ethical and human-centric considerations. Research was heavily invested in making intelligent systems more practical, reliable, and socially aware.
Key Themes and Trends:
- Core AI/ML Advancements: The drive for "efficient," "scalable," "robust," and "interpretable" machine learning models was paramount. Titles like "Efficient Utilization of Heterogeneous Compute" and "Learning with Graph Structures and Neural Networks" indicate a focus on optimizing computational performance and exploring novel architectures. "Representation Learning" was a significant underlying area, aimed at developing more effective ways for machines to understand data.
- Emergence of Human-Centric AI and Ethics: A notable shift began toward integrating human considerations into AI design. "User-Centered Deep Learning," "Evidence-based AI Ethics," and "Informing the Design and Refinement of Privacy and Security" explicitly show a rising awareness of societal impact, fairness, and transparency.
- Robotics and Physical Systems: Research continued to advance robotic capabilities, particularly in "Scalable Robot Learning," "Manipulation and Perception Policies for Robot Mechanics," and achieving "Autonomous Maintenance." The goal was to enable robots to operate more effectively in complex environments.
- System Efficiency and Security: Optimizing hardware and software for better performance and security remained a crucial area. Titles such as "Memory-Centric Architectures for Big Data Applications" and "IoT security and privacy assessment using software" highlight efforts to build secure and resource-efficient computing infrastructure across various platforms, including cloud, edge, and IoT.
- Biomedical and Healthcare Applications: Machine learning was increasingly applied to critical areas like "Deep Learning in Cancer Biology," "Representation Learning on Brain Data," and "Passive Health Monitoring with Radio Waves," aiming to enhance diagnosis, treatment, and monitoring.
Notable Shifts and Continuities:
- Continuity: The fundamental pursuit of efficient, scalable, and robust AI/ML algorithms remained a core driver. Robotics continued its trajectory towards greater autonomy and dexterity.
- Shift: A clear, conscious effort began to explicitly address the human element in AI design, moving beyond purely technical metrics to consider ethical implications, privacy, and user experience. This marked the start of "responsible AI" becoming a significant focus.
2023: Refining AI for Real-World Deployment, Deepening Human-AI Interaction, and Fortifying Security
Building on the foundations of 2022, the year 2023 saw a refinement of AI/ML techniques, with a strong emphasis on practical deployment in real-world, often imperfect, conditions. Human-AI collaboration deepened, and cybersecurity research intensified to counter increasingly sophisticated threats.
Key Themes and Trends:
- Practical AI/ML for Imperfect Data: Research focused on making AI models work robustly in challenging real-world scenarios. Titles like "Robust Learning from Uncurated Data," "Tackling Imperfections in Data," and "Out-of-Distribution Generalization of Deep Learning" indicate a push to handle noisy, limited, or biased data. "Federated Learning" emerged as a key approach for "Privacy-preserving distributed machine learning."
- Deepening Human-AI Interaction and Explainability (XAI): The conversation shifted from just human-centered design to more explicit interaction and understanding between humans and AI. "Interactive Machine Learning from Humans," "Explainable Mechanism Design," and "Designing Visual Explanations of Deep Learning Classifiers" show a concerted effort to make AI decisions transparent and controllable.
- Advanced Cybersecurity and Privacy: Security and privacy continued to be critical, with titles like "Secure and Verified Cryptographic Implementations," "Privacy-preserving biometric recognition systems," and "Detecting Mimicry Attacks in Windows Malware." There was a strong emphasis on "blockchain" for trust and distributed systems.
- Robotics in Complex Environments: Robotic research continued its momentum, focusing on "safe control for mobile robots," "manipulation among movable objects," and "language-guided navigation," pushing towards more adaptive and intelligent autonomous systems capable of handling dynamic and uncertain situations.
- Optimization Across the Stack: From "Optimization for Machine Learning" to "Resource Allocation in Future Radio Access Networks" and "Optimizing Data Movement Through Software Control," the theme of efficient resource utilization and algorithmic optimization was pervasive across diverse computing domains.
- Multimodality and Data Synthesis: An increasing number of titles explored "Multimodal deep learning systems" for integrating different data types (e.g., audio, visual, text) and "Generative models for synthesis" to create synthetic data for training.
Notable Shifts and Continuities:
- Continuity: Robustness, interpretability, and security remained central. The application of AI in healthcare and smart systems continued to grow.
- Shift: The practical challenges of deploying AI in "the wild" became a more explicit focus. The concept of "hybrid AI" (combining traditional models with deep learning) gained traction. There was a clear move towards not just building human-friendly AI, but enabling active "human-AI collaboration" and ensuring "trust" in the interaction. The sheer volume of research on AI, security, and human interaction highlights their growing interdependence.
2024: The Rise of Large Language Models and Holistic Trustworthy AI Systems
The year 2024 marked a significant turning point with the widespread influence of Large Language Models (LLMs) and foundation models across various domains. The focus shifted towards not just making AI functional, but also ensuring its "alignment" with human values and its practical "utility" in complex, real-world applications.
Key Themes and Trends:
- Large Language Models (LLMs) & Foundation Models Dominance: This was the breakout star. Titles explicitly mentioning "Large Language Models," "Foundation Models," and "Generative AI" surged. Research focused on "enhancing reasoning," "improving reliability," "efficiency" (e.g., "SpecInfer: Accelerating Generative Large Language"), and crucial "alignment" with human objectives (e.g., "Empowering Responsible Use of Large Language Models").
- Human-AI Alignment and Trust Beyond Explainability: The concept of "Human-AI Alignment" became a core research pillar, transcending mere explainability. Titles like "Building Blocks for Human-AI Alignment: Specify, Identify, Train" and "Human Mental Models of Self, Others, and AI Agents" indicate a move towards deeper cognitive integration and shared understanding.
- Robustness, Reliability, and Safety as Integrated Design Principles: This theme solidified as an inherent requirement rather than an add-on. "Provably Robust Machine Learning," "Safe and Trustworthy Decision Making through Reinforcement Learning," and "Formal Verification of Vision-Based AI-Controlled Systems" highlight a comprehensive approach to building dependable AI.
- Efficiency Across the Computing Spectrum: Optimization for efficiency continued to be critical, from "Energy-Efficient Hardware Design for Machine Learning" and "Efficient Deep Learning" to "Accelerating Data Movement at Different Granularities" and "Cloud-based IoT." The goal was to democratize access and reduce environmental impact.
- Autonomous Systems and Digital Twins: Advancements in "autonomous driving" ("Towards Safe, Human-Centered Autonomous Driving"), "robot learning" ("Learning Generalizable Robot Policies"), and "robot manipulation" remained strong. The concept of "Digital Twins" gained significant traction for simulation, design, and management in manufacturing and smart cities.
- Advanced Privacy and Distributed Ledger Technologies: "Differential Privacy," "Secure Multiparty Computation," and "Blockchain" were frequently explored for "secure distributed applications," "data privacy," and "trustless systems," particularly in areas like finance, healthcare, and IoT.
- Computational Science and Physics-Informed AI Matures: The synergy between AI and scientific domains deepened, with "AI for Scientists: Accelerating Discovery" and "Physics-Informed Neural Networks" becoming more sophisticated for modeling complex physical phenomena.
Notable Shifts and Continuities:
- Continuity: All major themes from 2023 (robustness, human-centricity, security, optimization, robotics, biomedical applications) remained strong and continued to evolve.
- Major Shift: The undeniable emergence and pervasive application of LLMs and foundation models represent the most significant new trend, reshaping NLP, vision-language tasks, and human-AI interaction paradigms.
- Deepening: The focus on "alignment" moved from abstract concern to concrete research objectives. "Digital Twins" became a more mature area of application. The integration of AI with fundamental scientific principles became more explicit and sophisticated.
2025: AI at Evolutionary Scale, Expert Intelligence, and Seamless Human-AI Partnerships
As we look to 2025, the research landscape suggests an ongoing rapid evolution of AI, particularly Large Language Models, which are anticipated to reach unprecedented scales of intelligence and application. The emphasis will be on integrating these advanced AIs into societal and industrial structures in a profoundly reliable, safe, and truly collaborative manner.
Key Themes and Trends:
- Pervasive, Expert-Level Large Language Models (LLMs): LLMs are projected to achieve "evolutionary scale" and be elevated to "expert intelligence," implying a transition from powerful tools to near-human or superhuman capabilities in specific domains. Titles like "LLM-A*: Large Language Model Enhanced Incremental" and "Elevating Large Language Models to Expert Intelligence" reflect this. Research will focus on making them "data-efficient" and "explainable."
- Mutual Human-AI Understanding and Collaboration: The concept of human-AI interaction evolves into true "partnerships" and "mutual theory of mind." "Supporting Diagnosis of Pathologists with Human-AI," "Human-AI Partnerships in Gesture-Controlled Interaction," and "Personalized and Situation-Specific Decision Making" indicate a seamless, adaptive collaboration where AI understands and anticipates human needs and vice versa.
- Intrinsic Trustworthiness, Safety, and Robustness: These are no longer just features but foundational requirements embedded into every layer of AI design and deployment. "Privacy meets Robustness," "Safe, High-performance Motion Planning Under Uncertainty," and "Behavioral Robustness of Software System Designs" highlight a proactive approach to ensuring AI integrity in critical applications.
- Real-World Integration and Domain-Specific Optimization: AI solutions are deeply embedded into practical, complex systems across industries. Examples include "Optimizing Road Network Resilience," "Leveraging Large Language Models to Enable Drug Safety," "Learning for Robot-centric Autonomy," and "Manufacturing-Aware Reconstruction." The focus is on optimizing processes and decision-making for tangible real-world impact.
- Advanced Perception and Computational Imaging: Techniques continue to refine how AI perceives and interprets the physical world. "Deep learning-enabled computational imaging," "Machine Learning Techniques for LiDAR Sensor-Based," and "Language-Guided Vision Models for Perception and Robotics" point to highly sophisticated sensory and interpretative capabilities.
- Hardware-Software Co-design for Ubiquitous Efficiency: The relentless pursuit of efficiency continues with deep integration between algorithms and hardware. "Hardware Accelerator Design for Scientific Computing" and "Efficient Machine Learning for Edge Computing" suggest tailored solutions for deploying powerful AI even on resource-constrained devices.
- Multimodal Integration and Structured Representations: The fusion of various data types, especially language with other modalities (vision, time series), is central. "Multimodal Models of Time Series and Text" and "Geometric and Compositional Semantic Structures" underscore the drive for AI to understand and process diverse information sources holistically.
Notable Shifts and Continuities:
- Continuity: The core themes of AI/ML advancement, human-AI collaboration, security, and efficiency persist and deepen. Robotics, healthcare applications, and optimization remain key areas of application.
- Evolutionary Leap: The most striking shift is the anticipated maturation of LLMs to "expert" and "evolutionary" scales, indicating a qualitative jump in their capabilities and integration across domains.
- Holistic Safety and Responsibility: Trustworthiness and safety become inherent design philosophies, indicating a mature understanding of AI's societal implications.
- Seamless Integration: The vision for human-AI interaction is less about simple tools and more about dynamic, personalized, and mutually intelligent partnerships.
Conclusion: A Trajectory Towards Intelligent Autonomy and Human-AI Symbiosis
The progression from 2022 to 2025 reveals a clear trajectory in AI and computing research. Initially, the focus was on establishing robust, efficient, and ethical foundations for AI, marking a critical pivot towards responsible development. This quickly evolved in 2023 to address the practical challenges of deploying AI in complex, imperfect real-world environments, with a growing emphasis on explicit human-AI interaction and advanced cybersecurity.
By 2024, the transformative impact of Large Language Models and Foundation Models became undeniably central, driving research into sophisticated reasoning, alignment, and trustworthy systems. Looking ahead to 2025, the ambition is to scale these intelligent systems to an "evolutionary" and "expert" level, fostering seamless human-AI partnerships built on mutual understanding and inherent reliability. The overarching narrative is one of increasingly sophisticated AI moving from theoretical promise to ubiquitous, trustworthy, and deeply integrated real-world applications, profoundly changing how humans interact with technology and solve complex challenges.
A searchable index (by theme and year) of all 26 PhD thesis cover pages (from 2000 to present).

A searchable selection of interesting, educative, thought-provoking, or controversial quotes.
Here you will find word clouds for each decade. Word clouds are generated from terms used in titles of articles.
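The page does not say how its word clouds are built, but the underlying step is term-frequency extraction from titles. A minimal, hypothetical sketch of that step (the stop-word list and example titles are illustrative, not taken from the site's data):

```python
from collections import Counter
import re

# Illustrative stop list: common words usually excluded from word clouds.
STOPWORDS = {"of", "the", "and", "for", "in", "with", "a", "an", "on", "to", "from"}

def term_frequencies(titles):
    """Count how often each non-stopword term appears across a list of titles."""
    counts = Counter()
    for title in titles:
        # Lowercase and split into alphabetic tokens.
        for word in re.findall(r"[a-z]+", title.lower()):
            if word not in STOPWORDS:
                counts[word] += 1
    return counts

# Example input: a few thesis titles quoted in the report above.
titles = [
    "Efficient Utilization of Heterogeneous Compute",
    "Robust Learning from Uncurated Data",
    "Elevating Large Language Models to Expert Intelligence",
]
print(term_frequencies(titles).most_common(5))
```

The resulting frequency map is the standard input for word-cloud renderers, which scale each term's font size by its count.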
Contact
This site is curated and maintained by Zeljko Obrenovic (@zeljko_obren). Contact him for any questions or suggestions.
