Graph databases have become a cornerstone of modern data management, enabling complex queries, pattern discovery, and knowledge inference in highly connected datasets. From social networks to biomedical research, supply chain logistics, and financial fraud detection, graph databases power a wide range of applications. The increasing complexity and scale of graph data and knowledge require innovative approaches in storage, querying, reasoning, and integration with machine learning and AI-driven systems.
Scope
This special issue welcomes new and original research contributions on theoretical foundations, systems, and applications of graph databases. We encourage submissions that address fundamental challenges, propose innovative solutions, and demonstrate impactful use cases. Topics include, but are not limited to:
- Graph Data Models, Query Languages, Schema Languages
  - Advances in graph data modelling
  - Graph query language design and optimisation
  - Hybrid graph-relational database systems
  - Integration and interoperability of graph data models
  - Schema constraints (e.g., PG-Schema and SHACL) and reasoning for (knowledge) graph management
  - Continuous graph processing and incremental computation
  - Spatial and temporal (knowledge) graphs
- Machine Learning and AI for Graph Data Management
  - Graph embeddings for query answering and graph exploration
  - ML-based cardinality estimation and query cost prediction
  - Learning-based query optimisation and indexing
  - Integration of ML models (e.g., GNNs, LLMs) in graph data management systems
  - Neuro-symbolic methods combining reasoning and learning for graphs
- Graph Data Management System Architectures and Storage Techniques
  - Scalable storage and indexing for large-scale graphs
  - Distributed and cloud-based graph database architectures
  - High-performance graph transaction management
  - Query optimisation for complex graph workloads
  - LLMs for (knowledge) graph data management
- Performance, Scalability, and Benchmarking
  - Benchmarking frameworks for graph databases
  - Real-time processing of dynamic graphs
  - Efficient handling of large-scale heterogeneous graph data
  - Efficient graph query processing
  - Compression techniques for graphs and knowledge
- Security, Privacy, Quality, Responsibility, and Trust
  - Access control and security models for graph databases
  - Synthetic graph data generation
  - Differential privacy and anonymisation techniques for graph data
  - Techniques for increasing graph processing reliability
  - Graph-based blockchain and ledger technologies
  - Explainability, responsibility, and fairness in graph data management
  - Methods to evaluate and improve the quality of graph databases
- Graph Data Management In-Use and Experience Reports (✱)
  - Domain-specific graph databases for knowledge management and semantic web
  - Industrial applications: healthcare, finance, cybersecurity, logistics, etc.
  - Graphs for managing multimedia (text and images) data
  - Graph-based AI for digital twins and IoT
  - Usability in graph management: interaction, exploration, and visualisation
  - Graph data management education and training
Submission
We solicit submissions that present new results on (knowledge) graph data management. Submissions that do not present results in data management will be desk rejected. Submissions must be in scope, i.e., they must align with at least one of the above topic areas and situate themselves with respect to current and past research in the database community in general and in the selected topic(s) in particular. For example, submissions that purely advance machine learning approaches without relating to any data management aspect (e.g., scalability or efficiency) would not be considered in scope. Submissions will undergo a standard peer review process and must follow the TGDK instructions. Though there are no fixed upper or lower page limits, we expect submissions to be in the range of 10–20 pages using TGDK’s single-column LaTeX template (see Author Instructions).
(✱) Note that submissions to “Graph Data Management In-Use and Experience Reports” will be evaluated using the review criteria specific to Use-case articles: novelty, relevance, clarity, technical soundness, adoption, and insights.
While not required, authors are welcome to include supplementary materials, such as datasets, formal definitions, theorems, or tools, that help validate or support their submission. In addition to research articles, we also encourage submissions that focus on community resources (e.g., benchmarks, datasets, software systems) that may foster more impactful research. Authors interested in submitting a resource-focused paper are invited to write to the Editors-in-Chief with a brief description before submission.
TGDK is a Diamond Open Access journal: official versions of accepted papers (as accessible via DOI) are published by Dagstuhl Publishing and made freely available online, with no fees for either authors or readers.
Transparency and Availability
We encourage authors to follow the good practices on artifact availability put forward by other data management venues (e.g., https://www.vldb.org/pvldb/volumes/19/submission). Authors must submit links to supplemental material (e.g., code, data) to support the reproducibility of their results. Reviewers will evaluate this material and assess its openness, permanence, and usability. Supplemental material should be hosted in publicly accessible repositories (e.g., GitHub, Figshare, Dryad), not on personal websites. If such material cannot be shared, authors must provide a valid justification.
Guest Editors
- Stefania Dumbrava, ENSIIE & Télécom SudParis
- George Fletcher, Eindhoven University of Technology
- Olaf Hartig, Linköping University
- Matteo Lissandrini, University of Verona
- Riccardo Tommasini, INSA Lyon & LIRIS CNRS
Important Dates
- Submissions: July 1, 2025
- Author Notifications (first round): September 10, 2025
- Revisions: October 15, 2025
- Author Notifications (final): October 31, 2025
- Publication: Q4 2025
Note that all dates are AoE (Anywhere on Earth).
For inquiries, please contact the guest editors.
We look forward to your submissions!