These notes together form a semantic knowledge graph built with Logseq. They document my professional expertise, projects, skills, and career history. The knowledge graph is automatically exported to RDF, validated using SHACL shapes, and published as a static website, a PDF CV, and RDF data. The RDF data is available both as a Turtle (.ttl) serialization and through a SPARQL endpoint.
- Website: expertise.matdata.eu
- SPARQL Endpoint: jena.mkwadraat.be/expertise
- CV (PDF): expertise.matdata.eu/static/Mathias_Vanden_Auweele_CV.pdf
- RDF Data:
- matdata-expertise.ttl (cleaned & validated)
- matdata-expertise-raw.ttl (raw export)
- matdata-expertise-shacl.ttl (validation shapes)
This repository contains my personal knowledge management system, structured as a semantic knowledge graph using Logseq as the authoring environment. The graph models:
- Projects: Professional work and initiatives
- Techniques & Tools: Technologies, programming languages, frameworks
- Roles: Professional roles and responsibilities
- Jobs: Employment history with timeline
- Companies: Organizations I've worked with
- Talks & Presentations: Public speaking engagements
- Categories: SKOS-based taxonomies for organizing expertise
All content is authored in Logseq markdown pages with structured properties, automatically converted to RDF using the logseq-rdf-export tool, and validated against SHACL shapes to ensure data quality.
Logseq Pages (Markdown)
↓
logseq-rdf-export
↓
matdata-expertise-raw.ttl
↓
clean-graph.py (normalization)
↓
matdata-expertise.ttl
↓
validate-graph.py (SHACL validation)
↓
├─→ Static Website (Logseq SPA)
├─→ SPARQL Endpoint (Apache Jena Fuseki)
└─→ CV Generation (RenderCV)
- Logseq: Knowledge authoring with structured properties
- RDF Export: Custom tool to convert Logseq graph to RDF/Turtle
- Graph Cleaning: Python script to normalize predicates and map categories
- SHACL Validation: Ensures data quality and consistency
- CI/CD Pipeline: GitLab/GitHub CI automates the entire workflow
- Publishing: Multi-channel output (website, SPARQL, PDF CV)
- Python 3.12+
- Docker (for RDF export)
- Git
- Clone the repository:
git clone <repository-url>
cd Expertise
- Create a Python virtual environment:
python -m venv .venv
source .venv/bin/activate # On Windows: .venv\Scripts\Activate.ps1
pip install -r kg-actions/requirements.txt
Run the complete validation workflow locally:
# 1. Export Logseq to RDF
docker run -it -v ./:/data mathiasvda/logseq-rdf-export logseq-rdf-export matdata-expertise-raw.ttl --directory /data/
# 2. Clean and normalize the graph
python ./kg-actions/clean-graph.py
# 3. Validate against SHACL shapes
python ./kg-actions/validate-graph.py
Expected output for successful validation:
Number of triples in data graph: 2900
Number of triples in shacl graph: 168
Validation Report
Conforms: True
Edit pages in the pages/ directory following these conventions:
Projects:
public:: true
type:: [[Project]]
description:: Brief description
has-category:: Strategy
has-tagged-techniques:: #Python, #RDF
has-tagged-roles:: #Developer
is-featured:: Yes
during-job:: #[[Job: Independent railway data freelancer]]
Techniques:
public:: true
type:: [[Technique]]
self-estimated-proficiency:: Proficient
is-featured:: Yes
has-category:: Programming languages
See kg-actions/shapes-explanation.md for complete validation rules.
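The `key:: value` property syntax above is simple enough to extract with a regular expression. A hypothetical sketch (the real logseq-rdf-export tool handles much more, e.g. block nesting and tag links):

```python
import re

# Matches Logseq `key:: value` property lines, optionally bulleted.
PROP_RE = re.compile(r"^\s*(?:-\s*)?([\w-]+)::\s*(.+?)\s*$")

def parse_properties(page_text: str) -> dict[str, str]:
    """Extract top-level `key:: value` properties from a Logseq page."""
    props = {}
    for line in page_text.splitlines():
        m = PROP_RE.match(line)
        if m:
            props[m.group(1)] = m.group(2)
    return props

page = """\
public:: true
type:: [[Technique]]
self-estimated-proficiency:: Proficient
has-category:: Programming languages
"""
print(parse_properties(page))
```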
The GitLab CI pipeline consists of 5 stages:
- RDF Export: Convert Logseq to RDF using Docker image
- Validate: Clean graph and run SHACL validation
- CV: Generate PDF CV using RenderCV
- Pages: Build static website using Logseq Publish SPA
- Upload: Sync RDF data to SPARQL endpoint
The pipeline runs automatically on every commit to the main branch.
The knowledge graph uses these main entity types:
- d:Project - Professional projects
- d:Technique - Skills, tools, technologies
- d:Role - Professional roles
- d:Job - Employment history
- d:Company - Organizations
- d:Talk - Presentations and talks
- skos:Concept - Taxonomy categories
- skos:ConceptScheme - Category schemes
Key predicates:
- d:has-category - Link to SKOS category
- d:has-tagged-techniques - Technologies used in project
- d:has-tagged-roles - Roles performed in project
- d:self-estimated-proficiency - Skill level
- d:is-featured - Highlight important items
- d:during-job - Job context for projects
SHACL shapes enforce:
- Required properties (labels, categories, dates)
- Cardinality constraints (min/max counts)
- Data type validation (strings, booleans, dates, IRIs)
- Category membership in correct SKOS concept schemes
- Proficiency level consistency
- Relationship integrity
See kg-actions/matdata-expertise-shacl.ttl for complete shapes.
- clean-graph.py: Normalizes RDF graph by mapping properties and merging SKOS concepts
- validate-graph.py: Validates RDF graph against SHACL shapes
- formatter.py: Utilities for RDF formatting
- complete-cv.py: Generates CV YAML from RDF data for RenderCV
- logseq-rdf-export - Export tool for Logseq to RDF
- Yasgui - SPARQL GUI for querying the endpoint
- RenderCV - CV generation from YAML