kora 4 months ago
parent
commit
442adef136

+ 412 - 2
Basic_version/Main.ipynb

@@ -1,16 +1,426 @@
 {
  "cells": [
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "## PARSER ONTOLOGIE"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "#### INTRODUZIONE\n",
+    "Questo notebook (testato con Python 3.12) contiene una implementazione di base del codice che ci serve.\n",
+    "    - Input: un file .xlsx\n",
+    "    - Output: un file .rdf\n",
+    "\n",
+    "L'output finale è una definizione strutturata dell'ontologia descritta 'informalmente' nel file .xlsx iniziale. L'output è in formato RDF/XML invece che TTL per un discorso essenzialmente storico di implementazione. Il file .rdf può essere facilmente convertito in .ttl con strumenti esistenti, ad esempio noi lo abbiamo ottenuto usando il servizio pubblico WEBVOWL:\n",
+    "\n",
+    "https://service.tib.eu/webvowl/\n",
+    "\n",
+    "importando il file .rdf e riesportandolo come .ttl.\n",
+    "\n",
+    "Una breve nota riguardo alla lingua: i commenti nel codice ed i nomi delle variabili sono in inglese per un discorso di (eventuale) portabilità. I commenti e le descrizioni interne ai file sono invece in italiano per praticità, dato che tutto il personale coinvolto nel lavoro è italiano."
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "Il primo passo è definire alcuni import e variabili globali, in particolare i folder che devono contenere input e output e il nome file per l'ontologia."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 11,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "import csv\n",
+    "import json\n",
+    "\n",
+    "# BASIC CONFIGURATION\n",
+    "DATA_FOLDER = './data/'\n",
+    "OUTPUT_FOLDER = './output/'\n",
+    "ONTO_FILENAME = 'man_draft' # No extension!\n",
+    "\n",
+    "ent_filename = ONTO_FILENAME + '_entities.csv'\n",
+    "rel_filename = ONTO_FILENAME + '_relations.csv'"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "#### PARTE I: acquisizione dati\n",
+    "Il passo successivo è la conversione del file .xlsx in due file .csv.\n",
+    "\n",
+    "Per descrivere l'ontologia in modo semplice, abbiamo adottato una descrizione 'a grafo': ci aspettiamo che il file .xlsx contenga (almeno) due fogli, uno che descrive le Entità (i nodi del grafo) ed uno le Relazioni (gli spigoli del grafo).\n",
+    "\n",
+    "La descrizione è forse ingenua, ma è semplice da capire anche per i non esperti e funzionale allo scopo.\n",
+    "\n",
+    "Le schede con Entità e Relazioni vengono ricercate in base al nome, che è configurabile. Tutto il loro contenuto viene esportato in file .csv senza nessuna elaborazione. La lettura usa la libreria **openpyxl**, che è l'unico pacchetto non-standard usando nel Notebook."
+   ]
+  },
   {
    "cell_type": "code",
-   "execution_count": null,
+   "execution_count": 12,
    "metadata": {},
    "outputs": [],
+   "source": [
+    "# PART I: parse xlsx to (multiple) csv\n",
+    "\n",
+    "# CONFIGURATION\n",
+    "XLSX_FILENAME = ONTO_FILENAME + '.xlsx'\n",
+    "ENTITIES_SHEETNAME = 'Entità'\n",
+    "RELATIONS_SHEETNAME = 'Relazioni'\n",
+    "\n",
+    "# Import xlsx through openpyxl\n",
+    "import openpyxl as op\n",
+    "\n",
+    "input_data = op.load_workbook(DATA_FOLDER + XLSX_FILENAME)\n",
+    "\n",
+    "entities_sheet = input_data[ENTITIES_SHEETNAME]\n",
+    "relations_sheet = input_data[RELATIONS_SHEETNAME]\n",
+    "\n",
+    "# Export sheet data to csv\n",
+    "with open(DATA_FOLDER + ent_filename, 'w', encoding='utf-8') as out_file:\n",
+    "    writer = csv.writer(out_file)\n",
+    "    writer.writerows(entities_sheet.values)\n",
+    "\n",
+    "with open(DATA_FOLDER + rel_filename, 'w', encoding='utf-8') as out_file:\n",
+    "    writer = csv.writer(out_file)\n",
+    "    writer.writerows(relations_sheet.values)"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "#### PARTE II: pre-processing dei dati\n",
+    "Qui i dati dei CSV vengono letti e raccolti in una struttura leggermente più elaborata (più facile da riprocessare) che poi, per maggior visibilità, viene salvata in formato JSON. Il JSON in ogni caso è un file intermedio di lavoro che non ha nessuna importanza fondamentale, e può essere via via modificato in base alle esigenze.\n",
+    "\n",
+    "Non tutti i dati del CSV vengono letti, ma solo quelli corrispondenti a specifiche colonne, che possono essere definite all'inizio del parsing. I CSV possono avere o meno una intestazione (il booleano HEADER_ROW va settato di conseguenza).\n",
+    "\n",
+    "Se si preferisce, i CSV possono essere forniti direttamente dall'utente al posto del file .xlsx, e l'elaborazione può partire da qui saltando la Parte I."
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
    "source": []
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 13,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "# PART II: collect CSV data into a 'pre-ontology' structure\n",
+    "\n",
+    "# Read csv files produced by xlsx processing back in (OPTIONALLY: provide the csv files directly and start directly from here)\n",
+    "# From here on, the main objects will be the 'entities' and 'relations' lists of dicts which arise from CSV re-reading.\n",
+    "# To facilitate further processing, they will be arranged in a nested structure, which will be exported as a JSON file\n",
+    "\n",
+    "HEADER_ROW = True\n",
+    "\n",
+    "# Not difficult to add more keys (column names)\n",
+    "ENTITIES_COLUMN_LABEL = 'ENTITÀ'\n",
+    "ATTRIBUTES_COLUMN_LABEL = 'ATTRIBUTO (LITERAL)'\n",
+    "SAMEAS_COLUMN_LABEL = 'SAME AS'\n",
+    "#\n",
+    "RELATION_FIRST_COLUMN_LABEL = 'ENTITÀ 1'\n",
+    "RELATION_SECOND_COLUMN_LABEL = 'ENTITÀ 2'\n",
+    "RELATION_NAME_COLUMN_LABEL = 'NOME RELAZIONE'\n",
+    "INVERSE_RELATION_COLUMN_LABEL = 'NOME RELAZIONE INVERSA'\n",
+    "#\n",
+    "\n",
+    "\n",
+    "with open(DATA_FOLDER + ent_filename, 'r', encoding='utf-8') as in_file:\n",
+    "    if HEADER_ROW:\n",
+    "        reader = csv.DictReader(in_file)\n",
+    "    else:\n",
+    "        reader = csv.DictReader(in_file, fieldnames=[ENTITIES_COLUMN_LABEL, ATTRIBUTES_COLUMN_LABEL, SAMEAS_COLUMN_LABEL])\n",
+    "    entities = [row for row in reader]\n",
+    "\n",
+    "with open(DATA_FOLDER + rel_filename, 'r', encoding='utf-8') as in_file:\n",
+    "    if HEADER_ROW:\n",
+    "        reader = csv.DictReader(in_file)\n",
+    "    else:\n",
+    "        reader = csv.DictReader(in_file, fieldnames=[RELATION_FIRST_COLUMN_LABEL, RELATION_SECOND_COLUMN_LABEL, RELATION_NAME_COLUMN_LABEL, INVERSE_RELATION_COLUMN_LABEL])\n",
+    "    relations = [row for row in reader]\n",
+    "\n",
+    "\n",
+    "# Main function for this Part\n",
+    "def dict_lists_to_json(entities_local, relations_local):\n",
+    "\n",
+    "    entity = {}\n",
+    "\n",
+    "    current_entity = None\n",
+    "    for row in entities_local:\n",
+    "        entity_name = row.get(ENTITIES_COLUMN_LABEL)\n",
+    "        attribute_name = row.get(ATTRIBUTES_COLUMN_LABEL)\n",
+    "        same_as_row = row.get(SAMEAS_COLUMN_LABEL)\n",
+    "        same_as_list = same_as_row.split(',') if same_as_row else []\n",
+    "\n",
+    "        if entity_name:\n",
+    "            current_entity = entity_name\n",
+    "            entity[current_entity] = {}\n",
+    "\n",
+    "        if current_entity and attribute_name:\n",
+    "            if not entity[current_entity].get('Attributi'):\n",
+    "                entity[current_entity]['Attributi'] = []\n",
+    "            entity[current_entity]['Attributi'].append(attribute_name)\n",
+    "\n",
+    "        if current_entity and same_as_list:\n",
+    "            entity[current_entity]['Sinonimi'] = [s.strip() for s in same_as_list]\n",
+    "\n",
+    "    # Add subclass information\n",
+    "    for row in relations_local:\n",
+    "        entity1 = row.get(RELATION_FIRST_COLUMN_LABEL)\n",
+    "        entity2 = row.get(RELATION_SECOND_COLUMN_LABEL)\n",
+    "        label = row.get(RELATION_NAME_COLUMN_LABEL)\n",
+    "\n",
+    "        if label == \"is_subclass_of\":\n",
+    "            if entity1 in entity:\n",
+    "                entity[entity1][\"Sottoclasse di\"] = entity2\n",
+    "\n",
+    "    # Construct relations\n",
+    "    entity_relations = []\n",
+    "    for row in relations_local:\n",
+    "        if row[RELATION_NAME_COLUMN_LABEL] != \"is_subclass_of\":\n",
+    "            relation = {\n",
+    "                \"Entità 1\": row[RELATION_FIRST_COLUMN_LABEL],\n",
+    "                \"Entità 2\": row[RELATION_SECOND_COLUMN_LABEL],\n",
+    "                \"Etichetta\": row[RELATION_NAME_COLUMN_LABEL],\n",
+    "                \"Inversa\": row[INVERSE_RELATION_COLUMN_LABEL]\n",
+    "            }\n",
+    "            entity_relations.append(relation)\n",
+    "\n",
+    "    # Create final object structure\n",
+    "    data = {\n",
+    "        \"Entità\": entity,\n",
+    "        \"Relazioni\": entity_relations\n",
+    "    }\n",
+    "\n",
+    "    return data"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 14,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "# Do it!\n",
+    "json_data = dict_lists_to_json(entities, relations)\n",
+    "\n",
+    "# Export data\n",
+    "with open(OUTPUT_FOLDER + ONTO_FILENAME + '.json', 'w') as out_json:\n",
+    "    json.dump(json_data, out_json, indent=2, ensure_ascii=False)"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "#### PARTE III: produzione del file RDF\n",
+    "L'ultima parte dell'elaborazione usa l'oggetto strutturato prodotto dalla Parte II per creare un file RDF/XML valido (validato usando un'istanza pubblica di WEBVOWL) che descrive formalmente l'ontologia in corso di validazione.\n",
+    "\n",
+    "L'approccio è molto basico/ingenuo: si utilizza un template pre-generato (con l'aiuto del software Protegé) e si sostituiscono specifiche stringhe con le informazione ricavate dal JSON. L'idea era di produrre un esempio minimale funzionante, più che qualcosa di definitivo o particolarmente elaborato."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 15,
+   "metadata": {},
+   "outputs": [
+    {
+     "name": "stdout",
+     "output_type": "stream",
+     "text": [
+      "{'#any'}\n"
+     ]
+    }
+   ],
+   "source": [
+    "# PART III: from the structured entity/relation data obtained from part II, generate a RDF file describing the ontology\n",
+    "\n",
+    "# Re-read the data and do a consistency check\n",
+    "entity_set = set(json_data['Entità'].keys())\n",
+    "\n",
+    "entity_relations_set = {ent for rel in json_data['Relazioni'] for ent in [rel['Entità 1'], rel['Entità 2']]}\n",
+    "\n",
+    "# The check: ideally, all entities mentioned in the 'relations' object should be present in the 'entities' object, and the output of the following block should be null\n",
+    "if not entity_relations_set.issubset(entity_set):\n",
+    "    print(entity_relations_set.difference(entity_set))\n",
+    "\n",
+    "# However, we actually used an extra placeholder in 'relations', '#any', which stands for any entity (any class); the expected output is thus a list containing only '#any'"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 16,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "# RDF Templates\n",
+    "RDF_MAIN_TEMPLATE = 'template.rdf'\n",
+    "with open(DATA_FOLDER + RDF_MAIN_TEMPLATE, 'r') as in_file:\n",
+    "    RAW_RDF = in_file.read()\n",
+    "\n",
+    "# RDF snippets; info will replace placeholder tags (in uppercase between '#')\n",
+    "ENTITY_TEMPLATE = '''\n",
+    "    <!-- http://www.h2iosc.it/onto##NAME# -->\n",
+    "\n",
+    "    <owl:Class rdf:about=\"&h2iosc;#NAME#\">\n",
+    "        <rdfs:label>#LABEL#</rdfs:label>\n",
+    "        <rdfs:subClassOf>#PARENT#</rdfs:subClassOf>\n",
+    "    </owl:Class>\n",
+    "'''\n",
+    "SUBCLASS_STRING = \"        <rdfs:subClassOf>#PARENT#</rdfs:subClassOf>\\n\"\n",
+    "\n",
+    "OBJECT_PROPERTY_TEMPLATE = '''\n",
+    "    <!-- http://www.h2iosc.it/onto##NAME# -->\n",
+    "\n",
+    "    <owl:ObjectProperty rdf:about=\"&h2iosc;#NAME#\">\n",
+    "        <rdfs:label>#LABEL#</rdfs:label>\n",
+    "        <rdfs:range rdf:resource=\"&h2iosc;#RANGE#\"/>\n",
+    "        <rdfs:domain rdf:resource=\"&h2iosc;#DOMAIN#\"/>\n",
+    "    </owl:ObjectProperty>\n",
+    "'''\n",
+    "\n",
+    "OBJECT_PROPERTY_INVERSE_TEMPLATE = '''\n",
+    "    <!-- http://www.h2iosc.it/onto##NAME# -->\n",
+    "\n",
+    "    <owl:ObjectProperty rdf:about=\"&h2iosc;#NAME#\">\n",
+    "        <rdfs:label>#LABEL#</rdfs:label>\n",
+    "        <owl:inverseOf rdf:resource=\"&h2iosc;#INV#\"/>\n",
+    "        <rdfs:range rdf:resource=\"&h2iosc;#RANGE#\"/>\n",
+    "        <rdfs:domain rdf:resource=\"&h2iosc;#DOMAIN#\"/>\n",
+    "    </owl:ObjectProperty>\n",
+    "'''\n",
+    "\n",
+    "DATATYPE_PROPERTY_TEMPLATE = '''\n",
+    "    <!-- http://www.h2iosc.it/onto##NAME# -->\n",
+    "\n",
+    "    <owl:DatatypeProperty rdf:about=\"&h2iosc;#NAME#\">\n",
+    "        <rdfs:label>#LABEL#</rdfs:label>\n",
+    "        <rdfs:domain rdf:resource=\"&h2iosc;#DOMAIN#\"/>\n",
+    "    </owl:DatatypeProperty>\n",
+    "'''"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 17,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "# Utility\n",
+    "def normalize_label(label):\n",
+    "    return label.lower().replace(' ', '_').replace('à', 'a').replace('è', 'e').replace('é', 'e').replace('ì', 'i').replace('ò', 'o').replace('ù', 'u')\n",
+    "\n",
+    "\n",
+    "# Main function for this part\n",
+    "def create_rdf(data):\n",
+    "    entities_rdf_list = []\n",
+    "    datatype_properties_rdf_list = []\n",
+    "    for label, ent in data['Entità'].items():\n",
+    "        \n",
+    "        entity_name = normalize_label(label)\n",
+    "        entity_rdf = ENTITY_TEMPLATE.replace('#LABEL#', label).replace('#NAME#', entity_name)\n",
+    "\n",
+    "        # Subclasses\n",
+    "        if 'Sottoclasse di' in ent.keys():\n",
+    "            parent = ent['Sottoclasse di']\n",
+    "            data['Relazioni'].append({\"Entità 1\": label,\n",
+    "                \"Entità 2\": parent,\n",
+    "                \"Etichetta\": \"is_subclass_of\", \"Inversa\": \"is_superclass_of\"})\n",
+    "            entity_rdf = entity_rdf.replace('#PARENT#', normalize_label(parent))\n",
+    "        else:\n",
+    "            entity_rdf = entity_rdf.replace(SUBCLASS_STRING, '')\n",
+    "\n",
+    "        entities_rdf_list.append(entity_rdf)\n",
+    "        \n",
+    "        if not ent.get('Attributi'):\n",
+    "            continue\n",
+    "        for datatype_label in ent['Attributi']:\n",
+    "            datatype_name = normalize_label(datatype_label)\n",
+    "            datatype_properties_rdf_list.append(\n",
+    "                DATATYPE_PROPERTY_TEMPLATE.replace('#LABEL#', datatype_label).replace(\n",
+    "                    '#NAME#', datatype_name\n",
+    "                ).replace('#DOMAIN#', entity_name)\n",
+    "            )\n",
+    "\n",
+    "    relations_rdf_list = []\n",
+    "    for rel in data['Relazioni']:\n",
+    "        label = rel['Etichetta']\n",
+    "        inverse_label = rel['Inversa']\n",
+    "        domain = normalize_label(rel['Entità 1'])\n",
+    "        range1 = normalize_label(rel['Entità 2'])\n",
+    "        name = domain + '_' + normalize_label(label) + '_' + range1\n",
+    "        inverse_name = range1 + '_' + normalize_label(inverse_label) + '_' + domain\n",
+    "        #\n",
+    "        relation_rdf = OBJECT_PROPERTY_TEMPLATE.replace('#NAME#', name).replace('#LABEL#', label).replace('#DOMAIN#', domain).replace('#RANGE#', range1)\n",
+    "        #\n",
+    "        relation_inverse_rdf = OBJECT_PROPERTY_INVERSE_TEMPLATE.replace('#NAME#', inverse_name).replace('#LABEL#', inverse_label).replace('#DOMAIN#', range1).replace('#RANGE#', domain).replace('#INV#', name)\n",
+    "        #\n",
+    "        relation_full_rdf = relation_rdf + '\\n\\n\\n' + relation_inverse_rdf\n",
+    "        relations_rdf_list.append(relation_full_rdf)\n",
+    "    \n",
+    "    to_out = RAW_RDF.replace(ENTITY_TEMPLATE, '\\n\\n\\n'.join(entities_rdf_list)).replace(DATATYPE_PROPERTY_TEMPLATE, '\\n\\n\\n'.join(datatype_properties_rdf_list)\n",
+    "    ).replace(OBJECT_PROPERTY_INVERSE_TEMPLATE, '\\n\\n\\n'.join(relations_rdf_list))\n",
+    "\n",
+    "    return to_out\n"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 18,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "# Do it!\n",
+    "rdf_data = create_rdf(json_data)\n",
+    "\n",
+    "# Export\n",
+    "with open(OUTPUT_FOLDER + ONTO_FILENAME + '.rdf', 'w') as out_file:\n",
+    "    out_file.write(rdf_data)"
+   ]
+  },
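+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "*Optional (editor's sketch, not part of the original workflow):* the next cell shows one possible way to convert the exported .rdf to .ttl directly in Python with the **rdflib** package (assumed to be installed separately). The documented route remains the WEBVOWL service described in Part IV below."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "# OPTIONAL sketch (assumes rdflib is installed: pip install rdflib); not part of the original workflow\n",
+    "from rdflib import Graph\n",
+    "\n",
+    "g = Graph()\n",
+    "# Parse the RDF/XML file produced above...\n",
+    "g.parse(OUTPUT_FOLDER + ONTO_FILENAME + '.rdf', format='xml')\n",
+    "# ...and re-serialize it as Turtle\n",
+    "g.serialize(destination=OUTPUT_FOLDER + ONTO_FILENAME + '.ttl', format='turtle')"
+   ]
+  },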
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "#### PARTE IV: il file turtle\n",
+    "Non abbiamo scritto codice per creare il .ttl -- questo resta tra i desiderata -- ma un file turtle può essere abbastanza facilmente generato usando software di terze parti, ad esempio l'istanza pubblica di WEBVOWL presente all'indirizzo\n",
+    "\n",
+    "https://service.tib.eu/webvowl/ ,\n",
+    "\n",
+    "il che serve anche per validare l'output prodotto."
+   ]
   }
  ],
  "metadata": {
+  "kernelspec": {
+   "display_name": "venv",
+   "language": "python",
+   "name": "python3"
+  },
   "language_info": {
-   "name": "python"
+   "codemirror_mode": {
+    "name": "ipython",
+    "version": 3
+   },
+   "file_extension": ".py",
+   "mimetype": "text/x-python",
+   "name": "python",
+   "nbconvert_exporter": "python",
+   "pygments_lexer": "ipython3",
+   "version": "3.12.3"
   }
  },
  "nbformat": 4,

+ 460 - 0
CIDOC_version/Notebook_CIDOC.ipynb

@@ -0,0 +1,460 @@
+{
+ "cells": [
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "## PARSER ONTOLOGIE"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "#### INTRODUZIONE\n",
+    "Questo notebook (testato con Python 3.12) contiene una implementazione di base del codice che ci serve.\n",
+    "    - Input: un file .xlsx\n",
+    "    - Output: un file .rdf\n",
+    "\n",
+    "L'output finale è una definizione strutturata dell'ontologia descritta 'informalmente' nel file .xlsx iniziale. L'output è in formato RDF/XML invece che TTL per un discorso essenzialmente storico di implementazione. Il file .rdf può essere facilmente convertito in .ttl con strumenti esistenti, ad esempio noi lo abbiamo ottenuto usando il servizio pubblico WEBVOWL:\n",
+    "\n",
+    "https://service.tib.eu/webvowl/\n",
+    "\n",
+    "importando il file .rdf e riesportandolo come .ttl.\n",
+    "\n",
+    "Una breve nota riguardo alla lingua: i commenti nel codice ed i nomi delle variabili sono in inglese per un discorso di (eventuale) portabilità. I commenti e le descrizioni interne ai file sono invece in italiano per praticità, dato che tutto il personale coinvolto nel lavoro è italiano."
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "Il primo passo è definire alcuni import e variabili globali, in particolare i folder che devono contenere input e output e il nome file per l'ontologia."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 9,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "import csv\n",
+    "import json\n",
+    "\n",
+    "# BASIC CONFIGURATION\n",
+    "DATA_FOLDER = './data/'\n",
+    "OUTPUT_FOLDER = './output/'\n",
+    "ONTO_FILENAME = 'man_draft_CIDOC' # No extension!\n",
+    "\n",
+    "ent_filename = ONTO_FILENAME + '_entities.csv'\n",
+    "rel_filename = ONTO_FILENAME + '_relations.csv'"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "#### PARTE I: acquisizione dati\n",
+    "Il passo successivo è la conversione del file .xlsx in due file .csv.\n",
+    "\n",
+    "Per descrivere l'ontologia in modo semplice, abbiamo adottato una descrizione 'a grafo': ci aspettiamo che il file .xlsx contenga (almeno) due fogli, uno che descrive le Entità (i nodi del grafo) ed uno le Relazioni (gli spigoli del grafo).\n",
+    "\n",
+    "La descrizione è forse ingenua, ma è semplice da capire anche per i non esperti e funzionale allo scopo.\n",
+    "\n",
+    "Le schede con Entità e Relazioni vengono ricercate in base al nome, che è configurabile. Tutto il loro contenuto viene esportato in file .csv senza nessuna elaborazione. La lettura usa la libreria **openpyxl**, che è l'unico pacchetto non-standard usando nel Notebook."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 10,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "# PART I: parse xlsx to (multiple) csv\n",
+    "\n",
+    "# CONFIGURATION\n",
+    "XLSX_FILENAME = ONTO_FILENAME + '.xlsx'\n",
+    "ENTITIES_SHEETNAME = 'Entità'\n",
+    "RELATIONS_SHEETNAME = 'Relazioni'\n",
+    "\n",
+    "# Import xlsx through openpyxl\n",
+    "import openpyxl as op\n",
+    "\n",
+    "input_data = op.load_workbook(DATA_FOLDER + XLSX_FILENAME)\n",
+    "\n",
+    "entities_sheet = input_data[ENTITIES_SHEETNAME]\n",
+    "relations_sheet = input_data[RELATIONS_SHEETNAME]\n",
+    "\n",
+    "# Export sheet data to csv\n",
+    "with open(DATA_FOLDER + ent_filename, 'w', encoding='utf-8') as out_file:\n",
+    "    writer = csv.writer(out_file)\n",
+    "    writer.writerows(entities_sheet.values)\n",
+    "\n",
+    "with open(DATA_FOLDER + rel_filename, 'w', encoding='utf-8') as out_file:\n",
+    "    writer = csv.writer(out_file)\n",
+    "    writer.writerows(relations_sheet.values)"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "#### PARTE II: pre-processing dei dati\n",
+    "Qui i dati dei CSV vengono letti e raccolti in una struttura leggermente più elaborata (più facile da riprocessare) che poi, per maggior visibilità, viene salvata in formato JSON. Il JSON in ogni caso è un file intermedio di lavoro che non ha nessuna importanza fondamentale, e può essere via via modificato in base alle esigenze.\n",
+    "\n",
+    "Non tutti i dati del CSV vengono letti, ma solo quelli corrispondenti a specifiche colonne, che possono essere definite all'inizio del parsing. I CSV possono avere o meno una intestazione (il booleano HEADER_ROW va settato di conseguenza).\n",
+    "\n",
+    "Se si preferisce, i CSV possono essere forniti direttamente dall'utente al posto del file .xlsx, e l'elaborazione può partire da qui saltando la Parte I."
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": []
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 11,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "# PART II: collect CSV data into a 'pre-ontology' structure\n",
+    "\n",
+    "# Read csv files produced by xlsx processing back in (OPTIONALLY: provide the csv files directly and start directly from here)\n",
+    "# From here on, the main objects will be the 'entities' and 'relations' lists of dicts which arise from CSV re-reading.\n",
+    "# To facilitate further processing, they will be arranged in a nested structure, which will be exported as a JSON file\n",
+    "\n",
+    "HEADER_ROW = True\n",
+    "\n",
+    "# Not difficult to add more keys (column names)\n",
+    "ENTITIES_COLUMN_LABEL = 'ENTITÀ'\n",
+    "ATTRIBUTES_COLUMN_LABEL = 'ATTRIBUTO (LITERAL)'\n",
+    "SAMEAS_COLUMN_LABEL = 'SAME AS'\n",
+    "#\n",
+    "RELATION_FIRST_COLUMN_LABEL = 'ENTITÀ 1'\n",
+    "RELATION_SECOND_COLUMN_LABEL = 'ENTITÀ 2'\n",
+    "RELATION_NAME_COLUMN_LABEL = 'NOME RELAZIONE'\n",
+    "INVERSE_RELATION_COLUMN_LABEL = 'NOME RELAZIONE INVERSA'\n",
+    "#\n",
+    "CIDOC_COLUMN_LABEL = 'CIDOC-LINK'\n",
+    "\n",
+    "with open(DATA_FOLDER + ent_filename, 'r', encoding='utf-8') as in_file:\n",
+    "    if HEADER_ROW:\n",
+    "        reader = csv.DictReader(in_file)\n",
+    "    else:\n",
+    "        reader = csv.DictReader(in_file, fieldnames=[ENTITIES_COLUMN_LABEL, ATTRIBUTES_COLUMN_LABEL, SAMEAS_COLUMN_LABEL, CIDOC_COLUMN_LABEL])\n",
+    "    entities = [row for row in reader]\n",
+    "\n",
+    "with open(DATA_FOLDER + rel_filename, 'r', encoding='utf-8') as in_file:\n",
+    "    if HEADER_ROW:\n",
+    "        reader = csv.DictReader(in_file)\n",
+    "    else:\n",
+    "        reader = csv.DictReader(in_file, fieldnames=[RELATION_FIRST_COLUMN_LABEL, RELATION_SECOND_COLUMN_LABEL, RELATION_NAME_COLUMN_LABEL, INVERSE_RELATION_COLUMN_LABEL, CIDOC_COLUMN_LABEL])\n",
+    "    relations = [row for row in reader]\n",
+    "\n",
+    "\n",
+    "# Main function for this Part\n",
+    "def dict_lists_to_json(entities_local, relations_local):\n",
+    "\n",
+    "    entity = {}\n",
+    "\n",
+    "    current_entity = None\n",
+    "    for row in entities_local:\n",
+    "        entity_name = row.get(ENTITIES_COLUMN_LABEL)\n",
+    "        attribute_name = row.get(ATTRIBUTES_COLUMN_LABEL)\n",
+    "        same_as_row = row.get(SAMEAS_COLUMN_LABEL)\n",
+    "        same_as_list = same_as_row.split(',') if same_as_row else []\n",
+    "        cidoc_link = row.get(CIDOC_COLUMN_LABEL)\n",
+    "\n",
+    "        if entity_name:\n",
+    "            current_entity = entity_name\n",
+    "            entity[current_entity] = {}\n",
+    "            if cidoc_link:\n",
+    "                entity[current_entity]['Classe CIDOC proposta'] = cidoc_link\n",
+    "\n",
+    "        if current_entity and attribute_name:\n",
+    "            if not entity[current_entity].get('Attributi'):\n",
+    "                entity[current_entity]['Attributi'] = []\n",
+    "            attribute = {'Nome': attribute_name}\n",
+    "            if cidoc_link:\n",
+    "                attribute['Classe CIDOC proposta'] = cidoc_link\n",
+    "            entity[current_entity]['Attributi'].append(attribute)\n",
+    "\n",
+    "        if current_entity and same_as_list:\n",
+    "            entity[current_entity]['Sinonimi'] = [s.strip() for s in same_as_list]\n",
+    "\n",
+    "    # Add subclass information\n",
+    "    for row in relations_local:\n",
+    "        entity1 = row.get(RELATION_FIRST_COLUMN_LABEL)\n",
+    "        entity2 = row.get(RELATION_SECOND_COLUMN_LABEL)\n",
+    "        label = row.get(RELATION_NAME_COLUMN_LABEL)\n",
+    "\n",
+    "        if label == \"is_subclass_of\":\n",
+    "            if entity1 in entity:\n",
+    "                entity[entity1][\"Sottoclasse di\"] = entity2\n",
+    "\n",
+    "    # Construct relations\n",
+    "    entity_relations = []\n",
+    "    for row in relations_local:\n",
+    "        if row[RELATION_NAME_COLUMN_LABEL] != \"is_subclass_of\":\n",
+    "            relation = {\n",
+    "                \"Entità 1\": row[RELATION_FIRST_COLUMN_LABEL],\n",
+    "                \"Entità 2\": row[RELATION_SECOND_COLUMN_LABEL],\n",
+    "                \"Etichetta\": row[RELATION_NAME_COLUMN_LABEL],\n",
+    "                \"Inversa\": row[INVERSE_RELATION_COLUMN_LABEL],\n",
+    "                \"Proprietà CIDOC proposta\": row.get(CIDOC_COLUMN_LABEL)\n",
+    "            }\n",
+    "            entity_relations.append(relation)\n",
+    "\n",
+    "    # Create final object structure\n",
+    "    data = {\n",
+    "        \"Entità\": entity,\n",
+    "        \"Relazioni\": entity_relations\n",
+    "    }\n",
+    "\n",
+    "    return data"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 12,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "# Do it!\n",
+    "json_data = dict_lists_to_json(entities, relations)\n",
+    "\n",
+    "# Export data\n",
+    "with open(OUTPUT_FOLDER + ONTO_FILENAME + '.json', 'w') as out_json:\n",
+    "    json.dump(json_data, out_json, indent=2, ensure_ascii=False)"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "#### PARTE III: produzione del file RDF\n",
+    "L'ultima parte dell'elaborazione usa l'oggetto strutturato prodotto dalla Parte II per creare un file RDF/XML valido (validato usando un'istanza pubblica di WEBVOWL) che descrive formalmente l'ontologia in corso di validazione.\n",
+    "\n",
+    "L'approccio è molto basico/ingenuo: si utilizza un template pre-generato (con l'aiuto del software Protegé) e si sostituiscono specifiche stringhe con le informazione ricavate dal JSON. L'idea era di produrre un esempio minimale funzionante, più che qualcosa di definitivo o particolarmente elaborato."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 13,
+   "metadata": {},
+   "outputs": [
+    {
+     "name": "stdout",
+     "output_type": "stream",
+     "text": [
+      "{'#any'}\n"
+     ]
+    }
+   ],
+   "source": [
+    "# PART III: from the structured entity/relation data obtained from part II, generate a RDF file describing the ontology\n",
+    "\n",
+    "# Re-read the data and do a consistency check\n",
+    "entity_set = set(json_data['Entità'].keys())\n",
+    "\n",
+    "entity_relations_set = {ent for rel in json_data['Relazioni'] for ent in [rel['Entità 1'], rel['Entità 2']]}\n",
+    "\n",
+    "# The check: ideally, all entities mentioned in the 'relations' object should be present in the 'entities' object, and the output of the following block should be null\n",
+    "if not entity_relations_set.issubset(entity_set):\n",
+    "    print(entity_relations_set.difference(entity_set))\n",
+    "\n",
+    "# However, we actually used an extra placeholder in 'relations', '#any', which stands for any entity (any class); the expected output is thus a list containing only '#any'"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 14,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "# RDF Templates\n",
+    "RDF_MAIN_TEMPLATE = 'template.rdf'\n",
+    "with open(DATA_FOLDER + RDF_MAIN_TEMPLATE, 'r') as in_file:\n",
+    "    RAW_RDF = in_file.read()\n",
+    "\n",
+    "# RDF snippets; info will replace placeholder tags (in uppercase between '#')\n",
+    "ENTITY_TEMPLATE = '''\n",
+    "    <!-- http://www.h2iosc.it/onto##NAME# -->\n",
+    "\n",
+    "    <owl:Class rdf:about=\"&h2iosc;#NAME#\">\n",
+    "        <rdfs:label>#LABEL#</rdfs:label>\n",
+    "        <rdfs:subClassOf>#PARENT#</rdfs:subClassOf>\n",
+    "        <rdfs:isDefinedBy rdf:resource=\"#URI#\"/>\n",
+    "    </owl:Class>\n",
+    "'''\n",
+    "SUBCLASS_STRING = \"        <rdfs:subClassOf>#PARENT#</rdfs:subClassOf>\\n\"\n",
+    "CLASS_DEFINED_STRING = '        <rdfs:isDefinedBy rdf:resource=\"#URI#\"/>\\n'\n",
+    "\n",
+    "OBJECT_PROPERTY_TEMPLATE = '''\n",
+    "    <!-- http://www.h2iosc.it/onto##NAME# -->\n",
+    "\n",
+    "    <owl:ObjectProperty rdf:about=\"&h2iosc;#NAME#\">\n",
+    "        <rdfs:label>#LABEL#</rdfs:label>\n",
+    "        <rdfs:range rdf:resource=\"&h2iosc;#RANGE#\"/>\n",
+    "        <rdfs:domain rdf:resource=\"&h2iosc;#DOMAIN#\"/>\n",
+    "        <rdfs:isDefinedBy rdf:resource=\"#URI#\"/>\n",
+    "    </owl:ObjectProperty>\n",
+    "'''\n",
+    "\n",
+    "OBJECT_PROPERTY_INVERSE_TEMPLATE = '''\n",
+    "    <!-- http://www.h2iosc.it/onto##NAME# -->\n",
+    "\n",
+    "    <owl:ObjectProperty rdf:about=\"&h2iosc;#NAME#\">\n",
+    "        <rdfs:label>#LABEL#</rdfs:label>\n",
+    "        <owl:inverseOf rdf:resource=\"&h2iosc;#INV#\"/>\n",
+    "        <rdfs:range rdf:resource=\"&h2iosc;#RANGE#\"/>\n",
+    "        <rdfs:domain rdf:resource=\"&h2iosc;#DOMAIN#\"/>\n",
+    "        <rdfs:isDefinedBy rdf:resource=\"#URI#\"/>\n",
+    "    </owl:ObjectProperty>\n",
+    "'''\n",
+    "OBJECT_DEFINED_STRING = '        <rdfs:isDefinedBy rdf:resource=\"#URI#\"/>\\n'\n",
+    "\n",
+    "DATATYPE_PROPERTY_TEMPLATE = '''\n",
+    "    <!-- http://www.h2iosc.it/onto##NAME# -->\n",
+    "\n",
+    "    <owl:DatatypeProperty rdf:about=\"&h2iosc;#NAME#\" rdf:isDefinedBy=\"#URI#\">\n",
+    "        <rdfs:label>#LABEL#</rdfs:label>\n",
+    "        <rdfs:domain rdf:resource=\"&h2iosc;#DOMAIN#\"/>\n",
+    "        <rdfs:isDefinedBy rdf:resource=\"#URI#\"/>\n",
+    "    </owl:DatatypeProperty>\n",
+    "'''\n",
+    "DATATYPE_DEFINED_STRING = '        <rdfs:isDefinedBy rdf:resource=\"#URI#\"/>\\n'"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 15,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "# Utility\n",
+    "def normalize_label(label):\n",
+    "    return label.lower().replace(' ', '_').replace('à', 'a').replace('è', 'e').replace('é', 'e').replace('ì', 'i').replace('ò', 'o').replace('ù', 'u')\n",
+    "\n",
+    "\n",
+    "# Main function for this part\n",
+    "def create_rdf(data):\n",
+    "    entities_rdf_list = []\n",
+    "    datatype_properties_rdf_list = []\n",
+    "    for label, ent in data['Entità'].items():\n",
+    "        \n",
+    "        entity_name = normalize_label(label)\n",
+    "        entity_rdf = ENTITY_TEMPLATE.replace('#LABEL#', label).replace('#NAME#', entity_name)\n",
+    "        #\n",
+    "        cidoc_class = ent.get('Classe CIDOC proposta')\n",
+    "        if cidoc_class:\n",
+    "            entity_rdf = entity_rdf.replace('#URI#', cidoc_class)\n",
+    "        else:\n",
+    "            entity_rdf = entity_rdf.replace(CLASS_DEFINED_STRING, '')\n",
+    "\n",
+    "        # Subclasses\n",
+    "        if 'Sottoclasse di' in ent.keys():\n",
+    "            parent = ent['Sottoclasse di']\n",
+    "            data['Relazioni'].append({\"Entità 1\": label,\n",
+    "                \"Entità 2\": parent,\n",
+    "                \"Etichetta\": \"is_subclass_of\", \"Inversa\": \"is_superclass_of\"})\n",
+    "            entity_rdf = entity_rdf.replace('#PARENT#', normalize_label(parent))\n",
+    "        else:\n",
+    "            entity_rdf = entity_rdf.replace(SUBCLASS_STRING, '')\n",
+    "\n",
+    "        entities_rdf_list.append(entity_rdf)\n",
+    "        \n",
+    "        if not ent.get('Attributi'):\n",
+    "            continue\n",
+    "        for datatype in ent['Attributi']:\n",
+    "            datatype_label = datatype['Nome']\n",
+    "            datatype_name = normalize_label(datatype_label)\n",
+    "            #\n",
+    "            datatype_rdf = DATATYPE_PROPERTY_TEMPLATE.replace('#LABEL#', datatype_label).replace('#NAME#', datatype_name).replace('#DOMAIN#', entity_name)\n",
+    "            #\n",
+    "            datatype_cidoc_class = datatype.get('Classe CIDOC proposta')\n",
+    "            if datatype_cidoc_class:\n",
+    "                datatype_rdf = datatype_rdf.replace('#URI#', datatype_cidoc_class)\n",
+    "            else:\n",
+    "                datatype_rdf = datatype_rdf.replace(DATATYPE_DEFINED_STRING, '')\n",
+    "            #\n",
+    "            datatype_properties_rdf_list.append(datatype_rdf)\n",
+    "\n",
+    "    relations_rdf_list = []\n",
+    "    for rel in data['Relazioni']:\n",
+    "        label = rel['Etichetta']\n",
+    "        inverse_label = rel['Inversa']\n",
+    "        domain = normalize_label(rel['Entità 1'])\n",
+    "        range1 = normalize_label(rel['Entità 2'])\n",
+    "        name = domain + '_' + normalize_label(label) + '_' + range1\n",
+    "        inverse_name = range1 + '_' + normalize_label(inverse_label) + '_' + domain\n",
+    "        #\n",
+    "        relation_rdf = OBJECT_PROPERTY_TEMPLATE.replace('#NAME#', name).replace('#LABEL#', label).replace('#DOMAIN#', domain).replace('#RANGE#', range1)\n",
+    "        #\n",
+    "        relation_cidoc_class = rel.get('Proprietà CIDOC proposta')\n",
+    "        if relation_cidoc_class:\n",
+    "            relation_rdf = relation_rdf.replace('#URI#', relation_cidoc_class)\n",
+    "        else:\n",
+    "            relation_rdf = relation_rdf.replace(OBJECT_DEFINED_STRING, '')\n",
+    "        #\n",
+    "        relation_inverse_rdf = OBJECT_PROPERTY_INVERSE_TEMPLATE.replace('#NAME#', inverse_name).replace('#LABEL#', inverse_label).replace('#DOMAIN#', range1).replace('#RANGE#', domain).replace('#INV#', name).replace(CLASS_DEFINED_STRING, '')\n",
+    "        #\n",
+    "        relation_full_rdf = relation_rdf + '\\n\\n\\n' + relation_inverse_rdf\n",
+    "        relations_rdf_list.append(relation_full_rdf)\n",
+    "    \n",
+    "    to_out = RAW_RDF.replace(ENTITY_TEMPLATE, '\\n\\n\\n'.join(entities_rdf_list)).replace(DATATYPE_PROPERTY_TEMPLATE, '\\n\\n\\n'.join(datatype_properties_rdf_list)\n",
+    "    ).replace(OBJECT_PROPERTY_INVERSE_TEMPLATE, '\\n\\n\\n'.join(relations_rdf_list))\n",
+    "\n",
+    "    return to_out\n"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 16,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "# Do it!\n",
+    "rdf_data = create_rdf(json_data)\n",
+    "\n",
+    "# Export\n",
+    "with open(OUTPUT_FOLDER + ONTO_FILENAME + '.rdf', 'w') as out_file:\n",
+    "    out_file.write(rdf_data)"
+   ]
+  },
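+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "*Optional (editor's sketch, not part of the original workflow):* as in the basic notebook, the next cell shows one possible way to convert the exported .rdf to .ttl directly in Python with the **rdflib** package (assumed to be installed separately). The documented route remains the WEBVOWL service described in Part IV below."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "# OPTIONAL sketch (assumes rdflib is installed: pip install rdflib); not part of the original workflow\n",
+    "from rdflib import Graph\n",
+    "\n",
+    "g = Graph()\n",
+    "# Parse the RDF/XML file produced above...\n",
+    "g.parse(OUTPUT_FOLDER + ONTO_FILENAME + '.rdf', format='xml')\n",
+    "# ...and re-serialize it as Turtle\n",
+    "g.serialize(destination=OUTPUT_FOLDER + ONTO_FILENAME + '.ttl', format='turtle')"
+   ]
+  },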
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "#### PARTE IV: il file turtle\n",
+    "Non abbiamo scritto codice per creare il .ttl -- questo resta tra i desiderata -- ma un file turtle può essere abbastanza facilmente generato usando software di terze parti, ad esempio l'istanza pubblica di WEBVOWL presente all'indirizzo\n",
+    "\n",
+    "https://service.tib.eu/webvowl/ ,\n",
+    "\n",
+    "il che serve anche per validare l'output prodotto."
+   ]
+  }
+ ],
+ "metadata": {
+  "kernelspec": {
+   "display_name": "venv",
+   "language": "python",
+   "name": "python3"
+  },
+  "language_info": {
+   "codemirror_mode": {
+    "name": "ipython",
+    "version": 3
+   },
+   "file_extension": ".py",
+   "mimetype": "text/x-python",
+   "name": "python",
+   "nbconvert_exporter": "python",
+   "pygments_lexer": "ipython3",
+   "version": "3.12.3"
+  }
+ },
+ "nbformat": 4,
+ "nbformat_minor": 2
+}

+ 1 - 1
CIDOC_version/ontology_parser_CIDOC.py

@@ -263,7 +263,7 @@ def create_rdf(data):
         else:
             relation_rdf = relation_rdf.replace(OBJECT_DEFINED_STRING, '')
         #
-        relation_inverse_rdf = OBJECT_PROPERTY_INVERSE_TEMPLATE.replace('#NAME#', inverse_name).replace('#LABEL#', inverse_label).replace('#DOMAIN#', range1).replace('#RANGE#', domain).replace('#INV#', name)
+        relation_inverse_rdf = OBJECT_PROPERTY_INVERSE_TEMPLATE.replace('#NAME#', inverse_name).replace('#LABEL#', inverse_label).replace('#DOMAIN#', range1).replace('#RANGE#', domain).replace('#INV#', name).replace(CLASS_DEFINED_STRING, '')
         #
         relation_full_rdf = relation_rdf + '\n\n\n' + relation_inverse_rdf
         relations_rdf_list.append(relation_full_rdf)

+ 12 - 35
CIDOC_version/output/man_draft_CIDOC.rdf

@@ -38,6 +38,7 @@
         <rdfs:label>is_composed_by</rdfs:label>
         <rdfs:range rdf:resource="&h2iosc;unita_codicologica"/>
         <rdfs:domain rdf:resource="&h2iosc;manoscritto"/>
+        <rdfs:isDefinedBy rdf:resource="http://www.cidoc-crm.org/cidoc-crm/P46_is_composed_of"/>
     </owl:ObjectProperty>
 
 
@@ -50,7 +51,6 @@
         <owl:inverseOf rdf:resource="&h2iosc;manoscritto_is_composed_by_unita_codicologica"/>
         <rdfs:range rdf:resource="&h2iosc;manoscritto"/>
         <rdfs:domain rdf:resource="&h2iosc;unita_codicologica"/>
-        <rdfs:isDefinedBy rdf:resource="#URI#"/>
     </owl:ObjectProperty>
 
 
@@ -74,7 +74,6 @@
         <owl:inverseOf rdf:resource="&h2iosc;manoscritto_originated_in_origine"/>
         <rdfs:range rdf:resource="&h2iosc;manoscritto"/>
         <rdfs:domain rdf:resource="&h2iosc;origine"/>
-        <rdfs:isDefinedBy rdf:resource="#URI#"/>
     </owl:ObjectProperty>
 
 
@@ -98,7 +97,6 @@
         <owl:inverseOf rdf:resource="&h2iosc;manoscritto_has_been_in_provenienza"/>
         <rdfs:range rdf:resource="&h2iosc;manoscritto"/>
         <rdfs:domain rdf:resource="&h2iosc;provenienza"/>
-        <rdfs:isDefinedBy rdf:resource="#URI#"/>
     </owl:ObjectProperty>
 
 
@@ -110,6 +108,7 @@
         <rdfs:label>is_composed_by</rdfs:label>
         <rdfs:range rdf:resource="&h2iosc;bifoglio"/>
         <rdfs:domain rdf:resource="&h2iosc;unita_codicologica"/>
+        <rdfs:isDefinedBy rdf:resource="http://www.cidoc-crm.org/cidoc-crm/P46_is_composed_of"/>
     </owl:ObjectProperty>
 
 
@@ -122,7 +121,6 @@
         <owl:inverseOf rdf:resource="&h2iosc;unita_codicologica_is_composed_by_bifoglio"/>
         <rdfs:range rdf:resource="&h2iosc;unita_codicologica"/>
         <rdfs:domain rdf:resource="&h2iosc;bifoglio"/>
-        <rdfs:isDefinedBy rdf:resource="#URI#"/>
     </owl:ObjectProperty>
 
 
@@ -134,6 +132,7 @@
         <rdfs:label>is_composed_by</rdfs:label>
         <rdfs:range rdf:resource="&h2iosc;foglio"/>
         <rdfs:domain rdf:resource="&h2iosc;unita_codicologica"/>
+        <rdfs:isDefinedBy rdf:resource="http://www.cidoc-crm.org/cidoc-crm/P46_is_composed_of"/>
     </owl:ObjectProperty>
 
 
@@ -146,7 +145,6 @@
         <owl:inverseOf rdf:resource="&h2iosc;unita_codicologica_is_composed_by_foglio"/>
         <rdfs:range rdf:resource="&h2iosc;unita_codicologica"/>
         <rdfs:domain rdf:resource="&h2iosc;foglio"/>
-        <rdfs:isDefinedBy rdf:resource="#URI#"/>
     </owl:ObjectProperty>
 
 
@@ -170,7 +168,6 @@
         <owl:inverseOf rdf:resource="&h2iosc;manoscritto_consists_of_supporto"/>
         <rdfs:range rdf:resource="&h2iosc;manoscritto"/>
         <rdfs:domain rdf:resource="&h2iosc;supporto"/>
-        <rdfs:isDefinedBy rdf:resource="#URI#"/>
     </owl:ObjectProperty>
 
 
@@ -182,6 +179,7 @@
         <rdfs:label>is_composed_by</rdfs:label>
         <rdfs:range rdf:resource="&h2iosc;foglio"/>
         <rdfs:domain rdf:resource="&h2iosc;bifoglio"/>
+        <rdfs:isDefinedBy rdf:resource="http://www.cidoc-crm.org/cidoc-crm/P46_is_composed_of"/>
     </owl:ObjectProperty>
 
 
@@ -194,7 +192,6 @@
         <owl:inverseOf rdf:resource="&h2iosc;bifoglio_is_composed_by_foglio"/>
         <rdfs:range rdf:resource="&h2iosc;bifoglio"/>
         <rdfs:domain rdf:resource="&h2iosc;foglio"/>
-        <rdfs:isDefinedBy rdf:resource="#URI#"/>
     </owl:ObjectProperty>
 
 
@@ -206,6 +203,7 @@
         <rdfs:label>is_composed_by</rdfs:label>
         <rdfs:range rdf:resource="&h2iosc;pagina"/>
         <rdfs:domain rdf:resource="&h2iosc;foglio"/>
+        <rdfs:isDefinedBy rdf:resource="http://www.cidoc-crm.org/cidoc-crm/P46_is_composed_of"/>
     </owl:ObjectProperty>
 
 
@@ -218,7 +216,6 @@
         <owl:inverseOf rdf:resource="&h2iosc;foglio_is_composed_by_pagina"/>
         <rdfs:range rdf:resource="&h2iosc;foglio"/>
         <rdfs:domain rdf:resource="&h2iosc;pagina"/>
-        <rdfs:isDefinedBy rdf:resource="#URI#"/>
     </owl:ObjectProperty>
 
 
@@ -230,6 +227,7 @@
         <rdfs:label>contains</rdfs:label>
         <rdfs:range rdf:resource="&h2iosc;illustrazione"/>
         <rdfs:domain rdf:resource="&h2iosc;pagina"/>
+        <rdfs:isDefinedBy rdf:resource="http://www.cidoc-crm.org/cidoc-crm/P56_bears_feature"/>
     </owl:ObjectProperty>
 
 
@@ -242,7 +240,6 @@
         <owl:inverseOf rdf:resource="&h2iosc;pagina_contains_illustrazione"/>
         <rdfs:range rdf:resource="&h2iosc;pagina"/>
         <rdfs:domain rdf:resource="&h2iosc;illustrazione"/>
-        <rdfs:isDefinedBy rdf:resource="#URI#"/>
     </owl:ObjectProperty>
 
 
@@ -266,7 +263,6 @@
         <owl:inverseOf rdf:resource="&h2iosc;pagina_is_written_by_mano"/>
         <rdfs:range rdf:resource="&h2iosc;pagina"/>
         <rdfs:domain rdf:resource="&h2iosc;mano"/>
-        <rdfs:isDefinedBy rdf:resource="#URI#"/>
     </owl:ObjectProperty>
 
 
@@ -290,7 +286,6 @@
         <owl:inverseOf rdf:resource="&h2iosc;pagina_is_described_by_organizzazione_della_pagina"/>
         <rdfs:range rdf:resource="&h2iosc;pagina"/>
         <rdfs:domain rdf:resource="&h2iosc;organizzazione_della_pagina"/>
-        <rdfs:isDefinedBy rdf:resource="#URI#"/>
     </owl:ObjectProperty>
 
 
@@ -302,6 +297,7 @@
         <rdfs:label>represents</rdfs:label>
         <rdfs:range rdf:resource="&h2iosc;#any"/>
         <rdfs:domain rdf:resource="&h2iosc;illustrazione"/>
+        <rdfs:isDefinedBy rdf:resource="http://www.cidoc-crm.org/cidoc-crm/P62_depicts"/>
     </owl:ObjectProperty>
 
 
@@ -314,7 +310,6 @@
         <owl:inverseOf rdf:resource="&h2iosc;illustrazione_represents_#any"/>
         <rdfs:range rdf:resource="&h2iosc;illustrazione"/>
         <rdfs:domain rdf:resource="&h2iosc;#any"/>
-        <rdfs:isDefinedBy rdf:resource="#URI#"/>
     </owl:ObjectProperty>
 
 
@@ -326,6 +321,7 @@
         <rdfs:label>has_time_span</rdfs:label>
         <rdfs:range rdf:resource="&h2iosc;periodo"/>
         <rdfs:domain rdf:resource="&h2iosc;evento"/>
+        <rdfs:isDefinedBy rdf:resource="http://www.cidoc-crm.org/cidoc-crm/P4_has_time-span"/>
     </owl:ObjectProperty>
 
 
@@ -338,7 +334,6 @@
         <owl:inverseOf rdf:resource="&h2iosc;evento_has_time_span_periodo"/>
         <rdfs:range rdf:resource="&h2iosc;evento"/>
         <rdfs:domain rdf:resource="&h2iosc;periodo"/>
-        <rdfs:isDefinedBy rdf:resource="#URI#"/>
     </owl:ObjectProperty>
 
 
@@ -350,6 +345,7 @@
         <rdfs:label>happens_in</rdfs:label>
         <rdfs:range rdf:resource="&h2iosc;luogo"/>
         <rdfs:domain rdf:resource="&h2iosc;evento"/>
+        <rdfs:isDefinedBy rdf:resource="http://www.cidoc-crm.org/cidoc-crm/P7_took_place_at"/>
     </owl:ObjectProperty>
 
 
@@ -362,7 +358,6 @@
         <owl:inverseOf rdf:resource="&h2iosc;evento_happens_in_luogo"/>
         <rdfs:range rdf:resource="&h2iosc;evento"/>
         <rdfs:domain rdf:resource="&h2iosc;luogo"/>
-        <rdfs:isDefinedBy rdf:resource="#URI#"/>
     </owl:ObjectProperty>
 
 
@@ -386,7 +381,6 @@
         <owl:inverseOf rdf:resource="&h2iosc;persona_is_active_in_periodo"/>
         <rdfs:range rdf:resource="&h2iosc;persona"/>
         <rdfs:domain rdf:resource="&h2iosc;periodo"/>
-        <rdfs:isDefinedBy rdf:resource="#URI#"/>
     </owl:ObjectProperty>
 
 
@@ -398,6 +392,7 @@
         <rdfs:label>transmits</rdfs:label>
         <rdfs:range rdf:resource="&h2iosc;testo"/>
         <rdfs:domain rdf:resource="&h2iosc;manoscritto"/>
+        <rdfs:isDefinedBy rdf:resource="http://www.cidoc-crm.org/cidoc-crm/P128_carries"/>
     </owl:ObjectProperty>
 
 
@@ -410,7 +405,6 @@
         <owl:inverseOf rdf:resource="&h2iosc;manoscritto_transmits_testo"/>
         <rdfs:range rdf:resource="&h2iosc;manoscritto"/>
         <rdfs:domain rdf:resource="&h2iosc;testo"/>
-        <rdfs:isDefinedBy rdf:resource="#URI#"/>
     </owl:ObjectProperty>
 
 
@@ -422,6 +416,7 @@
         <rdfs:label>is_author_of</rdfs:label>
         <rdfs:range rdf:resource="&h2iosc;testo"/>
         <rdfs:domain rdf:resource="&h2iosc;persona"/>
+        <rdfs:isDefinedBy rdf:resource="http://www.cidoc-crm.org/cidoc-crm/P14_carried_out_by"/>
     </owl:ObjectProperty>
 
 
@@ -434,7 +429,6 @@
         <owl:inverseOf rdf:resource="&h2iosc;persona_is_author_of_testo"/>
         <rdfs:range rdf:resource="&h2iosc;persona"/>
         <rdfs:domain rdf:resource="&h2iosc;testo"/>
-        <rdfs:isDefinedBy rdf:resource="#URI#"/>
     </owl:ObjectProperty>
 
 
@@ -458,7 +452,6 @@
         <owl:inverseOf rdf:resource="&h2iosc;pagina_includes_aggiunta"/>
         <rdfs:range rdf:resource="&h2iosc;pagina"/>
         <rdfs:domain rdf:resource="&h2iosc;aggiunta"/>
-        <rdfs:isDefinedBy rdf:resource="#URI#"/>
     </owl:ObjectProperty>
 
 
@@ -482,7 +475,6 @@
         <owl:inverseOf rdf:resource="&h2iosc;pagina_includes_marginalia"/>
         <rdfs:range rdf:resource="&h2iosc;pagina"/>
         <rdfs:domain rdf:resource="&h2iosc;marginalia"/>
-        <rdfs:isDefinedBy rdf:resource="#URI#"/>
     </owl:ObjectProperty>
 
 
@@ -494,6 +486,7 @@
         <rdfs:label>is_described_by</rdfs:label>
         <rdfs:range rdf:resource="&h2iosc;dimensioni"/>
         <rdfs:domain rdf:resource="&h2iosc;foglio"/>
+        <rdfs:isDefinedBy rdf:resource="http://www.cidoc-crm.org/cidoc-crm/P43_has_dimension"/>
     </owl:ObjectProperty>
 
 
@@ -506,7 +499,6 @@
         <owl:inverseOf rdf:resource="&h2iosc;foglio_is_described_by_dimensioni"/>
         <rdfs:range rdf:resource="&h2iosc;foglio"/>
         <rdfs:domain rdf:resource="&h2iosc;dimensioni"/>
-        <rdfs:isDefinedBy rdf:resource="#URI#"/>
     </owl:ObjectProperty>
 
 
@@ -530,7 +522,6 @@
         <owl:inverseOf rdf:resource="&h2iosc;ente_is_placed_in_luogo"/>
         <rdfs:range rdf:resource="&h2iosc;ente"/>
         <rdfs:domain rdf:resource="&h2iosc;luogo"/>
-        <rdfs:isDefinedBy rdf:resource="#URI#"/>
     </owl:ObjectProperty>
 
 
@@ -554,7 +545,6 @@
         <owl:inverseOf rdf:resource="&h2iosc;ente_hosts_attivita"/>
         <rdfs:range rdf:resource="&h2iosc;ente"/>
         <rdfs:domain rdf:resource="&h2iosc;attivita"/>
-        <rdfs:isDefinedBy rdf:resource="#URI#"/>
     </owl:ObjectProperty>
 
 
@@ -578,7 +568,6 @@
         <owl:inverseOf rdf:resource="&h2iosc;persona_participates_in_attivita"/>
         <rdfs:range rdf:resource="&h2iosc;persona"/>
         <rdfs:domain rdf:resource="&h2iosc;attivita"/>
-        <rdfs:isDefinedBy rdf:resource="#URI#"/>
     </owl:ObjectProperty>
 
 
@@ -602,7 +591,6 @@
         <owl:inverseOf rdf:resource="&h2iosc;manoscritto_is_identified_by_segnatura"/>
         <rdfs:range rdf:resource="&h2iosc;manoscritto"/>
         <rdfs:domain rdf:resource="&h2iosc;segnatura"/>
-        <rdfs:isDefinedBy rdf:resource="#URI#"/>
     </owl:ObjectProperty>
 
 
@@ -626,7 +614,6 @@
         <owl:inverseOf rdf:resource="&h2iosc;persona_is_witness_of_evento"/>
         <rdfs:range rdf:resource="&h2iosc;persona"/>
         <rdfs:domain rdf:resource="&h2iosc;evento"/>
-        <rdfs:isDefinedBy rdf:resource="#URI#"/>
     </owl:ObjectProperty>
 
 
@@ -650,7 +637,6 @@
         <owl:inverseOf rdf:resource="&h2iosc;segnatura_has_prefix_fondo"/>
         <rdfs:range rdf:resource="&h2iosc;segnatura"/>
         <rdfs:domain rdf:resource="&h2iosc;fondo"/>
-        <rdfs:isDefinedBy rdf:resource="#URI#"/>
     </owl:ObjectProperty>
 
 
@@ -674,7 +660,6 @@
         <owl:inverseOf rdf:resource="&h2iosc;fondo_is_part_of_ente"/>
         <rdfs:range rdf:resource="&h2iosc;fondo"/>
         <rdfs:domain rdf:resource="&h2iosc;ente"/>
-        <rdfs:isDefinedBy rdf:resource="#URI#"/>
     </owl:ObjectProperty>
 
 
@@ -698,7 +683,6 @@
         <owl:inverseOf rdf:resource="&h2iosc;manoscritto_is_kept_at_ente"/>
         <rdfs:range rdf:resource="&h2iosc;manoscritto"/>
         <rdfs:domain rdf:resource="&h2iosc;ente"/>
-        <rdfs:isDefinedBy rdf:resource="#URI#"/>
     </owl:ObjectProperty>
 
 
@@ -722,7 +706,6 @@
         <owl:inverseOf rdf:resource="&h2iosc;fascicolo_is_subclass_of_unita_codicologica"/>
         <rdfs:range rdf:resource="&h2iosc;fascicolo"/>
         <rdfs:domain rdf:resource="&h2iosc;unita_codicologica"/>
-        <rdfs:isDefinedBy rdf:resource="#URI#"/>
     </owl:ObjectProperty>
 
 
@@ -746,7 +729,6 @@
         <owl:inverseOf rdf:resource="&h2iosc;carta_is_subclass_of_foglio"/>
         <rdfs:range rdf:resource="&h2iosc;carta"/>
         <rdfs:domain rdf:resource="&h2iosc;foglio"/>
-        <rdfs:isDefinedBy rdf:resource="#URI#"/>
     </owl:ObjectProperty>
 
 
@@ -770,7 +752,6 @@
         <owl:inverseOf rdf:resource="&h2iosc;miniatura_is_subclass_of_illustrazione"/>
         <rdfs:range rdf:resource="&h2iosc;miniatura"/>
         <rdfs:domain rdf:resource="&h2iosc;illustrazione"/>
-        <rdfs:isDefinedBy rdf:resource="#URI#"/>
     </owl:ObjectProperty>
 
 
@@ -794,7 +775,6 @@
         <owl:inverseOf rdf:resource="&h2iosc;decorazione_is_subclass_of_illustrazione"/>
         <rdfs:range rdf:resource="&h2iosc;decorazione"/>
         <rdfs:domain rdf:resource="&h2iosc;illustrazione"/>
-        <rdfs:isDefinedBy rdf:resource="#URI#"/>
     </owl:ObjectProperty>
 
 
@@ -818,7 +798,6 @@
         <owl:inverseOf rdf:resource="&h2iosc;origine_is_subclass_of_evento"/>
         <rdfs:range rdf:resource="&h2iosc;origine"/>
         <rdfs:domain rdf:resource="&h2iosc;evento"/>
-        <rdfs:isDefinedBy rdf:resource="#URI#"/>
     </owl:ObjectProperty>
 
 
@@ -842,7 +821,6 @@
         <owl:inverseOf rdf:resource="&h2iosc;provenienza_is_subclass_of_evento"/>
         <rdfs:range rdf:resource="&h2iosc;provenienza"/>
         <rdfs:domain rdf:resource="&h2iosc;evento"/>
-        <rdfs:isDefinedBy rdf:resource="#URI#"/>
     </owl:ObjectProperty>
 
 
@@ -866,7 +844,6 @@
         <owl:inverseOf rdf:resource="&h2iosc;attivita_is_subclass_of_evento"/>
         <rdfs:range rdf:resource="&h2iosc;attivita"/>
         <rdfs:domain rdf:resource="&h2iosc;evento"/>
-        <rdfs:isDefinedBy rdf:resource="#URI#"/>
     </owl:ObjectProperty>
     
 

+ 3 - 0
readme.md

@@ -0,0 +1,3 @@
+# Ontology Parser
+
+This repo contains two versions -- a basic one and one with references to CIDOC -- of a Python parser that takes as input an informal description of an ontology in xlsx format (the workbook must contain two sheets, one for the classes/entities and one for the properties/relations of the ontology) and produces a formal description in RDF/XML format.
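+
+## Quick start (sketch)
+
+A minimal configuration sketch, mirroring the defaults already used in the notebooks (folder names, sheet names and the ontology file name are all configurable at the top of each notebook):
+
+```python
+DATA_FOLDER = './data/'      # must contain the .xlsx file, with sheets named 'Entità' and 'Relazioni'
+OUTPUT_FOLDER = './output/'  # receives the intermediate .json and the final .rdf
+ONTO_FILENAME = 'man_draft'  # base name of the .xlsx file, no extension ('man_draft_CIDOC' for the CIDOC version)
+```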