
We build tools for the responsible use of AI that drives real value.

The name "Isagog" recalls the ancient Greek term Isagoge, which means "introduction": a preliminary guide, a conceptual map designed to navigate complex territories.

In the treatise Isagoge, the philosopher Porphyry (Tyre, 233 AD – Rome, c. 305 AD) offers systematic access to Aristotelian thought through a drawing: the famous Tree of Porphyry. This hierarchical, branching representation of abstract and concrete concepts has profoundly shaped the history of Western thought, becoming a model for organizing knowledge, from medieval philosophy to modern information science.

Today, that ancient "knowledge tree" lives on in Knowledge Graphs: structures that represent relationships between concepts and entities, whether abstract or concrete, in a formalized and accessible way. This semantic infrastructure makes it possible to organize, connect, and contextualize information, rendering it interpretable and verifiable.

The integration between generative models (Language Models) and Knowledge Graphs represents one of the most interesting developments in AI. This is precisely the area in which ISAGOG operates: we leverage the expressive and computational power of language models to efficiently build formal knowledge structures — thus making transparent and verifiable what remains opaque in neural networks. Our neuro-symbolic solutions are effective because they combine the flexibility of machine learning with the precision and interpretability of symbolic representation.
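
As an illustration of the kind of structure involved, the minimal sketch below builds a small knowledge graph from statements of the sort a language model might extract from text, then queries it symbolically. It uses the open-source rdflib library; the example namespace, entities, and relations are hypothetical, chosen for illustration only, and do not reflect ISAGOG's actual schema or tooling.

```python
from rdflib import Graph, Literal, Namespace, RDF, RDFS

# Hypothetical namespace, for illustration only.
EX = Namespace("http://example.org/kb/")

g = Graph()
g.bind("ex", EX)

# Statements a language model might have extracted from text,
# now made explicit as triples so they can be inspected and verified.
g.add((EX.Porphyry, RDF.type, EX.Philosopher))
g.add((EX.Porphyry, EX.authorOf, EX.Isagoge))
g.add((EX.Isagoge, RDF.type, EX.Treatise))
g.add((EX.Isagoge, EX.introduces, EX.AristotelianLogic))
g.add((EX.Isagoge, RDFS.label, Literal("Isagoge")))

# A symbolic query over the graph: which treatises introduce which topics?
results = g.query(
    """
    SELECT ?treatise ?topic
    WHERE {
        ?treatise a ex:Treatise ;
                  ex:introduces ?topic .
    }
    """,
    initNs={"ex": EX},
)

for treatise, topic in results:
    print(treatise, "introduces", topic)
```

Because the extracted statements live in an explicit graph rather than only in model weights, they can be reviewed, corrected, and queried deterministically, which is the sense in which knowledge becomes transparent and verifiable.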

We design neuro-symbolic AI systems to manage high volumes of diverse data and to understand semantic nuance, tone, and domain-specific knowledge.
  • Guido Vetere

    General Manager

    Former Research Director, IBM Italy; AI Professor, G. Marconi University; Founder of ISAGOG

  • Rober J. Alexander

    Scientific Director

    Business Development Executive, Health and Research, IBM; Design Thinking Coach; Distinguished Architect, Open Group

  • David Valente

    Technical Director

    Automation Engineer, HCLSoftware; Data Scientist; BSc in Mathematics & Philosophy

  • Luca De Biase

    Journalist and writer; Professor of Future History, LUISS; Professor of Knowledge Management, La Scuola Superiore Sant'Anna and Stanford University

  • Independence

    We develop artificial intelligence with a critical spirit and in complete autonomy. We create open, transparent, and reliable solutions.

    We are part of a network of innovators in AI who believe collaboration is the true engine of progress.

  • Responsibility

    AI is ethical when it is responsibly integrated into society.

    We create solutions that respect those who use them and protect those who are impacted by them.

  • Impact

    We nurture exchange within the AI community, to anticipate the challenges of the future together.

    We design and implement sustainable solutions that bring real value to those who use them.

  • Transparency

    Machines can't tell truth from falsehood. Together with our clients, we develop conceptual models that allow us to verify AI's applied reasoning.

  • Security

    We use open, easily adaptable technologies to generate value without compromising privacy.

  • Experience

    Together, we bring over 50 years of experience in AI research and development.

We have worked in tech, healthcare, research, and culture.