Building My Own Mind-Map App with AI-Based Functionality

Parviz Deyhim
2 min read · Jun 21, 2024


I’m currently working on a side project to build a mind-map tool infused with some AI coolness: https://mindmap.deyhim.io/

I’ve always been fascinated by mind maps: they’re invaluable tools for understanding the relationships between topics and for organizing my thoughts. As someone who thrives on visual aids, I find that information sticks better when I can lay it out visually.

My interest in mind mapping goes way back. As a kid, I was so captivated by the concept that I even took a class. This skill proved to be incredibly helpful throughout my school years. Fast-forward to today, and I still rely on mind maps when taking notes on large and complex topics.

Given my long-standing passion for mind mapping, I’ve always wanted a tool that could assist me in this process. While there are several tools on the market, I wanted something more advanced: an intelligent tool that could automatically create mindmaps from prompts, books, articles, and YouTube …

With the recent surge in interest around Large Language Models (LLMs), I’ve been experimenting with various models, training them, and exploring techniques like Retrieval-Augmented Generation (RAG). This sparked an idea: why not build a tool that leverages LLMs to automate the creation of mindmaps?

There are indeed AI-infused mind-mapping tools available today, but they lack several key features I envisioned. For instance, I want a tool that not only helps in creating mindmaps but also serves as a brainstorming aid: imagine building a mindmap of a topic you’re exploring and, when you hit a roadblock, having the LLM suggest the next ideas and steps.

Combining my love for mind mapping with the fun of experimenting with LLMs, I decided to create something from scratch, and I’m excited to introduce the very first version of my app. At this stage, it’s essentially a wrapper around OpenAI’s API: it takes a prompt and converts the response into a mindmap. It has bugs and will break; for instance, if OpenAI doesn’t respond in the expected format (a valid JSON schema), the app fails. I’m actually excited to figure out how to get consistent and reliable responses from LLM models.
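
To make that failure mode concrete, here is a minimal sketch of what such a wrapper could look like. None of this is the app’s actual code: the OpenAI Python SDK, the gpt-4o-mini model, and the {"topic": ..., "children": [...]} node schema are assumptions for illustration only.

```python
# Hypothetical sketch of a "prompt -> mindmap" wrapper (not the app's real code).
# Assumptions: the OpenAI Python SDK and a made-up {"topic", "children"} node schema.
import json

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SYSTEM_PROMPT = (
    "You are a mindmap generator. Respond ONLY with JSON of the form "
    '{"topic": "<root>", "children": [{"topic": "...", "children": [...]}]}.'
)


def prompt_to_mindmap(prompt: str) -> dict:
    """Ask the model for a mindmap tree and parse it into a dict the UI can render."""
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model; any chat model would do
        response_format={"type": "json_object"},  # nudges the model toward valid JSON
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": prompt},
        ],
    )
    raw = resp.choices[0].message.content
    try:
        return json.loads(raw)
    except json.JSONDecodeError as err:
        # The failure mode described above: the model ignored the schema.
        raise ValueError(f"Model returned non-JSON output: {raw[:200]}") from err


if __name__ == "__main__":
    tree = prompt_to_mindmap("Core concepts of Retrieval-Augmented Generation")
    print(json.dumps(tree, indent=2))
```

Even with JSON mode enabled, nothing guarantees the keys match what the UI expects, which is exactly where the current version breaks.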

In my spare time, I plan to add more features to the app. Some ideas include:

  • Integrating my own model to reduce costs, as using OpenAI isn’t cheap.
  • Continuing to use OpenAI for content generation but employing a small local model to convert the responses into the JSON format my UI can consume. That conversion step is verbose but mechanical, so it might not require a powerful model; or perhaps it does, and I’m still figuring that out. On paper, though, a smaller model looks feasible for most of the JSON output (see the sketch after this list).
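
To illustrate that second idea, here is a rough sketch of the two-model pipeline, again with assumed names rather than anything from the actual app: the hosted model is called through the OpenAI SDK, and the small local model is assumed to sit behind an OpenAI-compatible endpoint such as Ollama’s.

```python
# Hypothetical two-model pipeline: a strong hosted model drafts the content,
# and a small local model only reshapes it into the JSON the UI consumes.
# Assumptions: OpenAI SDK for both calls; the local model (e.g. a small Llama
# variant) is served via an OpenAI-compatible API at http://localhost:11434/v1.
import json

from openai import OpenAI

remote = OpenAI()  # hosted model: handles the actual idea generation
local = OpenAI(base_url="http://localhost:11434/v1", api_key="unused")  # cheap formatter


def generate_mindmap(topic: str) -> dict:
    # Step 1: free-form outline from the strong (expensive) model.
    outline = remote.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": f"Outline the key subtopics of: {topic}"}],
    ).choices[0].message.content

    # Step 2: mechanical outline -> JSON conversion on the small local model.
    formatted = local.chat.completions.create(
        model="llama3.2",  # assumed local model name
        messages=[{
            "role": "user",
            "content": (
                "Convert this outline into JSON shaped like "
                '{"topic": "...", "children": [...]}. Return JSON only.\n\n' + outline
            ),
        }],
    ).choices[0].message.content
    return json.loads(formatted)
```

If the cheap model can handle the formatting step reliably, the expensive model’s tokens go only toward generating the content itself, which is where most of the value is.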

This is just the beginning of my app. I hope it evolves into something useful for others as well. Even if it doesn’t, I’m certain I’ll learn something valuable along the way.

Stay tuned for updates, and feel free to share your thoughts and feedback!

Thank you for joining me on this journey.
