- Neomodel is a Pythonic OGM for Neo4j, offering class-based models, schema enforcement, and a rich query API on top of the official driver.
- Current releases follow SemVer, support modern Python and Neo4j versions, and introduce stricter cardinality checks, better config, and batch merge controls.
- The library provides both sync and async APIs, automatic schema tools, Django integration, and a flexible escape hatch to raw Cypher for complex queries.
- Now part of Neo4j Labs, neomodel benefits from active maintenance, integration tests, and real-world production feedback from enterprise deployments.

Neomodel is a Python Object-Graph Mapper (OGM) designed to make working with Neo4j feel as natural as writing regular Python code. Instead of manually crafting Cypher queries all the time, you describe your graph domain with classes, fields, and relationships, and let neomodel handle the mapping between Python objects and Neo4j nodes and relationships. It is built on top of the official Neo4j Python driver, with only a thin abstraction layer, so you get high-level convenience without sacrificing much performance.
As part of the Neo4j Labs ecosystem, neomodel is actively maintained, fully supports modern Python and Neo4j versions, and offers both synchronous and asynchronous APIs. It brings familiar, Django-like model definitions, a rich query API, schema enforcement via cardinality, built-in transactions, and tight integration with Django through django_neomodel. At the same time, it stays close to the metal: you can always drop down to raw Cypher when performance or query complexity demands it.
What neomodel is and why it matters
Neomodel is an Object Graph Mapper for the Neo4j graph database, bridging Python classes and graph structures. Rather than manually creating nodes and relationships through Cypher strings, you define Python classes that represent your domain entities, and neomodel turns them into labeled nodes with indexed properties and constraints in Neo4j. It builds on top of the official neo4j-python-driver, so its behavior is aligned with what you would do using the driver directly.
The library focuses on a familiar, class-based modeling style with robust inheritance, hooks, and validation. This approach makes it especially comfortable for developers used to Django ORM or other Python ORMs: attributes on your model classes correspond to properties in Neo4j, while special relationship fields capture graph edges. With this setup, graph traversal becomes a matter of following attributes on objects instead of writing verbose Cypher every time.
Under the hood, neomodel offers a powerful query API that covers common graph access patterns without forcing you into raw Cypher right away. You can filter, order, traverse relationships, slice node sets, and perform advanced operations through a Pythonic interface. When necessary, you still have access to a cypher_query helper to execute custom queries and work with the returned results directly.
Another central feature is schema enforcement through cardinality rules on relationships and property constraints. By specifying cardinality (for example, zero-or-more, one-or-more, or one) directly on relationship fields, you can enforce structural expectations in your graph and let neomodel help you avoid inconsistent data. Indexes and constraints are created automatically based on model definitions, and there are CLI utilities to apply or remove them from the database in a controlled way.
Neomodel also fully supports transactional work and is safe for use in multi-threaded environments. Transactions can be opened and committed in a predictable manner, and because the wrapping around the official driver is intentionally thin, the performance overhead is small. Benchmarks with tools like Locust show that neomodel’s abstraction layer adds minimal latency, even under concurrent load.
Version support, SemVer and configuration
Modern neomodel releases follow semantic versioning (SemVer) using the classic major.minor.patch pattern. This means that breaking changes are introduced only with major version bumps, new features without breaking behavior come as minor releases, and bug fixes are shipped as patch versions. This versioning strategy makes it easier to plan upgrades, especially for production systems.
In the 6.x series, neomodel targets up-to-date Python and Neo4j versions to match what most serious deployments are running. Specifically, neomodel 6.x requires Python 3.10 or newer and supports Neo4j 5.x, Neo4j 4.4 LTS, and the newer Neo4j 2025.x.x line. Neo4j Community Edition, Enterprise Edition, and Neo4j Aura (the hosted service) are all supported, giving you flexibility in how and where you host the database.

For older environments, prior neomodel branches still cover legacy Python and Neo4j combinations. The 5.x line supports Python 3.8+ with Neo4j 5.x and 4.4 (LTS), while the 4.x line covers Python 3.7 through 3.10 and Neo4j 4.x, including 4.4 LTS when using neomodel 4.0.10. This compatibility story makes it possible to migrate forward gradually while keeping your existing setups running.
Starting with neomodel 6, configuration is handled via a modern, type-annotated dataclass with runtime validation and environment variable support. Instead of scattered ad-hoc settings, configuration fields are validated on update, including type checks and logical constraints. Environment variables can be used to override configuration effortlessly, which plays nicely with containerized deployments and cloud environments.
The 6.0 release also introduces explicit breaking changes and behavioral fixes to make the API more predictable. For example, list resolution from Cypher now returns the expected depth: a query like RETURN collect(node) will map to results[0][0] instead of the previous, unintuitive results[0][0][0] structure. Cardinality checks are stricter and enabled by default, and several standalone helper functions have been moved into the central Database() and AsyncDatabase() singleton classes.
Installation and setup
The recommended way to install neomodel is directly from PyPI using your preferred package manager. You can add it to a virtual environment with a simple installation command and then manage upgrades through your usual dependency tooling. If you need the very latest changes or want to contribute, it’s also possible to install directly from the GitHub repository.
Before running any neomodel code, you must configure the connection URL so the library knows how to reach your Neo4j instance. This setup typically includes the scheme (Bolt or Neo4j), host, port, username, password, and optional database name. For Django projects, this configuration is usually placed in settings.py so it is initialized as soon as the application starts.
If your Neo4j server is newly installed, you should change the default password using the Neo4j browser or admin panel. By default, that panel is accessible at http://localhost:7474. Once you have updated the password and confirmed dbms.security.auth_enabled=true in the database configuration, you are ready to connect from neomodel.
For development and testing, it is common to use separate Neo4j databases and dedicated credentials. Neomodel’s own test suite expects a Neo4j 4+ database and relies on specific environment variables to connect. If you run the tests on a brand-new database, the test suite will set the password to 'test' by default; if it detects an existing dataset, it will refuse to continue unless you explicitly pass a reset flag, helping you avoid accidental data loss.

When you want to test neomodel across multiple Python and Neo4j versions, Docker and docker-compose can orchestrate everything automatically. The project provides configuration to spin up a matrix of interpreter versions and Neo4j releases so that integration tests can be executed consistently. This is especially useful if you are contributing features that should work across several supported versions.
Core features: models, schema and indexes
Neomodel’s heart lies in its class-based model definitions that map directly to Neo4j node labels and relationships. You typically derive your node classes from StructuredNode, and relationship classes from StructuredRel. Node fields are defined using neomodel-specific property types, which control how data is stored and validated in Neo4j.
Each model class becomes a label in Neo4j, and neomodel automatically manages indexes and constraints based on your definitions. This means that uniqueness, required properties, and indexed fields can all be specified in Python without having to manually craft the Cypher commands for schema creation. Behind the scenes, neomodel translates your model metadata into appropriate Neo4j schema operations.
Relationships are attached to node classes using special descriptors like RelationshipTo, RelationshipFrom, and Relationship. These descriptors define the relationship type, cardinality, and traversal direction. RelationshipTo and RelationshipFrom express uni-directional navigation from the Python point of view, while Relationship is used when you want to treat the relationship as navigable in both directions from code, even though Neo4j itself always stores relationships with a direction.
When relationships are logically bidirectional, the recommended practice is to avoid creating two mirrored fields and use a single Relationship instead. Doing so keeps your model clean and consistent while still allowing traversal in both directions in your Python code. Neo4j will still store a directed relationship under the hood, but neomodel’s abstraction hides that detail when traversing.
For scenarios where node structures are not fully known in advance, neomodel offers a SemiStructuredNode base class. Classes derived from this type can hold “ad-hoc” properties that were not explicitly defined in the model. This is particularly handy when your graph schema evolves frequently or when you need to attach occasional extra attributes without refactoring the model every time.
Cardinality rules enforce the number of allowable relationships between nodes and are now backed by stricter checks in neomodel 6. Soft cardinality checks are available for all relationship cardinalities, and strict checking is enabled by default. If your data violates the configured relationship rules, neomodel will surface that issue rather than silently letting an inconsistent structure persist.
Automatic schema management and inspection
Once you define or update your models, you need to apply the corresponding constraints and indexes to the Neo4j database. Neomodel ships with a script called neomodel_install_labels that scans your models and creates or updates the required indexes and constraints. After altering your schema, you should run this script and review the reported number of processed classes to confirm everything is in sync.
If you ever need to wipe out constraints and indexes managed by neomodel, there is a complementary command called neomodel_remove_labels. This script automatically drops all existing constraints and indexes that neomodel previously installed. It also prints what has been removed so you clearly see the impact of the operation.
Both schema-management commands support a --db argument and default to the NEO4J_BOLT_URL environment variable when not provided. This behavior helps keep credentials and connection details out of the command line history and enables simple configuration through environment variables. It also makes automation and deployment scripts easier to manage.
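Typical invocations might look like the following; the models path and credentials are placeholders for your own project:

```shell
# Connection details are read from NEO4J_BOLT_URL when --db is omitted
export NEO4J_BOLT_URL='bolt://neo4j:your_password@localhost:7687'

# Apply constraints and indexes derived from the models in yourapp/models.py
neomodel_install_labels yourapp/models.py

# Or pass the URL explicitly
neomodel_install_labels yourapp/models.py --db bolt://neo4j:your_password@localhost:7687

# Drop every constraint and index that neomodel manages
neomodel_remove_labels
```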
On top of schema creation, neomodel includes a database inspection tool that can reverse-engineer an existing graph and generate a model file. Using the neomodel_inspect_database script (which requires APOC procedures installed in Neo4j), you can scan a database and produce a models.py file under a target directory such as yourapp. The generated file includes imports, node class definitions, and relationship definitions that match the detected graph structure.
The inspection process can be tuned for large graphs by skipping relationship properties and cardinality scanning. For databases with hundreds of thousands of nodes and more than a million relationships, the full scan may take dozens of seconds. Options like --no-rel-props and --no-rel-cardinality speed things up by omitting detailed analysis, still generating relationship fields but defaulting cardinality to a generic value like ZeroOrMore.
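A sketch of the inspection workflow; the output path is a placeholder, and the flag spellings below assume the options named in the text:

```shell
# Reverse-engineer an existing graph into a models file
# (requires APOC installed on the Neo4j server)
neomodel_inspect_database --db bolt://neo4j:your_password@localhost:7687 \
    --write-to yourapp/models.py

# For very large graphs, skip the expensive per-relationship scans;
# relationship fields are still generated, with cardinality defaulting
# to ZeroOrMore
neomodel_inspect_database --write-to yourapp/models.py \
    --no-rel-props --no-rel-cardinality
```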
Working with the neomodel Query API
Neomodel’s Query API lets you perform rich graph queries via Python methods on your model classes rather than writing Cypher directly. Each model exposes a .nodes manager-like attribute that represents a set of nodes with the corresponding label. From there, you can count, filter, order, slice, and fetch the underlying graph data.
Calling len(MyModel.nodes) triggers a Cypher query that counts the nodes with the label corresponding to MyModel. This offers an intuitive way to get counts without leaving Python syntax. If your node set is already filtered, the count will only reflect the nodes that match those filters, matching the behavior you would expect from a typical ORM.
Slicing is supported directly on node sets, which is extremely useful when you want to work with batched results. Expressions like MyModel.nodes[0:10] return a sliced node set that you can iterate over or further chain with additional filters. The slice does not return a raw list immediately but another node set object, so you can build up complex queries step by step.
Node sets support iteration and boolean checks, but length and truthiness are terminal operations: evaluating len() or using a node set in a boolean context triggers a query and returns a concrete result rather than another chainable query object. This design balances Python idioms with the lazy nature of query building.
For retrieving actual objects, you typically use methods like .all() and .get() on the .nodes manager. These methods can receive a lazy=True argument to return just node IDs instead of full objects and all of their properties. This is helpful if you want to minimize data transfer or perform follow-up queries manually based on IDs.
Create, update, delete and relationships
Creating nodes with neomodel is as simple as instantiating your model class and calling save(). Once you have defined your properties and defaults, you can construct an instance with the desired field values, invoke save, and neomodel will create or update the corresponding node in Neo4j. This is analogous to how most ORMs handle persistence.
Updating nodes follows the same pattern: fetch an instance, assign new values to its attributes, and save it again. Neomodel takes care of generating the right Cypher to modify only the changed properties on the existing node. This approach keeps your code straightforward and keeps the details of update operations out of your business logic.
Deleting a node is also direct: once you have an instance, you call its delete() method. This removes the node and, depending on your relationship configuration and database constraints, may also remove attached relationships. Pre-delete and post-delete hooks can be defined for more advanced behavior or logging.
Relationships between nodes are managed through relationship fields and convenience methods such as connect(). Once you have two nodes, you can call something like actor.movies.connect(movie) to create an appropriate relationship instance in the graph. Relationship properties can be modeled via StructuredRel-based classes, giving you room to store attributes on edges as well.
More complex graph traversals can be achieved by following relationship attributes or combining query filters across relationships. For example, you might start from an Entity node set, filter by some property, and then traverse out to related nodes to filter on their attributes too. This gradually builds a Cypher query under the hood, which neomodel executes on your behalf.
Async neomodel and transpiled sync API
Neomodel includes asynchronous support built on top of the async capabilities of the official Neo4j Python driver. This means you can integrate Neo4j operations into modern async Python frameworks and services, taking full advantage of concurrency for workloads that involve many I/O-bound operations.
Performance testing with tools like Locust has shown that async neomodel, when used concurrently, outperforms both serial queries and concurrently executed synchronous neomodel calls. Because many graph operations involve network I/O and waiting for database responses, letting the event loop handle multiple queries at once yields better throughput and resource utilization.
Internally, neomodel keeps the async and sync APIs aligned by using a transpilation step that converts async code into its synchronous equivalent. A dedicated library is used to automatically strip await keywords, rename classes (for example, removing Async prefixes), and perform targeted replacements such as changing adb to db or mark_async_test to mark_sync_test. This approach avoids maintaining two entirely separate codebases.
When contributing, you primarily work on the async implementation under neomodel/async_ and then run the provided transpilation script to generate the sync variant. You can also rely on pre-commit hooks to automate this step and ensure that both versions stay in sync. In many cases, your business logic only needs to be written once in the async layer.
Some functionality may be intended only for async or only for sync usage, and neomodel exposes a utility pattern (inspired by the official Neo4j driver) to separate those code paths. This lets you define behaviors that differ between the two modes while keeping your overall API surface coherent. Test modules, such as those covering the match API, demonstrate how async code is transpiled and how the resulting sync code behaves.
Database and AsyncDatabase singletons
In neomodel 6, the Database() and AsyncDatabase() classes are implemented as true singletons to clarify how global operations are handled. Rather than scattering standalone utility functions, neomodel now groups database-wide operations into these singleton instances, making the API more discoverable and consistent.
Several legacy functions were moved into the Database() class and removed from the global namespace. Examples include change_neo4j_password, clear_neo4j_database, drop_constraints, drop_indexes, remove_all_labels, install_labels, and install_all_labels. The async counterparts are accessible from the AsyncDatabase() singleton, usually referenced as adb in the async context.
This redesign simplifies mental models around database-level operations and avoids ambiguity in how configuration and global state are handled. By ensuring that both sync and async modes share a similar structure, it also becomes easier to reason about when you can safely switch from one approach to the other or run them side by side in different parts of a larger application.
In addition, the 6.0 release introduced a merge_by parameter for batch operations, providing more control over how nodes and relationships are merged. You can customize which labels and property keys define uniqueness for batch merges, which is critical when handling large amounts of data ingestion or synchronization tasks.
Django integration and real-world usage
Neomodel integrates cleanly with Django through the django_neomodel package, enabling you to treat your graph models as part of a Django project. With this integration, configuration typically lives in settings.py, and your node and relationship classes coexist with the rest of your Django ecosystem, including apps, middleware, and views.
A concrete example is a multi-part Django tutorial that uses neomodel to explore and search a Paradise Papers-style graph database. In the first parts, you set up the Django project and integrate neomodel; in later parts, you build a fetch_api app, define models that reflect entities, relationships, and properties in the graph, and then gradually build utilities and views on top of them.
Within such a project, you can use neomodel models directly inside Django views, serializers, or helper modules. A common approach is to create a utils.py file where you define convenience functions that call into the Query API. For example, you might implement count_nodes, fetch_nodes, and fetch_node_details helpers that ingest filters, pagination parameters, and model names dynamically.
Some data, such as lists of countries, jurisdictions, or data sources, might be expensive to query repeatedly, so you can precompute them using raw Cypher and store them as constants. A constants.py module can execute those Cypher queries once, derive sorted lists like COUNTRIES, JURISDICTIONS, and DATASOURCE, and make them easily importable across your Django app.
To ensure these constants are ready at application startup, you can hook into Django’s app configuration by defining a ready() method in fetch_api/app.py. Inside that method, you import constants.py, which triggers the initial Cypher queries and populates the corresponding lists. This way, subsequent requests can simply read from the already prepared data structures.
Raw Cypher vs OGM for complex queries
While neomodel’s OGM is ideal for everyday CRUD and relationship traversal, there are scenarios in which manually written Cypher queries are more efficient. Deeply nested traversals, second-degree or multi-hop queries, and sophisticated aggregations can sometimes be expressed more clearly and with better performance as raw Cypher than as OGM chains.
A typical example is finding co-actors who have appeared in any film alongside a specific actor, such as determining all people who have worked with Tom Hanks. As a Cypher query, this can be quite direct: you match the actor, traverse to the movies they acted in, and then traverse to other actors in those movies, applying filters and aggregations as needed. The result is a concise, optimized graph pattern.
Replicating that same behavior purely via OGM convenience methods might require an O(n²) style process, looping over movies and related actors at the Python level. This is both less elegant and less efficient than letting Neo4j handle the heavy lifting in a single Cypher statement. It illustrates that OGMs are not a silver bullet for every graph access pattern.
Moreover, when you rely on OGM operations for deep traversals, the shape of the returned data can become quite complex. The generated Cypher will often include the starting node, intermediate relationships, neighboring nodes, and their relationships. This can be beneficial when you need rich context, but it may be overkill when you only want specific aggregated results or a subset of properties.
In situations where performance and clarity are paramount, using cypher_query directly to execute hand-crafted Cypher can be the better option. Neomodel makes this escape hatch intentional: you can mix and match high-level OGM interactions with low-level Cypher in the same project, choosing the right tool for each particular query while still keeping models as the single source of truth for your schema.
Neomodel in Neo4j Labs and project governance
Neomodel’s move into the Neo4j Labs program formalized its status as an actively maintained, community-driven project with clear quality expectations. Neo4j Labs serves as a home for experimental and advanced projects that have real traction but are not part of the core product. Many well-known tools, like graph data science components, the GraphQL library, APOC core, and streaming integrations, have roots in this program.
Belonging to Neo4j Labs means neomodel adheres to baseline standards around testing, security checks, and automated tooling like CI/CD pipelines. The maintenance team runs integration tests against a wide matrix of Python and Neo4j versions, ensuring compatibility as new releases come out. This is part of the reason neomodel can claim full support for all currently supported Python and Neo4j versions, both Community and Enterprise.
The project remains fully open source and community-centric, with GitHub serving as the main hub for issues, discussions, and contributions. The Issues log is once again actively curated, with older items being triaged and summarized as time permits, while the Discussions area is open to anyone and used for announcements and design conversations. There is at least one Neo4j employee acting as maintainer, connecting field experience back into the project.
Real-world production deployments, such as OpenStudyBuilder from Novo Nordisk, play an important role in shaping neomodel’s roadmap. These large-scale, real-life applications provide concrete requirements and feedback that translate into new features and improvements contributed back to the community. This virtuous loop shows how enterprise usage and open-source development can reinforce each other.
Between its Pythonic modeling, strong Neo4j alignment, async and sync APIs, and active Lab-backed evolution, neomodel offers a compelling way to work with graphs from Python in both small projects and demanding production systems. Used thoughtfully—leaning on the OGM for clear domain modeling and typical graph interactions, and reaching for raw Cypher when complex patterns or performance demand it—it can significantly simplify how you design, query, and maintain graph-based applications.