Steven Hart

Knowledge Architect

From document library to intelligence platform

A knowledge graph design for a private equity intelligence platform, modelling the relationships between investors, fund managers, funds, deals, people, events, and editorial content; then showing what those relationships make possible in the product.

Developed while contracted to PEI Group as Senior Information Architect. Although not taken into production, the work was reviewed and supported by senior design and data leadership as a potential future direction.

The problem

The existing platform was built on a CMS document repository: adequate for publishing content and data, but poorly suited to analysing it.

Relationships between entities were not made explicit in the data model (they relied on manual tagging by editors), so users had to piece data together by hand to form the insights and narratives they were looking for.

As a result, key user goals were slow or impossible to achieve, and high-value signals and information contained in the data remained buried and undiscovered.

What my work did

I created a knowledge graph which, instead of organising content around pages and separate CMS tables, structured data around the objects and relationships of the domain:

  • Firms (Fund managers; Investors)

  • Funds and strategies

  • Sectors and regions

  • Events and signals

  • Articles

The knowledge graph created a structure where meaning is defined by connections, not just content. Insights can be derived from the values and properties attached to each object and relationship, and queried directly.

This makes rich insights available to the interface quickly and easily.

For example, in the snippet above, an 'Event' object can quickly be associated not only with who is attending and where they work, but also with the funds their firm manages, the investors in those funds, and the articles that mention them.
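The traversal described above can be sketched in a few lines. This is an illustrative stand-in, not PEI's actual schema: the entity names, relationship labels, and triple-store representation are all hypothetical.

```python
# Triples of (subject, relationship, object): a minimal, illustrative
# stand-in for the graph. All names and labels are made up.
EDGES = [
    ("person:a-chen",      "ATTENDS",    "event:pe-summit"),
    ("person:a-chen",      "WORKS_AT",   "firm:northgate"),
    ("firm:northgate",     "MANAGES",    "fund:northgate-iv"),
    ("firm:civic-pension", "INVESTS_IN", "fund:northgate-iv"),
    ("article:1042",       "MENTIONS",   "fund:northgate-iv"),
]

def out(node, rel):
    """Objects reachable from `node` along outgoing `rel` edges."""
    return [o for s, r, o in EDGES if s == node and r == rel]

def into(node, rel):
    """Subjects pointing at `node` along incoming `rel` edges."""
    return [s for s, r, o in EDGES if o == node and r == rel]

# From a single Event, walk out to attendees, their employers, the funds
# those firms manage, the LPs backing them, and the articles covering them.
attendees = into("event:pe-summit", "ATTENDS")
employers = [f for p in attendees for f in out(p, "WORKS_AT")]
funds     = [fd for f in employers for fd in out(f, "MANAGES")]
backers   = [lp for fd in funds for lp in into(fd, "INVESTS_IN")]
coverage  = [a for fd in funds for a in into(fd, "MENTIONS")]
```

Because meaning lives on the edges rather than in page structure, each hop is a simple lookup rather than a manual research task.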

Data modelling and UX informing each other

PEI's data, and the way it needs to be experienced, is relational: content and data are relevant to users only through their relationship to investment strategies, sectors, and regions.

The new data model captured that context and ensured all objects were related back to strategy, sector and region objects.

Once I had mapped objects at the data level to match known user thinking styles, I could use that mapping to unlock a simple, powerful mechanism on the interface: quick journeys to relevant, contextual content and data.

This became a design anchor of the new pages, each starting with a 'Market Focus' selector tool:

What the graph unlocks for customers

Instead of reading extensively and bouncing between pages to find answers, users can now see answers to their key questions directly, such as:

  • Which LPs are increasing exposure to a specific sector?

  • How is a GP’s strategy shifting over time?

  • Where are emerging clusters of investment activity?
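The first question above ("Which LPs are increasing exposure to a specific sector?") becomes a straightforward aggregation once commitments are modelled as time-stamped edges. The sketch below is illustrative: the LP names, sectors, amounts, and the pivot-year heuristic are all assumptions, not real PEI data or logic.

```python
from collections import defaultdict

# Hypothetical commitment signals as the graph might surface them:
# (LP, sector, year, committed capital in $m). Values are invented.
COMMITMENTS = [
    ("lp:civic-pension", "healthcare", 2021, 50),
    ("lp:civic-pension", "healthcare", 2023, 120),
    ("lp:harbor-endow",  "healthcare", 2021, 80),
    ("lp:harbor-endow",  "healthcare", 2023, 60),
]

def lps_increasing_exposure(sector, pivot_year):
    """LPs whose committed capital to `sector` rose from before
    `pivot_year` to `pivot_year` onwards (a deliberately simple test)."""
    before, after = defaultdict(int), defaultdict(int)
    for lp, sec, year, amount in COMMITMENTS:
        if sec != sector:
            continue
        (after if year >= pivot_year else before)[lp] += amount
    return sorted(lp for lp in after if after[lp] > before[lp])

print(lps_increasing_exposure("healthcare", 2022))  # → ['lp:civic-pension']
```

In the document-centric model this answer was scattered across articles; here it is one pass over structured relationships.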

Previously, these were either manual, time-intensive tasks or simply not feasible. The graph allows:

  • Dynamic exploration of investment activity

  • Identification of behavioural patterns

  • Aggregation of signals across multiple sources

An example: Detecting strategy drift

Every fund has a declared focus: the strategy, sector, and region it says it will invest in.

Every fund also has an actual behaviour: the strategies, sectors, and regions where its portfolio companies actually operate.

These two things are often different, and the difference reveals whether a fund is drifting from its stated mandate, entering new territory quietly, or shifting focus in response to market conditions.

In the old model, detecting strategy drift took up analyst time and effort, forcing them to:

  • Read multiple articles

  • Manually track firms, sectors, and activity

  • Build a mental model over time

The graph enables instant comparison of declared focus and actual investment behaviour. A single graph query returns:

  • Relevant entities

  • Associated strategies

  • Recent signals and actual behaviours

Patterns across the dataset emerge immediately.
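The drift comparison reduces to a set difference between a fund's declared mandate and the sectors of its linked portfolio companies. A minimal sketch, with invented fund names and sectors:

```python
# Declared focus: the sectors a fund says it will invest in (illustrative).
DECLARED = {
    "fund:northgate-iv": {"industrials", "business-services"},
}

# Actual behaviour: sectors of the fund's portfolio companies, as linked
# in the graph. Duplicates are natural (several companies per sector).
OBSERVED = {
    "fund:northgate-iv": ["industrials", "software", "software", "healthcare"],
}

def strategy_drift(fund):
    """Compare declared mandate with observed portfolio sectors."""
    declared = DECLARED[fund]
    observed = set(OBSERVED[fund])
    return {
        "outside_mandate": sorted(observed - declared),  # quiet new territory
        "unused_mandate":  sorted(declared - observed),  # stated but inactive
    }
```

Run across every fund, the same query surfaces the dataset-wide patterns described above, instead of an analyst rebuilding each comparison by hand.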

What this means for the product

The graph is not just a backend model: its design and structure actively drive key product features such as:

  • Dashboards showing GP activity and positioning

  • Article enrichment with network intelligence modules providing rich context and multiple windows onto the data ecosystem

  • Signal detection across entities and markets

This connects information architecture directly to user-facing value.

Challenges

Designing the model exposed (and solved) some non-trivial problems:

  • Entity resolution
    Identifying when different references describe the same real-world entity

  • Data consistency
    Applying controlled vocabularies across varied and messy source content

  • Scalability
    Ensuring the model can evolve as new data and use cases emerge

  • Synthetic vs real data
    Early validation required assumptions that would need testing in production
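The entity-resolution challenge above can be illustrated with a first-pass normalisation step. The rules here are deliberately naive assumptions (strip legal suffixes and boilerplate tokens, compare what remains); production resolution would need richer matching and human review.

```python
import re

# Tokens that vary between references to the same firm (illustrative list).
NOISE_TOKENS = {"llp", "llc", "ltd", "inc", "lp", "plc", "cap", "capital",
                "partners", "the"}

def normalise(name):
    """Reduce a firm reference to a canonical key for matching."""
    tokens = re.findall(r"[a-z0-9]+", name.lower())
    core = [t for t in tokens if t not in NOISE_TOKENS]
    # Fall back to the full token list if everything was noise.
    return " ".join(core) or " ".join(tokens)

def same_entity(a, b):
    """Do two references plausibly describe the same real-world firm?"""
    return normalise(a) == normalise(b)
```

This collapses variants like "Northgate Capital LLP" and "NORTHGATE CAP." onto one candidate node, after which ambiguous matches can be escalated for editorial confirmation.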

Outcome

This work created a foundation for:

  • Faster, more reliable insight generation

  • New product capabilities based on querying and aggregation

  • A shift in user behaviour: from searching for content to discovering intelligence.

My role

  • Defined the information architecture

  • Designed the graph schema (entity and relationship model)

  • Established controlled vocabularies

  • Shaped how the model translates into product features

Reflection

Data-rich products often fail their users: not because they lack data, but because they lack user-centred structure.

This project demonstrated the benefits of applying knowledge architecture at the data layer:

  • Unlocked latent value in existing content

  • Enabled entirely new classes of interaction and insight

  • Turned static data into living information and insight

  • Ordered intelligence around user goals.