Steven Hart

Information Architect

From document library to intelligence platform

A knowledge graph design for a private equity intelligence platform, modelling the relationships between investors, fund managers, funds, deals, people, events, and editorial content; then showing what those relationships make possible in the product.

Developed while contracted to PEI Group as Senior Information Architect. While not taken into production, the work was reviewed and supported by senior design and data leadership as a potential future direction.

The problem

The existing platform was built on a CMS document repository, which created a number of structural problems. Content was structured for publishing, not analysis; relationships between entities were not made explicit in the data model (they relied on manual tagging by editors); and users had to piece data together by hand to form the insights and narratives they were looking for.

As a result, key user goals were slow or impossible to answer, and high-value signals remained buried in unconnected content.

What my work did

I created a knowledge graph which, instead of organising content around pages and separate CMS tables, structured data around objects and relationships:

  • Firms (Fund managers; Investors)

  • Funds and strategies

  • Sectors and regions

  • Events and signals

  • Articles

The knowledge graph created a structure where meaning is defined by connections, not just content. Insights can be derived from the values and properties attached to each object and relationship, and queried directly.

This makes far richer insights available quickly and easily.

For example, an 'Event' object can quickly be associated not only with who is attending and where they work, but also with the funds their firm manages, the funds it invests into, and the articles that mention those funds.
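That traversal can be sketched in plain Python. The node identifiers and relationship types below are illustrative assumptions, not PEI's actual schema:

```python
# Illustrative graph: typed edges as (source, RELATIONSHIP, target) triples.
# All names here are hypothetical examples, not real PEI data.
edges = [
    ("event:summit24", "ATTENDED_BY", "person:a_smith"),
    ("person:a_smith", "WORKS_AT", "firm:acme_capital"),
    ("firm:acme_capital", "MANAGES", "fund:acme_growth_iii"),
    ("firm:acme_capital", "INVESTS_IN", "fund:euro_infra_i"),
    ("article:123", "MENTIONS", "fund:acme_growth_iii"),
]

def neighbours(node, relation, reverse=False):
    """Follow one relationship type out of (or, if reverse, into) a node."""
    if reverse:
        return [s for s, r, t in edges if r == relation and t == node]
    return [t for s, r, t in edges if r == relation and s == node]

# Starting from an event, walk to attendees, their firms, and onward
# to managed funds, invested funds, and the articles that mention them.
for person in neighbours("event:summit24", "ATTENDED_BY"):
    for firm in neighbours(person, "WORKS_AT"):
        managed = neighbours(firm, "MANAGES")
        invested = neighbours(firm, "INVESTS_IN")
        mentions = [a for f in managed
                    for a in neighbours(f, "MENTIONS", reverse=True)]
        print(person, firm, managed, invested, mentions)
```

Each hop is just a filter over typed edges, which is why the same structure supports many different questions.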

A core UX insight that came from the data schema

PEI's data is inherently relational: users of the site invariably have a focus on particular investment strategies, sectors and regions.

The new data model named each of those dimensions and made the connections between them, and to every other object in the model, explicit.

It therefore became possible to unify unstructured content through its relevant connections, in a way that supported querying, comparison, and pattern detection.

It also enabled a design anchor on the front end, where a user first selects their strategy, sector, and region of interest, and is then presented with all relevant views.
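A minimal sketch of that front-end anchor, under the assumption that every object carries strategy, sector, and region tags (the fund names and values are invented for illustration):

```python
# Hypothetical tagged objects; in the real model these tags would come
# from the controlled vocabularies applied across the graph.
funds = [
    {"name": "Acme Growth III", "strategy": "buyout",
     "sector": "healthcare", "region": "Europe"},
    {"name": "Euro Infra I", "strategy": "infrastructure",
     "sector": "energy", "region": "Europe"},
    {"name": "Pacific Ventures", "strategy": "venture",
     "sector": "software", "region": "APAC"},
]

def relevant(objects, strategy=None, sector=None, region=None):
    """Return objects matching every dimension the user has selected."""
    return [
        o for o in objects
        if (strategy is None or o["strategy"] == strategy)
        and (sector is None or o["sector"] == sector)
        and (region is None or o["region"] == region)
    ]

print([f["name"] for f in relevant(funds, region="Europe")])
# → ['Acme Growth III', 'Euro Infra I']
```

Because every view is filtered through the same three dimensions, the user's initial selection can scope dashboards, articles, and signals alike.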

What the graph unlocks for customers

Instead of reading widely and bouncing between pages to find answers, users can now ask:

  • Which LPs are increasing exposure to a specific sector?

  • How is a GP’s strategy shifting over time?

  • Where are emerging clusters of investment activity?

The graph also supports:

  • Dynamic exploration of investment activity

  • Identification of behavioural patterns

  • Aggregation of signals across multiple sources

Previously, these were either manual, time-intensive tasks or simply not feasible.
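The first question above, for instance, becomes a simple aggregation once commitments are structured data. This sketch assumes commitment records tagged with LP, sector, year, and amount (all values here are invented):

```python
# Hypothetical commitment records: (LP, sector, year, amount committed).
commitments = [
    ("lp:state_pension", "healthcare", 2022, 50),
    ("lp:state_pension", "healthcare", 2024, 120),
    ("lp:uni_endowment", "healthcare", 2022, 80),
    ("lp:uni_endowment", "healthcare", 2024, 60),
]

def increasing_exposure(rows, sector, split_year):
    """LPs whose commitments to a sector grew after split_year."""
    totals = {}
    for lp, sec, year, amount in rows:
        if sec != sector:
            continue
        key = "after" if year >= split_year else "before"
        totals.setdefault(lp, {"before": 0, "after": 0})[key] += amount
    return [lp for lp, t in totals.items() if t["after"] > t["before"]]

print(increasing_exposure(commitments, "healthcare", 2023))
# → ['lp:state_pension']
```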

An example: Detecting strategy drift

Every fund has a declared focus: the strategy, sector, and region it says it will invest in.

Every fund also has an actual behaviour: the strategies, sectors, and regions where its portfolio companies actually operate.

These two things are often different, and the difference reveals whether a fund is drifting from its stated mandate, entering new territory quietly, or shifting focus in response to market conditions.

In the old model, detecting strategy drift took up analyst time and effort, forcing them to:

  • Read multiple articles

  • Manually track firms, sectors, and activity

  • Build a mental model over time

The graph, by contrast, enables instant comparison of declared focus against actual investment behaviour. A single graph query returns:

  • Relevant entities

  • Associated strategies

  • Recent signals and actual behaviours

Patterns across the dataset emerge immediately.
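The comparison at the heart of drift detection is a set difference between declared and observed sectors. A minimal sketch, with invented fund and company names:

```python
# Hypothetical data: a fund's declared sector focus, and the sectors
# where its portfolio companies actually operate.
declared = {"fund:acme_growth_iii": {"healthcare", "pharma"}}
portfolio = {
    "fund:acme_growth_iii": [
        {"company": "MediCo", "sector": "healthcare"},
        {"company": "ChipCo", "sector": "semiconductors"},
        {"company": "DataCo", "sector": "software"},
    ],
}

def strategy_drift(fund):
    """Sectors the fund actually invests in but never declared."""
    actual = {c["sector"] for c in portfolio.get(fund, [])}
    return actual - declared.get(fund, set())

print(sorted(strategy_drift("fund:acme_growth_iii")))
# → ['semiconductors', 'software']
```

Run across every fund in the graph, the same query surfaces drift patterns across the whole dataset at once.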

What this means for the product

The graph is not just a backend model: its design and structure actively drive key product features:

  • Dashboards showing GP activity and positioning

  • Article enrichment with network intelligence modules providing rich context and multiple windows onto the data ecosystem

  • Signal detection across entities and markets

This connects information architecture directly to user-facing value.

Challenges

Designing the model exposed (and solved) some non-trivial problems:

  • Entity resolution
    Identifying when different references describe the same real-world entity

  • Data consistency
    Applying controlled vocabularies across varied and messy source content

  • Scalability
    Ensuring the model can evolve as new data and use cases emerge

  • Synthetic vs real data
    Early validation required assumptions that would need testing in production
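Entity resolution, the first of these challenges, can be illustrated with a simple normalisation step. This is a hypothetical sketch, not the pipeline used on the project: real resolution also needs fuzzy matching and human review.

```python
import re

# Hypothetical legal-suffix stopwords used to normalise firm names.
SUFFIXES = {"llp", "llc", "ltd", "inc", "lp"}

def normalise(name):
    """Lowercase, strip punctuation, and drop legal suffixes."""
    tokens = re.findall(r"[a-z0-9]+", name.lower())
    return " ".join(t for t in tokens if t not in SUFFIXES)

def resolve(references):
    """Group raw source references by their normalised form."""
    groups = {}
    for ref in references:
        groups.setdefault(normalise(ref), []).append(ref)
    return groups

# Three differently formatted references collapse to one canonical key.
print(resolve(["Acme Capital LLP", "ACME Capital", "Acme Capital, Ltd."]))
```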

Outcome

This work created a foundation for:

  • Faster, more reliable insight generation

  • New product capabilities based on querying and aggregation

  • A shift in user behaviours: from searching for content, to discovering intelligence.

My role

  • Defined the information architecture

  • Designed the graph schema (entity and relationship model)

  • Established controlled vocabularies

  • Shaped how the model translates into product features

Reflection

Data-rich products often fail their users not because they lack data, but because they lack user-centred structure.

This project demonstrated the benefits of applying knowledge architecture at the data layer:

  • Unlocked latent value in existing content

  • Enabled entirely new classes of interaction and insight

  • Turned static data into living information and insight

  • Ordered intelligence around user goals.

