Injecting domain expertise into your AI system | by Dr. Janna Lipenkova | Feb, 2025

Learn how to connect the dots between AI expertise and real life

(Source: Getty Images)

When starting their AI initiatives, many companies are trapped in silos and approach AI as a purely technical endeavor, sidelining domain experts or involving them too late. They end up with generic AI applications that miss industry nuances, produce poor recommendations, and quickly become unpopular with users. By contrast, AI systems that deeply understand industry-specific processes, constraints, and decision logic have the following benefits:

  • Increased efficiency: The more domain knowledge AI incorporates, the less manual effort is required from human experts.
  • Improved adoption: Experts disengage from AI systems that feel too generic. AI must speak their language and align with real workflows to gain their trust.
  • A sustainable competitive moat: As AI becomes a commodity, embedding proprietary expertise is the most effective way to build defensible AI systems (cf. this article to learn about the building blocks of AI's competitive advantage).

Domain experts can help you connect the dots between the technicalities of an AI system and its real-life usage and value. Thus, they should be key stakeholders and co-creators of your AI applications. This guide is the first part of my series on expertise-driven AI. Following my mental model of AI systems, it provides a structured approach to embedding deep domain expertise into your AI.

Overview of the methods for domain knowledge integration

Throughout the article, we will use the use case of supply chain optimisation (SCO) to illustrate these different methods. Modern supply chains are under constant strain from geopolitical tensions, climate disruptions, and volatile demand shifts, and AI can provide the kind of dynamic, high-coverage intelligence needed to anticipate delays, manage risks, and optimise logistics. However, without domain expertise, these systems are often disconnected from the realities of life. Let's see how we can solve this by integrating domain expertise across the different components of the AI application.

AI is only as domain-aware as the data it learns from. Raw data isn't enough: it must be curated, refined, and contextualised by experts who understand its meaning in the real world.

Data understanding: Teaching AI what matters

While data scientists can build sophisticated models to analyse patterns and distributions, these analyses often stay at a theoretical, abstract level. Only domain experts can validate whether the data is complete, accurate, and representative of real-world conditions.

In supply chain optimisation, for example, shipment data may contain missing delivery timestamps, inconsistent route details, or unexplained fluctuations in transit times. A data scientist might discard these as noise, but a logistics expert may have real-world explanations for these inconsistencies. For instance, they might be caused by weather-related delays, seasonal port congestion, or carrier reliability issues. If these nuances aren't accounted for, the AI might learn an overly simplified view of supply chain dynamics, resulting in misleading risk assessments and poor recommendations.

Experts also play a critical role in assessing the completeness of data. AI models work with what they have, assuming that all key factors are already present. It takes human expertise and judgment to identify blind spots. For example, if your supply chain AI isn't trained on customs clearance times or factory shutdown histories, it won't be able to predict disruptions caused by regulatory issues or production bottlenecks.

Implementation tip: Run joint Exploratory Data Analysis (EDA) sessions with data scientists and domain experts to identify missing business-critical information, ensuring AI models work with a complete and meaningful dataset, not just statistically clean data.
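To make such an EDA session concrete, here is a minimal sketch that flags records for expert review instead of silently discarding them as noise. The field names (`delivered_at`, `transit_days`, `shipment_id`) and the 30-day threshold are illustrative assumptions, not a prescribed schema:

```python
def flag_for_expert_review(shipments, max_transit_days=30):
    """Collect records that a domain expert should look at before training."""
    flagged = []
    for s in shipments:
        issues = []
        if s.get("delivered_at") is None:
            issues.append("missing delivery timestamp")
        if s.get("transit_days", 0) > max_transit_days:
            issues.append("unusually long transit time")  # noise, or congestion?
        if issues:
            flagged.append((s["shipment_id"], issues))
    return flagged

shipments = [
    {"shipment_id": "S1", "delivered_at": "2025-01-10", "transit_days": 12},
    {"shipment_id": "S2", "delivered_at": None, "transit_days": 45},
]
flags = flag_for_expert_review(shipments)
```

In a joint session, the data scientist produces this list and the logistics expert decides, record by record, whether an anomaly is an error or a real-world signal worth keeping.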

One common pitfall when starting with AI is integrating too much data too soon, leading to complexity, congestion of your data pipelines, and blurred or noisy insights. Instead, start with a couple of high-impact data sources and expand incrementally based on AI performance and user needs. For instance, an SCO system may initially use historical shipment data and supplier reliability scores. Over time, domain experts may identify missing information, such as port congestion data or real-time weather forecasts, and point engineers to the data sources where it can be found.

Implementation tip: Start with a minimal, high-value dataset (typically 3–5 data sources), then expand incrementally based on expert feedback and real-world AI performance.

AI models learn by detecting patterns in data, but often, the most effective learning signals aren't yet present in the raw data. This is where data annotation comes in: by labelling key attributes, domain experts help the AI understand what matters and make better predictions. Consider an AI model built to predict supplier reliability. The model is trained on shipment records, which contain delivery times, delays, and transit routes. However, raw delivery data alone doesn't capture the full picture of supplier risk; there are no direct labels indicating whether a supplier is "high risk" or "low risk".

Without additional explicit learning signals, the AI might draw the wrong conclusions. It could conclude that all delays are equally bad, even when some are caused by predictable seasonal fluctuations. Or it might overlook early warning signs of supplier instability, such as frequent last-minute order changes or inconsistent inventory levels.

Domain experts can enrich the data with more nuanced labels, such as supplier risk categories, disruption causes, and exception-handling rules. By introducing these curated learning signals, you can ensure that AI doesn't just memorise past trends but learns meaningful, decision-ready insights.

You shouldn't rush your annotation efforts; instead, set up a structured annotation process that includes the following components:

  • Annotation guidelines: Establish clear, standardized rules for labelling data to ensure consistency. For example, supplier risk categories should be based on defined thresholds (e.g., delivery delays over 5 days + financial instability = high risk).
  • Multiple expert review: Involve several domain experts to reduce bias and ensure objectivity, particularly for subjective classifications like risk levels or disruption impact.
  • Granular labelling: Capture both direct and contextual factors, such as annotating not just shipment delays but also the cause (customs, weather, supplier fault).
  • Continuous refinement: Regularly audit and refine annotations based on AI performance; if predictions consistently miss key risks, experts should adjust the labelling strategy accordingly.

Implementation tip: Define an annotation playbook with clear labelling criteria, involve at least two domain experts per critical label for objectivity, and run regular annotation review cycles to ensure AI is learning from accurate, business-relevant insights.
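The first guideline above (delivery delays over 5 days plus financial instability = high risk) can be encoded as a deterministic labelling rule, so every annotator applies the same thresholds. This is a sketch; the medium-risk fallback tier is an assumption of mine, not part of the guideline in the text:

```python
def label_supplier_risk(avg_delay_days, financially_unstable):
    """Deterministic annotation rule derived from the guideline above."""
    if avg_delay_days > 5 and financially_unstable:
        return "high"
    if avg_delay_days > 5 or financially_unstable:
        return "medium"  # assumed intermediate tier, not from the text
    return "low"
```

Encoding the rule as code also makes the guideline auditable: when experts refine the thresholds, the change is a one-line diff rather than a re-briefing of every annotator.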

So far, our AI models learn from real-life historical data. However, rare, high-impact events, like factory shutdowns, port closures, or regulatory shifts in our supply chain scenario, may be underrepresented. Without exposure to these scenarios, AI can fail to anticipate major risks, leading to overconfidence in supplier stability and poor contingency planning. Synthetic data solves this by creating additional datapoints for rare events, but expert oversight is crucial to ensure that it reflects plausible risks rather than unrealistic patterns.

Let's say we want to predict supplier reliability in our supply chain system. The historical data may contain few recorded supplier failures, but that's not because failures don't happen. Rather, many companies proactively mitigate risks before they escalate. Without synthetic examples, AI might deduce that supplier defaults are extremely rare, leading to misguided risk assessments.

Experts can help generate synthetic failure scenarios based on:

  • Historical patterns: Simulating supplier collapses triggered by economic downturns, regulatory shifts, or geopolitical tensions.
  • Hidden risk indicators: Training AI on unrecorded early warning signs, like financial instability or leadership changes.
  • Counterfactuals: Creating "what-if" events, such as a semiconductor supplier suddenly halting production or a prolonged port strike.

Actionable step: Work with domain experts to define high-impact but low-frequency events and scenarios to focus on when you generate synthetic data.
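As a sketch of how such expert-defined scenarios can drive synthetic data generation, the templates below perturb a normal shipment record with a disruption and an extra delay. The event names, delay ranges, and route are purely illustrative assumptions:

```python
import random

# Expert-defined failure templates (event names and delay ranges are made up).
SCENARIOS = [
    {"event": "factory_shutdown", "extra_delay_days": (10, 30)},
    {"event": "port_strike", "extra_delay_days": (5, 20)},
    {"event": "regulatory_hold", "extra_delay_days": (3, 15)},
]

def synthesize_failures(base_shipment, n, seed=0):
    """Create n synthetic disrupted copies of a normal shipment record."""
    rng = random.Random(seed)  # fixed seed for reproducibility
    samples = []
    for _ in range(n):
        scenario = rng.choice(SCENARIOS)
        lo, hi = scenario["extra_delay_days"]
        sample = dict(base_shipment)
        sample["disruption"] = scenario["event"]
        sample["transit_days"] = base_shipment["transit_days"] + rng.randint(lo, hi)
        samples.append(sample)
    return samples

base = {"route": "Shenzhen->Rotterdam", "transit_days": 28}
synthetic = synthesize_failures(base, n=100)
```

The expert's role is to own the `SCENARIOS` list: which events are plausible, how severe they get, and roughly how often each should appear in the training mix.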

Data makes domain expertise shine. An AI initiative that relies on clean, relevant, and enriched domain data will have an obvious competitive advantage over one that takes the "quick-and-dirty" shortcut to data. However, keep in mind that working with data can be tedious, and experts need to see the results of their efforts, whether it's improving AI-driven risk assessments, optimising supply chain resilience, or enabling smarter decision-making. The key is to make data collaboration intuitive, purpose-driven, and directly tied to business outcomes, so experts stay engaged and motivated.

Once AI has access to high-quality data, the next challenge is ensuring it generates useful and accurate outputs. Domain expertise is needed to:

  1. Define clear AI objectives aligned with business priorities
  2. Ensure AI correctly interprets industry-specific data
  3. Continuously validate AI's outputs and recommendations

Let's look at some common AI approaches and see how they can benefit from an extra shot of domain knowledge.

Training predictive models from scratch

For structured problems like supply chain forecasting, predictive models such as classification and regression can help anticipate delays and suggest optimisations. However, to make sure these models are aligned with business goals, data scientists and domain experts need to work together. For example, an AI model might try to minimise shipment delays at all costs, but a supply chain expert knows that fast-tracking every shipment via air freight is financially unsustainable. They can formulate additional constraints on the model, making it prioritise critical shipments while balancing cost, risk, and lead times.

Implementation tip: Define clear objectives and constraints with domain experts before training AI models, ensuring alignment with real business priorities.
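One simple way to express such a constraint is to bake it into the objective itself: instead of minimising delay alone, score each shipping option on price plus a delay penalty, with a heavier penalty for critical shipments. The weights, prices, and delay figures below are illustrative assumptions a domain expert would calibrate, not real freight economics:

```python
def choose_mode(options, critical, delay_cost_per_day=200):
    """Pick the option minimising price plus a delay penalty.

    Critical shipments weight delays 10x more heavily (assumed factor).
    """
    delay_weight = 10.0 if critical else 1.0

    def total_cost(opt):
        return opt["price"] + delay_weight * opt["expected_delay_days"] * delay_cost_per_day

    return min(options, key=total_cost)

options = [
    {"mode": "air", "price": 4000, "expected_delay_days": 0.5},
    {"mode": "sea", "price": 800, "expected_delay_days": 6.0},
]
```

With these assumed numbers, a critical shipment justifies air freight while a routine one goes by sea; the same weighting idea carries over to the loss function of a trained model.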

For a detailed overview of predictive AI methods, please refer to Chapter 4 of my book The Art of AI Product Management.

Navigating the LLM triad

While predictive models trained from scratch can excel at very specific tasks, they are also rigid and will "refuse" to perform any other task. GenAI models are more open-minded and can be used for highly diverse requests. For example, an LLM-based conversational widget in an SCO system can allow users to interact with real-time insights using natural language. Instead of sifting through rigid dashboards, users can ask, "Which suppliers are at risk of delays?" or "What alternative routes are available?" The AI pulls from historical data, live logistics feeds, and external risk factors to provide actionable answers, suggest mitigations, and even automate workflows like rerouting shipments.

But how can you ensure that a huge, out-of-the-box model like ChatGPT or Llama understands the nuances of your domain? Let's walk through the LLM triad: a progression of techniques to incorporate domain knowledge into your LLM system.

Figure 2: The LLM triad is a progression of techniques for incorporating domain- and company-specific knowledge into your LLM system

As you move from left to right, you can ingrain more domain knowledge into the LLM; however, each stage also adds new technical challenges (if you are interested in a systematic deep-dive into the LLM triad, please check out chapters 5–8 of my book The Art of AI Product Management). Here, let's focus on how domain experts can jump in at each of the stages:

  1. Prompting out-of-the-box LLMs might seem like a generic approach, but with the right intuition and skill, domain experts can fine-tune prompts to extract the extra bit of domain knowledge out of the LLM. Personally, I think this is a big part of the fascination around prompting: it puts the most powerful AI models directly into the hands of domain experts without any technical expertise. Some key prompting techniques include:
  • Few-shot prompting: Incorporate examples to guide the model's responses. Instead of just asking "What are alternative shipping routes?", a well-crafted prompt includes sample scenarios, such as "Example of past scenario: A previous delay at the Port of Shenzhen was mitigated by rerouting via Ho Chi Minh City, reducing transit time by 3 days."
  • Chain-of-thought prompting: Encourage step-by-step reasoning for complex logistics queries. Instead of "Why is my shipment delayed?", a structured prompt might be "Analyse historical delivery data, weather reports, and customs processing times to determine why shipment #12345 is delayed."
  • Providing additional background information: Attach external documents to improve domain-specific responses. For example, prompts could reference real-time port congestion reports, supplier contracts, or risk assessments to generate data-backed recommendations. Most LLM interfaces already allow you to conveniently attach additional files to your prompt.
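A sketch of how an expert-maintained prompt template might combine the few-shot and chain-of-thought techniques above. The wrapper function and the assistant persona line are assumptions for illustration; no model is actually called, the output is just the prompt string:

```python
# Expert-curated few-shot examples, reusing the sample scenario from the text.
FEW_SHOT_EXAMPLES = [
    "Example of past scenario: A previous delay at the Port of Shenzhen was "
    "mitigated by rerouting via Ho Chi Minh City, reducing transit time by 3 days.",
]

def build_prompt(question):
    """Assemble a few-shot, chain-of-thought prompt string (no model call)."""
    parts = ["You are a supply chain assistant."]
    parts.extend(FEW_SHOT_EXAMPLES)
    parts.append(
        "Reason step by step over historical delivery data, weather reports, "
        "and customs processing times before answering."
    )
    parts.append(f"Question: {question}")
    return "\n\n".join(parts)

prompt = build_prompt("What are alternative shipping routes?")
```

Keeping the examples in a list that experts can edit, separate from the plumbing, lets them iterate on prompts without touching code logic.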

2. RAG (Retrieval-Augmented Generation): While prompting helps guide AI, it still relies on pre-trained knowledge, which may be outdated or incomplete. RAG allows AI to retrieve real-time, company-specific data, ensuring that its responses are grounded in current logistics reports, supplier performance data, and risk assessments. For example, instead of generating generic supplier risk analyses, a RAG-powered AI system would pull real-time shipment data, supplier credit scores, and port congestion reports before making recommendations. Domain experts can help select and structure these data sources and are also needed when it comes to testing and evaluating RAG systems.

Implementation tip: Work with domain experts to curate and structure knowledge sources, ensuring AI retrieves and applies only the most relevant and high-quality business information.
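As a minimal illustration of where the expert-curated sources plug in, the sketch below uses toy keyword overlap in place of a real embedding-based retriever; the documents, their contents, and the scoring are illustrative assumptions:

```python
# Expert-curated sources (contents are made up for illustration).
KNOWLEDGE_BASE = [
    {"source": "port_congestion_report",
     "text": "Severe congestion at Port of Rotterdam with an average wait of 4 days"},
    {"source": "supplier_credit_scores",
     "text": "Supplier Acme Components downgraded to credit grade C"},
    {"source": "shipment_log",
     "text": "Shipment 12345 departed Shenzhen on schedule"},
]

def retrieve(query, k=2):
    """Rank documents by naive keyword overlap with the query."""
    q_terms = set(query.lower().split())

    def overlap(doc):
        return len(q_terms & set(doc["text"].lower().split()))

    return sorted(KNOWLEDGE_BASE, key=overlap, reverse=True)[:k]

def grounded_context(query):
    """Build the context block that would be prepended to the LLM prompt."""
    return "\n".join(f"[{d['source']}] {d['text']}" for d in retrieve(query))
```

In a production system the overlap function would be replaced by vector similarity, but the expert's contribution is the same: deciding what goes into `KNOWLEDGE_BASE` and checking that the retrieved context actually supports the answer.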

3. Fine-tuning: While prompting and RAG inject domain knowledge on the fly, they don't inherently embed your domain-specific workflows, terminology, or decision logic into the LLM. Fine-tuning adapts the LLM to think like a logistics expert. Domain experts can guide this process by creating high-quality training data, ensuring AI learns from real supplier assessments, risk evaluations, and procurement decisions. They can refine industry terminology to prevent misinterpretations (e.g., AI distinguishing between "buffer stock" and "safety stock"). They also align AI's reasoning with business logic, ensuring it considers cost, risk, and compliance, not just efficiency. Finally, they evaluate fine-tuned models, testing AI against real-world decisions to catch biases or blind spots.

Implementation tip: In LLM fine-tuning, data is the critical success factor. Quality beats quantity, and fine-tuning on a small, high-quality dataset can give you excellent results. Thus, give your experts enough time to figure out the right structure and content of the fine-tuning data, and plan for plenty of end-to-end iterations of your fine-tuning process.
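A sketch of what expert-authored fine-tuning data might look like, serialised as JSONL chat records (a common format for fine-tuning APIs). The example record is illustrative, and its answer deliberately avoids asserting a particular inventory definition:

```python
import json

# Expert-written examples; the content is illustrative, not authoritative.
expert_examples = [
    {"question": "Is buffer stock the same as safety stock?",
     "answer": "No. The two terms refer to different kinds of inventory and "
               "should not be used interchangeably in our reports."},
]

def to_jsonl(examples):
    """Serialise expert Q&A pairs into one chat-format JSON record per line."""
    lines = []
    for ex in examples:
        record = {"messages": [
            {"role": "user", "content": ex["question"]},
            {"role": "assistant", "content": ex["answer"]},
        ]}
        lines.append(json.dumps(record))
    return "\n".join(lines)

jsonl = to_jsonl(expert_examples)
```

The point of the intermediate `expert_examples` structure is that experts can author and review plain question/answer pairs, while engineers own the conversion into whatever format the fine-tuning API expects.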

Encoding expert knowledge with neuro-symbolic AI

Every machine learning algorithm gets it wrong from time to time. To mitigate errors, it helps to set the "hard facts" of your domain in stone, making your AI system more reliable and controllable. This combination of machine learning and deterministic rules is called neuro-symbolic AI.

For example, an explicit knowledge graph can encode supplier relationships, regulatory constraints, transportation networks, and risk dependencies in a structured, interconnected format.

Figure 3: Knowledge graphs explicitly encode relationships between entities, reducing the guesswork in your AI system

Instead of relying purely on statistical correlations, an AI system enriched with knowledge graphs can:

  • Validate predictions against domain-specific rules (e.g., ensuring that AI-generated supplier recommendations comply with regulatory requirements).
  • Infer missing information (e.g., if a supplier has no historical delays but shares dependencies with high-risk suppliers, AI can assess its potential risk).
  • Improve explainability by allowing AI decisions to be traced back to logical, rule-based reasoning rather than black-box statistical outputs.
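To make the second capability concrete, here is a toy sketch of inferring risk from shared dependencies. The entities, edges, and the hard-coded rule are illustrative assumptions; a real system would use a proper graph store rather than dictionaries:

```python
# Toy graph: which upstream components each supplier depends on (made up).
DEPENDS_ON = {
    "SupplierA": {"ChipFab1"},
    "SupplierB": {"ChipFab1", "FoundryX"},
    "SupplierC": {"FoundryY"},
}
HIGH_RISK = {"SupplierA"}  # flagged by experts or by an upstream model

def inferred_risk(supplier):
    """Hard rule: sharing a dependency with a high-risk supplier elevates risk."""
    if supplier in HIGH_RISK:
        return "high"
    deps = DEPENDS_ON.get(supplier, set())
    if any(deps & DEPENDS_ON.get(risky, set()) for risky in HIGH_RISK):
        return "elevated"
    return "baseline"
```

Because the inference follows an explicit edge in the graph, the system can also explain it: SupplierB is elevated *because* it shares ChipFab1 with high-risk SupplierA.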

How can you decide which knowledge should be encoded with rules (symbolic AI), and which should be learned dynamically from the data (neural AI)? Domain experts can help you pick the bits of knowledge where hard-coding makes the most sense:

  • Knowledge that is relatively stable over time
  • Knowledge that is hard to infer from the data, for example because it is not well-represented
  • Knowledge that is critical for high-impact decisions in your domain, so you can't afford to get it wrong

Usually, this knowledge will be stored in separate components of your AI system, like decision trees, knowledge graphs, and ontologies. There are also some techniques to integrate it directly into LLMs and other statistical models, such as Lamini's memory fine-tuning.

Compound AI and workflow engineering

Generating insights and turning them into actions is a multi-step process. Experts can help you model workflows and decision-making pipelines, ensuring that the process followed by your AI system aligns with their tasks. For example, the following pipeline shows how the AI components we have considered so far can be combined into a workflow for the mitigation of shipment risks:

Figure 4: A combined workflow for the assessment and mitigation of shipment risks

Experts are also needed to calibrate the "labor distribution" between humans and AI. For example, when modelling decision logic, they can set thresholds for automation, deciding when AI can trigger workflows versus when human approval is required.

Implementation tip: Involve your domain experts in mapping your processes to AI models and assets, identifying gaps vs. steps that can already be automated.
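The expert-calibrated automation thresholds mentioned above can be sketched as a simple decision function; the threshold values and outcome names are illustrative assumptions that experts would set in practice:

```python
# Expert-set thresholds (values are assumptions for illustration).
AUTO_RISK_THRESHOLD = 0.7     # below this risk score, no action is taken
AUTO_REROUTE_MAX_COST = 2000  # above this cost, a human must approve

def route_decision(risk_score, reroute_cost):
    """Decide whether the rerouting workflow runs automatically or is escalated."""
    if risk_score < AUTO_RISK_THRESHOLD:
        return "no_action"
    if reroute_cost <= AUTO_REROUTE_MAX_COST:
        return "auto_reroute"
    return "human_approval"
```

Keeping the thresholds as named constants, separate from the pipeline code, lets experts recalibrate the human/AI labor distribution without redeploying the workflow.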

Especially in B2B environments, where employees are deeply embedded in their daily workflows, the user experience must be seamlessly integrated with existing processes and task structures to ensure efficiency and adoption. For example, an AI-powered supply chain tool must align with how logistics professionals think, work, and make decisions. In the development phase, domain experts are the closest "peers" to your real users, and picking their brains is one of the fastest ways to bridge the gap between AI capabilities and real-world usability.

Implementation tip: Involve domain experts early in UX design to ensure AI interfaces are intuitive, relevant, and tailored to real decision-making workflows.

Ensuring transparency and trust in AI decisions

AI thinks differently from humans, which makes us humans skeptical. Often, that's a good thing, since it helps us stay alert to potential errors. But mistrust is also one of the biggest barriers to AI adoption. When users don't understand why a system makes a particular recommendation, they are less likely to work with it. Domain experts can define how AI should explain itself, ensuring users have visibility into confidence scores, decision logic, and key influencing factors.

For example, if an SCO system recommends rerouting a shipment, it would be irresponsible on the part of a logistics planner to just accept it. She needs to see the "why" behind the recommendation: is it due to supplier risk, port congestion, or fuel cost spikes? The UX should provide a breakdown of the decision, backed by additional information like historical data, risk factors, and a cost-benefit analysis.

⚠️ Mitigate overreliance on AI: Excessive dependence of your users on AI can introduce bias, errors, and unforeseen failures. Experts should find ways to calibrate AI-driven insights vs. human expertise, ethical oversight, and strategic safeguards to ensure resilience, adaptability, and trust in decision-making.

Implementation tip: Work with domain experts to define key explainability features, such as confidence scores, data sources, and impact summaries, so users can quickly assess AI-driven recommendations.
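A sketch of such an explanation payload, combining the recommendation with its confidence, the weighted factors, and the underlying data sources. The factor weights, source names, and text format are illustrative assumptions:

```python
def explain(recommendation, confidence, factors, sources):
    """Render a recommendation with its confidence, factor weights, and sources."""
    lines = [f"Recommendation: {recommendation} (confidence {confidence:.0%})"]
    for name, weight in sorted(factors.items(), key=lambda kv: -kv[1]):
        lines.append(f"  - {name}: {weight:.0%} of the decision")
    lines.append("Sources: " + ", ".join(sources))
    return "\n".join(lines)

report = explain(
    "Reroute shipment 12345 via Ho Chi Minh City",
    confidence=0.82,
    factors={"port_congestion": 0.55, "supplier_risk": 0.30, "fuel_cost": 0.15},
    sources=["live_port_feed", "supplier_credit_scores"],
)
```

The planner sees at a glance that port congestion dominates the decision, and can drill into the named sources before accepting the reroute.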

Simplifying AI interactions without losing depth

AI tools should make complex decisions easier, not harder. If users need deep technical knowledge to extract insights from AI, the system has failed from a UX perspective. Domain experts can help strike a balance between simplicity and depth, ensuring the interface provides actionable, context-aware recommendations while allowing deeper analysis when needed.

For instance, instead of forcing users to manually sift through data tables, AI could provide pre-configured reports based on common logistics challenges. However, expert users should also have on-demand access to raw data and advanced settings when necessary. The key is to design AI interactions that are efficient for everyday use but flexible for deep analysis when required.

Implementation tip: Use domain expert feedback to define default views, priority alerts, and user-configurable settings, ensuring AI interfaces provide both efficiency for routine tasks and depth for deeper analysis and strategic decisions.

Continuous UX testing and iteration with experts

AI UX isn't a one-and-done process; it needs to evolve with real-world user feedback. Domain experts play a key role in UX testing, refinement, and iteration, ensuring that AI-driven workflows stay aligned with business needs and user expectations.

For example, your initial interface may surface too many low-priority alerts, leading to alert fatigue where users start ignoring AI recommendations. Supply chain experts can identify which alerts are most valuable, allowing UX designers to prioritize high-impact insights while reducing noise.
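A minimal sketch of this kind of expert-driven alert triage: experts assign priorities per alert type, and the default view surfaces only the top tiers. The alert types and priority values are illustrative assumptions:

```python
# Priorities assigned by supply chain experts (types and values are made up).
ALERT_PRIORITY = {
    "supplier_default_risk": 1,
    "port_closure": 1,
    "minor_eta_shift": 3,
    "document_reupload": 4,
}

def default_view(alerts, max_priority=2):
    """Show only high-priority alerts by default to reduce alert fatigue."""
    visible = [a for a in alerts if ALERT_PRIORITY.get(a["type"], 99) <= max_priority]
    return sorted(visible, key=lambda a: ALERT_PRIORITY[a["type"]])

alerts = [
    {"type": "minor_eta_shift"},
    {"type": "port_closure"},
    {"type": "document_reupload"},
]
```

Lower-priority alerts are not deleted, only hidden from the default view, so expert users can still opt into the full stream when investigating.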

Implementation tip: Conduct think-aloud sessions and have domain experts verbalize their thought process when interacting with your AI interface. This helps AI teams discover hidden assumptions and refine AI based on how experts actually think and make decisions.

Vertical AI systems must integrate domain knowledge at every stage, and experts should become key stakeholders in your AI development:

  • They refine data selection, annotation, and synthetic data generation.
  • They guide AI learning through prompting, RAG, and fine-tuning.
  • They support the design of seamless user experiences that integrate with daily workflows in a transparent and trustworthy way.

An AI system that "gets" the domain of your users will not only be useful and adopted in the short and medium term, but will also contribute to the competitive advantage of your business.

Now that you have learned a range of techniques for incorporating domain-specific knowledge, you might be wondering how to approach this in your organizational context. Stay tuned for my next article, where we will consider the practical challenges and strategies for implementing an expertise-driven AI strategy!

Note: Unless noted otherwise, all images are the author's.