Einstein Trust Layer
In the last blog, we discussed invoking prompts through the Einstein LLM Gateway Connect API. However, generating responses from raw prompts alone does not give us all the advantages of the Einstein Trust Layer, which is built on Hyperforce for secure invocation of LLMs.
On the Einstein1 platform, every prompt template undergoes a series of checks and pre-processing steps before it reaches the LLM. First, the platform ensures that any data retrieved for dynamic grounding complies with the user permissions and access controls within the organization, a process referred to as "secure data retrieval." Next, dynamic grounding takes place, using Flows, Apex, or MuleSoft API calls to fetch data from various sources, including Data Cloud. Any sensitive data retrieved, such as Personally Identifiable Information (PII) and Payment Card Industry (PCI) data, is then masked. Finally, additional instruction defenses are applied to guide the model toward producing relevant and secure outputs.
The LLM Gateway acts as an abstraction layer that routes prompts to various LLMs. When an LLM is accessed via an API, the platform enforces a zero-data-retention policy so that data is never stored outside of Salesforce. On the way back to Salesforce, toxicity detection is applied to ensure the safety of the response, any previously masked data is demasked, and the entire response, including feedback, is stored in an audit trail for complete transparency about the process.
Einstein1 Platform
Within the Einstein1 trusted AI architecture, the top layer encompasses applications and workflows that leverage generative AI, along with Einstein Copilot Studio, which hosts low-code/no-code builders such as the Prompt Builder. At the bottom of the stack are the infrastructure and data layers that power the entire system.
The intermediate layers, "model agility" and "trust layer," represent the Einstein Trust Layer and the LLM Gateway. These layers ensure flexibility in model deployment and enforce trust through various security measures. As a developer utilizing the platform, these intricacies are abstracted and managed for you when interfacing with the APIs. This abstraction enables you to concentrate on crafting exceptional custom applications that harness the power of generative AI.
Furthermore, rather than constructing a separate stack, all of these components are seamlessly integrated into the existing unified metadata framework. This framework securely connects and unlocks data from any application, enhancing interoperability and efficiency across the platform.
Prompt Builder
One of the tools built on top of this trusted architecture is the Prompt Builder, which has now reached General Availability (GA). Prompt Builder is a no-code tool designed to help users easily create prompt templates grounded with relevant data. While this blog won't delve deeply into the specifics of the tool, we will cover the following mechanisms available to ground prompt templates with data from the platform.
Input Object and Related Data
The picture below shows the input object and related-list data that can be leveraged in the prompt template without any customization. This is the easiest and most efficient mechanism for fetching grounding data for your prompt template.
Apex
Apex is another mechanism to ground your prompt template. Apex should be leveraged when multiple unrelated objects or complex logic are required to generate the prompt data returned to the template.
The Apex code below shows an example of an Apex grounding implementation that can be invoked from a specific Prompt Template. The key things to note include:
An @InvocableMethod annotation with a CapabilityType attribute, which couples the implementation to a specific prompt template.
A request format that matches the inputs configured on the prompt template.
A response that includes an @InvocableVariable for the prompt text being returned.
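As a rough sketch, a grounding class along these lines might look like the following. The class name, template API name (Account_Summary), and queried fields are all hypothetical; check the Apex reference for the exact CapabilityType format your template type requires.

```apex
// Hypothetical grounding class for a Flex prompt template named 'Account_Summary'.
public with sharing class AccountSummaryGrounding {

    // CapabilityType couples this method to a specific prompt template.
    @InvocableMethod(CapabilityType='FlexTemplate://Account_Summary')
    public static List<Response> getPromptData(List<Request> requests) {
        Request input = requests[0];

        // Query data from any (possibly unrelated) objects needed for grounding;
        // 'with sharing' keeps the query within the running user's access.
        List<Contact> contacts = [
            SELECT Name, Email
            FROM Contact
            WHERE AccountId = :input.account.Id
        ];

        String grounding = 'Key contacts: ';
        for (Contact c : contacts) {
            grounding += c.Name + ' (' + c.Email + '); ';
        }

        // The Prompt variable carries the grounding text back to the template.
        Response output = new Response();
        output.Prompt = grounding;
        return new List<Response>{ output };
    }

    public class Request {
        // Must match the input configured on the prompt template.
        @InvocableVariable(required=true)
        public Account account;
    }

    public class Response {
        // The grounding text returned to the prompt template.
        @InvocableVariable
        public String Prompt;
    }
}
```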
Prompt Invocation through Invocable Action
Prompt templates can also be invoked from Flows using the Prompt Template custom invocable action. Based on the selected template, input parameters are exposed and can be set using Flow variables. Depending on the format of the prompt template's response, an additional parser may be required before the response can be used in downstream Flow actions.
Prompt Invocation through Connect REST API Endpoint
Prompt templates can also be invoked through the following endpoint: /einstein/prompt-template/{{promptTemplateDevName}}/generations. The request body includes the same input parameters in a valueMap as the ConnectApi call in Apex, plus any additional configuration parameters. The isPreview flag can be used during testing to either send the resolved prompt to the LLM (isPreview = false) or only return the resolved prompt text without a generation (isPreview = true).
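A request body for that endpoint might look like the following sketch. The input name (Input:account) and record ID are placeholders tied to the hypothetical template above; the exact valueMap keys depend on the inputs configured on your template.

```json
{
  "isPreview": "false",
  "inputParams": {
    "valueMap": {
      "Input:account": {
        "value": { "id": "001xxxxxxxxxxxxxxx" }
      }
    }
  }
}
```

Setting "isPreview" to "true" in the same body returns only the resolved prompt text, which is useful for verifying grounding before sending anything to the LLM.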
GitHub and Demos
Repo: link
Demo 1: Prompt Template Invocation Through Apex and Prompt Grounding through Apex
Demo 2: Prompt Template Invocation Through Flow and Prompt Grounding using Related List
Demo 3: Prompt Template Grounding through Flow and Invocation through Connect REST API
Resources
Trust Layer
Prompt Builder
Invocable Actions and Connect API