1 change: 0 additions & 1 deletion .vscode/settings.json
@@ -5,7 +5,6 @@
"**/.hg": true,
"**/CVS": true,
"**/.DS_Store": true,
"**/node_modules": true,
},
"search.exclude": {
"**/lib": true,
Expand Down
2 changes: 2 additions & 0 deletions CHANGELOG.md
@@ -4,6 +4,8 @@

 Note: Can be used with `sfdx plugins:install sfdx-hardis@beta` and docker image `hardisgroupcom/sfdx-hardis@beta`
 
+- LLM: Add HuggingFace integration, using LangchainJS provider
+
 ## [5.39.1] 2025-06-05
 
 - [hardis:doc:project2markdown](https://sfdx-hardis.cloudity.com/hardis/doc/project2markdown/): Define DO_NOT_OVERWRITE_INDEX_MD=true to avoid overwriting the index.md file in docs folder, useful if you want to keep your own index.md file.
22 changes: 21 additions & 1 deletion docs/salesforce-ai-setup.md
@@ -66,11 +66,12 @@ Currently supported LangchainJS providers:
 - OpenAI
 - Anthropic
 - Google GenAI
+- HuggingFace
 
 | Variable | Description | Default |
 |-----------------------------|-------------------------------------------------------------------------------------------------|----------------------------------|
 | USE_LANGCHAIN_LLM | Set to true to use LangChain integration | `false` |
-| LANGCHAIN_LLM_PROVIDER | The LLM provider to use (currently supports `ollama`, `openai`, `anthropic` and `google-genai`) | |
+| LANGCHAIN_LLM_PROVIDER | The LLM provider to use (currently supports `ollama`, `openai`, `anthropic`, `google-genai` and `huggingface`) | |
 | LANGCHAIN_LLM_MODEL | The model to use with the selected provider (e.g. `gpt-4o`, `qwen2.5-coder:14b`) | |
 | LANGCHAIN_LLM_MODEL_API_KEY | API key for the selected provider (required for OpenAI, Anthropic and Gemini) | |
 | LANGCHAIN_LLM_TEMPERATURE | Controls randomness (0-1) | |
@@ -125,6 +126,25 @@
 LANGCHAIN_LLM_MODEL=gemini-1.5-pro
 LANGCHAIN_LLM_MODEL_API_KEY=your-api-key
 ```
 
+For HuggingFace:
+
+- Create an account at [HuggingFace](https://huggingface.co/)
+- Get your API token from [HuggingFace Tokens page](https://huggingface.co/settings/tokens)
+- Choose from thousands of available models on the [HuggingFace Model Hub](https://huggingface.co/models)
+
+```sh
+USE_LANGCHAIN_LLM=true
+LANGCHAIN_LLM_PROVIDER=huggingface
+LANGCHAIN_LLM_MODEL=microsoft/DialoGPT-medium
+LANGCHAIN_LLM_MODEL_API_KEY=your-huggingface-token
+```
+
+Popular HuggingFace models you can use:
+- `microsoft/DialoGPT-medium` - Conversational AI model
+- `google/flan-t5-large` - Text-to-text generation model
+- `EleutherAI/gpt-neo-2.7B` - GPT-like language model
+- `facebook/blenderbot-400M-distill` - Conversational AI model
+
 ### With OpenAI Directly
 
 You need to define env variable OPENAI_API_KEY and make it available to your CI/CD workflow.
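> Editor's note: as a quick sanity check outside sfdx-hardis, you can call the same LangChain community class this PR wires in. A minimal sketch, assuming Node 18+ with `@langchain/community` installed; the model name and prompt are illustrative:

```ts
import { HuggingFaceInference } from "@langchain/community/llms/hf";

async function main(): Promise<void> {
  // Same fields the new provider passes through (model, apiKey, temperature, maxTokens)
  const llm = new HuggingFaceInference({
    model: "google/flan-t5-large", // any Hub model served by the Inference API
    apiKey: process.env.LANGCHAIN_LLM_MODEL_API_KEY, // your HuggingFace token
    temperature: 0.2,
    maxTokens: 256,
  });
  // HuggingFaceInference is a plain LLM, so it takes a raw string prompt
  const answer = await llm.invoke("Summarize what a Salesforce flow is in one sentence.");
  console.log(answer);
}

main().catch(console.error);
```

If this prints a completion, the same token and model name should work through the `LANGCHAIN_LLM_*` variables documented above.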
2 changes: 1 addition & 1 deletion docs/salesforce-project-doc-ai.md
@@ -24,6 +24,6 @@ If AI Integration is configured, the following parts of the documentation will be
 - Lightning Web Components
 - Lightning Pages
 
-Configure AI integration following the [related documentation](salesforce-ai-setup.md)
+Configure AI integration following [**AI Setup documentation**](salesforce-ai-setup.md)
 
 See the [list of prompts used by sfdx-hardis](salesforce-ai-prompts.md) to enhance documentation with AI, and how to override them.
3 changes: 2 additions & 1 deletion package.json
@@ -8,8 +8,9 @@
"@actions/github": "^6.0.1",
"@cparra/apexdocs": "^3.12.1",
"@gitbeaker/node": "^35.8.1",
"@huggingface/inference": "^4.0.2",
"@langchain/anthropic": "^0.3.21",
"@langchain/community": "^0.3.44",
"@langchain/community": "^0.3.45",
"@langchain/core": "^0.3.57",
"@langchain/google-genai": "^0.2.10",
"@langchain/ollama": "^0.2.0",
Expand Down
src/common/aiProvider/langChainProviders/langChainAnthropicProvider.ts
@@ -1,6 +1,5 @@
 import { ChatAnthropic } from "@langchain/anthropic";
-import { BaseChatModel } from "@langchain/core/language_models/chat_models";
-import { AbstractLLMProvider, ModelConfig } from "./langChainBaseProvider.js";
+import { AbstractLLMProvider, ModelConfig, SupportedModel } from "./langChainBaseProvider.js";

 export class LangChainAnthropicProvider extends AbstractLLMProvider {
   constructor(modelName: string, config: ModelConfig) {
@@ -11,15 +10,15 @@ export class LangChainAnthropicProvider extends AbstractLLMProvider {
     this.model = this.getModel();
   }
 
-  getModel(): BaseChatModel {
+  getModel(): SupportedModel {
     const config = {
-      modelName: this.modelName,
-      anthropicApiKey: this.config.apiKey!,
+      model: this.modelName,
+      apiKey: this.config.apiKey!,
       temperature: this.config.temperature,
       maxTokens: this.config.maxTokens,
       maxRetries: this.config.maxRetries
     };
 
-    return new ChatAnthropic(config) as BaseChatModel;
+    return new ChatAnthropic(config);
   }
 }
16 changes: 10 additions & 6 deletions src/common/aiProvider/langChainProviders/langChainBaseProvider.ts
@@ -1,4 +1,5 @@
 import { BaseChatModel } from "@langchain/core/language_models/chat_models";
+import { LLM } from "@langchain/core/language_models/llms";
 
 export interface ModelConfig {
   temperature?: number;
@@ -9,16 +10,19 @@ export interface ModelConfig {
   apiKey?: string;
 }

-export type ProviderType = "ollama" | "openai" | "anthropic";
+export type ProviderType = "ollama" | "openai" | "anthropic" | "google-genai" | "huggingface";
 
+// Union type to support both chat models and LLMs
+export type SupportedModel = BaseChatModel | LLM;
+
 export interface BaseLLMProvider {
-  getModel(): BaseChatModel;
+  getModel(): SupportedModel;
   getModelName(): string;
   getLabel(): string;
 }
 
 export abstract class AbstractLLMProvider implements BaseLLMProvider {
-  protected model: BaseChatModel;
+  protected model: SupportedModel;
   protected modelName: string;
   protected config: ModelConfig;
 
@@ -27,13 +31,13 @@ export abstract class AbstractLLMProvider implements BaseLLMProvider {
     this.config = config;
   }
 
-  abstract getModel(): BaseChatModel;
+  abstract getModel(): SupportedModel;
 
   getModelName(): string {
     return this.modelName;
   }
 
   getLabel(): string {
     return "LangChain connector";
   }
-}
+}
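> Editor's note: since `getModel()` now returns `BaseChatModel | LLM`, call sites must narrow the union before invoking, because chat models take a message list while plain LLMs take a raw string. A minimal standalone sketch of that narrowing pattern, mirroring what `langchainProvider.ts` does further down in this PR:

```ts
import { BaseChatModel } from "@langchain/core/language_models/chat_models";
import { LLM } from "@langchain/core/language_models/llms";

type SupportedModel = BaseChatModel | LLM;

async function ask(model: SupportedModel, promptText: string): Promise<string> {
  if (model instanceof BaseChatModel) {
    // Chat models expect messages and return a message whose content may be structured
    const res = await model.invoke([{ role: "user", content: promptText }]);
    return typeof res.content === "string" ? res.content : JSON.stringify(res.content);
  }
  // Plain LLMs take a string and return a string
  return await model.invoke(promptText);
}
```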
src/common/aiProvider/langChainProviders/langChainGoogleGenAi.ts
@@ -1,6 +1,5 @@
 import { ChatGoogleGenerativeAI } from "@langchain/google-genai";
-import { BaseChatModel } from "@langchain/core/language_models/chat_models";
-import { AbstractLLMProvider, ModelConfig } from "./langChainBaseProvider.js";
+import { AbstractLLMProvider, ModelConfig, SupportedModel } from "./langChainBaseProvider.js";

 export class LangChainGoogleGenAiProvider extends AbstractLLMProvider {
   constructor(modelName: string, config: ModelConfig) {
@@ -11,7 +10,7 @@ export class LangChainGoogleGenAiProvider extends AbstractLLMProvider {
     this.model = this.getModel();
   }
 
-  getModel(): BaseChatModel {
+  getModel(): SupportedModel {
     const config = {
       model: this.modelName,
       apiKey: this.config.apiKey!,
@@ -20,6 +19,6 @@ export class LangChainGoogleGenAiProvider extends AbstractLLMProvider {
       maxRetries: this.config.maxRetries
     };
 
-    return new ChatGoogleGenerativeAI(config) as BaseChatModel;
+    return new ChatGoogleGenerativeAI(config);
   }
 }
src/common/aiProvider/langChainProviders/langChainHuggingFaceProvider.ts
@@ -0,0 +1,32 @@
+import { HuggingFaceInference } from "@langchain/community/llms/hf";
+import { AbstractLLMProvider, ModelConfig, SupportedModel } from "./langChainBaseProvider.js";
+import { getEnvVar } from "../../../config/index.js";
+
+export class LangChainHuggingFaceProvider extends AbstractLLMProvider {
+  constructor(modelName: string, config: ModelConfig) {
+    if (!config.apiKey) {
+      throw new Error("API key is required for HuggingFace provider. Define it in a secured env var LANGCHAIN_LLM_MODEL_API_KEY");
+    }
+    super(modelName, config);
+    this.model = this.getModel();
+  }
+
+  getModel(): SupportedModel {
+    const config = {
+      model: this.modelName,
+      apiKey: this.config.apiKey!,
+      temperature: this.config.temperature,
+      maxTokens: this.config.maxTokens,
+      // HuggingFace specific configuration
+      endpointUrl: this.config.baseUrl, // Custom endpoint URL if needed
+      options: {
+        provider: getEnvVar("HF_INFERENCE_PROVIDER") || "default",
+      }
+    };
+    return new HuggingFaceInference(config);
+  }
+
+  getLabel(): string {
+    return "HuggingFace LangChain connector";
+  }
+}
src/common/aiProvider/langChainProviders/langChainOllamaProvider.ts
@@ -1,21 +1,20 @@
 import { ChatOllama } from "@langchain/ollama";
-import { BaseChatModel } from "@langchain/core/language_models/chat_models";
-import { AbstractLLMProvider, ModelConfig } from "./langChainBaseProvider.js";
+import { AbstractLLMProvider, ModelConfig, SupportedModel } from "./langChainBaseProvider.js";
 
 export class LangChainOllamaProvider extends AbstractLLMProvider {
   constructor(modelName: string, config: ModelConfig) {
     super(modelName, config);
     this.model = this.getModel();
   }
 
-  getModel(): BaseChatModel {
+  getModel(): SupportedModel {
     const config = {
       model: this.modelName,
       baseUrl: this.config.baseUrl || "http://localhost:11434",
       temperature: this.config.temperature,
       maxRetries: this.config.maxRetries
     };
 
-    return new ChatOllama(config) as BaseChatModel;
+    return new ChatOllama(config);
   }
 }
src/common/aiProvider/langChainProviders/langChainOpenAIProvider.ts
@@ -1,6 +1,5 @@
 import { ChatOpenAI } from "@langchain/openai";
-import { BaseChatModel } from "@langchain/core/language_models/chat_models";
-import { AbstractLLMProvider, ModelConfig } from "./langChainBaseProvider.js";
+import { AbstractLLMProvider, ModelConfig, SupportedModel } from "./langChainBaseProvider.js";

 export class LangChainOpenAIProvider extends AbstractLLMProvider {
   constructor(modelName: string, config: ModelConfig) {
@@ -11,15 +10,15 @@ export class LangChainOpenAIProvider extends AbstractLLMProvider {
     this.model = this.getModel();
   }
 
-  getModel(): BaseChatModel {
+  getModel(): SupportedModel {
     const config = {
-      modelName: this.modelName,
-      openAIApiKey: this.config.apiKey!,
+      model: this.modelName,
+      apiKey: this.config.apiKey!,
       temperature: this.config.temperature,
       maxTokens: this.config.maxTokens,
       maxRetries: this.config.maxRetries
     };
 
-    return new ChatOpenAI(config) as BaseChatModel;
+    return new ChatOpenAI(config);
   }
-}
+}
src/common/aiProvider/langChainProviders/langChainProviderFactory.ts
@@ -3,8 +3,9 @@ import { LangChainOllamaProvider } from "./langChainOllamaProvider.js";
 import { LangChainOpenAIProvider } from "./langChainOpenAIProvider.js";
 import { LangChainAnthropicProvider } from "./langChainAnthropicProvider.js";
 import { LangChainGoogleGenAiProvider } from "./langChainGoogleGenAi.js";
+import { LangChainHuggingFaceProvider } from "./langChainHuggingFaceProvider.js";
 
-const ALL_PROVIDERS = ["ollama", "openai", "anthropic", "google-genai"];
+const ALL_PROVIDERS = ["ollama", "openai", "anthropic", "google-genai", "huggingface"];
 
 export class LangChainProviderFactory {
   static createProvider(providerType: ProviderType, modelName: string, config: ModelConfig): BaseLLMProvider {
@@ -17,6 +18,8 @@ export class LangChainProviderFactory {
         return new LangChainAnthropicProvider(modelName, config);
       case "google-genai":
         return new LangChainGoogleGenAiProvider(modelName, config);
+      case "huggingface":
+        return new LangChainHuggingFaceProvider(modelName, config);
       default:
         throw new Error(`Unsupported LLM provider: ${providerType}. Supported providers are: ${ALL_PROVIDERS.join(", ")}`);
     }
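> Editor's note: for context, a minimal sketch of how the new `huggingface` branch is reached; the config values are illustrative, while `ModelConfig` and `LangChainProviderFactory` are the types from this PR:

```ts
import { LangChainProviderFactory } from "./langChainProviderFactory.js";
import { ModelConfig } from "./langChainBaseProvider.js";

// Values mirror the documented LANGCHAIN_LLM_* env vars, shown inline for brevity
const config: ModelConfig = {
  apiKey: process.env.LANGCHAIN_LLM_MODEL_API_KEY, // required for HuggingFace
  temperature: 0.2,
  maxTokens: 512,
};

const provider = LangChainProviderFactory.createProvider("huggingface", "google/flan-t5-large", config);
console.log(provider.getLabel()); // "HuggingFace LangChain connector"
```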
45 changes: 32 additions & 13 deletions src/common/aiProvider/langchainProvider.ts
@@ -1,15 +1,16 @@
 import { BaseChatModel } from "@langchain/core/language_models/chat_models";
+import { LLM } from "@langchain/core/language_models/llms";
 import { AiResponse } from "./index.js";
 import { AiProviderRoot } from "./aiProviderRoot.js";
 import c from "chalk";
 import { uxLog } from "../utils/index.js";
 import { PromptTemplate } from "./promptTemplates.js";
 import { getEnvVar } from "../../config/index.js";
 import { LangChainProviderFactory } from "./langChainProviders/langChainProviderFactory.js";
-import { ModelConfig, ProviderType } from "./langChainProviders/langChainBaseProvider.js";
+import { ModelConfig, ProviderType, SupportedModel } from "./langChainProviders/langChainBaseProvider.js";
 
 export class LangChainProvider extends AiProviderRoot {
-  private model: BaseChatModel;
+  private model: SupportedModel;
   private modelName: string;
 
   constructor() {
@@ -22,11 +23,11 @@
     const providerType = provider.toLowerCase() as ProviderType;
     const modelName = getEnvVar("LANGCHAIN_LLM_MODEL");
     const apiKey = getEnvVar("LANGCHAIN_LLM_MODEL_API_KEY");
 
     if (!modelName) {
       throw new Error("LANGCHAIN_LLM_MODEL environment variable must be set to use LangChain integration");
     }
 
     this.modelName = modelName;
 
     // Common configuration for all providers
@@ -47,7 +48,6 @@
   public getLabel(): string {
     return "LangChain connector";
   }
-
   public async promptAi(promptText: string, template: PromptTemplate | null = null): Promise<AiResponse | null> {
     // re-use the same check for max ai calls number as in the original openai provider implementation
     if (!this.checkMaxAiCallsNumber()) {
@@ -65,12 +65,23 @@
     this.incrementAiCallsNumber();
 
     try {
-      const response = await this.model.invoke([
-        {
-          role: "user",
-          content: promptText
-        }
-      ]);
+      let response: any;
+
+      // Check if the model is a BaseChatModel or LLM and call accordingly
+      if (this.model instanceof BaseChatModel) {
+        // For chat models, use message format
+        response = await this.model.invoke([
+          {
+            role: "user",
+            content: promptText
+          }
+        ]);
+      } else if (this.model instanceof LLM) {
+        // For LLMs, use plain string
+        response = await this.model.invoke(promptText);
+      } else {
+        throw new Error("Unsupported model type");
+      }
 
       if (process.env?.DEBUG_PROMPTS === "true") {
         uxLog(this, c.grey("[LangChain] Received prompt response\n" + JSON.stringify(response, null, 2)));
@@ -83,9 +94,17 @@
         model: this.modelName,
       };
 
-      if (response.content) {
+      // Handle different response formats
+      let responseContent: string | undefined;
+      if (this.model instanceof BaseChatModel && response.content) {
+        responseContent = typeof response.content === 'string' ? response.content : JSON.stringify(response.content);
+      } else if (this.model instanceof LLM && typeof response === 'string') {
+        responseContent = response;
+      }
+
+      if (responseContent) {
         aiResponse.success = true;
-        aiResponse.promptResponse = typeof response.content === 'string' ? response.content : JSON.stringify(response.content);
+        aiResponse.promptResponse = responseContent;
       }
 
       return aiResponse;