# @react-native-rag/executorch

This package provides implementations for the Embeddings and LLM interfaces from react-native-rag, using react-native-executorch to run AI models on-device. This enables you to perform inference directly on the user's device, ensuring privacy and offline capabilities.

## Installation

```sh
npm install @react-native-rag/executorch react-native-executorch
```

You also need to install a resource fetcher matching your setup (e.g. `react-native-executorch-expo-resource-fetcher` for Expo projects) and call `initExecutorch` once in your app before using any ExecuTorch modules:

```typescript
import { initExecutorch } from 'react-native-executorch';
import { ExpoResourceFetcher } from 'react-native-executorch-expo-resource-fetcher';

initExecutorch({ resourceFetcher: ExpoResourceFetcher });
```

## Usage

### ExecuTorchEmbeddings

This class allows you to use an ExecuTorch-compatible model to generate text embeddings.

```typescript
import { ALL_MINILM_L6_V2, ALL_MINILM_L6_V2_TOKENIZER } from 'react-native-executorch';
import { ExecuTorchEmbeddings } from '@react-native-rag/executorch';

const embeddings = new ExecuTorchEmbeddings({
  modelSource: ALL_MINILM_L6_V2,
  tokenizerSource: ALL_MINILM_L6_V2_TOKENIZER,
});
```
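Embeddings map text to vectors whose closeness reflects semantic similarity, which is what the vector store uses for retrieval. A minimal sketch of that comparison step is below; `cosineSimilarity` is a hypothetical helper (not part of this package), and the commented `load()`/`embed()` calls are assumptions about the `Embeddings` interface — check your react-native-rag version for the exact method names.

```typescript
// Hypothetical helper: cosine similarity between two embedding vectors.
// Returns a value in [-1, 1]; higher means more semantically similar.
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0;
  let normA = 0;
  let normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// Usage sketch (assumes load() and embed() on the embeddings instance;
// signatures may differ in your version):
// await embeddings.load();
// const v1 = await embeddings.embed('The weather is nice');
// const v2 = await embeddings.embed('It is sunny today');
// const score = cosineSimilarity(v1, v2); // related sentences score higher
```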

### ExecuTorchLLM

This class allows you to use an ExecuTorch-compatible language model for text generation.

```typescript
import {
  LLAMA3_2_1B,
  LLAMA3_2_TOKENIZER,
  LLAMA3_2_TOKENIZER_CONFIG,
} from 'react-native-executorch';
import { ExecuTorchLLM } from '@react-native-rag/executorch';

const llm = new ExecuTorchLLM({
  modelSource: LLAMA3_2_1B,
  tokenizerSource: LLAMA3_2_TOKENIZER,
  tokenizerConfigSource: LLAMA3_2_TOKENIZER_CONFIG,
});
```
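On-device LLMs typically stream output token by token, so the UI can render partial text as it arrives. Below is a sketch of a token accumulator you might pair with a streaming callback; `makeAccumulator` is a hypothetical helper, and the commented `load()`/`generate()` calls are assumptions about the `LLM` interface rather than its documented signature.

```typescript
// Hypothetical helper: collects streamed tokens into the full response text.
function makeAccumulator() {
  let text = '';
  return {
    // Called once per generated token.
    onToken(token: string): void {
      text += token;
    },
    // Returns everything accumulated so far.
    result(): string {
      return text;
    },
  };
}

// Usage sketch (assumes load() and a generate() that accepts a token
// callback; check your react-native-rag version for the actual API):
// await llm.load();
// const acc = makeAccumulator();
// await llm.generate('Hello!', acc.onToken);
// console.log(acc.result());
```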

## Integration with react-native-rag

You can use these classes directly with the useRAG hook:

```typescript
import { useRAG, MemoryVectorStore } from 'react-native-rag';
import { ExecuTorchLLM, ExecuTorchEmbeddings } from '@react-native-rag/executorch';

// `llm` and `embeddings` are the instances created in the examples above.
const App = () => {
  const { isReady, generate } = useRAG({
    llm,
    vectorStore: new MemoryVectorStore({ embeddings }),
  });

  // ... your component logic
};
```
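Under the hood, a RAG pipeline retrieves the most relevant stored chunks and interpolates them into the prompt before generation. The helper below is a hypothetical illustration of that prompt-assembly step, not part of react-native-rag; the hook handles this for you, and the exact prompt format it uses may differ.

```typescript
// Hypothetical helper illustrating how retrieved chunks might be folded
// into a prompt before it is sent to the LLM.
function buildPrompt(question: string, chunks: string[]): string {
  const context = chunks.map((chunk, i) => `[${i + 1}] ${chunk}`).join('\n');
  return `Use the context below to answer.\n\nContext:\n${context}\n\nQuestion: ${question}`;
}
```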

React Native RAG is created by Software Mansion

Since 2012, Software Mansion has been a software agency with experience in building web and mobile apps. We are Core React Native Contributors and experts in dealing with all kinds of React Native issues. We can help you build your next dream product – Hire us.
