
Commit bd5a742

Merge pull request #7 from daviddaytw/readme
Readme
2 parents e954580 + 4abe5eb commit bd5a742

File tree: 2 files changed (+186 -93 lines)

README.md

Lines changed: 154 additions & 77 deletions

@@ -4,135 +4,214 @@
 [![codecov](https://codecov.io/github/daviddaytw/react-native-transformers/graph/badge.svg?token=G3D0Y33SI4)](https://codecov.io/github/daviddaytw/react-native-transformers)
 [![TypeDoc](https://github.com/daviddaytw/react-native-transformers/actions/workflows/docs.yml/badge.svg)](https://daviddaytw.github.io/react-native-transformers)
 
-`react-native-transformers` is a React Native library for running Large Language Models (LLMs) from Hugging Face on your mobile applications locally. It supports both iOS and Android platforms, allowing you to leverage advanced AI models directly on your device without requiring an internet connection.
+**Run Hugging Face transformer models directly in your React Native and Expo applications with on-device inference. No cloud service required!**
 
-## Features
+## Overview
 
-- On-device transformer model support for both text generation and text embedding
-- Local inference without internet connectivity
-- Compatible with iOS and Android platforms
-- Simple API for model loading and inference
-- Support for Hugging Face models in ONNX format
-- Built on top of ONNX Runtime for efficient model execution
-- TypeScript support with full type definitions
+`react-native-transformers` empowers your mobile applications with AI capabilities by running transformer models directly on the device. This means your app can generate text, answer questions, and process language without sending data to external servers, enhancing privacy, reducing latency, and enabling offline functionality.
+
+Built on top of ONNX Runtime, this library provides a streamlined API for integrating state-of-the-art language models into your React Native and Expo applications with minimal configuration.
+
+## Key Features
+
+- **On-device inference**: Run AI models locally without requiring an internet connection
+- **Privacy-focused**: Keep user data on the device without sending it to external servers
+- **Optimized performance**: Leverages ONNX Runtime for efficient model execution on mobile CPUs
+- **Simple API**: Easy-to-use interface for model loading and inference
+- **Expo compatibility**: Works seamlessly with both Expo managed and bare workflows
 
 ## Installation
 
-To use `react-native-transformers`, you need to install `onnxruntime-react-native` as a peer dependency. Follow the steps below:
+### 1. Install peer dependencies
 
-### 1. Install the peer dependency:
+```sh
+npm install onnxruntime-react-native
+```
 
-```sh
-npm install onnxruntime-react-native
-```
+### 2. Install react-native-transformers
 
-### 2. Install `react-native-transformers`:
+```sh
+# React-Native
+npm install react-native-transformers
 
-```sh
-npm install react-native-transformers
-```
+# Expo
+npx expo install react-native-transformers
+```
 
-### 3. Configure React-Native or Expo
+### 3. Platform Configuration
 
 <details>
-<summary>React Native CLI</summary>
+<summary><b>React Native CLI</b></summary>
 
-- Link the `onnxruntime-react-native` library:
+Link the `onnxruntime-react-native` library:
 
-```sh
-npx react-native link onnxruntime-react-native
-```
+```sh
+npx react-native link onnxruntime-react-native
+```
 </details>
 
 <details>
-<summary>Expo</summary>
+<summary><b>Expo</b></summary>
 
-- Install the Expo plugin configuration in `app.json` or `app.config.js`:
+Add the Expo plugin configuration in `app.json` or `app.config.js`:
 
-```json
-{
-  "expo": {
-    "plugins": [
-      "onnxruntime-react-native"
-    ],
-  }
-}
-```
+```json
+{
+  "expo": {
+    "plugins": [
+      "onnxruntime-react-native"
+    ]
+  }
+}
+```
 </details>
 
 ### 4. Babel Configuration
 
-You need to add the `babel-plugin-transform-import-meta` plugin to your Babel configuration (e.g., `.babelrc` or `babel.config.js`):
+Add the `babel-plugin-transform-import-meta` plugin to your Babel configuration:
+
+```js
+// babel.config.js
+module.exports = {
+  // ... your existing config
+  plugins: [
+    // ... your existing plugins
+    "babel-plugin-transform-import-meta"
+  ]
+};
+```
+
+You can follow this [document](https://docs.expo.dev/versions/latest/config/babel/) to create a config file. After changing the Babel configuration, run `npx expo start --clear` to clear the Metro bundler cache.
+
+### 5. Development Client Setup
+
+For development and testing, you must use a development client instead of Expo Go, because ONNX Runtime and react-native-transformers include native code.
+
+You can set up a development client using one of these methods:
+
+- **[EAS Development Build](https://docs.expo.dev/develop/development-builds/introduction/)**: Create a custom development client using EAS Build
+- **[Expo Prebuild](https://docs.expo.dev/workflow/prebuild/)**: Eject to a bare workflow to access native code
 
-```json
-{
-  "plugins": ["babel-plugin-transform-import-meta"]
-}
-```
 
 ## Usage
 
-### Text Generation Example
+### Text Generation
 
 ```javascript
-import React from "react";
+import React, { useState, useEffect } from "react";
 import { View, Text, Button } from "react-native";
 import { Pipeline } from "react-native-transformers";
 
 export default function App() {
-  const [output, setOutput] = React.useState("");
+  const [output, setOutput] = useState("");
+  const [isLoading, setIsLoading] = useState(false);
+  const [isModelReady, setIsModelReady] = useState(false);
+
+  // Load model on component mount
+  useEffect(() => {
+    loadModel();
+  }, []);
 
-  // Function to initialize the model
   const loadModel = async () => {
-    await Pipeline.TextGeneration.init("Felladrin/onnx-Llama-160M-Chat-v1", "onnx/decoder_model_merged.onnx");
+    setIsLoading(true);
+    try {
+      // Load a small Llama model
+      await Pipeline.TextGeneration.init(
+        "Felladrin/onnx-Llama-160M-Chat-v1",
+        "onnx/decoder_model_merged.onnx",
+        {
+          // The fetch function is required to download model files
+          fetch: async (url) => {
+            // In a real app, you might want to cache the downloaded files
+            const response = await fetch(url);
+            return response.url;
+          }
+        }
+      );
+      setIsModelReady(true);
+    } catch (error) {
+      console.error("Error loading model:", error);
+      alert("Failed to load model: " + error.message);
+    } finally {
+      setIsLoading(false);
+    }
   };
 
-  // Function to generate text
   const generateText = () => {
-    Pipeline.TextGeneration.generate("Hello world", setOutput);
+    setOutput("");
+    // Generate text from the prompt and update the UI as tokens are generated
+    Pipeline.TextGeneration.generate(
+      "Write a short poem about programming:",
+      (text) => setOutput(text)
+    );
   };
 
   return (
-    <View>
-      <Button title="Load Model" onPress={loadModel} />
-      <Button title="Generate Text" onPress={generateText} />
-      <Text>Output: {output}</Text>
+    <View style={{ padding: 20 }}>
+      <Button
+        title={isModelReady ? "Generate Text" : "Load Model"}
+        onPress={isModelReady ? generateText : loadModel}
+        disabled={isLoading}
+      />
+      <Text style={{ marginTop: 20 }}>
+        {output || "Generated text will appear here"}
+      </Text>
     </View>
   );
 }
 ```
 
-### Text Embedding Example
+### With Custom Model Download
+
+For Expo applications, use `expo-file-system` to download models with progress tracking:
 
 ```javascript
-import React from "react";
-import { View, Text, Button } from "react-native";
+import * as FileSystem from "expo-file-system";
 import { Pipeline } from "react-native-transformers";
 
-export default function App() {
-  const [embedding, setEmbedding] = React.useState([]);
+// In your model loading function
+await Pipeline.TextGeneration.init("model-repo", "model-file", {
+  fetch: async (url) => {
+    const localPath = FileSystem.cacheDirectory + url.split("/").pop();
 
-  // Function to initialize the model
-  const loadModel = async () => {
-    await Pipeline.TextEmbedding.init("Xenova/all-MiniLM-L6-v2");
-  };
+    // Check if file already exists
+    const fileInfo = await FileSystem.getInfoAsync(localPath);
+    if (fileInfo.exists) {
+      console.log("Model already downloaded, using cached version");
+      return localPath;
+    }
 
-  // Function to generate embeddings
-  const generateEmbedding = async () => {
-    const result = await Pipeline.TextEmbedding.generate("Hello world");
-    setEmbedding(result);
-  };
+    // Download file with progress tracking
+    const downloadResumable = FileSystem.createDownloadResumable(
+      url,
+      localPath,
+      {},
+      (progress) => {
+        const percentComplete = progress.totalBytesWritten / progress.totalBytesExpectedToWrite;
+        console.log(`Download progress: ${(percentComplete * 100).toFixed(1)}%`);
+      }
+    );
 
-  return (
-    <View>
-      <Button title="Load Model" onPress={loadModel} />
-      <Button title="Generate Embedding" onPress={generateEmbedding} />
-      <Text>Embedding Length: {embedding.length}</Text>
-    </View>
-  );
-}
+    const result = await downloadResumable.downloadAsync();
+    return result?.uri;
+  }
+});
 ```
 
+## Supported Models
+
+`react-native-transformers` works with ONNX-formatted models from Hugging Face. Here are some recommended models based on size and performance:
+
+| Model | Type | Size | Description |
+|-------|------|------|-------------|
+| [Felladrin/onnx-Llama-160M-Chat-v1](https://huggingface.co/Felladrin/onnx-Llama-160M-Chat-v1) | Text Generation | ~300MB | Small Llama model (160M parameters) |
+| [microsoft/Phi-3-mini-4k-instruct-onnx-web](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct-onnx-web) | Text Generation | ~1.5GB | Microsoft's Phi-3-mini model |
+| [Xenova/distilgpt2_onnx-quantized](https://huggingface.co/Xenova/distilgpt2_onnx-quantized) | Text Generation | ~165MB | Quantized DistilGPT-2 |
+| [Xenova/tiny-mamba-onnx](https://huggingface.co/Xenova/tiny-mamba-onnx) | Text Generation | ~85MB | Tiny Mamba model |
+| [Xenova/all-MiniLM-L6-v2-onnx](https://huggingface.co/Xenova/all-MiniLM-L6-v2-onnx) | Text Embedding | ~80MB | Sentence embedding model |
+
+## API Reference
+
 For detailed API documentation, please visit our [TypeDoc documentation](https://daviddaytw.github.io/react-native-transformers/).
 
 ## Contributing
@@ -154,6 +233,4 @@ This project is licensed under the MIT License. See the [LICENSE](LICENSE) file
 - [Expo Plugins Documentation](https://docs.expo.dev/guides/config-plugins/)
 - [ONNX Runtime Documentation](https://onnxruntime.ai/)
 - [Hugging Face Model Hub](https://huggingface.co/models)
-- [Babel Documentation](https://babeljs.io/)
-
-These links provide additional information on how to configure and utilize the various components used by `react-native-transformers`.
+- [ONNX Format Documentation](https://onnx.ai/onnx/intro/)
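
Note that this revision drops the README's Text Embedding example even though the Supported Models table still lists an embedding model. For reference, a minimal sketch reconstructed from the removed example: it assumes `Pipeline.TextEmbedding.generate` still resolves to a plain numeric array (as the removed code implied by reading `.length`) and that `init` works without a `fetch` option, as it did there; `cosineSimilarity` and `compareSentences` are illustrative helpers, not library API.

```ts
import { Pipeline } from "react-native-transformers";

// Cosine similarity between two equal-length embedding vectors.
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0;
  let normA = 0;
  let normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

async function compareSentences(): Promise<number> {
  // Model name taken from the removed example; the table above lists
  // "Xenova/all-MiniLM-L6-v2-onnx" as the recommended embedding model.
  await Pipeline.TextEmbedding.init("Xenova/all-MiniLM-L6-v2");

  const a = await Pipeline.TextEmbedding.generate("Hello world");
  const b = await Pipeline.TextEmbedding.generate("Greetings, planet");
  return cosineSimilarity(a, b); // close to 1 for semantically similar sentences
}
```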

example/App.tsx

Lines changed: 32 additions & 16 deletions

@@ -25,23 +25,39 @@ export default function App() {
     await Pipeline.TextGeneration.init(preset.model, preset.onnx_path, {
       verbose: true,
       fetch: async (url) => {
-        console.log("downloading... " + url);
-        const localpath = FileSystem.cacheDirectory + url.split("/").pop()!;
-
-        const downloadResumable = FileSystem.createDownloadResumable(
-          url,
-          localpath,
-          {},
-          ({ totalBytesWritten, totalBytesExpectedToWrite }) => {
-            setProgress(totalBytesWritten / totalBytesExpectedToWrite);
-          },
-        );
-        const result = await downloadResumable.downloadAsync();
-        if (result === undefined) {
-          throw new Error("Download failed.");
+        try {
+          console.log("Checking file... " + url);
+          const fileName = url.split("/").pop()!;
+          const localPath = FileSystem.documentDirectory + fileName;
+
+          // Check if the file already exists
+          const fileInfo = await FileSystem.getInfoAsync(localPath);
+          if (fileInfo.exists) {
+            console.log("File already exists: " + localPath);
+            return localPath;
+          }
+
+          console.log("Downloading... " + url);
+          const downloadResumable = FileSystem.createDownloadResumable(
+            url,
+            localPath,
+            {},
+            ({ totalBytesWritten, totalBytesExpectedToWrite }) => {
+              setProgress(totalBytesWritten / totalBytesExpectedToWrite);
+            }
+          );
+
+          const result = await downloadResumable.downloadAsync();
+          if (!result) {
+            throw new Error("Download failed.");
+          }
+
+          console.log("Downloaded to: " + result.uri);
+          return result.uri;
+        } catch (error) {
+          console.error("Download error:", error);
+          return null;
         }
-        console.log("downloaded as " + result.uri);
-        return result.uri;
       },
       ...preset.options,
     });
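
The caching fetch callback added above can also be factored into a reusable helper shared across screens. A minimal sketch using the same expo-file-system calls as the diff (`getInfoAsync`, `createDownloadResumable`, `downloadAsync`); the `fetchWithCache` name and `onProgress` parameter are illustrative, not part of either library.

```ts
import * as FileSystem from "expo-file-system";

// Download a file once, then reuse the local copy on later calls.
export async function fetchWithCache(
  url: string,
  onProgress?: (fraction: number) => void,
): Promise<string | null> {
  try {
    // Keep the remote file name for the local copy.
    const fileName = url.split("/").pop()!;
    const localPath = FileSystem.documentDirectory + fileName;

    // Reuse a previously downloaded copy when one exists.
    const fileInfo = await FileSystem.getInfoAsync(localPath);
    if (fileInfo.exists) {
      return localPath;
    }

    const downloadResumable = FileSystem.createDownloadResumable(
      url,
      localPath,
      {},
      ({ totalBytesWritten, totalBytesExpectedToWrite }) => {
        onProgress?.(totalBytesWritten / totalBytesExpectedToWrite);
      },
    );

    const result = await downloadResumable.downloadAsync();
    if (!result) {
      throw new Error("Download failed.");
    }
    return result.uri;
  } catch (error) {
    console.error("Download error:", error);
    return null;
  }
}
```

With this helper, the option in the diff reduces to `fetch: (url) => fetchWithCache(url, setProgress)`.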
