Style Text WebGL+iOS Stand-alone LLM (+Llama.cpp wrapper)
Transform generic texts into stylized content with this LLM model and wrapper, compatible with WebGL, iOS, and PC platforms.

- Category: Tools › AI/ML Integration
- Developer: TestedLines
- Price: $20
- Favorites: 11
- Supported Unity Versions: 2022.3.42 or higher
- Current Version: 2.0.2
- Download Size: 1.45 GB
- Last Update: Dec 27, 2024
- Description:
- This package includes a set of mobile-friendly LLM models for text line rewriting, with iOS and WebGL wrappers based on Llama.cpp. The models are quantized to reduce size and improve performance. The package includes three model resolutions: Q4_K_M (~110 MB), Q8_0 (~170 MB), and the original bf16 (~321 MB).
You can use the models to transform generic texts into stylized content by providing an input text and a style. The output will be a rewritten text in the specified style.
The models are trained on a corpus of over 0.5 million dialogue lines and can be used for various applications, such as text generation and content creation.
The package also includes a C++ wrapper that allows for asynchronous model execution, logging, and output token manipulation. The wrapper is compatible with WebGL, iOS, Android, Windows x64, and MacOS (ARM).
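As a rough illustration of the flow (the names below are hypothetical, not the package's actual API; see the linked C# and C++ wrapper docs for the real entry points), a rewrite request pairs an input line with a style string and returns the restyled line:
```cpp
#include <iostream>
#include <string>

// Hypothetical sketch only: the stub just shows how an input line and a style
// string form one request; the real wrapper forwards the prompt to the
// bundled .gguf model via Llama.cpp and returns the generated text.
std::string RewriteLine(const std::string& line, const std::string& style) {
    return "[" + style + "] " + line;  // placeholder instead of real model output
}

int main() {
    std::string out = RewriteLine("The door creaked open.", "Foreboding, Gothic");
    std::cout << out << '\n';
}
```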
The models can be used for various styles, including:
* Mood styles: Inquisitive, Emotional, Intellectual, Dynamic, Noble, Light, and Foreboding
* Writing styles: Historical, Modern and Contemporary, Genre-Specific, and Expressive and Creative
The package also includes instructions for building the C++ wrapper code from scratch and notes on integration, including RAM limitations and model sizes.
Please note that the models may output tokens that cannot be rendered by the selected font, and the demo scenes may require restarting the post-processing layer on the camera if a NullReferenceException occurs. Additionally, the models were trained on one-liners and single sentences and may not work well with longer texts.
- Technical Details:
- Cross-Platform: Compatible with WebGL, iOS, Android, Windows x64, and MacOS (ARM)
- Mobile ahead-of-time (AOT) compilation friendly
- Async mode uses cross-platform C++20 threads and works on the Web platform
- Training was done from an open-source model and weights in ORPO mode
- The model is fast and slim; weights are provided in fp16 and int8 .gguf format.
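Because async mode is built on standard C++20 threads, the non-blocking pattern can be sketched with std::async around a blocking rewrite call (again a hypothetical stand-in, not the wrapper's actual API), keeping the main loop responsive while the model runs:
```cpp
#include <future>
#include <iostream>
#include <string>

// Hypothetical blocking rewrite call, stubbed so the example compiles.
std::string RewriteLine(const std::string& line, const std::string& style) {
    return "[" + style + "] " + line;
}

int main() {
    // Run the model call off the main thread; poll or wait for the result later.
    auto pending = std::async(std::launch::async, RewriteLine,
                              std::string("We should leave now."),
                              std::string("Regal, Victorian"));

    // ... the game loop keeps running here ...
    std::cout << pending.get() << '\n';
}
```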
Use recommendations:
Feel free to write free-form style lines and be creative; to get the most out of the models, consider combining the following in your style line (a few combined examples follow the lists below):
Mood styles to try out:
Inquisitive (Questioning, Curious, Skeptical, Intrigued)
Emotional (Joyful, Melancholic, Indignant, Compassionate, Euphoric, Grieving, Impassioned, Jovial)
Intellectual (Reflective, Pensive, Cynical)
Dynamic (Confident, Resigned, Agitated, Hopeful, Fearful, Optimistic, Defiant, Adventurous, Bewildered, Determined, Hesitant, Mischievous, Overwhelmed, Melodramatic)
Noble (Regal, Dignified)
Light (Lighthearted, Whimsical, Nostalgic)
Foreboding (Sarcastic, Foreboding)
Writing styles to try out:
Historical (Victorian, Gothic, Classic Literature, Folkloric, Pirates)
Modern and Contemporary (Modern, Minimalist, Journalistic, Futuristic Sci-Fi, Dark Futurism, Post-Apocalyptic)
Genre-Specific (Noir, Magical Realism, Dystopian, Epic, Hard-boiled, Pulp Adventure, Steampunk, Romantic Comedy, Surrealist)
Expressive and Creative (Poetic, Lyrical, Beat Generation, Inspirational, Absurdist, Satirical, Whimsical, Mythological)
Keep styles short; they can be phrases or single words. Adding a qualifier like "Simple" can help debug lines that come out rewritten in too LLM-ish a style.
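A few combined style lines for illustration (hypothetical examples, not taken from the package docs):
```cpp
// Illustrative style lines combining a mood, a writing style, and a plainness hint:
const char* kStyleLines[] = {
    "Inquisitive, Noir",           // curious tone with hard-boiled phrasing
    "Melancholic, Victorian",      // sorrowful, 19th-century diction
    "Whimsical, Pirates, Simple",  // playful pirate speech, kept plain
};
```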
Integration notes:
The Web platform is RAM-limited to 4 GB (x86), so only the smaller models fit.
Published Web pages do not work when RAM is limited to 0.5-1 GB, which makes them fail to run on some mobile devices.
Instructions for building the C++ wrapper code from scratch are provided.
The LLM model may output tokens that cannot be rendered by the selected font.
In the demo scenes, restart the post-processing layer on the camera if you get a NullReferenceException in PostProcessing AmbientOcclusion IsEnabledAndSupported.
The LLM models come in various sizes, which affects the RAM requirements of the devices you run them on (a model-selection sketch follows these notes).
The LLM model was trained on one-liners and single sentences; that is where it works best.
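As a rough guide, model choice can be keyed to the memory budget of the target device; the thresholds and file names below are illustrative assumptions, and only the approximate model sizes come from this page:
```cpp
#include <cstdint>
#include <string>

// Hypothetical helper: pick a bundled .gguf variant by free RAM (in MiB).
// The package ships Q4_K_M (~110 MB), Q8_0 (~170 MB) and bf16 (~321 MB) models;
// the thresholds and file names here are illustrative assumptions.
std::string PickModel(std::uint64_t freeRamMiB) {
    if (freeRamMiB < 1024) return "";                         // too tight (some mobile WebGL cases): skip the LLM
    if (freeRamMiB < 2048) return "style-model.Q4_K_M.gguf";  // smallest, WebGL-friendly
    if (freeRamMiB < 4096) return "style-model.Q8_0.gguf";
    return "style-model.bf16.gguf";                           // desktop-class memory budgets
}
```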
Online docs (see the motivation and usage examples):
C# docs
C++ wrapper docs