byLLM is an innovative AI integration framework built for the Jaseci ecosystem, implementing the cutting-edge Meaning Typed Programming (MTP) paradigm. MTP revolutionizes AI integration by embedding prompt engineering directly into code semantics, making AI interactions more natural and maintainable. While primarily designed to complement the Jac programming language, byLLM also provides a powerful Python library interface.
Installation is simple via PyPI:

```shell
pip install byllm
```

Consider building an application that translates English phrases into other languages using an LLM. With byLLM, this can be built as follows:
```jac
def translate_to(language: str, phrase: str) -> str by llm();

with entry {
    output = translate_to(language="Welsh", phrase="Hello world");
    print(output);
}
```

`llm` is a built-in name, so no imports are needed. By default it uses the model configured in your `jac.toml` (which defaults to `gpt-4o-mini`). This simple piece of code replaces traditional prompt engineering without introducing additional complexity.
Consider a program that detects the personality type of a historical figure from their name. It can be built so that the LLM picks from an enum, and the output strictly adheres to that type.
```jac
enum Personality {
    INTROVERT, EXTROVERT, AMBIVERT
}

def get_personality(name: str) -> Personality by llm();

with entry {
    name = "Albert Einstein";
    result = get_personality(name);
    print(f"{result} personality detected for {name}");
}
```

Similarly, custom types can be used as output types, which forces the LLM to adhere to the specified type and produce a valid result.
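As a sketch of this, consider asking the LLM for a structured summary. The `Summary` object and its fields below are purely illustrative, but any Jac `obj` can serve as the return type, and the LLM's output is constrained to a valid instance of it:

```jac
"""A structured summary of a piece of text."""
obj Summary {
    has title: str;
    has key_points: list[str];
}

def summarize(article: str) -> Summary by llm();

with entry {
    result = summarize("Jac is a programming language in the Jaseci ecosystem...");
    print(result.title);
}
```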
Even though we eliminate prompt engineering entirely, byLLM provides specific ways to enrich code semantics through docstrings and semstrings.
"""Represents the personal record of a person"""
obj Person {
has name: str;
has dob: str;
has ssn: str;
}
sem Person.name = "Full name of the person";
sem Person.dob = "Date of Birth";
sem Person.ssn = "Last four digits of the Social Security Number of a person";
"""Calculate eligibility for various services based on person's data."""
def check_eligibility(person: Person, service_type: str) -> bool by llm();Docstrings naturally enhance the semantics of their associated code constructs, while the sem keyword provides an elegant way to enrich the meaning of class attributes and function arguments. Our research shows these concise semantic strings are more effective than traditional multi-line prompts.
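A function annotated this way is called like any other. The sketch below assumes `Person` supports keyword initialization of its `has` fields; the sample values are hypothetical:

```jac
with entry {
    # Illustrative data only; the LLM sees the sem strings, not raw field names.
    person = Person(name="Jane Doe", dob="1990-01-15", ssn="6789");
    print(check_eligibility(person, "loan"));
}
```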
Configure byLLM behavior globally using `jac.toml`:

```toml
[plugins.byllm]
system_prompt = "You are a helpful assistant..."

[plugins.byllm.model]
default_model = "gpt-4o-mini"

[plugins.byllm.call_params]
temperature = 0.7
```

This enables centralized control over:
- System prompts across all LLM calls
- Default model selection
- Common parameters like temperature
Connect to custom or self-hosted models:

```jac
import from byllm.lib { Model }

glob llm = Model(
    model_name="custom-model",
    config={
        "api_base": "https://your-endpoint.com/v1/chat/completions",
        "api_key": "your_key",
        "http_client": True
    }
);
```

byLLM is built on the underlying principles of Meaning Typed Programming (MTP). We have evaluated it against two comparable AI integration frameworks for Python, DSPy and LMQL: byLLM shows significant performance gains over LMQL and performance on par with or better than DSPy, while reducing developer complexity by up to 10x.
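Assuming that `by llm()` resolves to the nearest `llm` binding in scope, functions declared after the `glob llm` declaration above will route through the custom model rather than the built-in default. The function below is a hypothetical sketch:

```jac
# Uses the custom Model bound to the module-level `llm` name above.
def summarize(text: str) -> str by llm();

with entry {
    print(summarize("byLLM embeds prompt engineering into code semantics."));
}
```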
Full Documentation: Jac byLLM Documentation
Complete Examples: Jac Examples Gallery
Research: The research behind MTP was published at OOPSLA 2025 and is available in the ACM Digital Library.
We welcome contributions to byLLM! Whether you're fixing bugs, improving documentation, or adding new features, your help is appreciated.
Areas we actively seek contributions:
- Bug fixes and improvements
- Documentation enhancements
- New examples and tutorials
- Test cases and benchmarks
Please see our Contributing Guide for detailed instructions.
If you find a bug or have a feature request, please open an issue.
Join our vibrant community:
- Discord Server - Chat with the team and community
This project is licensed under the MIT License.
byLLM integrates with various LLM providers (OpenAI, Anthropic, Google, etc.) through LiteLLM.
Jayanaka L. Dantanarayana, Yiping Kang, Kugesan Sivasothynathan, Christopher Clarke, Baichuan Li, Savini Kashmira, Krisztian Flautner, Lingjia Tang, and Jason Mars. 2025. MTP: A Meaning-Typed Language Abstraction for AI-Integrated Programming. Proc. ACM Program. Lang. 9, OOPSLA2, Article 314 (October 2025), 29 pages. https://doi.org/10.1145/3763092
