Available Models in Bios

Bios supports fine-tuning of UltraSafe's closed-source expert models. Each model is purpose-built for a specific domain and trained on specialized data, so it performs strongly on enterprise applications in that field.

UltraSafe Exclusive

Bios exclusively supports UltraSafe's proprietary expert models. We do not support external third-party models (Llama, Qwen, etc.) to ensure enterprise-grade security, compliance, and consistent performance across all training operations.

Which Model Should I Use?

Choose your model based on the domain and use case of your application:

🏥 Healthcare Applications

Use ultrasafe/usf-healthcare for medical diagnostics, clinical documentation, patient communication, healthcare compliance, and medical research applications.

💰 Financial Services

Use ultrasafe/usf-finance for market analysis, risk assessment, portfolio management, financial reporting, and economic forecasting.

💻 Software Development

Use ultrasafe/usf-code for code generation, debugging assistance, code review, documentation, and technical problem-solving.

💬 Conversational AI

Use ultrasafe/usf-conversation for customer service, virtual assistants, dialogue systems, and human-like interaction applications.

🔧 General Purpose

Use ultrasafe/usf-mini for general-purpose tasks, experimentation, prototyping, or when your use case spans multiple domains.

Complete Model Catalog

All UltraSafe expert models available for fine-tuning through Bios:

| Model ID | Domain | Description | Use Cases |
| --- | --- | --- | --- |
| ultrasafe/usf-mini | General | Lightweight expert model for general fine-tuning tasks | Prototyping, multi-domain, experimentation |
| ultrasafe/usf-healthcare | Healthcare | Medical domain expert for clinical and healthcare applications | Diagnostics, clinical notes, patient care |
| ultrasafe/usf-finance | Finance | Financial analysis expert for market research and economic modeling | Market analysis, risk modeling, trading |
| ultrasafe/usf-code | Programming | Programming specialist for software development and code generation | Code generation, debugging, review |
| ultrasafe/usf-conversation | Conversation | Dialogue optimization for conversational AI applications | Customer service, virtual assistants |
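
If you select models programmatically, it can be convenient to keep this catalog as a simple mapping from domain to model ID. The dictionary below is an illustrative sketch, not part of the Bios API; the model IDs come from the table above.

Domain-to-Model Lookup (sketch)
# Illustrative helper, not part of the Bios API: map a domain to its model ID.
DOMAIN_TO_MODEL = {
    "general": "ultrasafe/usf-mini",
    "healthcare": "ultrasafe/usf-healthcare",
    "finance": "ultrasafe/usf-finance",
    "programming": "ultrasafe/usf-code",
    "conversation": "ultrasafe/usf-conversation",
}

base_model = DOMAIN_TO_MODEL["finance"]  # "ultrasafe/usf-finance"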

Model Capabilities

Each UltraSafe expert model comes with specialized capabilities optimized for its domain:

Domain Expertise

Pre-trained on curated domain-specific datasets with expert annotations. Healthcare models understand medical terminology and protocols, while finance models comprehend market dynamics and regulations.

Enterprise Security

All models include built-in safety guardrails, content filtering, and compliance features. Your training data remains private and is never used to improve base models.

Optimized Performance

Expert models leverage UltraSafe's inference engine with FlashAttention 3, speculative decoding, and quality-preserving quantization for optimal speed and cost efficiency.

Fine-Tuning Ready

All models support LoRA fine-tuning with configurable rank, alpha, and target modules. Optimized for both supervised learning and RLHF training workflows.
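
For example, the LoRA rank is passed when creating a training client, as shown throughout this guide. Only rank appears in the documented examples; the alpha and target-module parameter names below are assumptions and are left commented out, so check your Bios version for the exact signature.

LoRA Configuration (sketch)
import bios

service_client = bios.ServiceClient()

# `rank` is the documented parameter; the commented-out names are hypothetical
# placeholders for alpha and target-module settings.
training_client = service_client.create_lora_training_client(
    base_model="ultrasafe/usf-mini",
    rank=16,
    # lora_alpha=32,                        # hypothetical parameter name
    # target_modules=["q_proj", "v_proj"],  # hypothetical parameter name
)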

Using Models in Your Training Code

Specify the model when creating your training client. Switching models is as simple as changing a string:

Model Selection in Code
import bios

service_client = bios.ServiceClient()

# Healthcare model
healthcare_client = service_client.create_lora_training_client(
    base_model="ultrasafe/usf-healthcare",
    rank=16
)

# Finance model
finance_client = service_client.create_lora_training_client(
    base_model="ultrasafe/usf-finance",
    rank=16
)

# Code model
code_client = service_client.create_lora_training_client(
    base_model="ultrasafe/usf-code",
    rank=16
)

# The same training loop works with any of the clients above;
# shown here with the healthcare client.
for batch in dataloader:
    healthcare_client.forward_backward(batch, "cross_entropy")
    healthcare_client.optim_step()

Unified API

The same Bios training API works identically across all UltraSafe expert models. Switch models by changing a single parameter; the rest of your training code stays the same. This enables rapid experimentation across domains.
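
As a concrete illustration, the loop above can be wrapped in a function that takes the model ID as its only domain-specific input. This is a sketch built from the calls shown earlier; the function name and step limit are arbitrary.

Parameterized Training (sketch)
import bios

def train_on(base_model, dataloader, max_steps=100):
    """Run the same LoRA training loop against any UltraSafe expert model."""
    service_client = bios.ServiceClient()
    client = service_client.create_lora_training_client(
        base_model=base_model,
        rank=16
    )
    for step, batch in enumerate(dataloader):
        if step >= max_steps:
            break
        client.forward_backward(batch, "cross_entropy")
        client.optim_step()
    return client

# Only the model ID changes between domains:
# train_on("ultrasafe/usf-healthcare", dataloader)
# train_on("ultrasafe/usf-finance", dataloader)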

Discovering Available Models

Query the Bios API to get the current list of supported models programmatically:

List Supported Models
import bios

# Create service client
service_client = bios.ServiceClient()

# Get server capabilities
capabilities = service_client.get_server_capabilities()

# Display available models
print("Available UltraSafe Expert Models:")
print("-" * 50)
for model_info in capabilities.supported_models:
    print(f"• {model_info.model_name}")
    print(f"  Domain: {model_info.domain}")
    print(f"  Description: {model_info.description}")
    print()

Expected output:

Available UltraSafe Expert Models:
--------------------------------------------------
• ultrasafe/usf-mini
  Domain: General
  Description: Lightweight expert model for general fine-tuning tasks

• ultrasafe/usf-healthcare
  Domain: Healthcare
  Description: Medical domain expert for clinical applications

• ultrasafe/usf-finance
  Domain: Finance
  Description: Financial analysis expert for market research

• ultrasafe/usf-code
  Domain: Programming
  Description: Programming specialist for software development

• ultrasafe/usf-conversation
  Domain: Conversation
  Description: Dialogue optimization for conversational AI
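
Once you have the capabilities object, you can choose a model ID by domain instead of hard-coding it. The helper below is an illustrative sketch that uses only the fields shown above; the fallback to ultrasafe/usf-mini is a suggested default, not Bios behavior.

Select a Model by Domain (sketch)
def pick_model(capabilities, domain):
    """Return the first supported model whose domain matches, else the general model."""
    for model_info in capabilities.supported_models:
        if model_info.domain.lower() == domain.lower():
            return model_info.model_name
    return "ultrasafe/usf-mini"

# Example: pick_model(capabilities, "Finance") -> "ultrasafe/usf-finance"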

Model-Specific Training Considerations

Healthcare Models

  • Require HIPAA-compliant data handling; ensure your training data meets compliance standards
  • Include medical safety classifiers; additional validation during sampling is recommended (see the sketch after this list)
  • Optimized for clinical terminology and medical reasoning patterns
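
A minimal sketch of that extra validation step is shown below. Bios does not expose a specific validation API in this guide, so medical_safety_check is a hypothetical callable you would supply yourself, such as a keyword screen or a separate classifier service.

Post-Sampling Validation (sketch)
def validate_samples(samples, medical_safety_check):
    """Keep only sampled outputs that pass an additional medical safety check."""
    approved, rejected = [], []
    for text in samples:
        if medical_safety_check(text):  # hypothetical user-supplied callable
            approved.append(text)
        else:
            rejected.append(text)
    return approved, rejected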

Finance Models

  • Pre-trained on financial documents, market data, and economic research
  • Understand financial regulations and compliance requirements
  • Optimized for numerical reasoning and quantitative analysis

Code Models

  • Support multiple programming languages with syntax-aware understanding
  • Trained on code repositories, documentation, and technical discussions
  • Optimized for code completion, bug detection, and architectural patterns

Switching Between Models

You can train multiple models in the same script or compare performance across domains:

Multi-Model Training
import bios
import asyncio

async def train_multiple_models(dataloader):
    """
    Train different expert models on the same data.
    Useful for comparing domain-specific performance.
    """
    service_client = bios.ServiceClient()

    # Create clients for different models
    models = {
        "healthcare": await service_client.create_lora_training_client_async(
            base_model="ultrasafe/usf-healthcare", rank=16
        ),
        "finance": await service_client.create_lora_training_client_async(
            base_model="ultrasafe/usf-finance", rank=16
        ),
        "general": await service_client.create_lora_training_client_async(
            base_model="ultrasafe/usf-mini", rank=8
        )
    }

    # Train all models in parallel
    for step, batch in enumerate(dataloader):
        # Submit training for all models
        futures = {}
        for name, client in models.items():
            fwd_future = await client.forward_backward_async(batch, "cross_entropy")
            opt_future = await client.optim_step_async()
            futures[name] = (fwd_future, opt_future)

        # Collect results
        for name, (fwd_future, opt_future) in futures.items():
            fwd_result = await fwd_future
            await opt_future
            print(f"{name}: Step {step}, Loss = {fwd_result.loss:.4f}")

    # Save all models
    paths = {}
    for name, client in models.items():
        path = client.save_state(name=f"{name}_final").result().path
        paths[name] = path
        print(f"Saved {name} model: {path}")

    return paths

# Run multi-model training
asyncio.run(train_multiple_models(training_data))

Next Steps

Now that you understand the available models, start training with the Quick Start guide.