[Figure: Catchat homepage with o4-mini presenting]

Overview

Catchat is a large language model (LLM) portal provided by RCI (Research Computing Infrastructure). Catchat hosts openly available models such as OpenAI's gpt-oss:120b (o4-mini), Mistral AI's mistral-small for programming tasks, and Google's gemma3.

Catchat is free to use for all students, faculty, and staff at Montana State University.

Environmental Impact

Catchat is maintained by Research Computing, with computational resources provided by the Tempest supercomputer, which is powered by approximately 80% renewable energy and cooled by systems that do not consume water.

Getting Started

How to Use

  1. Access and Security

  2. Model Selection

  3. Tool Usage

  4. Agent Creation

  5. Connect to External Providers

  6. API Access (a minimal example sketch follows this list)

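For the official workflow, follow the API Access item above; the snippet below is only a rough illustration. It assumes Catchat exposes an OpenAI-compatible chat-completions endpoint and that you have generated a personal API key in the portal; the base URL shown is a placeholder, not a confirmed Catchat address.

    import os

    import requests

    # Placeholder values: substitute the real base URL from Catchat's API Access
    # page and an API key generated in your account settings. The URL below is
    # hypothetical and assumes an OpenAI-compatible endpoint.
    BASE_URL = "https://catchat.example.edu/api/v1"
    API_KEY = os.environ["CATCHAT_API_KEY"]

    response = requests.post(
        f"{BASE_URL}/chat/completions",
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={
            "model": "gpt-oss:120b",
            "messages": [
                {"role": "user", "content": "Explain what Catchat is in two sentences."}
            ],
        },
        timeout=60,
    )
    response.raise_for_status()
    print(response.json()["choices"][0]["message"]["content"])

Swap the model field for any identifier listed under Available Models below.
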
Available Models

gpt-oss:120b (o4-mini)
gemma3:27b (gemma)
mistral-small:24b (mistral)
granite4:3b (granite4)
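
If the portal follows the OpenAI-compatible convention assumed in the sketch above, these identifiers can also be listed programmatically. The following sketch makes that same assumption and uses the openai Python client with a placeholder base URL.

    import os

    from openai import OpenAI

    # Placeholder base URL; use the address given on Catchat's API Access page.
    # Assumes the portal exposes an OpenAI-compatible API.
    client = OpenAI(
        base_url="https://catchat.example.edu/api/v1",
        api_key=os.environ["CATCHAT_API_KEY"],
    )

    # Print the model identifiers the portal currently serves, which should match
    # the names above (gpt-oss:120b, gemma3:27b, mistral-small:24b, granite4:3b).
    for model in client.models.list().data:
        print(model.id)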

Use Cases

Use Case | Requirements | Recommended Models
---------|--------------|-------------------
Advanced reasoning and multi-domain research | Highest accuracy, long-context reasoning, strong factual recall, and cross-domain synthesis (science, engineering, policy, etc.) | gpt-oss:120b
Enterprise copilots / knowledge assistants | Strong reasoning, safe output, high reliability, API latency tolerance | gpt-oss:120b, gemma3:27b
General-purpose chatbots and customer service | Balanced cost–performance, multilingual fluency, context awareness | gemma3:27b, mistral-small:24b
Code generation and debugging (multi-language) | Syntax awareness, structured reasoning, efficiency on moderate hardware | gemma3:27b, mistral-small:24b
Lightweight internal automation (summaries, tagging, extraction) | Low latency, scalable inference, moderate reasoning depth | mistral-small:24b, granite4:3b
Fine-tuning / domain adaptation experiments | Model that trains efficiently, flexible licensing | mistral-small:24b, granite4:3b
Creative writing, marketing copy, dialogue | Natural tone generation, stylistic diversity, moderate context memory | gemma3:27b, mistral-small:24b
Technical document summarization | Structured understanding, moderate token context, reliability | gemma3:27b, gpt-oss:120b
Research prototyping / evaluation of LLM benchmarks | Open weights, transparency, large context | gpt-oss:120b, mistral-small:24b
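
As one concrete illustration of the table, the sketch below pairs the lightweight internal automation row with granite4:3b for a quick summarization call, reusing the same hypothetical OpenAI-compatible endpoint, placeholder base URL, and API key as the earlier examples.

    import os

    from openai import OpenAI

    # Same placeholder endpoint and API key as in the earlier sketches.
    client = OpenAI(
        base_url="https://catchat.example.edu/api/v1",
        api_key=os.environ["CATCHAT_API_KEY"],
    )

    # Hypothetical input file standing in for an internal document.
    with open("weekly_report.txt") as f:
        report = f.read()

    # granite4:3b is the table's lightweight pick for summaries, tagging, and extraction.
    completion = client.chat.completions.create(
        model="granite4:3b",
        messages=[
            {"role": "system", "content": "Summarize the user's text in three bullet points."},
            {"role": "user", "content": report},
        ],
    )
    print(completion.choices[0].message.content)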