Yappus Terminal

A modern terminal interface for your AI assistant, bringing intelligence to your command line.

Fully Local Mode: Coming Soon
RAG Support: Coming Soon

yappus@terminal

$ yappus
Welcome to Yappus Terminal! Type 'exit' to quit or '/help' for commands.
> How do I find large files in Linux?
To find large files in Linux, you can use the 'find' command:
find / -type f -size +100M -print0 | xargs -0 du -h | sort -rh
> /file package.json
File content from package.json added to context.
> What are the main dependencies?
Based on package.json, the main dependencies are: react, next, tailwindcss.
>

Installation

Get Yappus running on your system

Debian/Ubuntu

curl -O https://raw.githubusercontent.com/MostlyKIGuess/Yappus-Term/main/install-yappus.sh
chmod +x install-yappus.sh
./install-yappus.sh

Arch Linux

# Install from AUR (e.g., with yay)
yay -S yappus

# Or build manually
git clone https://github.com/MostlyKIGuess/Yappus-Term.git
cd Yappus-Term
makepkg -si

Windows

# Download script using PowerShell
Invoke-WebRequest -Uri https://raw.githubusercontent.com/MostlyKIGuess/Yappus-Term/main/install-yappus.ps1 -OutFile install-yappus.ps1

# Run installer (Admin privileges may be required for system-wide install)
powershell -ExecutionPolicy Bypass -File install-yappus.ps1

Crates.io (Cargo)

# Ensure you have Rust and Cargo installed
# https://www.rust-lang.org/tools/install

cargo install yappus-term
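If the shell can't find `yappus` after `cargo install`, the usual cause is that Cargo's bin directory isn't on `PATH`. A quick check (this is standard Cargo behavior, not specific to Yappus):

```shell
# cargo install places binaries in ~/.cargo/bin; rustup normally adds
# this to PATH, but it's worth verifying after a fresh install.
export PATH="$HOME/.cargo/bin:$PATH"
case ":$PATH:" in
  *":$HOME/.cargo/bin:"*) echo "cargo bin dir is on PATH" ;;
  *) echo "cargo bin dir is missing from PATH" ;;
esac
```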

For other systems or manual builds, check the GitHub repository.

Documentation

Learn how to use Yappus Terminal effectively

Command Line Arguments

Use Yappus directly from your shell for quick tasks.

Direct Query

Ask a question directly from your terminal.

yappus "How do I find large files in Linux?"

Change Model

Switch to a different Gemini model.

yappus model GEMINI_2_5_PRO

View History

Display your chat history.

yappus history

Clear History

Clear all saved chat history.

yappus clear-history

Run Setup

Re-run the initial setup for API key and model.

yappus setup

Analyze File

Include a file's content for context in your query.

yappus file script.sh "explain this script"

Export Chats

Export your chat history to a JSON file.

yappus export ~/my_chat_export.json

Interactive Mode Commands

Commands available within the Yappus interactive session (started by running `yappus`):

/help

Show help message with available interactive commands.

/help

/model [name]

View or change the Gemini model.

/model GEMINI_FLASH

/history

View your chat history within the interactive session.

/history

/clearhistory

Clear your chat history.

/clearhistory

/file <path> [query]

Include file content in the conversation.

/file src/main.js explain this function

/ls [path]

List directory contents.

/ls ./src

/cd <path>

Change current directory.

/cd ../project

/pwd

Show current working directory.

/pwd

exit

Exit the Yappus interactive session.

exit

Usage Examples

See Yappus in action with these common use cases.

Code Generation

Ask Yappus to write a Python script to fetch weather data.

> Generate a Python script to get weather from an API
Sure, here's a Python script using the requests library and OpenWeatherMap API:
```python
import requests

API_KEY = 'YOUR_API_KEY'  # replace with your OpenWeatherMap API key
CITY = 'London'
URL = f'https://api.openweathermap.org/data/2.5/weather?q={CITY}&appid={API_KEY}&units=metric'

response = requests.get(URL, timeout=10)
if response.status_code == 200:
    data = response.json()
    main = data['main']
    print(f"Weather in {CITY}: {data['weather'][0]['description']}, Temp: {main['temp']}°C")
else:
    print('Error fetching weather data')
```

File Context Analysis

Analyze a configuration file and ask for improvements.

> /file Dockerfile
File content from Dockerfile added to context.
> What could be improved in this Dockerfile for a Node.js app?
Based on your Dockerfile, consider these improvements:
1. Use a more specific base image (e.g., `node:18-alpine`).
2. Implement multi-stage builds to reduce final image size.
3. Ensure you're copying only necessary files (`.dockerignore`).
4. Run as a non-root user for better security.

Command Piping & Explanation

Pipe a shell command and ask Yappus to explain it.

> /ls | what is this project?
Looks like your downloads directory to me.

Available Models

Yappus supports various Gemini models. You can switch between them using `yappus model [MODEL_NAME]` or `/model [MODEL_NAME]`.

GEMINI_FLASH (Current)

Default model, latest and greatest from Gemini 2.0 series.

GEMINI_2_5_PRO

Most capable model for complex tasks and reasoning.

GEMINI_2_5_FLASH

High performance with excellent reasoning capabilities.

GEMINI_1_5_PRO

Powerful legacy model, good for general purpose tasks.

GEMINI_1_5_FLASH

Fast and efficient legacy model.

Chat History Management

Keep track of your conversations and manage your data with ease.

Persistent Chat History

Yappus automatically saves your conversations, allowing you to pick up where you left off. History is stored locally in `~/.config/yappus-term/chat_history.json`.
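Since the history is plain JSON, you can inspect it with standard tools. A small sketch — the entry shape used here is hypothetical, and we write a stand-in file so the example is self-contained; the real file lives at `~/.config/yappus-term/chat_history.json`:

```shell
# Stand-in history file with a made-up entry shape (the real schema
# is defined by yappus-term).
demo=/tmp/yappus-history-demo.json
printf '[{"role": "user", "text": "hello"}]\n' > "$demo"

# Pretty-print it with Python's stdlib JSON tool (jq works just as well).
python3 -m json.tool "$demo"
```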

Export Your Chats

Easily export your entire chat history to a JSON file for backup or analysis using the `yappus export [path]` command or `/export [path]` in interactive mode.

Manage History

View your history with `yappus history` or `/history`, and clear it completely with `yappus clear-history` or `/clearhistory` if needed.

Context Awareness

Yappus understands the context of your work for smarter interactions.

Conversation Flow

Yappus remembers previous parts of your current conversation, allowing for natural follow-up questions and more relevant AI responses without needing to repeat context.

File Content Integration

Use the `/file <path>` command to load a file's content directly into the conversation context. The AI can then answer questions or perform tasks based on that specific file.

Directory & Git Awareness

Yappus automatically includes information about your current working directory and (if applicable) your current Git branch and repository status as part of the context provided to the AI.
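For intuition, the context described above corresponds to what these standard commands report (illustrative only — Yappus gathers this internally; the throwaway repo exists just to make the example runnable):

```shell
# Create a throwaway repo so the commands below have something to report.
repo=$(mktemp -d)
cd "$repo"
git init -q -b main

pwd                            # current working directory
git symbolic-ref --short HEAD  # current branch name ("main" here)
git status --porcelain         # concise repo status (empty when clean)
```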

Features

Powerful capabilities at your fingertips

Interactive CLI

Chat with Gemini AI directly in your terminal.

Multiple Models

Supports various Gemini models, including Gemini 2.0 Flash.

Persistent History

Chat history is saved across sessions.

File Integration

Use /file for context-aware discussions about your code.

Context Awareness

Maintains conversation flow and uses directory/git context.

Command Piping

Combine shell commands with AI queries for powerful workflows.

Configurable

Manage API keys and model preferences easily.

Local Mode (Soon)

Upcoming support for fully local AI via Ollama.