API Reference

TvizLogger

The main class for logging training data.

In [1]:
from tviz import TvizLogger

Constructor

In [ ]:
TvizLogger(
    db_path: str | None = None,
    run_name: str | None = None,
    modality: str = "text"
)
Parameters:

  • db_path (str | None, default None): Path to the SQLite database. Falls back to the TVIZ_DB_PATH environment variable, then ~/.tviz/tviz.db.
  • run_name (str | None, default None): Human-readable name for the run. Auto-generated if not provided.
  • modality (str, default "text"): Either "text" for LLM tasks or "vision" for VLM tasks.

Methods

log_hparams(config)

Log hyperparameters/configuration. Call once at the start of training.

In [2]:
logger.log_hparams({
    "learning_rate": 1e-4,
    "batch_size": 32,
    "model": "meta-llama/Llama-3.2-1B",
    "rank": 32,
})

log_metrics(metrics, step)

Log scalar metrics for a training step.

In [3]:
logger.log_metrics({
    "reward": 0.75,
    "loss": 0.23,
    "kl_divergence": 0.01,
    "entropy": 1.5,
}, step=100)

Built-in metric keys (automatically extracted):

  • reward_mean, reward_std
  • loss
  • kl_divergence, kl
  • entropy
  • learning_rate, lr
  • frac_mixed, frac_all_good, frac_all_bad

Other metrics are stored in an extras JSON field.
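The split between built-in columns and the extras field can be illustrated with a small helper. This is a sketch of the documented behavior, not tviz's actual implementation:

```python
# Keys the docs list as automatically extracted into dedicated columns.
BUILTIN_KEYS = {
    "reward_mean", "reward_std", "loss", "kl_divergence", "kl",
    "entropy", "learning_rate", "lr",
    "frac_mixed", "frac_all_good", "frac_all_bad",
}

def split_metrics(metrics: dict) -> tuple[dict, dict]:
    """Partition a metrics dict into built-in columns and the extras JSON field."""
    builtin = {k: v for k, v in metrics.items() if k in BUILTIN_KEYS}
    extras = {k: v for k, v in metrics.items() if k not in BUILTIN_KEYS}
    return builtin, extras

builtin, extras = split_metrics({"loss": 0.23, "lr": 1e-4, "grad_norm": 2.1})
# "loss" and "lr" are built-in; "grad_norm" lands in extras
```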

log_rollouts(rollouts, step)

Log trajectory rollouts for visualization.

In [4]:
logger.log_rollouts([
    {
        "group_idx": 0,
        "prompt_text": "What is 2+2?",
        "trajectories": [
            {"trajectory_idx": 0, "reward": 1.0, "output_text": "4"},
            {"trajectory_idx": 1, "reward": 0.0, "output_text": "5"},
        ],
    }
], step=100)

close()

Mark run as complete and close the database connection.

In [5]:
logger.close()

get_logger_url()

Get the URL to view this run in the dashboard.

In [6]:
logger.get_logger_url()
Out[6]:
'http://localhost:3003/training-run/a1b2c3d4'

Tinker Adapter

Convert Tinker Cookbook data structures to tviz format.

In [7]:
from tviz.adapters.tinker import from_tinker_batch

from_tinker_batch(groups, prompts, tokenizer)

Convert a batch of Tinker TrajectoryGroups to tviz format.

In [8]:
# After your rollout step:
rollouts = from_tinker_batch(
    trajectory_groups_P,
    tokenizer=tokenizer  # Decodes tokens to readable text
)
logger.log_rollouts(rollouts, step=i_batch)
Parameters:

  • groups (list[TrajectoryGroup]): Trajectory groups from prepare_minibatch.
  • prompts (list[str] | None): Optional prompt texts, one per group.
  • tokenizer (Any | None): Optional tokenizer for decoding tokens.
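To make the adapter's output concrete, here is a toy sketch that builds the same rollout structure log_rollouts expects. The ToyTrajectory and ToyGroup classes are stand-ins invented for illustration, not actual Tinker Cookbook types:

```python
from dataclasses import dataclass

# Stand-ins for Tinker structures (illustrative only).
@dataclass
class ToyTrajectory:
    tokens: list[int]
    reward: float

@dataclass
class ToyGroup:
    trajectories: list[ToyTrajectory]

def toy_from_batch(groups, prompts=None, tokenizer=None):
    """Mimics the rollout format expected by logger.log_rollouts."""
    rollouts = []
    for g_idx, group in enumerate(groups):
        rollouts.append({
            "group_idx": g_idx,
            "prompt_text": prompts[g_idx] if prompts else "",
            "trajectories": [
                {
                    "trajectory_idx": t_idx,
                    "reward": traj.reward,
                    # Decode token ids to readable text when a tokenizer is given.
                    "output_text": tokenizer.decode(traj.tokens) if tokenizer else "",
                }
                for t_idx, traj in enumerate(group.trajectories)
            ],
        })
    return rollouts
```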