Developers · AI agents · video tools

Sound data layer

Wouldliker is becoming an AI-readable recommendation layer for short-form video sounds. The public data layer is live today; the REST API and MCP are still in design and not yet in public production.

02 · Coming

REST API and MCP.

The right architecture is one recommendation core, then REST and MCP wrappers on top. This page intentionally does not pretend that public endpoints are already live.
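The "one core, two wrappers" shape can be sketched in a few lines. Everything here is hypothetical: the function names, signatures, and placeholder logic are illustrations of the architecture, not the real (unreleased) implementation.

```python
# Sketch of one recommendation core with REST and MCP wrappers on top.
# All names (recommend_core, rest_recommend, mcp_recommend_sound) are
# hypothetical; no public endpoints exist yet.

def recommend_core(clip: dict) -> dict:
    """The single recommendation core both wrappers delegate to."""
    # Placeholder: a real core would score candidate sounds against the clip.
    return {
        "recommended_sound": "Vlog",
        "why": "Warm daily/lifestyle fit; default when the clip job is unclear.",
    }

def rest_recommend(request_body: dict) -> dict:
    """REST wrapper: same core, HTTP request/response calling convention."""
    return recommend_core(request_body)

def mcp_recommend_sound(arguments: dict) -> dict:
    """MCP tool wrapper: same core, tool-call calling convention."""
    return recommend_core(arguments)
```

The point of this shape is that both surfaces stay in lockstep: a wrapper can only expose what the core computes.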

Coming · REST

POST recommend

Input: clip description, platform, duration, tone, and AIGC flag. Output: one recommended sound, why it fits, a proof example, the TikTok sound URL, and boundary language.

No live keys yet
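A request body for the planned endpoint might carry the five inputs listed above. The field names below are assumptions; only the inputs themselves (clip description, platform, duration, tone, AIGC flag) come from this page.

```python
# Hypothetical request payload for the planned recommend endpoint.
# Field names are placeholders until the real API ships.

REQUIRED_FIELDS = {"clip_description", "platform", "duration_seconds", "tone", "is_aigc"}

def missing_fields(body: dict) -> list:
    """Return the names of any required fields absent from the payload."""
    return sorted(REQUIRED_FIELDS - body.keys())

example_request = {
    "clip_description": "POV morning routine, soft window light",
    "platform": "tiktok",
    "duration_seconds": 21,
    "tone": "warm",
    "is_aigc": False,
}
```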
Coming · MCP

agent tools

Tools like recommend_sound, get_sound_profile, get_proof_examples, get_momentum, and generate_video_brief.

After core API
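An MCP server for these tools reduces to a registry that dispatches tool calls by name. The tool names are from this page; the registry shape and stub handlers are assumptions, sketched before any server exists.

```python
# Sketch of a tool registry for an MCP server exposing the tools named above.
# Handlers are stubs; a real server would route each into the recommendation core.

def _stub(tool_name):
    def handler(arguments: dict) -> dict:
        # Placeholder result echoing the call; real output would be a recommendation.
        return {"tool": tool_name, "arguments": arguments}
    return handler

TOOLS = {
    name: _stub(name)
    for name in (
        "recommend_sound",
        "get_sound_profile",
        "get_proof_examples",
        "get_momentum",
        "generate_video_brief",
    )
}

def call_tool(name: str, arguments: dict) -> dict:
    """Dispatch a tool call by name, as an MCP server's tools/call handler would."""
    if name not in TOOLS:
        raise KeyError(f"unknown tool: {name}")
    return TOOLS[name](arguments)
```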
Integration

video tools

AI-video products, schedulers, caption tools, and agent workflows can start from the public JSON and move to API/MCP when ready.

DM @wouldliker

Example response.

A future API response should stay this simple: recommended sound, TikTok music URL, why it fits, proof example, no-guarantee boundary.

{
  "recommended_sound": "Vlog",
  "tiktok_sound_url": "https://www.tiktok.com/music/Vlog-7501680481626785808",
  "why": "Warm daily/lifestyle fit; default when the clip job is unclear.",
  "proof_boundary": "Evidence of fit, not a guarantee of views."
}
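Because the shape stays this flat, a client needs nothing beyond standard JSON parsing to consume it. This sketch assumes only the four keys shown in the example above.

```python
import json

# The example response from above, parsed the way a client would consume it.
raw = """
{
  "recommended_sound": "Vlog",
  "tiktok_sound_url": "https://www.tiktok.com/music/Vlog-7501680481626785808",
  "why": "Warm daily/lifestyle fit; default when the clip job is unclear.",
  "proof_boundary": "Evidence of fit, not a guarantee of views."
}
"""

resp = json.loads(raw)

# A client can link straight to the sound and surface the boundary language as-is.
assert resp["tiktok_sound_url"].startswith("https://www.tiktok.com/music/")
```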