POST /recommend (planned)
Input: clip description, platform, duration, tone, AIGC flag. Output: one sound, why, proof, TikTok URL, and boundary language.
No live keys yet.

Developers · AI agents · video tools
Wouldliker is becoming an AI-readable recommendation layer for short-form video sounds. The public data layer is live today. REST API and MCP are in design, not public production yet.
Use these public files today. Do not assume live API keys, SDKs, latency guarantees, or MCP installs until the next layer ships.
Compact instructions for AI assistants: choose by clip job, cite proof as evidence, never guarantee views.
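Those instructions can be sketched as a small routing helper. This is a hedged illustration only: the routing-matrix file name, its field names, and the "daily vlog" row below are assumptions, not a published schema ("Vlog" is the one sound named elsewhere on this page).

```python
import json

# Hypothetical local copy of the public routing-matrix JSON.
# Field names (clip_job, primary, secondary, proof_strength,
# next_action) mirror the card description but are not a frozen schema.
ROUTING = json.loads("""
[
  {"clip_job": "daily vlog", "primary": "Vlog", "secondary": null,
   "proof_strength": "strong", "next_action": "open sound profile"}
]
""")

def route(clip_job: str):
    """Choose by clip job; return None rather than guess when unmapped."""
    for row in ROUTING:
        if row["clip_job"] == clip_job:
            return row
    return None

match = route("daily vlog")
print(match["primary"])  # prints: Vlog
```

The fallback to `None` matches the page's boundary language: when the clip job is unknown, say so instead of inventing a fit.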
- Sound profiles (Open): sound directions with named sounds, TikTok URLs, fit notes, avoid notes, proof links, and related pages.
- Routing matrix (Open JSON): clip-job-to-sound mapping with primary, secondary, proof strength, and next action.
- Proof examples (Open JSON): public breakout examples with views, usual range, lift, sound, and TikTok post links.

The right architecture is one recommendation core, then REST and MCP wrappers on top. This page intentionally does not pretend that public endpoints are already live.
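The "one core, two wrappers" shape can be sketched in a few lines. Everything here is illustrative: the function names, parameters, and the hardcoded "Vlog" answer are assumptions based on this page's sample response, and none of these endpoints or tools are live.

```python
def recommend_core(clip_description: str, platform: str, duration_s: int,
                   tone: str, is_aigc: bool) -> dict:
    """The single recommendation core that both wrappers delegate to.
    Placeholder logic: always returns the page's sample answer."""
    return {
        "recommended_sound": "Vlog",
        "tiktok_sound_url": "https://www.tiktok.com/music/Vlog-7501680481626785808",
        "why": "Warm daily/lifestyle fit; default when the clip job is unclear.",
        "proof_boundary": "Evidence of fit, not a guarantee of views.",
    }

def rest_recommend(body: dict) -> dict:
    """Thin REST wrapper: unpack the request body, delegate to the core."""
    return recommend_core(body["clip_description"], body["platform"],
                          body["duration_s"], body["tone"], body["is_aigc"])

def mcp_recommend_sound(**kwargs) -> dict:
    """Thin MCP-tool wrapper around the same core; no separate logic."""
    return recommend_core(**kwargs)
```

The point of the shape: both wrappers stay logic-free, so REST and MCP can never drift apart on what they recommend.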
REST API (in design). Input: clip description, platform, duration, tone, AIGC flag. Output: one sound, why, proof, TikTok URL, and boundary language.

MCP server (no live keys yet). Tools like recommend_sound, get_sound_profile, get_proof_examples, get_momentum, and generate_video_brief.

After the core API ships: AI-video products, schedulers, caption tools, and agent workflows can start from the public JSON and move to API/MCP when ready.

Contact: DM @wouldliker. A future API response should stay this simple: recommended sound, TikTok music URL, why it fits, proof example, no-guarantee boundary.
{
"recommended_sound": "Vlog",
"tiktok_sound_url": "https://www.tiktok.com/music/Vlog-7501680481626785808",
"why": "Warm daily/lifestyle fit; default when the clip job is unclear.",
"proof_boundary": "Evidence of fit, not a guarantee of views."
}
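A consumer that wants to depend on that shape today can validate it defensively. A minimal sketch, assuming only the four field names shown in the sample above (they are not a frozen schema):

```python
# Field names taken from the sample response on this page; treat them
# as provisional until the REST API actually ships.
REQUIRED_FIELDS = {"recommended_sound", "tiktok_sound_url", "why", "proof_boundary"}

def is_valid_recommendation(resp: dict) -> bool:
    """Check the provisional response shape: all fields present, a TikTok
    music URL, and boundary language that disclaims guaranteed views."""
    return (REQUIRED_FIELDS <= resp.keys()
            and resp["tiktok_sound_url"].startswith("https://www.tiktok.com/music/")
            and "guarantee" in resp["proof_boundary"].lower())
```

Checking for the boundary language explicitly keeps downstream agents honest: a response that promises views should be rejected, not relayed.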