

The AD_Stabilized_Motion model is an experimental AI model developed by maintainer manshoety. It is part of a set of very experimental models, so users should not expect amazing results. The model comes in two variants: mm-Stabilized_mid, which is a bit more stable than the base model, and mm-Stabilized_high, which is much more stable but has less movement. Similar models include stable-video-diffusion by christophy, lcm-video2video by fofr, and the stable-video-diffusion-img2vid-xt and stable-video-diffusion-img2vid models from Stability AI.

Model inputs and outputs

The AD_Stabilized_Motion model is a video-to-video model: it takes a video as input and outputs a stabilized version of that video.

Inputs
Video

Outputs
Stabilized video

Capabilities

The AD_Stabilized_Motion model can be used to stabilize video footage, reducing camera shake and jitter while preserving the overall movement. The two variants offer different levels of stabilization, with mm-Stabilized_high providing more stable output but less overall movement.

What can I use it for?

The AD_Stabilized_Motion model could be useful for creators and filmmakers who need to stabilize footage for their projects, such as vlogs, travel videos, or other handheld camera work. The two variants let users choose the balance of stabilization and movement that suits their needs.

Things to try

Experimenting with the two variants can help users find the right level of stabilization for a specific use case. mm-Stabilized_high may be particularly useful for footage that needs a very steady, fixed camera, while mm-Stabilized_mid can work better for scenes that require more dynamic camera movement.
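Choosing between the two variants could be wired up as a small helper around the Replicate Python client. This is a minimal sketch only: the model identifier string and the input parameter names ("video", "module") are assumptions for illustration, not taken from the model's published schema, so check the model page before using them.

```python
# Hypothetical sketch of selecting an AD_Stabilized_Motion variant and
# assembling an input payload. Parameter names are assumed, not confirmed.

VARIANTS = {
    "mid": "mm-Stabilized_mid",    # a bit more stable than the base model
    "high": "mm-Stabilized_high",  # much more stable, but less movement
}

def build_input(video_url: str, stability: str = "mid") -> dict:
    """Pick the variant for the desired stability level and build the payload."""
    if stability not in VARIANTS:
        raise ValueError(f"stability must be one of {sorted(VARIANTS)}")
    return {"video": video_url, "module": VARIANTS[stability]}

# An actual run would then look roughly like this (needs a Replicate API token,
# and the model reference below is a placeholder):
# import replicate
# output = replicate.run(
#     "manshoety/ad_stabilized_motion",
#     input=build_input("https://example.com/shaky_clip.mp4", stability="high"),
# )
```

Footage that needs a locked-off look would pass stability="high"; more dynamic handheld scenes would keep the default "mid".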


Updated 5/15/2024