Bguisard
Rank:
Average Model Cost: $0.0000
Number of Runs: 4,366
Models by this creator
stable-diffusion-nano-2-1
$-/run
4.4K
Huggingface
stable-diffusion-nano
$-/run
7
Huggingface
ppo-PyramidsRND
A trained PPO agent playing Pyramids, built with the Unity ML-Agents library.
Usage (with ML-Agents): documentation at https://github.com/huggingface/ml-agents#get-started, which includes a complete tutorial on training your first agent with ML-Agents, resuming training, and publishing the result to the Hub.
Watch your agent play directly in your browser:
Go to https://huggingface.co/spaces/unity/ML-Agents-Pyramids
Step 1: Write your model_id: bguisard/ppo-PyramidsRND
Step 2: Select your .nn / .onnx file
Then click on Watch the agent play 👀
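To resume training locally rather than just watching the agent, the mlagents-learn CLI can pick a run back up from its saved checkpoints. A minimal sketch, assuming the Pyramids config layout used in the Hugging Face Deep RL course; the config path and run id are assumptions, not values taken from this card:

    # Resume training from the latest checkpoint under results/<run-id>
    # (./config/ppo/PyramidsRND.yaml and the run id are assumed; adjust to your setup)
    mlagents-learn ./config/ppo/PyramidsRND.yaml --run-id="PyramidsRND" --resume

With --resume, the trainer continues from the most recent checkpoint for that run id instead of starting a new run.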
$-/run
1
Huggingface
dqn-SpaceInvadersNoFrameskip-v4
$-/run
0
Huggingface
PPO-LunarLander-v2
$-/run
0
Huggingface
rl_course_vizdoom_health_gathering_supreme
An APPO model trained on the doom_health_gathering_supreme environment.
This model was trained with Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory. Documentation for Sample-Factory is at https://www.samplefactory.dev/
Downloading the model: after installing Sample-Factory, download the checkpoint from the Hub.
Using the model: run the downloaded model with the enjoy script corresponding to this environment. You can also upload models to the Hugging Face Hub using the same script with the --push_to_hub flag; see https://www.samplefactory.dev/10-huggingface/huggingface/ for details.
Training with this model: to continue training, use the train script corresponding to this environment. You may have to raise --train_for_env_steps to a suitably high number, since the experiment resumes at the step count where it concluded.
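A sketch of the download, enjoy, and train commands the card describes, assuming the stock ViZDoom example scripts that ship with Sample-Factory 2.0; the module paths and ./train_dir layout are assumptions and may need adjusting to your install:

    # Download the checkpoint from the Hugging Face Hub into ./train_dir
    python -m sample_factory.huggingface.load_from_hub -r bguisard/rl_course_vizdoom_health_gathering_supreme -d ./train_dir

    # Watch the trained policy with the ViZDoom enjoy script
    python -m sf_examples.vizdoom.enjoy_vizdoom --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme

    # Continue training; raise --train_for_env_steps above the step count the run ended at
    python -m sf_examples.vizdoom.train_vizdoom --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme --train_for_env_steps=20000000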
$-/run
0
Huggingface
reinforce-CartPole-v1
A trained Reinforce agent playing CartPole-v1. To learn how to use this model and train your own, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
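The card only points to the course for usage; since every Hub model is a git repository, the trained files can be fetched directly (the repo id below comes from the model_id in this listing):

    # Clone the model repository to inspect the trained policy files locally
    git clone https://huggingface.co/bguisard/reinforce-CartPole-v1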
$-/run
0
Huggingface
ppo-SnowballTarget
A trained PPO agent playing SnowballTarget, built with the Unity ML-Agents library.
Usage (with ML-Agents): documentation at https://github.com/huggingface/ml-agents#get-started, which includes a complete tutorial on training your first agent with ML-Agents, resuming training, and publishing the result to the Hub.
Watch your agent play directly in your browser:
Go to https://huggingface.co/spaces/unity/ML-Agents-SnowballTarget
Step 1: Write your model_id: bguisard/ppo-SnowballTarget
Step 2: Select your .nn / .onnx file
Then click on Watch the agent play 👀
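For the publishing step mentioned in the tutorial, the ml-agents Python package provides the mlagents-push-to-hf command. A minimal sketch; the run id and local results directory are illustrative placeholders, not values from this card:

    # Push a finished training run to the Hub under this repo id
    # (run id and local dir are placeholders; point them at your own results folder)
    mlagents-push-to-hf --run-id="SnowballTarget1" --local-dir="./results/SnowballTarget1" --repo-id="bguisard/ppo-SnowballTarget" --commit-message="Trained SnowballTarget agent"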
$-/run
0
Huggingface
PPO-Huggy
A trained PPO agent playing Huggy, built with the Unity ML-Agents library.
Usage (with ML-Agents): documentation at https://github.com/huggingface/ml-agents#get-started, which includes a complete tutorial on training your first agent with ML-Agents, resuming training, and publishing the result to the Hub.
Watch your agent play directly in your browser:
Go to https://huggingface.co/spaces/unity/ML-Agents-Huggy
Step 1: Write your model_id: bguisard/PPO-Huggy
Step 2: Select your .nn / .onnx file
Then click on Watch the agent play 👀
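Step 2 above requires the .nn or .onnx file on your machine. One way to fetch the repository contents is the huggingface-cli downloader (the exact filename inside the repo is not listed on this card):

    # Download the model repository, including the exported policy file, into ./PPO-Huggy
    huggingface-cli download bguisard/PPO-Huggy --local-dir ./PPO-Huggy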
$-/run
0
Huggingface
poca-SoccerTwos
A trained POCA agent playing SoccerTwos, built with the Unity ML-Agents library.
Usage (with ML-Agents): documentation at https://github.com/huggingface/ml-agents#get-started, which includes a complete tutorial on training your first agent with ML-Agents, resuming training, and publishing the result to the Hub.
Watch your agent play directly in your browser:
Go to https://huggingface.co/spaces/unity/ML-Agents-SoccerTwos
Step 1: Write your model_id: bguisard/poca-SoccerTwos
Step 2: Select your .nn / .onnx file
Then click on Watch the agent play 👀
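Resuming training works the same way as for the other ML-Agents models, except SoccerTwos is trained with the MA-POCA trainer, so the trainer config sits under a poca config in the Deep RL course layout. The config path and run id below are assumptions:

    # Resume MA-POCA training for SoccerTwos from its saved checkpoints
    # (config path and run id assume the course's repo layout; adjust to your setup)
    mlagents-learn ./config/poca/SoccerTwos.yaml --run-id="SoccerTwos" --resume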
$-/run
0
Huggingface