1 Follower
0 Following
shivansh_beohar

Organization

IIIT Allahabad

Location

IN

Badges: 1 / 1 / 0

Activity

[Activity heatmap (Nov–Nov) omitted]


Challenges Entered

Behavioral Representation Learning from Animal Poses

Latest submissions

No submissions made in this challenge.

5 Puzzles 21 Days. Can you solve it all?

Latest submissions

No submissions made in this challenge.

Sample Efficient Reinforcement Learning in Minecraft

Latest submissions

No submissions made in this challenge.

Latest submissions

failed 179781
failed 179657
graded 178638

Self-driving RL on DeepRacer cars - From simulation to real world

Latest submissions

No submissions made in this challenge.

Sample-efficient reinforcement learning in Minecraft

Latest submissions

No submissions made in this challenge.
Participant Rating
vrv 0

Learn-to-Race: Autonomous Racing Virtual Challenge

Clarification on input sensors during evaluation

Over 2 years ago

The evaluator code in the starter kit does not allow segmentation cameras during the 1-hour practice session. Is the evaluator on the server different from the one in the starter kit?

link to code line

Safety Infractions calculation during Stage 2 1-Hour practice period

Over 2 years ago

Does the evaluation metric (as shown on the Stage 2 leaderboard) include safety infractions that occurred during the 1-hour training session, or only those that occurred during the final 3 evaluation episodes?

Regarding Stage 2 evaluation

Over 2 years ago

The documentation states that during Stage 2, teams will upload their models "with initialization"; this model will then have 1 hour of training time to learn the new track before being evaluated on it.
Does this mean that we can upload pretrained networks/models during Stage 2 as well? If not, what exactly does "initialization" mean here?

Get input actions given directly to simulator for creating Imitation Learning data

Over 2 years ago

Create a simple pygame program that takes input from the keyboard and displays frames.
Pass the input to the simulator and display the RGB images in the pygame window. Depending on your requirements, either buffer the (image, input) pairs in memory and dump them all at the end of the episode, or save each pair as it is collected.
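The steps above can be sketched roughly as follows. This is only an illustration, not the starter kit's actual API: the gym-style env interface (reset/step returning RGB frames), the (steer, throttle) action format, and the helper names keys_to_action and record_episode are all assumptions made for the sake of the example.

```python
# Sketch: collect (observation, action) pairs for imitation learning by
# driving manually. The env API and action format here are assumptions,
# not the official starter-kit interface.
import pickle


def keys_to_action(pressed):
    """Map a set of pressed key names to a hypothetical (steer, throttle) action."""
    steer = (-1.0 if "left" in pressed else 0.0) + (1.0 if "right" in pressed else 0.0)
    throttle = (1.0 if "up" in pressed else 0.0) - (1.0 if "down" in pressed else 0.0)
    return (steer, throttle)


def record_episode(env, get_pressed_keys, render, max_steps=1000, out_path="demo.pkl"):
    """Drive via the keyboard, buffering (obs, action) pairs; dump at episode end."""
    buffer = []
    obs = env.reset()
    for _ in range(max_steps):
        action = keys_to_action(get_pressed_keys())  # read current keyboard state
        buffer.append((obs, action))                 # store the pair before stepping
        obs, reward, done, info = env.step(action)   # action goes through env.step,
        render(obs)                                  # so it is captured in Python
        if done:
            break
    with open(out_path, "wb") as f:                  # dump all pairs at episode end
        pickle.dump(buffer, f)
```

In practice, get_pressed_keys would wrap pygame.key.get_pressed() and render would blit the RGB frame onto a pygame surface; because every action passes through env.step here, nothing is lost to the simulator's own keyboard handling, and the pickled pairs can later be loaded for behavioral cloning.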

Clarification on input sensors during evaluation

Almost 3 years ago

The wording is a bit unclear to me. During the "evaluation" there is a "practice" session of 1 hour and then the final "evaluation". My question is about the "practice" session: are we allowed to use any sensor during it, or only the three (Front, Left and Right)?

Clarification on input sensors during evaluation

Almost 3 years ago

For Round 2, can we use additional sensors during the 1-hour training period, such as the segmentation camera view for the new track?

Get input actions given directly to simulator for creating Imitation Learning data

Almost 3 years ago

@max333 Agreed on not using record_manually; I came to understand this when I solved my original problem, as mentioned below.
Although I did not use the BaseAgent class, I wrote a small pygame wrapper over the env to take input from the keyboard and display the screen inside pygame (I can still see the car in the simulator, since the pygame window is small), and recorded my observations that way.
Thanks for your input.

Get input actions given directly to simulator for creating Imitation Learning data

Almost 3 years ago

I am trying to record a human-performed demo using the env.record_manually function. I am able to get observations but not the actions. I assume this is because, during record_manually, the actions are fed to the simulator via the keyboard rather than through env.step, and are thus not captured anywhere in the Python code. Can we query the simulator for the last action performed? Or can we recover the actions some other way?

If not record_manually, is there another way to get actions+observations while driving the car ourselves?
