0 Followers · 0 Following
tky

Location: JP


Activity

[contribution heatmap not rendered]

Ratings Progression

[chart did not load]

Challenge Categories

[chart did not load]

Challenges Entered

Sample-efficient reinforcement learning in Minecraft

Latest submissions

No submissions made in this challenge.


Robots that learn to interact with the environment autonomously

Latest submissions

failed 25312
failed 25311
graded 25310

A new benchmark for Artificial Intelligence (AI) research in Reinforcement Learning

Latest submissions

graded 2943
graded 2336
graded 2171
tky has not joined any teams yet...

NeurIPS 2019 - Robot open-Ended Autonomous Lear...

Have you ever successfully run 10M steps without resetting env?

About 5 years ago

Ah, you’re right. It was definitely because I was storing observations in my local code. (I should have noticed that before asking :sweat_smile:)
But I’m still wondering why my evaluation is stuck. My code in the GitLab repository doesn’t store anything at this point (it’s RandomPolicy as-is). It seems no submission has finished successfully yet (I see no entries for Round 2).

Have you ever successfully run 10M steps without resetting env?

About 5 years ago

I’ve been running the intrinsic phase locally without resetting the environment, since env resets are not allowed on the evaluation server. However, I found that my script dies without any error.

I observed memory usage increasing even when running RandomPolicy, so I suspect the environment has a memory leak that grows with the number of steps in an episode.
I also suspect that is what makes the evaluation process stall, or even hit a timeout. (My RandomPolicy submission has been stuck around 2M steps for a few days now.)

Is anyone else facing a similar situation?
Or is this just a problem on my end?
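One way to narrow down a leak like this is to track the process's peak resident memory while stepping a do-nothing policy. Below is a minimal, standard-library-only sketch; `leaky_step` is a hypothetical stand-in for one `env.step` call with a random action (the real_robots API is not used here), and the step counts are illustrative:

```python
import resource

def rss_mb():
    # Peak resident set size so far; ru_maxrss is KB on Linux (bytes on macOS)
    return resource.getrusage(resource.RUSAGE_SELF).ru_maxrss / 1024

def run_with_memory_checks(step_fn, n_steps, check_every=1000):
    """Call step_fn n_steps times, sampling peak memory every check_every steps."""
    samples = []
    for i in range(1, n_steps + 1):
        step_fn()
        if i % check_every == 0:
            samples.append((i, rss_mb()))
    return samples

# Hypothetical stand-in for an environment step that accidentally
# keeps every observation alive (the bug suspected above).
history = []
def leaky_step():
    history.append([0.0] * 1000)  # roughly 8 KB retained per step

samples = run_with_memory_checks(leaky_step, 5000)
for step, mb in samples:
    print(f"step {step}: peak RSS {mb:.1f} MB")
```

If peak memory keeps climbing even with RandomPolicy, the leak is in the environment; if it stays flat, the culprit is the agent code. (`resource` is Unix-only; on other platforms a third-party tool such as `psutil` serves the same purpose.)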

Intrinsic phase timeout

About 5 years ago

Hi, I am using the latest real_robots package (0.1.16) and found that my evaluation has been stuck around 2M steps for a few hours. I made no changes to the policy in the starter kit (RandomPolicy), just to see how long it takes to submit a no-learning agent. I assume there is still something in the environment that slows down the evaluation.
Could you investigate a little more?

Unity Obstacle Tower Challenge

Announcement: Debug your submissions

Almost 6 years ago

Hi, I used debug mode to test my submission, then tried to turn it off to get an actual result. But the submission still seems to run in debug mode, even after I updated aicrowd.json, pushed to the repo, and created a tag.

Does it take time for the debug flag to take effect?
What should I do?
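For reference, the debug switch lives in the aicrowd.json at the repository root, and the evaluator reads it from the tagged commit. A minimal sketch of the relevant field follows; the other fields vary by starter kit, so treat their names and values here as illustrative, not as the exact Obstacle Tower schema:

```json
{
  "challenge_id": "unity-obstacle-tower-challenge",
  "authors": ["tky"],
  "description": "Obstacle Tower submission",
  "debug": false
}
```

Note that the flag only takes effect for submissions created after the commit containing the change; a previously created tag still carries the old value.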

tky has not provided any information yet.