
NeurIPS 2023 - The Neural MMO Challenge

Manually create your curriculum

A step-by-step guide for training agents with your curriculum

kyoung_whan_choe

The Open in Colab button doesn't seem to work. Click this URL instead: https://colab.research.google.com/drive/1AZt_eEGTEZrnX3iJC7jHw7lbBTNaoMoj

Set up your instance - GPU and Google Drive

In [1]:
# Check if (NVIDIA) GPU is available
import torch
assert torch.cuda.is_available(), "CUDA GPU not available"
In [2]:
# Set up the work directory
import os
assert os.path.exists("/content/drive/MyDrive"), "Google Drive not mounted"

WORK_DIR = "/content/drive/MyDrive/nmmo/"
CKPT_DIR = WORK_DIR + "runs"
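
If the assert in the cell above fails because Drive is not mounted yet, you can mount it first with the standard Colab API. A minimal sketch (it also creates the checkpoint directory if it is missing):

In [ ]:
# Mount Google Drive (standard Colab API) and make sure the
# checkpoint directory exists before training writes to it
from google.colab import drive
drive.mount("/content/drive")

os.makedirs(CKPT_DIR, exist_ok=True)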

Install nmmo env, pufferlib, and the baselines

If you are new to this, see https://www.aicrowd.com/showcase/colab-starter-kit first.

Every time you restart your Colab runtime, you have to reinstall nmmo, pufferlib, and the baselines dependencies.

In [23]:
# This code assumes you've already downloaded the baselines repo in your google drive
# If not, please go through this first https://www.aicrowd.com/showcase/colab-starter-kit
assert os.path.exists(WORK_DIR), "Work directory not found. First, follow https://www.aicrowd.com/showcase/colab-starter-kit"

%cd $WORK_DIR
%cd baselines

# Install nmmo env and pufferlib
!pip install nmmo pufferlib > /dev/null
!pip install -r requirements_colab.txt > /dev/null

!pip show nmmo  # should be 2.0.3
!pip show pufferlib # should be 0.4.3
/content/drive/MyDrive/nmmo
/content/drive/MyDrive/nmmo/baselines
Name: nmmo
Version: 2.0.3
Summary: Neural MMO is a platform for multiagent intelligence research inspired by Massively Multiplayer Online (MMO) role-playing games. Documentation hosted at neuralmmo.github.io.
Home-page: https://github.com/neuralmmo/environment
Author: Joseph Suarez
Author-email: jsuarez@mit.edu
License: MIT
Location: /usr/local/lib/python3.10/dist-packages
Requires: autobahn, dill, gym, imageio, numpy, ordered-set, pettingzoo, psutil, py, pylint, pytest, pytest-benchmark, scipy, tqdm, Twisted, vec-noise
Required-by: 
Name: pufferlib
Version: 0.4.3
Summary: PufferAI LibraryPufferAI's library of RL tools and utilities
Home-page: https://github.com/PufferAI/PufferLib
Author: Joseph Suarez
Author-email: jsuarez@mit.edu
License: MIT
Location: /usr/local/lib/python3.10/dist-packages
Requires: cython, filelock, gym, numpy, opencv-python, openskill, pettingzoo
Required-by: 
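
If pip resolves different versions than the ones shown above, you can pin them explicitly (standard pip syntax):

In [ ]:
# Pin the exact versions this tutorial expects
!pip install nmmo==2.0.3 pufferlib==0.4.3 > /dev/null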
In [ ]:
# If everything is correctly installed, this should run

!python train.py --runs-dir $CKPT_DIR --local-mode true --train-num-steps=5_000
INFO:root:Training run: nmmo_20231024_184251 (/content/drive/MyDrive/nmmo/runs/nmmo_20231024_184251)
INFO:root:Training args: Namespace(attend_task='none', attentional_decode=True, bptt_horizon=8, checkpoint_interval=30, clip_coef=0.1, death_fog_tick=None, device='cuda', early_stop_agent_num=8, encode_task=True, eval_batch_size=32768, eval_mode=False, eval_num_policies=2, eval_num_rounds=1, eval_num_steps=1000000, explore_bonus_weight=0.01, extra_encoders=True, heal_bonus_weight=0.03, hidden_size=256, input_size=256, learner_weight=1.0, local_mode=True, map_size=128, maps_path='maps/train/', max_episode_length=1024, max_opponent_policies=0, meander_bonus_weight=0.02, num_agents=128, num_buffers=1, num_cores=None, num_envs=1, num_lstm_layers=0, num_maps=128, num_npcs=256, policy_store_dir=None, ppo_learning_rate=0.00015, ppo_training_batch_size=128, ppo_update_epochs=3, resilient_population=0.2, rollout_batch_size=1024, run_name='nmmo_20231024_184251', runs_dir='/content/drive/MyDrive/nmmo/runs', seed=1, spawn_immunity=20, sqrt_achievement_rewards=False, task_size=4096, tasks_path='reinforcement_learning/curriculum_with_embedding.pkl', track='rl', train_num_steps=5000, use_serial_vecenv=True, wandb_entity=None, wandb_project=None)
INFO:root:Using policy store from /content/drive/MyDrive/nmmo/runs/nmmo_20231024_184251/policy_store
INFO:root:Generating 128 maps
Allocated 94.38 MB to environments. Only accurate for Serial backend.
PolicyPool sample_weights: [128]
Allocated to storage - Pytorch: 0.00 GB, System: 0.11 GB
INFO:root:PolicyPool: Updated policies: dict_keys(['learner'])
Allocated during evaluation - Pytorch: 0.01 GB, System: 1.53 GB
Epoch: 0 - 1K steps - 0:01:24 Elapsed
	Steps Per Second: Env=1259, Inference=159
	Train=435

Allocated during training - Pytorch: 0.07 GB, System: 0.24 GB
INFO:root:Saving policy to /content/drive/MyDrive/nmmo/runs/nmmo_20231024_184251/policy_store/nmmo_20231024_184251.000001
INFO:root:PolicyPool: Updated policies: dict_keys(['learner'])
Allocated during evaluation - Pytorch: 0.00 GB, System: 0.00 GB
Epoch: 1 - 2K steps - 0:01:29 Elapsed
	Steps Per Second: Env=632, Inference=4035
	Train=617

Allocated during training - Pytorch: 0.01 GB, System: 0.00 GB
INFO:root:PolicyPool: Updated policies: dict_keys(['learner'])
Allocated during evaluation - Pytorch: 0.00 GB, System: 0.00 GB
Epoch: 2 - 3K steps - 0:01:33 Elapsed
	Steps Per Second: Env=580, Inference=3956
	Train=578

Allocated during training - Pytorch: 0.01 GB, System: 0.00 GB
INFO:root:PolicyPool: Updated policies: dict_keys(['learner'])
Allocated during evaluation - Pytorch: 0.00 GB, System: 0.00 GB
Epoch: 3 - 4K steps - 0:01:38 Elapsed
	Steps Per Second: Env=440, Inference=3858
	Train=712

Allocated during training - Pytorch: 0.01 GB, System: 0.00 GB
INFO:root:Saving policy to /content/drive/MyDrive/nmmo/runs/nmmo_20231024_184251/policy_store/nmmo_20231024_184251.000004

Manually create your custom curriculum

  1. Use pre-built evaluation functions and TaskSpec to define training tasks.
  2. Define your own evaluation functions.
  3. Check that your training tasks are valid and picklable. They must satisfy both conditions.
  4. Generate the task embedding file using the task encoder.
  5. Train agents using the task embedding file.
  6. Extract the training task stats.
In [ ]:
curriculum = []

Define simple training tasks using pre-built functions

For the full list of pre-built functions, see https://github.com/NeuralMMO/environment/blob/2.0/nmmo/task/base_predicates.py
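
You can also list what is available directly by introspecting the module:

In [ ]:
# Peek at the pre-built predicates shipped with nmmo
import nmmo.task.base_predicates as bp
print([name for name in dir(bp) if not name.startswith("_")])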

In [ ]:
# Training tasks for nmmo are defined using TaskSpec class
from nmmo.task.task_spec import TaskSpec

# Let's start with pre-built eval functions
from nmmo.task.base_predicates import CountEvent, InventorySpaceGE, TickGE, norm
In [ ]:
# Here are very simple training tasks based on the pre-built function CountEvent

# Agents complete the task once they have performed the event N times
essential_events = [
    "GO_FARTHEST",
    "EAT_FOOD",
    "DRINK_WATER",
    "SCORE_HIT",
    "HARVEST_ITEM",
    "LEVEL_UP",
]

for event_code in essential_events:
    curriculum.append(
        TaskSpec(
            eval_fn=CountEvent,  # is a pre-built eval function
            eval_fn_kwargs={"event": event_code, "N": 10},  # kwargs for CountEvent
        )
    )

print("Curriculum so far:\n", curriculum)
print("\nOne example training task:\n", curriculum[0])
Curriculum so far:
 [TaskSpec(eval_fn=<function CountEvent at 0x7eeb3b125120>, eval_fn_kwargs={'event': 'GO_FARTHEST', 'N': 10}, task_cls=<class 'nmmo.task.task_api.Task'>, task_kwargs={}, reward_to='agent', sampling_weight=1.0, embedding=None, predicate=None), TaskSpec(eval_fn=<function CountEvent at 0x7eeb3b125120>, eval_fn_kwargs={'event': 'EAT_FOOD', 'N': 10}, task_cls=<class 'nmmo.task.task_api.Task'>, task_kwargs={}, reward_to='agent', sampling_weight=1.0, embedding=None, predicate=None), ... one TaskSpec per event above (output truncated; the cell was re-run, so the original list also contained duplicates) ...]

One example training task:
 TaskSpec(eval_fn=<function CountEvent at 0x7eeb3b125120>, eval_fn_kwargs={'event': 'GO_FARTHEST', 'N': 10}, task_cls=<class 'nmmo.task.task_api.Task'>, task_kwargs={}, reward_to='agent', sampling_weight=1.0, embedding=None, predicate=None)
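
The repr above also shows TaskSpec's optional fields, such as sampling_weight and reward_to. As a hypothetical example (not appended to the tutorial curriculum), raising sampling_weight makes a task appear more often during training:

In [ ]:
# Hypothetical example: a task sampled 3x more often than the
# default weight-1.0 tasks (not added to the tutorial curriculum)
eat_more = TaskSpec(
    eval_fn=CountEvent,
    eval_fn_kwargs={"event": "EAT_FOOD", "N": 20},
    sampling_weight=3.0,
)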

Define custom training tasks

You can also write your own evaluation functions using the attributes available via GameState (gs) and subject. For example usage, refer to the pre-built functions: https://github.com/NeuralMMO/environment/blob/2.0/nmmo/task/base_predicates.py

In [ ]:
# You can also use pre-built eval functions to define your own function
def PracticeInventoryManagement(gs, subject, space, num_tick):
    return norm(InventorySpaceGE(gs, subject, space) * TickGE(gs, subject, num_tick))

# Training tasks are defined using TaskSpec
for space in [2, 4, 8]:
    curriculum.append(
        TaskSpec(
            eval_fn=PracticeInventoryManagement,
            eval_fn_kwargs={"space": space, "num_tick": 500},
        )
    )
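
In the same spirit, and since TickGE is already imported, a hypothetical survival task could reward simply staying alive; it is shown for illustration only and is not appended to the tutorial curriculum:

In [ ]:
# Hypothetical variant: reward staying alive until tick num_tick,
# built from the already-imported TickGE predicate
def PracticeSurviving(gs, subject, num_tick):
    return norm(TickGE(gs, subject, num_tick))

# e.g., TaskSpec(eval_fn=PracticeSurviving, eval_fn_kwargs={"num_tick": 200})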
In [ ]:
# The eval functions can be built directly from accessing attributes in GameState and subject
# GameState: https://github.com/NeuralMMO/environment/blob/2.0/nmmo/task/game_state.py#L32

def PracticeEating(gs, subject):
    """The progress, the max of which is 1, should
    * increase small for each eating
    * increase big for the 1st and 3rd eating
    * reach 1 with 10 eatings
    """
    num_eat = len(subject.event.EAT_FOOD)
    progress = num_eat * 0.06
    if num_eat >= 1:
        progress += 0.1
    if num_eat >= 3:
        progress += 0.3
    return norm(progress)  # norm is a helper function to normalize the value to [0, 1]

curriculum.append(TaskSpec(eval_fn=PracticeEating, eval_fn_kwargs={}))
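
Because the schedule above is plain arithmetic, it is easy to sanity-check outside nmmo. The sketch below replicates norm with a simple clamp to [0, 1], which is what it does to scalar progress values (per the comment above):

In [ ]:
# Standalone check of the eating-reward schedule (no nmmo needed);
# clamp stands in for norm, which normalizes values to [0, 1]
def clamp(x):
    return max(0.0, min(1.0, x))

def eating_progress(num_eat):
    progress = num_eat * 0.06
    if num_eat >= 1:
        progress += 0.1
    if num_eat >= 3:
        progress += 0.3
    return clamp(progress)

for n in [0, 1, 2, 3, 9, 10]:
    print(n, round(eating_progress(n), 2))
# 0 0.0, 1 0.16, 2 0.22, 3 0.58, 9 0.94, 10 1.0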

Check that your training tasks are valid and picklable

Both checks must pass before the tasks can be used for training.

Save your curriculum into a separate Python file so that it can be imported. In this tutorial, the curriculum above has been saved to curriculum_tutorial.py, which we use below.
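
A minimal sketch of what that file could contain (the module path matches the import below; the body simply repeats the cells above):

# curriculum_generation/curriculum_tutorial.py (sketch)
from nmmo.task.base_predicates import CountEvent, InventorySpaceGE, TickGE, norm
from nmmo.task.task_spec import TaskSpec

curriculum = []
# ... append the TaskSpec definitions from the cells above ...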

In [8]:
# Import the custom curriculum
import curriculum_generation.curriculum_tutorial as tutorial
CURRICULUM = tutorial.curriculum
print("The number of training tasks in the curriculum:", len(CURRICULUM))
The number of training tasks in the curriculum: 10
In [ ]:
# Check if these task specs are valid in the nmmo environment
# Invalid tasks will crash your agent training
from nmmo.task.task_spec import check_task_spec

results = check_task_spec(CURRICULUM)
num_error = 0
for result in results:
  if result["runnable"] is False:
    print("ERROR: ", result["spec_name"])
    num_error += 1
assert num_error == 0, "Invalid task specs will crash training. Please fix them."
print("All training tasks are valid.")
All training tasks are valid.
In [ ]:
# The task_spec must be picklable to be used for agent training
CURRICULUM_FILE_PATH = "custom_curriculum_with_embedding.pkl"
with open(CURRICULUM_FILE_PATH, "wb") as f:
  import dill
  dill.dump(CURRICULUM, f)
print("All training tasks are picklable.")
All training tasks are picklable.
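
To be extra safe, you can also check that the file loads back intact:

In [ ]:
# Optional round-trip check: reload the pickle and compare sizes
with open(CURRICULUM_FILE_PATH, "rb") as f:
    reloaded = dill.load(f)
assert len(reloaded) == len(CURRICULUM), "Pickle round-trip mismatch"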

Generate the task embedding file

To use the curriculum for agent training, the curriculum (the task_spec list) must be saved to a file together with its task embeddings, which are produced by the task encoder. The task encoder uses a code LLM to encode each task_spec into a vector.

In [19]:
%cd $WORK_DIR
%cd baselines

import curriculum_generation
from curriculum_generation.task_encoder import TaskEncoder

# The codegen25 7b model is the default, but it does not work on the free Colab tier
LLM_CHECKPOINT = "Salesforce/codegen25-7b-instruct"
CURRICULUM_FILE_PATH = "custom_curriculum_with_embedding.pkl"
/content/drive/MyDrive/nmmo
/content/drive/MyDrive/nmmo/baselines

NOTE: It takes ~20 minutes to download the 7B model, and it does NOT work in the free tier due to insufficient RAM while loading.

If you have enough storage in your Google Drive, you can save these pre-trained models to your Drive and re-use them later. Please see https://stackoverflow.com/questions/73842234/setting-huggingface-cache-in-google-colab-notebook-to-google-drive
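
One way to do this is to point the Hugging Face cache at a Drive folder via the HF_HOME environment variable before loading the model (the folder path below is just an example):

In [ ]:
# Cache Hugging Face downloads on Drive so they survive runtime restarts;
# must be set before the model is loaded
import os
os.environ["HF_HOME"] = "/content/drive/MyDrive/hf_cache"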

In [20]:
# Check the available RAM before running below
import psutil
avail_ram = psutil.virtual_memory()[1]/1_000_000_000  # GB
assert avail_ram > 16, "Need a high-ram instance, available with Colab Pro."

# Get the task embeddings for the training tasks and save to file
# You need to provide the curriculum file as a module to the task encoder
with TaskEncoder(LLM_CHECKPOINT, curriculum_generation.curriculum_tutorial) as task_encoder:
    task_encoder.get_task_embedding(CURRICULUM, save_to_file=CURRICULUM_FILE_PATH)

print("Generated the task embedding file.")
Using unk_token, but it is not set yet.
Using unk_token, but it is not set yet.
100%|██████████| 1/1 [00:00<00:00,  3.07it/s]
100%|██████████| 5/5 [00:02<00:00,  2.25it/s]
Generating the task embedding done.
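
Optionally, you can verify that every saved TaskSpec now carries an embedding (the embedding field was None before encoding; this assumes get_task_embedding wrote the updated specs to the file):

In [ ]:
# Optional: confirm each saved TaskSpec has a non-empty embedding
import dill
with open(CURRICULUM_FILE_PATH, "rb") as f:
    embedded_tasks = dill.load(f)
assert all(spec.embedding is not None for spec in embedded_tasks)
print(f"All {len(embedded_tasks)} tasks have embeddings.")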

Train agents with your curriculum

Provide your curriculum file via the --tasks-path argument, as shown below. (The run below was stopped manually with Ctrl+C after a couple of epochs, which is why the output ends in KeyboardInterrupt tracebacks.)

In [22]:
!python train.py --runs-dir $CKPT_DIR --tasks-path $CURRICULUM_FILE_PATH
INFO:root:Training run: nmmo_20231024_235253 (/content/drive/MyDrive/nmmo/runs/nmmo_20231024_235253)
INFO:root:Training args: Namespace(attend_task='none', attentional_decode=True, bptt_horizon=8, checkpoint_interval=30, clip_coef=0.1, death_fog_tick=None, device='cuda', early_stop_agent_num=8, encode_task=True, eval_batch_size=32768, eval_mode=False, eval_num_policies=2, eval_num_rounds=1, eval_num_steps=1000000, explore_bonus_weight=0.01, extra_encoders=True, heal_bonus_weight=0.03, hidden_size=256, input_size=256, learner_weight=1.0, local_mode=False, map_size=128, maps_path='maps/train/', max_episode_length=1024, max_opponent_policies=0, meander_bonus_weight=0.02, num_agents=128, num_buffers=2, num_cores=None, num_envs=6, num_lstm_layers=0, num_maps=128, num_npcs=256, policy_store_dir=None, ppo_learning_rate=0.00015, ppo_training_batch_size=128, ppo_update_epochs=3, resilient_population=0.2, rollout_batch_size=32768, run_name='nmmo_20231024_235253', runs_dir='/content/drive/MyDrive/nmmo/runs', seed=1, spawn_immunity=20, sqrt_achievement_rewards=False, task_size=4096, tasks_path='reinforcement_learning/curriculum_with_embedding.pkl', track='rl', train_num_steps=10000000, use_serial_vecenv=False, wandb_entity=None, wandb_project=None)
INFO:root:Using policy store from /content/drive/MyDrive/nmmo/runs/nmmo_20231024_235253/policy_store
Allocated 93.70 MB to environments. Only accurate for Serial backend.
PolicyPool sample_weights: [128]
Allocated to storage - Pytorch: 0.00 GB, System: 3.41 GB
INFO:root:PolicyPool: Updated policies: dict_keys(['learner'])
Allocated during evaluation - Pytorch: 0.01 GB, System: 0.76 GB
Epoch: 0 - 32K steps - 0:00:38 Elapsed
	Steps Per Second: Env=1905, Inference=5271
	Train=756

Allocated during training - Pytorch: 0.07 GB, System: 3.61 GB
INFO:root:Saving policy to /content/drive/MyDrive/nmmo/runs/nmmo_20231024_235253/policy_store/nmmo_20231024_235253.000001
INFO:root:PolicyPool: Updated policies: dict_keys(['learner'])
Allocated during evaluation - Pytorch: 0.00 GB, System: 0.04 GB
Epoch: 1 - 65K steps - 0:02:23 Elapsed
	Steps Per Second: Env=847, Inference=7591
Traceback (most recent call last):
  File "/usr/local/lib/python3.10/dist-packages/pufferlib/utils.py", line 223, in wrapper
    result = func(*args, **kwargs)
  File "/content/drive/MyDrive/nmmo/baselines/reinforcement_learning/clean_pufferl.py", line 620, in train
    self.optimizer.step()
  File "/usr/local/lib/python3.10/dist-packages/torch/optim/optimizer.py", line 373, in wrapper
    out = func(*args, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/torch/optim/optimizer.py", line 76, in _use_grad
    ret = func(self, *args, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/torch/optim/adam.py", line 163, in step
    adam(
  File "/usr/local/lib/python3.10/dist-packages/torch/optim/adam.py", line 311, in adam
    func(params,
  File "/usr/local/lib/python3.10/dist-packages/torch/optim/adam.py", line 552, in _multi_tensor_adam
    bias_correction2 = [1 - beta2 ** _get_value(step) for step in device_state_steps]
  File "/usr/local/lib/python3.10/dist-packages/torch/optim/adam.py", line 552, in <listcomp>
    bias_correction2 = [1 - beta2 ** _get_value(step) for step in device_state_steps]
  File "/usr/local/lib/python3.10/dist-packages/torch/optim/optimizer.py", line 84, in _get_value
    def _get_value(x):
KeyboardInterrupt

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/content/drive/MyDrive/nmmo/baselines/train.py", line 135, in <module>
    reinforcement_learning_track(trainer, args)
  File "/content/drive/MyDrive/nmmo/baselines/train.py", line 68, in reinforcement_learning_track
    trainer.train(
  File "/usr/local/lib/python3.10/dist-packages/pufferlib/utils.py", line 223, in wrapper
    result = func(*args, **kwargs)
KeyboardInterrupt
[... repeated KeyboardInterrupt tracebacks from the vectorized environment worker processes, truncated ...]
^C

Comments

kyoung_whan_choe
6 months ago

The colab link has an additional section called “Generate a replay for a specific task”, which shows how to generate a replay of all agents doing the same task you specified. https://colab.research.google.com/drive/1AZt_eEGTEZrnX3iJC7jHw7lbBTNaoMoj#scrollTo=xtvJbCFFU53Z

