
ValueError when replacing RingNetwork with HighwayNetwork as the network in tutorial03 #1048

Open
17150934 opened this issue Jul 28, 2021 · 1 comment

Comments

@17150934

In tutorial03, I replaced RingNetwork with HighwayNetwork as the network, but I get a ValueError.
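This is roughly the substitution I made (a sketch from memory: the import paths and the flow_params layout follow tutorial03, the remaining entries are left as in the tutorial, and the exact parameters in my script may differ):

    from flow.networks.highway import HighwayNetwork, ADDITIONAL_NET_PARAMS
    from flow.core.params import NetParams

    flow_params = dict(
        # all other entries (exp_tag, env_name, simulator, sim, env, veh, initial)
        # are kept exactly as in tutorial03; only the network is swapped
        network=HighwayNetwork,                                  # was RingNetwork
        net=NetParams(additional_params=ADDITIONAL_NET_PARAMS),  # highway defaults
    )

The full traceback: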

Traceback (most recent call last):
  File "/home/vcdc/anaconda3/envs/flow/lib/python3.7/site-packages/ray/tune/trial_runner.py", line 426, in _process_trial
    result = self.trial_executor.fetch_result(trial)
  File "/home/vcdc/anaconda3/envs/flow/lib/python3.7/site-packages/ray/tune/ray_trial_executor.py", line 378, in fetch_result
    result = ray.get(trial_future[0], DEFAULT_GET_TIMEOUT)
  File "/home/vcdc/anaconda3/envs/flow/lib/python3.7/site-packages/ray/worker.py", line 1457, in get
    raise value.as_instanceof_cause()
ray.exceptions.RayTaskError(ValueError): ray::PPO.train() (pid=10954, ip=192.168.6.112)
  File "python/ray/_raylet.pyx", line 636, in ray._raylet.execute_task
  File "python/ray/_raylet.pyx", line 619, in ray._raylet.execute_task.function_executor
  File "/home/vcdc/anaconda3/envs/flow/lib/python3.7/site-packages/ray/rllib/agents/trainer.py", line 444, in train
    raise e
  File "/home/vcdc/anaconda3/envs/flow/lib/python3.7/site-packages/ray/rllib/agents/trainer.py", line 433, in train
    result = Trainable.train(self)
  File "/home/vcdc/anaconda3/envs/flow/lib/python3.7/site-packages/ray/tune/trainable.py", line 176, in train
    result = self._train()
  File "/home/vcdc/anaconda3/envs/flow/lib/python3.7/site-packages/ray/rllib/agents/trainer_template.py", line 129, in _train
    fetches = self.optimizer.step()
  File "/home/vcdc/anaconda3/envs/flow/lib/python3.7/site-packages/ray/rllib/optimizers/multi_gpu_optimizer.py", line 140, in step
    self.num_envs_per_worker, self.train_batch_size)
  File "/home/vcdc/anaconda3/envs/flow/lib/python3.7/site-packages/ray/rllib/optimizers/rollout.py", line 29, in collect_samples
    next_sample = ray_get_and_free(fut_sample)
  File "/home/vcdc/anaconda3/envs/flow/lib/python3.7/site-packages/ray/rllib/utils/memory.py", line 33, in ray_get_and_free
    result = ray.get(object_ids)
ray.exceptions.RayTaskError(ValueError): ray::RolloutWorker.sample() (pid=11030, ip=192.168.6.112)
  File "/home/vcdc/anaconda3/envs/flow/lib/python3.7/site-packages/ray/rllib/utils/tf_run_builder.py", line 94, in run_timeline
    fetches = sess.run(ops, feed_dict=feed_dict)
  File "/home/vcdc/anaconda3/envs/flow/lib/python3.7/site-packages/tensorflow_core/python/client/session.py", line 956, in run
    run_metadata_ptr)
  File "/home/vcdc/anaconda3/envs/flow/lib/python3.7/site-packages/tensorflow_core/python/client/session.py", line 1156, in _run
    (np_val.shape, subfeed_t.name, str(subfeed_t.get_shape())))
ValueError: Cannot feed value of shape (1, 42) for Tensor 'default_policy/observation:0', which has shape '(?, 44)'

During handling of the above exception, another exception occurred:

ray::RolloutWorker.sample() (pid=11030, ip=192.168.6.112)
  File "python/ray/_raylet.pyx", line 633, in ray._raylet.execute_task
  File "python/ray/_raylet.pyx", line 634, in ray._raylet.execute_task
  File "python/ray/_raylet.pyx", line 636, in ray._raylet.execute_task
  File "python/ray/_raylet.pyx", line 619, in ray._raylet.execute_task.function_executor
  File "/home/vcdc/anaconda3/envs/flow/lib/python3.7/site-packages/ray/rllib/evaluation/rollout_worker.py", line 471, in sample
    batches = [self.input_reader.next()]
  File "/home/vcdc/anaconda3/envs/flow/lib/python3.7/site-packages/ray/rllib/evaluation/sampler.py", line 56, in next
    batches = [self.get_data()]
  File "/home/vcdc/anaconda3/envs/flow/lib/python3.7/site-packages/ray/rllib/evaluation/sampler.py", line 99, in get_data
    item = next(self.rollout_provider)
  File "/home/vcdc/anaconda3/envs/flow/lib/python3.7/site-packages/ray/rllib/evaluation/sampler.py", line 327, in _env_runner
    active_episodes)
  File "/home/vcdc/anaconda3/envs/flow/lib/python3.7/site-packages/ray/rllib/evaluation/sampler.py", line 551, in _do_policy_eval
    eval_results[k] = builder.get(v)
  File "/home/vcdc/anaconda3/envs/flow/lib/python3.7/site-packages/ray/rllib/utils/tf_run_builder.py", line 53, in get
    self.fetches, self.feed_dict))
ValueError: Error fetching: [<tf.Tensor 'default_policy/add:0' shape=(?, 1) dtype=float32>, {'action_prob': <tf.Tensor 'default_policy/Exp_1:0' shape=(?,) dtype=float32>, 'action_logp': <tf.Tensor 'default_policy/sub_2:0' shape=(?,) dtype=float32>, 'vf_preds': <tf.Tensor 'default_policy/Reshape:0' shape=(?,) dtype=float32>, 'behaviour_logits': <tf.Tensor 'default_policy/model/fc_out/BiasAdd:0' shape=(?, 2) dtype=float32>}], feed_dict={<tf.Tensor 'default_policy/observation:0' shape=(?, 44) dtype=float32>: [array([0.44409109, 0.44405716, 0.44405939, 0.44408473, 0.44404423,
       0.44407543, 0.44402707, 0.44407156, 0.44409773, 0.44407641,
       0.44411795, 0.44408998, 0.4440337 , 0.44362106, 0.44408406,
       0.44404458, 0.44409067, 0.3988301 , 0.44532913, 0.44532913,
       0.01731031, 0.09245801, 0.09184036, 0.09243479, 0.09184368,
       0.27578934, 0.27271432, 0.27350743, 0.27468785, 0.45574352,
       0.45488228, 0.45227   , 0.45656885, 0.63957073, 0.63730218,
       0.63756669, 0.63982425, 0.81881247, 0.81595532, 0.81928188,
       0.81876308, 0.91137627])], <tf.Tensor 'default_policy/action:0' shape=(?, 1) dtype=float32>: [array([0.7231164], dtype=float32)], <tf.Tensor 'default_policy/prev_reward:0' shape=(?,) dtype=float32>: [0.41458751306631525], <tf.Tensor 'default_policy/PlaceholderWithDefault:0' shape=() dtype=bool>: False}

When I set the horizon to a low value (e.g. 110), it runs correctly, but it produces the error above when I set the horizon to a high value (e.g. 200).
Why?
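My guess is that the observation length depends on how many vehicles are currently in the network: with a longer horizon, a vehicle apparently leaves the open highway, so the environment returns 42 values while the policy's observation placeholder was built for 44. As a purely hypothetical stopgap (not from the tutorial, and I have not verified that training still behaves sensibly), the observation could be zero-padded or truncated to the declared size before it is fed to the policy:

    import numpy as np

    EXPECTED_OBS_DIM = 44  # size of default_policy/observation:0 in my run

    def pad_observation(obs, size=EXPECTED_OBS_DIM):
        """Zero-pad or truncate obs to exactly size entries."""
        obs = np.asarray(obs, dtype=np.float32).ravel()
        if obs.shape[0] < size:
            obs = np.concatenate(
                [obs, np.zeros(size - obs.shape[0], dtype=np.float32)])
        return obs[:size]

    print(pad_observation(np.zeros(42)).shape)  # -> (44,)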

17150934 added the bug label Jul 28, 2021
@TrinhTuanHung2021

I also got this error, and it seems that no one currently supports this tool; the team hasn't commented in almost a year.
