Question
WARNING: This is a development server. Do not use it in a production deployment. Use a production WSGI server instead.
- Running on all addresses (0.0.0.0)
- Running on http://127.0.0.1:5801
- Running on http://192.168.120.33:5801
Press CTRL+C to quit
read http data cost 0.03390765190124512
init reset model!!!
/home/z/anaconda3/envs/internnav/lib/python3.9/site-packages/transformers/generation/configuration_utils.py:631: UserWarning: `do_sample` is set to `False`. However, `temperature` is set to `0.1` -- this flag is only used in sample-based generation modes. You should set `do_sample=True` or unset `temperature`.
warnings.warn(
/home/z/anaconda3/envs/internnav/lib/python3.9/site-packages/transformers/generation/configuration_utils.py:636: UserWarning: `do_sample` is set to `False`. However, `top_p` is set to `0.001` -- this flag is only used in sample-based generation modes. You should set `do_sample=True` or unset `top_p`.
warnings.warn(
/home/z/anaconda3/envs/internnav/lib/python3.9/site-packages/transformers/generation/configuration_utils.py:653: UserWarning: `do_sample` is set to `False`. However, `top_k` is set to `1` -- this flag is only used in sample-based generation modes. You should set `do_sample=True` or unset `top_k`.
warnings.warn(
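The three `configuration_utils.py` warnings above all have the same cause: `do_sample=False` (greedy decoding) combined with sampling-only knobs (`temperature`, `top_p`, `top_k`). A minimal self-contained sketch of the fix the warning suggests, using a plain dict to stand in for the model's generation config (the real object in transformers would be `model.generation_config`):

```python
# The config that triggers the warnings: greedy decoding, but with
# sampling-only knobs still set.
generation_config = {
    "do_sample": False,
    "temperature": 0.1,
    "top_p": 0.001,
    "top_k": 1,
}

# One way to silence the warnings: when do_sample is False, drop the
# sampling-only knobs (they are ignored in greedy decoding anyway).
if not generation_config["do_sample"]:
    for key in ("temperature", "top_p", "top_k"):
        generation_config.pop(key, None)

print(generation_config)  # {'do_sample': False}
```

The alternative the warning mentions, setting `do_sample=True`, changes decoding behavior and is probably not what a deterministic navigation policy wants; note that `do_sample=False` with `top_k=1` was effectively greedy already, so the warnings are harmless noise here.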
huggingface/tokenizers: The current process just got forked, after parallelism has already been used. Disabling parallelism to avoid deadlocks...
To disable this warning, you can either:
- Avoid using `tokenizers` before the fork if possible
- Explicitly set the environment variable TOKENIZERS_PARALLELISM=(true | false)
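The tokenizers fork warning is benign but can be silenced exactly as it suggests, by setting the environment variable before the tokenizer is first used (e.g. at the top of the server script, before any HuggingFace imports):

```python
import os

# Set before transformers/tokenizers are imported or the process forks;
# "false" disables Rust-side parallelism and avoids the fork deadlock warning.
os.environ["TOKENIZERS_PARALLELISM"] = "false"

print(os.environ["TOKENIZERS_PARALLELISM"])  # false
```

Equivalently, `export TOKENIZERS_PARALLELISM=false` in the shell before launching the server.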
[2026-01-08 15:43:36,689] ERROR in app: Exception on /eval_dual [POST]
Traceback (most recent call last):
File "/home/z/anaconda3/envs/internnav/lib/python3.9/site-packages/flask/app.py", line 1511, in wsgi_app
response = self.full_dispatch_request()
File "/home/z/anaconda3/envs/internnav/lib/python3.9/site-packages/flask/app.py", line 919, in full_dispatch_request
rv = self.handle_user_exception(e)
File "/home/z/anaconda3/envs/internnav/lib/python3.9/site-packages/flask/app.py", line 917, in full_dispatch_request
rv = self.dispatch_request()
File "/home/z/anaconda3/envs/internnav/lib/python3.9/site-packages/flask/app.py", line 902, in dispatch_request
return self.ensure_sync(self.view_functions[rule.endpoint])(**view_args) # type: ignore[no-any-return]
File "/home/z/下载/InternNav/scripts/realworld/http_internvla_server.py", line 56, in eval_dual
dual_sys_output = agent.step(
File "/home/z/InternNav/internnav/agent/internvla_n1_agent_realworld.py", line 135, in step
self.output_action, self.output_latent, self.output_pixel = self.step_s2(
File "/home/z/InternNav/internnav/agent/internvla_n1_agent_realworld.py", line 230, in step_s2
outputs = self.model.generate(
File "/home/z/anaconda3/envs/internnav/lib/python3.9/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context
return func(*args, **kwargs)
File "/home/z/anaconda3/envs/internnav/lib/python3.9/site-packages/transformers/generation/utils.py", line 2460, in generate
result = self._sample(
File "/home/z/anaconda3/envs/internnav/lib/python3.9/site-packages/transformers/generation/utils.py", line 3426, in _sample
outputs = self(**model_inputs, return_dict=True)
File "/home/z/anaconda3/envs/internnav/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1739, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/home/z/anaconda3/envs/internnav/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1750, in _call_impl
return forward_call(*args, **kwargs)
File "/home/z/InternNav/internnav/model/basemodel/internvla_n1/internvla_n1.py", line 250, in forward
latent_queries = self.get_model().latent_queries.repeat(input_ids.shape[0], 1, 1)
File "/home/z/anaconda3/envs/internnav/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1928, in __getattr__
raise AttributeError(
AttributeError: 'InternVLAN1Model' object has no attribute 'latent_queries'
192.168.254.110 - - [08/Jan/2026 15:43:36] "POST /eval_dual HTTP/1.1" 500 -
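The traceback bottoms out in `torch.nn.Module.__getattr__`: `forward()` reads `self.get_model().latent_queries`, but nothing by that name was ever registered on the module, so torch raises `AttributeError`. This typically means the loaded checkpoint/config predates (or postdates) the code that expects `latent_queries`, so the attribute is never created in `__init__`. A minimal, torch-free sketch of the failure mode (all names hypothetical, standing in for the real `InternVLAN1Model`):

```python
class InternVLAN1ModelSketch:
    """Toy stand-in: forward() depends on an attribute that only exists
    if __init__ created it (in the real model, an nn.Parameter of
    learnable latent queries)."""

    def __init__(self, init_latent_queries: bool):
        if init_latent_queries:
            # Real model: self.latent_queries = nn.Parameter(...)
            self.latent_queries = [[0.0] * 4]

    def forward(self):
        # Mirrors the failing line: attribute lookup at generate() time.
        if not hasattr(self, "latent_queries"):
            raise AttributeError(
                "'InternVLAN1Model' object has no attribute 'latent_queries'"
            )
        return self.latent_queries


ok = InternVLAN1ModelSketch(init_latent_queries=True)
print(len(ok.forward()))  # 1

broken = InternVLAN1ModelSketch(init_latent_queries=False)
try:
    broken.forward()
except AttributeError as e:
    print(e)
```

Under this reading, the fix is on the loading side rather than in `generate()`: the code version and the checkpoint must agree on whether the model defines `latent_queries`, which matches the reporter's note below about running the latest code against the latest model.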
This change seems to require modifying the model architecture, and I can't figure out what is going wrong. I am running the latest code with the latest model.