Blocking call for moving vehicles

Hi,

I have noticed that moving vehicles using SetTransform is not a blocking call. For data collection this is a slight problem, because one would assume the car is where it was set to be. Right now I poll the position after calling SetTransform, but even that is not enough: after the transform returned by GetTransform matches the intended pose, I still have to wait an additional 10 ms before the sensor readings match the pose. I have tried synchronous mode, but that didn't help either; I had to call Tick() twice plus an additional delay to get the correct sensor output.
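
For reference, my current async-mode workaround looks roughly like the sketch below (shown with the Python API for readability; my actual code uses the equivalent C++ calls, and the tolerance and delay values are just what happened to work on my machine):

```python
import time
import carla

client = carla.Client('localhost', 2000)
client.set_timeout(5.0)
world = client.get_world()

def teleport_and_wait(vehicle, transform, extra_delay=0.01):
    """Teleport the vehicle and wait until the server reports the new pose."""
    vehicle.set_transform(transform)
    # Poll until the transform reported by the server matches the requested
    # pose (within some tolerance).
    while vehicle.get_transform().location.distance(transform.location) > 0.05:
        time.sleep(0.001)
    # Even after the pose matches, the sensor output still lags behind,
    # so an extra ~10 ms sleep is needed before reading the sensors.
    time.sleep(extra_delay)
```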

I use the C++ API, but I assume this is also the case for Python, maybe even slightly worse because of the additional abstraction layer. Is there any way of doing this faster?
Thanks in advance.

Hello hos-b,

I guess you are in synchronous mode, right? Which version of CARLA are you using?

Hi @DSantosOlivan,

I did try synchronous mode to see whether it would resolve the issue, but it didn't. To be honest, that was a couple of months ago, and since it didn't help me with SetTransform, I didn't keep it. As far as I remember, I was using two Tick()s plus some arbitrary thread delay to make sure the agent was where I wanted it to be before capturing sensor data. I have since switched back to async mode. I also wasn't sure how sensor_tick would interact with me moving cars and ticking the simulator, so just to be sure, I let the simulator run at full speed.

Currently I move the agents, wait x milliseconds, and then save the last frame from all the sensors. If I don't wait, I get the sensor output from the previous pose. I use two PCs:

  1. old laptop for testing, 100ms delay after SetTransform, CARLA 0.9.10
  2. remote & capable PC for collection, 10ms delay after SetTransform, CARLA 0.9.11

I haven’t removed the delay on the remote PC. I wanted to be on the safe side since my data collection takes a couple of days. Should a single Tick() after SetTransform() suffice to move a car in the simulator?

In asynchronous mode, SetTransform is not going to be a blocking call: since the server and the client run separately, it makes no sense to block on that call. There will always be a delay, and it will be bigger or smaller depending on the PC, the network, and the speed of the server.
If you want to be sure that your commands are applied instantaneously, you need to use synchronous mode. It used to introduce a delay of one extra frame due to the simulator's pipeline, but as of release 0.9.11 the pipeline has been improved and the commands are now applied instantaneously. You can check the specifics in the release notes. Be aware that the reception of the sensor data is not synchronized with the world tick; that synchronization needs to be done on the client side. Check the script 'sensor_synchronization.py' for an example of how to do that.
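
Roughly, the pattern in that script looks like the sketch below (a simplified Python sketch, not the exact contents of 'sensor_synchronization.py'; the blueprint ids and the fixed time step are just example values). The sensor callback only pushes data into a queue, and after every world.tick() the client waits until it has received the data that belongs to that exact frame:

```python
import queue
import random
import carla

client = carla.Client('localhost', 2000)
client.set_timeout(5.0)
world = client.get_world()

# Enable synchronous mode with a fixed time step.
settings = world.get_settings()
settings.synchronous_mode = True
settings.fixed_delta_seconds = 0.1
world.apply_settings(settings)

blueprints = world.get_blueprint_library()
spawn_points = world.get_map().get_spawn_points()

vehicle = world.spawn_actor(blueprints.find('vehicle.tesla.model3'),
                            random.choice(spawn_points))
camera = world.spawn_actor(blueprints.find('sensor.camera.rgb'),
                           carla.Transform(carla.Location(x=1.5, z=2.4)),
                           attach_to=vehicle)

# The callback only enqueues the data; all synchronization happens client-side.
sensor_queue = queue.Queue()
camera.listen(lambda image: sensor_queue.put(image))

vehicle.set_transform(random.choice(spawn_points))  # applied on the next tick
frame = world.tick()   # blocks until the server computes the frame, returns its id

# Wait for the image that belongs to this exact frame.
while True:
    image = sensor_queue.get(timeout=2.0)
    if image.frame == frame:
        break
```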

That makes sense. I arbitrarily added the second Tick() & the delay in 0.9.10 and didn’t thoroughly test the limit. Thank you for the explanation.
I read the example but still don't quite get the interplay between sensor_tick and the synchronous mode's Tick(). There are 3 scenarios that interest me; a rough code sketch of the setup I have in mind follows the list.

Let’s assume:

  • the simulator is set to sync mode with fixed_delta_seconds = 0.1, i.e. with each Tick(), the simulator advances 100 ms.
  • Move() moves the agent to a random spot on the map
  • Callback() is the sensor callback function for each camera
  1. sensor_tick=0.1: If I Move() then Tick(), the vehicle will be at the new pose after the blocking Tick() call. Callback() is also invoked once. Does this image belong to the old pose or the new one?
  2. sensor_tick=0.05: Now if I Move() then Tick(), Callback() is invoked twice for each camera. Which pose would these two images belong to?
  3. sensor_tick=0.2: Move() -> Tick() -> Move() -> Tick() invokes Callback() once. Where was the image taken?
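
To make the scenarios concrete, this is roughly what I mean in code (Python for brevity; it assumes a world and vehicle already set up in synchronous mode with fixed_delta_seconds = 0.1 as in your snippet, and only the sensor_tick value changes between the three cases):

```python
import random
import carla

# Camera whose capture rate is controlled by the sensor_tick attribute.
bp = world.get_blueprint_library().find('sensor.camera.rgb')
bp.set_attribute('sensor_tick', '0.1')   # 0.1, 0.05 or 0.2 depending on the scenario
camera = world.spawn_actor(bp, carla.Transform(carla.Location(x=1.5, z=2.4)),
                           attach_to=vehicle)

def Callback(image):
    # Sensor callback for each camera: just note which frame the image comes from.
    print('image from frame', image.frame)

camera.listen(Callback)

def Move():
    # Teleport the agent to a random spawn point on the map.
    vehicle.set_transform(random.choice(world.get_map().get_spawn_points()))

Move()
world.tick()   # scenarios 1 and 2: which pose do the resulting image(s) show?
Move()
world.tick()   # scenario 3 (sensor_tick=0.2): where was the single image taken?
```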

Thanks in advance and sorry for the long question.

Check the diagram that we posted with the 0.9.11 release; there you can see that the sensor computation is done once per server tick, so if you select a sensor_tick < dt you get one callback per tick, NOT more. As the sensor information is rendered/computed after the physics step, you will get the data from after the move. In any case, along with the sensor information you also get the timestamp, from which you can tell when the information was taken.
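
For example, every piece of sensor data carries that metadata, so you can log it directly in the callback (a minimal sketch; the attributes below are the standard carla.SensorData fields):

```python
def callback(image):
    # Each SensorData object records when and where it was captured.
    print('frame:', image.frame)             # server frame at capture time
    print('sim time [s]:', image.timestamp)  # simulation timestamp
    print('sensor pose:', image.transform)   # sensor transform at capture time
```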

That cleared it up, thanks!