embodichain.lab.sim.sensors#
Classes
SensorCfg – Configuration class for sensors.
BaseSensor – Base class for sensor abstraction in the simulation engine.
CameraCfg – Configuration class for Camera.
Camera – Base class for sensor abstraction in the simulation engine.
StereoCameraCfg – Configuration class for StereoCamera.
StereoCamera – Base class for sensor abstraction in the simulation engine.
Sensor#
- class embodichain.lab.sim.sensors.BaseSensor[source]#
Bases: BatchEntity
Base class for sensor abstraction in the simulation engine.
Sensors should inherit from this class and implement the update and get_data methods.
Methods:
__init__(config[, device])
destroy() – Destroy all entities managed by this batch entity.
get_arena_pose([to_matrix]) – Get the pose of the sensor in the arena frame.
get_data([copy]) – Retrieve data from the sensor.
reset([env_ids]) – Reset the entity to its initial state.
update(**kwargs) – Update the sensor state based on the current simulation state.
- destroy()#
Destroy all entities managed by this batch entity.
- Return type:
None
- abstract get_arena_pose(to_matrix=False)[source]#
Get the pose of the sensor in the arena frame.
- Parameters:
to_matrix (
bool) – If True, return the pose as a 4x4 transformation matrix.- Return type:
Tensor- Returns:
A tensor representing the pose of the sensor in the arena frame.
- get_data(copy=True)[source]#
Retrieve data from the sensor.
- Parameters:
copy (bool) – If True, return a copy of the data buffer. Defaults to True.
- Return type:
Dict[str, Tensor]
- Returns:
The data collected by the sensor.
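As an illustration only, a minimal polling sketch that uses just the methods documented above; `sensor` is assumed to be an already-constructed BaseSensor subclass (for example a Camera) in a simulation that has been stepped:

    from typing import Dict

    import torch


    def poll_sensor(sensor) -> Dict[str, torch.Tensor]:
        # Refresh the sensor buffers from the current simulation state.
        sensor.update()
        # Pose of the sensor in the arena frame, returned here as 4x4 matrices.
        pose = sensor.get_arena_pose(to_matrix=True)
        print("arena pose shape:", tuple(pose.shape))
        # Return a copy of the data buffers.
        return sensor.get_data(copy=True)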
- class embodichain.lab.sim.sensors.SensorCfg[source]#
Configuration class for sensors.
This class can be extended to include specific sensor configurations.
Classes:
OffsetCfg – Configuration of the sensor offset relative to the parent frame.
Methods:
from_dict(init_dict) – Initialize the configuration from a dictionary.
get_data_types() – Get the data types supported by this sensor configuration.
Attributes:
init_local_pose – 4x4 transformation matrix of the root in local frame.
init_pos – Position of the root in simulation world frame.
init_rot – Euler angles (in degrees) of the root in simulation world frame.
- class OffsetCfg[source]#
Configuration of the sensor offset relative to the parent frame.
Attributes:
parent – Name of the parent frame.
pos – Position of the sensor in the parent frame.
quat – Orientation of the sensor in the parent frame as a quaternion (w, x, y, z).
- parent: Optional[str]#
Name of the parent frame. If not specified, the sensor will be placed in the arena frame. This is usually the case when the sensor is not attached to any specific object, such as a link of a robot arm.
- pos: Tuple[float, float, float]#
Position of the sensor in the parent frame. Defaults to (0.0, 0.0, 0.0).
- quat: Tuple[float, float, float, float]#
Orientation of the sensor in the parent frame as a quaternion (w, x, y, z). Defaults to (1.0, 0.0, 0.0, 0.0).
- classmethod from_dict(init_dict)[source]#
Initialize the configuration from a dictionary.
- Return type:
- abstract get_data_types()[source]#
Get the data types supported by this sensor configuration.
- Return type:
List[str]
- Returns:
A list of data types that this sensor configuration supports.
- init_local_pose: Optional[np.ndarray]#
4x4 transformation matrix of the root in local frame. If specified, it will override init_pos and init_rot.
- init_pos: tuple[float, float, float]#
Position of the root in simulation world frame. Defaults to (0.0, 0.0, 0.0).
- init_rot: tuple[float, float, float]#
Euler angles (in degrees) of the root in simulation world frame. Defaults to (0.0, 0.0, 0.0).
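For illustration, a sketch of filling in the offset fields documented above. The keyword-argument constructor and the parent frame name are assumptions (configuration classes of this kind are typically dataclass-like); only the parent, pos, and quat fields come from this reference:

    from embodichain.lab.sim.sensors import SensorCfg

    # Hypothetical offset: 5 cm along the z-axis of a robot wrist link frame.
    offset = SensorCfg.OffsetCfg(
        parent="wrist_link",          # hypothetical parent frame name
        pos=(0.0, 0.0, 0.05),         # position in the parent frame
        quat=(1.0, 0.0, 0.0, 0.0),    # identity orientation (w, x, y, z)
    )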
Camera#
- class embodichain.lab.sim.sensors.Camera[source]#
Bases: BaseSensor
Base class for sensor abstraction in the simulation engine.
Sensors should inherit from this class and implement the update and get_data methods.
Methods:
__init__(config[, device])
destroy() – Destroy all entities managed by this batch entity.
get_arena_pose([to_matrix]) – Get the pose of the sensor in the arena frame.
get_data([copy]) – Retrieve data from the sensor.
get_intrinsics() – Get the camera intrinsics.
get_local_pose([to_matrix]) – Get the local pose of the camera.
look_at(eye, target[, up, env_ids]) – Set the camera to look at a target point.
reset([env_ids]) – Reset the entity to its initial state.
set_intrinsics(intrinsics[, env_ids]) – Set the camera intrinsics.
set_local_pose(pose[, env_ids]) – Set the local pose of the camera.
update(**kwargs) – Update the sensor data.
Attributes:
is_rt_enabled – Check if Ray Tracing rendering backend is enabled in the default dexsim world.
- destroy()#
Destroy all entities managed by this batch entity.
- Return type:
None
- get_arena_pose(to_matrix=False)[source]#
Get the pose of the sensor in the arena frame.
- Parameters:
to_matrix (bool) – If True, return the pose as a 4x4 transformation matrix.
- Return type:
Tensor
- Returns:
A tensor representing the pose of the sensor in the arena frame.
- get_data(copy=True)#
Retrieve data from the sensor.
- Parameters:
copy (bool) – If True, return a copy of the data buffer. Defaults to True.
- Return type:
Dict[str, Tensor]
- Returns:
The data collected by the sensor.
- get_intrinsics()[source]#
Get the camera intrinsics.
- Returns:
The camera intrinsics with shape (N, 3, 3).
- Return type:
torch.Tensor
- get_local_pose(to_matrix=False)[source]#
Get the local pose of the camera.
- Parameters:
to_matrix (bool) – If True, return the pose as a 4x4 matrix. If False, return as a quaternion.
- Returns:
The local pose of the camera.
- Return type:
torch.Tensor
- property is_rt_enabled: bool#
Check if Ray Tracing rendering backend is enabled in the default dexsim world.
- Returns:
True if Ray Tracing rendering is enabled, False otherwise.
- Return type:
bool
- look_at(eye, target, up=None, env_ids=None)[source]#
Set the camera to look at a target point.
- Parameters:
eye (torch.Tensor) – The position of the camera (eye) with shape (N, 3).
target (torch.Tensor) – The point the camera should look at (target) with shape (N, 3).
up (Optional[torch.Tensor]) – The up direction vector. If None, defaults to [0, 0, 1].
env_ids (Optional[Sequence[int]]) – The environment IDs to set the look at for. If None, set for all environments.
- Return type:
None
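A hedged usage sketch for look_at; `camera` is assumed to be a constructed Camera and `center` a per-environment target with shape (N, 3) in arena coordinates:

    import torch


    def frame_point(camera, center: torch.Tensor, distance: float = 1.0) -> None:
        # Place the eye at a fixed offset from the target and aim the camera at it.
        offset = torch.tensor([distance, 0.0, distance], device=center.device)
        eye = center + offset
        # up=None falls back to the documented default of [0, 0, 1].
        camera.look_at(eye=eye, target=center, up=None, env_ids=None)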
- reset(env_ids=None)[source]#
Reset the entity to its initial state.
- Parameters:
env_ids (Optional[Sequence[int]]) – The environment IDs to reset. If None, reset all environments.
- Return type:
None
- set_intrinsics(intrinsics, env_ids=None)[source]#
Set the camera intrinsics.
- Parameters:
intrinsics (torch.Tensor) – The camera intrinsics with shape (4,) / (3, 3) or (N, 4) / (N, 3, 3).
env_ids (Optional[Sequence[int]], optional) – The environment ids to set the intrinsics. If None, set for all environments. Defaults to None.
- Return type:
None
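For example, a pinhole intrinsics matrix can be assembled and broadcast to all environments. The compact 4-element form is also accepted, but its element order is not spelled out in this reference, so the unambiguous (3, 3) form is used in this sketch (assuming `camera` is a constructed Camera):

    import torch


    def set_pinhole_intrinsics(camera, fx: float, fy: float, cx: float, cy: float) -> None:
        # Standard 3x3 pinhole intrinsics matrix K.
        K = torch.tensor([
            [fx, 0.0, cx],
            [0.0, fy, cy],
            [0.0, 0.0, 1.0],
        ])
        # env_ids=None applies the same intrinsics to all environments.
        camera.set_intrinsics(K, env_ids=None)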
- set_local_pose(pose, env_ids=None)[source]#
Set the local pose of the camera.
Note: The pose should be given in the OpenGL coordinate system, i.e., Y is up and Z is forward.
- Parameters:
pose (torch.Tensor) – The local pose to set, should be a 4x4 transformation matrix.
env_ids (Optional[Sequence[int]]) – The environment IDs to set the pose for. If None, set for all environments.
- Return type:
None
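A sketch of assembling the required 4x4 transform from a rotation matrix and a translation vector; `camera` is assumed to be a constructed Camera, and the pose is interpreted in the OpenGL convention noted above:

    import torch


    def set_camera_local_pose(camera, rotation: torch.Tensor, translation: torch.Tensor) -> None:
        # Build a homogeneous 4x4 transform from a (3, 3) rotation and a (3,) translation.
        pose = torch.eye(4, dtype=torch.float32)
        pose[:3, :3] = rotation
        pose[:3, 3] = translation
        camera.set_local_pose(pose, env_ids=None)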
- update(**kwargs)[source]#
Update the sensor data.
- The supported data types are:
color: RGBA images with shape (B, H, W, 4) and dtype torch.uint8
depth: Depth images with shape (B, H, W) and dtype torch.float32
mask: Instance segmentation masks with shape (B, H, W) and dtype torch.int32
normal: Normal images with shape (B, H, W, 3) and dtype torch.float32
position: Position images with shape (B, H, W, 3) and dtype torch.float32
- Parameters:
**kwargs – Additional keyword arguments for sensor update.
fetch_only (bool) – If True, only fetch the data from dexsim internal frame buffer without performing rendering.
- Return type:
None
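Putting the pieces together, a minimal capture sketch; `camera` is assumed to be a constructed Camera whose configuration enables the color and depth data types listed above:

    def capture_rgbd(camera):
        # Render the latest frame and copy the data buffers.
        camera.update()
        data = camera.get_data(copy=True)
        color = data["color"]   # (B, H, W, 4), torch.uint8 (RGBA)
        depth = data["depth"]   # (B, H, W), torch.float32
        return color, depth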
- class embodichain.lab.sim.sensors.CameraCfg[source]#
Bases: SensorCfg
Configuration class for Camera.
Classes:
ExtrinsicsCfg – Configuration class for camera extrinsics.
OffsetCfg – Configuration of the sensor offset relative to the parent frame.
Methods:
from_dict(init_dict) – Initialize the configuration from a dictionary.
get_data_types() – Get the data types supported by this sensor configuration.
get_view_attrib() – Get the view attributes for the camera.
Attributes:
init_local_pose – 4x4 transformation matrix of the root in local frame.
init_pos – Position of the root in simulation world frame.
init_rot – Euler angles (in degrees) of the root in simulation world frame.
- class ExtrinsicsCfg[source]#
Bases: OffsetCfg
Configuration class for camera extrinsics.
The extrinsics define the position and orientation of the camera in the 3D world. If eye, target, and up are provided, they will be used to compute the extrinsics. Otherwise, the position and orientation will be set to the defaults.
Attributes:
parent – Name of the parent frame.
pos – Position of the sensor in the parent frame.
quat – Orientation of the sensor in the parent frame as a quaternion (w, x, y, z).
up – Alternative way to specify the camera extrinsics using eye, target, and up vectors.
- parent: Optional[str]#
Name of the parent frame. If not specified, the sensor will be placed in the arena frame. This is usually the case when the sensor is not attached to any specific object, such as a link of a robot arm.
- pos: Tuple[float, float, float]#
Position of the sensor in the parent frame. Defaults to (0.0, 0.0, 0.0).
- quat: Tuple[float, float, float, float]#
Orientation of the sensor in the parent frame as a quaternion (w, x, y, z). Defaults to (1.0, 0.0, 0.0, 0.0).
- up: Optional[Tuple[float, float, float]]#
Alternative way to specify the camera extrinsics using eye, target, and up vectors.
- class OffsetCfg#
Bases: object
Configuration of the sensor offset relative to the parent frame.
Attributes:
parent – Name of the parent frame.
pos – Position of the sensor in the parent frame.
quat – Orientation of the sensor in the parent frame as a quaternion (w, x, y, z).
- parent: Optional[str]#
Name of the parent frame. If not specified, the sensor will be placed in the arena frame. This is usually the case when the sensor is not attached to any specific object, such as a link of a robot arm.
- pos: Tuple[float, float, float]#
Position of the sensor in the parent frame. Defaults to (0.0, 0.0, 0.0).
- quat: Tuple[float, float, float, float]#
Orientation of the sensor in the parent frame as a quaternion (w, x, y, z). Defaults to (1.0, 0.0, 0.0, 0.0).
- classmethod from_dict(init_dict)#
Initialize the configuration from a dictionary.
- Return type:
- get_data_types()[source]#
Get the data types supported by this sensor configuration.
- Return type:
List[str]
- Returns:
A list of data types that this sensor configuration supports.
- get_view_attrib()[source]#
Get the view attributes for the camera.
The camera view which is used to render the scene. The default view attributes for the camera are [COLOR, DEPTH, MASK]. The supported view attributes are:
COLOR: RGBA images
DEPTH: Depth images
MASK: Instance segmentation masks
NORMAL: Normal images
POSITION: Position images with 3D coordinates.
- Return type:
ViewFlags
- Returns:
The view attributes for the camera.
- init_local_pose: Optional[np.ndarray]#
4x4 transformation matrix of the root in local frame. If specified, it will override init_pos and init_rot.
- init_pos: tuple[float, float, float]#
Position of the root in simulation world frame. Defaults to (0.0, 0.0, 0.0).
- init_rot: tuple[float, float, float]#
Euler angles (in degrees) of the root in simulation world frame. Defaults to (0.0, 0.0, 0.0).
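As a hedged sketch, the eye/target/up route described in the ExtrinsicsCfg docstring might be filled in as follows; the keyword-argument constructor and the eye/target field names are assumptions (only parent, pos, quat, and up are listed explicitly above):

    from embodichain.lab.sim.sensors import CameraCfg

    # Hypothetical extrinsics: look at the arena origin from one meter away.
    extrinsics = CameraCfg.ExtrinsicsCfg(
        eye=(1.0, 0.0, 0.5),       # assumed field name: camera position
        target=(0.0, 0.0, 0.0),    # assumed field name: point to look at
        up=(0.0, 0.0, 1.0),        # world up direction (documented above)
    )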
Stereo Camera#
- class embodichain.lab.sim.sensors.StereoCamera[source]#
Bases: Camera
Base class for sensor abstraction in the simulation engine.
Sensors should inherit from this class and implement the update and get_data methods.
Methods:
__init__(config[, device])
destroy() – Destroy all entities managed by this batch entity.
get_arena_pose([to_matrix]) – Get the pose of the sensor in the arena frame.
get_data([copy]) – Retrieve data from the sensor.
get_intrinsics() – Get the camera intrinsics for both left and right cameras.
get_left_right_arena_pose() – Get the local pose of the left and right cameras.
get_local_pose([to_matrix]) – Get the local pose of the camera.
look_at(eye, target[, up, env_ids]) – Set the camera to look at a target point.
reset([env_ids]) – Reset the entity to its initial state.
set_intrinsics(intrinsics[, ...]) – Set the camera intrinsics for both left and right cameras.
set_local_pose(pose[, env_ids]) – Set the local pose of the camera.
update(**kwargs) – Update the sensor data.
Attributes:
is_rt_enabled – Check if Ray Tracing rendering backend is enabled in the default dexsim world.
- destroy()#
Destroy all entities managed by this batch entity.
- Return type:
None
- get_arena_pose(to_matrix=False)#
Get the pose of the sensor in the arena frame.
- Parameters:
to_matrix (bool) – If True, return the pose as a 4x4 transformation matrix.
- Return type:
Tensor
- Returns:
A tensor representing the pose of the sensor in the arena frame.
- get_data(copy=True)#
Retrieve data from the sensor.
- Parameters:
copy (
bool) – If True, return a copy of the data buffer. Defaults to True.- Return type:
Dict[str,Tensor]- Returns:
The data collected by the sensor.
- get_intrinsics()[source]#
Get the camera intrinsics for both left and right cameras.
- Returns:
The intrinsics for the left and right cameras, each with shape (B, 3, 3).
- Return type:
Tuple[torch.Tensor, torch.Tensor]
- get_left_right_arena_pose()[source]#
Get the local pose of the left and right cameras.
- Returns:
The local pose of the left camera with shape (num_envs, 4, 4).
- Return type:
torch.Tensor
- get_local_pose(to_matrix=False)#
Get the local pose of the camera.
- Parameters:
to_matrix (bool) – If True, return the pose as a 4x4 matrix. If False, return as a quaternion.
- Returns:
The local pose of the camera.
- Return type:
torch.Tensor
- property is_rt_enabled: bool#
Check if Ray Tracing rendering backend is enabled in the default dexsim world.
- Returns:
True if Ray Tracing rendering is enabled, False otherwise.
- Return type:
bool
- look_at(eye, target, up=None, env_ids=None)#
Set the camera to look at a target point.
- Parameters:
eye (torch.Tensor) – The position of the camera (eye) with shape (N, 3).
target (torch.Tensor) – The point the camera should look at (target) with shape (N, 3).
up (Optional[torch.Tensor]) – The up direction vector. If None, defaults to [0, 0, 1].
env_ids (Optional[Sequence[int]]) – The environment IDs to set the look at for. If None, set for all environments.
- Return type:
None
- reset(env_ids=None)#
Reset the entity to its initial state.
- Parameters:
env_ids (Optional[Sequence[int]]) – The environment IDs to reset. If None, reset all environments.
- Return type:
None
- set_intrinsics(intrinsics, right_intrinsics=None, env_ids=None)[source]#
Set the camera intrinsics for both left and right cameras.
- Parameters:
intrinsics (torch.Tensor) – The intrinsics for the left camera with shape (4,) / (3, 3) or (B, 4) / (B, 3, 3).
right_intrinsics (Optional[torch.Tensor], optional) – The intrinsics for the right camera with shape (4,) / (3, 3) or (B, 4) / (B, 3, 3). If None, use the same intrinsics as the left camera. Defaults to None.
env_ids (Optional[Sequence[int]], optional) – The environment ids to set the intrinsics. If None, set for all environments. Defaults to None.
- Return type:
None
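A brief sketch for the stereo case; `stereo_camera` is assumed to be a constructed StereoCamera, and passing right_intrinsics=None reuses the left intrinsics as documented above:

    import torch


    def set_stereo_intrinsics(stereo_camera, K_left: torch.Tensor, K_right: torch.Tensor = None) -> None:
        # K_left / K_right in the (3, 3) form; the compact 4-element form is also accepted.
        stereo_camera.set_intrinsics(K_left, right_intrinsics=K_right, env_ids=None)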
- set_local_pose(pose, env_ids=None)#
Set the local pose of the camera.
Note: The pose should be given in the OpenGL coordinate system, i.e., Y is up and Z is forward.
- Parameters:
pose (torch.Tensor) – The local pose to set, should be a 4x4 transformation matrix.
env_ids (Optional[Sequence[int]]) – The environment IDs to set the pose for. If None, set for all environments.
- Return type:
None
- update(**kwargs)[source]#
Update the sensor data.
- The supported data types are:
color: RGBA images with shape (B, H, W, 4) and dtype torch.uint8
depth: Depth images with shape (B, H, W, 1) and dtype torch.float32
mask: Instance segmentation masks with shape (B, H, W, 1) and dtype torch.int32
normal: Normal images with shape (B, H, W, 3) and dtype torch.float32
position: Position images with shape (B, H, W, 3) and dtype torch.float32
disparity: Disparity images with shape (B, H, W, 1) and dtype torch.float32
- Parameters:
**kwargs – Additional keyword arguments for sensor update.
fetch_only (bool) – If True, only fetch the data from dexsim internal frame buffer without performing rendering.
- Return type:
None
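A capture sketch mirroring the mono camera example above; `stereo_camera` is assumed to be a constructed StereoCamera, and note that this reference lists the available data types but not how left/right images are keyed in the returned dictionary:

    def capture_stereo(stereo_camera):
        # Render the latest stereo frame and copy the data buffers.
        stereo_camera.update()
        data = stereo_camera.get_data(copy=True)
        # Disparity is only present if it is among the configured data types.
        disparity = data.get("disparity")   # (B, H, W, 1), torch.float32
        return data, disparity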
- class embodichain.lab.sim.sensors.StereoCameraCfg[source]#
Bases: CameraCfg
Configuration class for StereoCamera.
Classes:
ExtrinsicsCfg – Configuration class for camera extrinsics.
OffsetCfg – Configuration of the sensor offset relative to the parent frame.
Methods:
from_dict(init_dict) – Initialize the configuration from a dictionary.
get_data_types() – Get the data types supported by this sensor configuration.
get_view_attrib() – Get the view attributes for the camera.
Attributes:
init_local_pose – 4x4 transformation matrix of the root in local frame.
init_pos – Position of the root in simulation world frame.
init_rot – Euler angles (in degrees) of the root in simulation world frame.
left_to_right – Get the transformation matrix from left camera to right camera.
right_to_left – Get the transformation matrix from right camera to left camera.
- class ExtrinsicsCfg#
Bases: OffsetCfg
Configuration class for camera extrinsics.
The extrinsics define the position and orientation of the camera in the 3D world. If eye, target, and up are provided, they will be used to compute the extrinsics. Otherwise, the position and orientation will be set to the defaults.
Attributes:
parent – Name of the parent frame.
pos – Position of the sensor in the parent frame.
quat – Orientation of the sensor in the parent frame as a quaternion (w, x, y, z).
up – Alternative way to specify the camera extrinsics using eye, target, and up vectors.
- parent: Optional[str]#
Name of the parent frame. If not specified, the sensor will be placed in the arena frame. This is usually the case when the sensor is not attached to any specific object, such as a link of a robot arm.
- pos: Tuple[float, float, float]#
Position of the sensor in the parent frame. Defaults to (0.0, 0.0, 0.0).
- quat: Tuple[float, float, float, float]#
Orientation of the sensor in the parent frame as a quaternion (w, x, y, z). Defaults to (1.0, 0.0, 0.0, 0.0).
- up: Optional[Tuple[float, float, float]]#
Alternative way to specify the camera extrinsics using eye, target, and up vectors.
- class OffsetCfg#
Bases: object
Configuration of the sensor offset relative to the parent frame.
Attributes:
parent – Name of the parent frame.
pos – Position of the sensor in the parent frame.
quat – Orientation of the sensor in the parent frame as a quaternion (w, x, y, z).
- parent: Optional[str]#
Name of the parent frame. If not specified, the sensor will be placed in the arena frame. This is usually the case when the sensor is not attached to any specific object, such as a link of a robot arm.
- pos: Tuple[float, float, float]#
Position of the sensor in the parent frame. Defaults to (0.0, 0.0, 0.0).
- quat: Tuple[float, float, float, float]#
Orientation of the sensor in the parent frame as a quaternion (w, x, y, z). Defaults to (1.0, 0.0, 0.0, 0.0).
- classmethod from_dict(init_dict)#
Initialize the configuration from a dictionary.
- Return type:
- get_data_types()[source]#
Get the data types supported by this sensor configuration.
- Return type:
List[str]
- Returns:
A list of data types that this sensor configuration supports.
- get_view_attrib()#
Get the view attributes for the camera.
The camera view which is used to render the scene. The default view attributes for the camera are [COLOR, DEPTH, MASK]. The supported view attributes are:
COLOR: RGBA images
DEPTH: Depth images
MASK: Instance segmentation masks
NORMAL: Normal images
POSITION: Position images with 3D coordinates.
- Return type:
ViewFlags
- Returns:
The view attributes for the camera.
- init_local_pose: Optional[np.ndarray]#
4x4 transformation matrix of the root in local frame. If specified, it will override init_pos and init_rot.
- init_pos: tuple[float, float, float]#
Position of the root in simulation world frame. Defaults to (0.0, 0.0, 0.0).
- init_rot: tuple[float, float, float]#
Euler angles (in degrees) of the root in simulation world frame. Defaults to (0.0, 0.0, 0.0).
- property left_to_right: Tensor#
Get the transformation matrix from left camera to right camera.
- property right_to_left: Tensor#
Get the transformation matrix from right camera to left camera.
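For example, the stereo baseline can be recovered from the translation part of left_to_right (a sketch, assuming the property returns a 4x4 homogeneous transform or a batch of them):

    import torch


    def stereo_baseline(cfg) -> torch.Tensor:
        # Magnitude of the translation between the left and right camera frames.
        T = cfg.left_to_right               # (..., 4, 4) homogeneous transform(s)
        return torch.linalg.norm(T[..., :3, 3], dim=-1)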