RelatedMaterial
|
create: {"material_type"=>"Article", "availability"=>nil, "link"=>"https://doi.org/10.1007/s13042-023-02029-8", "uri"=>"10.1007/s13042-023-02029-8", "uri_type"=>"DOI", "citation"=>"Ding, Y., Xu, Y., Liu, Q. et al. Research on boundary-aware waters segmentation network for unmanned surface vehicles in complex inland waters. Int. J. Mach. Learn. & Cyber. (2023). https://doi.org/10.1007/s13042-023-02029-8", "dataset_id"=>341, "selected_type"=>"Article", "datacite_list"=>"IsCitedBy", "note"=>nil, "feature"=>nil}
|
2024-01-09T15:47:35Z
|
RelatedMaterial
|
update: {"note"=>[nil, ""], "feature"=>[nil, false]}
|
2023-12-18T17:27:03Z
|
RelatedMaterial
|
update: {"note"=>[nil, ""], "feature"=>[nil, false]}
|
2023-12-18T17:27:03Z
|
RelatedMaterial
|
update: {"uri"=>["", "https://www.youtube.com/channel/UCOU9e7xxqmL_s4QX6jsGZSw"], "uri_type"=>["", "URL"], "citation"=>["", "Visual-Inertial Canoe Dataset"], "datacite_list"=>["", "IsSupplementedBy"], "note"=>[nil, ""], "feature"=>[nil, false]}
|
2023-12-18T17:27:03Z
|
RelatedMaterial
|
update: {"link"=>["", "http://hdl.handle.net/2142/99412"], "uri"=>["", "hdl.handle.net/2142/99412"], "uri_type"=>["", "Handle"], "datacite_list"=>["", "IsSupplementTo"]}
|
2018-10-08T16:40:53Z
|
RelatedMaterial
|
update: {"uri"=>[nil, ""], "uri_type"=>[nil, ""], "datacite_list"=>[nil, ""]}
|
2018-02-09T16:19:37Z
|
Dataset
|
update: {"subject"=>[nil, "Technology and Engineering"]}
|
2018-02-09T16:19:37Z
|
RelatedMaterial
|
update: {"citation"=>["M. Miller. \"Hardware and Software Considerations for Monocular SLAM in a Riverine Environment.\" Master's thesis, University of Illinois at Urbana-Champaign, 2017.", "Martin Miller. \"Hardware and Software Considerations for Monocular SLAM in a Riverine Environment.\" Master's thesis, University of Illinois at Urbana-Champaign, 2017."]}
|
2018-01-30T14:59:33Z
|
RelatedMaterial
|
update: {"link"=>["", "https://doi.org/10.1177/0278364917751842"], "citation"=>["M. Miller, S.-J. Chung, and S. Hutchinson, \"The Visual-Inertial Canoe Dataset\". The International Journal of Robotics Research, to appear, 2017.", "Martin Miller, Soon-Jo Chung, and Seth Hutchinson. The Visual–Inertial Canoe Dataset. The International Journal of Robotics Research, 37(1):13--20, 2018."]}
|
2018-01-30T14:59:33Z
|
Dataset
|
update: {"description"=>["If you use this dataset, please cite the IJRR data paper using the above citation info.\r\n\r\nWe present a dataset collected from a canoe along the Sangamon River in Illinois. The canoe was equipped with a stereo camera, an IMU, and a GPS device, which provide visual data suitable for stereo or monocular applications, inertial measurements, and position data for ground truth. We recorded a canoe trip up and down the river for 44 minutes covering 2.7 km round trip. The dataset adds to those previously recorded in unstructured environments and is unique in that it is recorded on a river, which provides its own set of challenges and constraints that are described\r\nin this paper. The data is divided into subsets, which can be downloaded individually. \r\n\r\nVideo previews are available on Youtube:\r\nhttps://www.youtube.com/channel/UCOU9e7xxqmL_s4QX6jsGZSw\r\n\r\nThe information below can also be found in the README files provided in the 527 dataset and each of its subsets. The purpose of this document is to assist researchers in using this dataset.\r\n\r\nImages\r\n======\r\nRaw\r\n---\r\nThe raw images are stored in the cam0 and cam1 directories in bmp format. They are bayered images that need to be debayered and undistorted before they are used. The camera parameters for these images can be found in camchain-imucam.yaml. Note that the camera intrinsics describe a 1600x1200 resolution image, so the focal length and center pixel coordinates must be scaled by 0.5 before they are used. The distortion coefficients remain the same even for the scaled images. The camera to imu tranformation matrix is also in this file. cam0/ refers to the left camera, and cam1/ refers to the right camera.\r\n\r\nRectified\r\n---------\r\nStereo rectified, undistorted, row-aligned, debayered images are stored in the rectified/ directory in the same way as the raw images except that they are in png format. The params.yaml file contains the projection and rotation matrices necessary to use these images. The resolution of these parameters do not need to be scaled as is necessary for the raw images.\r\n\r\nparams.yml\r\n----------\r\nThe stereo rectification parameters. R0,R1,P0,P1, and Q correspond to the outputs of the OpenCV stereoRectify function except that 1s and 2s are replaced by 0s and 1s, respectively.\r\n\r\nR0: The rectifying rotation matrix of the left camera.\r\nR1: The rectifying rotation matrix of the right camera.\r\nP0: The projection matrix of the left camera.\r\nP1: The projection matrix of the right camera.\r\nQ: Disparity to depth mapping matrix\r\nT_cam_imu: Transformation matrix for a point in the IMU frame to the left camera frame.\r\n\r\ncamchain-imucam.yaml\r\n--------------------\r\nThe camera intrinsic and extrinsic parameters and the camera to IMU transformation usable with the raw images.\r\n\r\nT_cam_imu: Transformation matrix for a point in the IMU frame to the camera frame.\r\n\r\ndistortion_coeffs: lens distortion coefficients using the radial tangential model.\r\n\r\nintrinsics: focal length x, focal length y, principal point x, principal point y\r\n\r\nresolution: resolution of calibration. Scale the intrinsics for use with the raw 800x600 images. 
The distortion coefficients do not change when the image is scaled.\r\n\r\nT_cn_cnm1: Transformation matrix from the right camera to the left camera.\r\n\r\n\r\nSensors\r\n-------\r\nHere, each message in name.csv is described\r\n\r\n###rawimus###\r\ntime # GPS time in seconds\r\nmessage name # rawimus\r\nacceleration_z # m/s^2 IMU uses right-forward-up coordinates\r\n-acceleration_y # m/s^2\r\nacceleration_x # m/s^2\r\nangular_rate_z # rad/s IMU uses right-forward-up coordinates\r\n-angular_rate_y # rad/s\r\nangular_rate_x # rad/s\r\n\r\n###IMG###\r\ntime # GPS time in seconds\r\nmessage name # IMG\r\nleft image filename\r\nright image filename\r\n\r\n###inspvas###\r\ntime # GPS time in seconds\r\nmessage name # inspvas\r\nlatitude\r\nlongitude\r\naltitude # ellipsoidal height WGS84 in meters\r\nnorth velocity # m/s\r\neast velocity # m/s\r\nup velocity # m/s\r\nroll # right hand rotation about y axis in degrees\r\npitch # right hand rotation about x axis in degrees\r\nazimuth # left hand rotation about z axis in degrees clockwise from north\r\n\r\n###inscovs###\r\ntime # GPS time in seconds\r\nmessage name # inscovs\r\nposition covariance # 9 values xx,xy,xz,yx,yy,yz,zx,zy,zz m^2\r\nattitude covariance # 9 values xx,xy,xz,yx,yy,yz,zx,zy,zz deg^2\r\nvelocity covariance # 9 values xx,xy,xz,yx,yy,yz,zx,zy,zz (m/s)^2\r\n\r\n###bestutm###\r\ntime # GPS time in seconds\r\nmessage name # bestutm\r\nutm zone # numerical zone\r\nutm character # alphabetical zone\r\nnorthing # m\r\neasting # m\r\nheight # m above mean sea level\r\n\r\nCamera logs\r\n-----------\r\nThe files name.cam0 and name.cam1 are text files that correspond to cameras 0 and 1, respectively. The columns are defined by:\r\n\r\nunused: The first column is all 1s and can be ignored.\r\n\r\nsoftware frame number: This number increments at the end of every iteration of the software loop.\r\n\r\ncamera frame number: This number is generated by the camera and increments each time the shutter is triggered. The software and camera frame numbers do not have to start at the same value, but if the difference between the initial and final values is not the same, it suggests that frames may have been dropped.\r\n\r\ncamera timestamp: This is the cameras internal timestamp of the frame capture in units of 100 milliseconds.\r\n\r\nPC timestamp: This is the PC time of arrival of the image.\r\n\r\nname.kml\r\n--------\r\nThe kml file is a mapping file that can be read by software such as Google Earth. It contains the recorded GPS trajectory.\r\n\r\nname.unicsv\r\n-----------\r\nThis is a csv file of the GPS trajectory in UTM coordinates that can be read by gpsbabel, software for manipulating GPS paths.", "If you use this dataset, please cite the IJRR data paper (bibtex is below).\r\n\r\nWe present a dataset collected from a canoe along the Sangamon River in Illinois. The canoe was equipped with a stereo camera, an IMU, and a GPS device, which provide visual data suitable for stereo or monocular applications, inertial measurements, and position data for ground truth. We recorded a canoe trip up and down the river for 44 minutes covering 2.7 km round trip. The dataset adds to those previously recorded in unstructured environments and is unique in that it is recorded on a river, which provides its own set of challenges and constraints that are described\r\nin this paper. The data is divided into subsets, which can be downloaded individually. 
\r\n\r\nVideo previews are available on Youtube:\r\nhttps://www.youtube.com/channel/UCOU9e7xxqmL_s4QX6jsGZSw\r\n\r\nThe information below can also be found in the README files provided in the 527 dataset and each of its subsets. The purpose of this document is to assist researchers in using this dataset.\r\n\r\nImages\r\n======\r\nRaw\r\n---\r\nThe raw images are stored in the cam0 and cam1 directories in bmp format. They are bayered images that need to be debayered and undistorted before they are used. The camera parameters for these images can be found in camchain-imucam.yaml. Note that the camera intrinsics describe a 1600x1200 resolution image, so the focal length and center pixel coordinates must be scaled by 0.5 before they are used. The distortion coefficients remain the same even for the scaled images. The camera to imu tranformation matrix is also in this file. cam0/ refers to the left camera, and cam1/ refers to the right camera.\r\n\r\nRectified\r\n---------\r\nStereo rectified, undistorted, row-aligned, debayered images are stored in the rectified/ directory in the same way as the raw images except that they are in png format. The params.yaml file contains the projection and rotation matrices necessary to use these images. The resolution of these parameters do not need to be scaled as is necessary for the raw images.\r\n\r\nparams.yml\r\n----------\r\nThe stereo rectification parameters. R0,R1,P0,P1, and Q correspond to the outputs of the OpenCV stereoRectify function except that 1s and 2s are replaced by 0s and 1s, respectively.\r\n\r\nR0: The rectifying rotation matrix of the left camera.\r\nR1: The rectifying rotation matrix of the right camera.\r\nP0: The projection matrix of the left camera.\r\nP1: The projection matrix of the right camera.\r\nQ: Disparity to depth mapping matrix\r\nT_cam_imu: Transformation matrix for a point in the IMU frame to the left camera frame.\r\n\r\ncamchain-imucam.yaml\r\n--------------------\r\nThe camera intrinsic and extrinsic parameters and the camera to IMU transformation usable with the raw images.\r\n\r\nT_cam_imu: Transformation matrix for a point in the IMU frame to the camera frame.\r\n\r\ndistortion_coeffs: lens distortion coefficients using the radial tangential model.\r\n\r\nintrinsics: focal length x, focal length y, principal point x, principal point y\r\n\r\nresolution: resolution of calibration. Scale the intrinsics for use with the raw 800x600 images. 
The distortion coefficients do not change when the image is scaled.\r\n\r\nT_cn_cnm1: Transformation matrix from the right camera to the left camera.\r\n\r\n\r\nSensors\r\n-------\r\nHere, each message in name.csv is described\r\n\r\n###rawimus###\r\ntime # GPS time in seconds\r\nmessage name # rawimus\r\nacceleration_z # m/s^2 IMU uses right-forward-up coordinates\r\n-acceleration_y # m/s^2\r\nacceleration_x # m/s^2\r\nangular_rate_z # rad/s IMU uses right-forward-up coordinates\r\n-angular_rate_y # rad/s\r\nangular_rate_x # rad/s\r\n\r\n###IMG###\r\ntime # GPS time in seconds\r\nmessage name # IMG\r\nleft image filename\r\nright image filename\r\n\r\n###inspvas###\r\ntime # GPS time in seconds\r\nmessage name # inspvas\r\nlatitude\r\nlongitude\r\naltitude # ellipsoidal height WGS84 in meters\r\nnorth velocity # m/s\r\neast velocity # m/s\r\nup velocity # m/s\r\nroll # right hand rotation about y axis in degrees\r\npitch # right hand rotation about x axis in degrees\r\nazimuth # left hand rotation about z axis in degrees clockwise from north\r\n\r\n###inscovs###\r\ntime # GPS time in seconds\r\nmessage name # inscovs\r\nposition covariance # 9 values xx,xy,xz,yx,yy,yz,zx,zy,zz m^2\r\nattitude covariance # 9 values xx,xy,xz,yx,yy,yz,zx,zy,zz deg^2\r\nvelocity covariance # 9 values xx,xy,xz,yx,yy,yz,zx,zy,zz (m/s)^2\r\n\r\n###bestutm###\r\ntime # GPS time in seconds\r\nmessage name # bestutm\r\nutm zone # numerical zone\r\nutm character # alphabetical zone\r\nnorthing # m\r\neasting # m\r\nheight # m above mean sea level\r\n\r\nCamera logs\r\n-----------\r\nThe files name.cam0 and name.cam1 are text files that correspond to cameras 0 and 1, respectively. The columns are defined by:\r\n\r\nunused: The first column is all 1s and can be ignored.\r\n\r\nsoftware frame number: This number increments at the end of every iteration of the software loop.\r\n\r\ncamera frame number: This number is generated by the camera and increments each time the shutter is triggered. The software and camera frame numbers do not have to start at the same value, but if the difference between the initial and final values is not the same, it suggests that frames may have been dropped.\r\n\r\ncamera timestamp: This is the cameras internal timestamp of the frame capture in units of 100 milliseconds.\r\n\r\nPC timestamp: This is the PC time of arrival of the image.\r\n\r\nname.kml\r\n--------\r\nThe kml file is a mapping file that can be read by software such as Google Earth. It contains the recorded GPS trajectory.\r\n\r\nname.unicsv\r\n-----------\r\nThis is a csv file of the GPS trajectory in UTM coordinates that can be read by gpsbabel, software for manipulating GPS paths.\r\n\r\n\r\n@article{doi:10.1177/0278364917751842,\r\nauthor = {Martin Miller and Soon-Jo Chung and Seth Hutchinson},\r\ntitle ={The Visual–Inertial Canoe Dataset},\r\njournal = {The International Journal of Robotics Research},\r\nvolume = {37},\r\nnumber = {1},\r\npages = {13-20},\r\nyear = {2018},\r\ndoi = {10.1177/0278364917751842},\r\nURL = {https://doi.org/10.1177/0278364917751842},\r\neprint = {https://doi.org/10.1177/0278364917751842}\r\n}"]}
|
2018-01-30T14:59:33Z
|
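
The description text recorded in the change above doubles as the dataset README: the raw cam0/cam1 BMPs are 800x600 Bayer mosaics, while the intrinsics in camchain-imucam.yaml were calibrated at 1600x1200, so the focal length and principal point must be halved before undistorting. A minimal sketch of that step in Python, assuming OpenCV and PyYAML, a Kalibr-style layout with the per-camera parameters under a cam0 key, and a hypothetical frame name and Bayer pattern:

import cv2
import numpy as np
import yaml

# Load the calibration used for the raw images (field names per the README above).
with open("camchain-imucam.yaml") as f:
    cam0 = yaml.safe_load(f)["cam0"]           # "cam0" key layout is an assumption

fx, fy, cx, cy = cam0["intrinsics"]            # calibrated at 1600x1200
s = 0.5                                        # raw frames are 800x600
K = np.array([[fx * s, 0.0, cx * s],
              [0.0, fy * s, cy * s],
              [0.0, 0.0, 1.0]])
dist = np.array(cam0["distortion_coeffs"])     # radial-tangential model, used unscaled

raw = cv2.imread("cam0/000000.bmp", cv2.IMREAD_GRAYSCALE)   # hypothetical file name
bgr = cv2.cvtColor(raw, cv2.COLOR_BayerBG2BGR)               # Bayer pattern is an assumption
undistorted = cv2.undistort(bgr, K, dist)
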
Dataset
|
update: {"description"=>["If you use this dataset, please cite the data paper using the above citation info.\r\n\r\nWe present a dataset collected from a canoe along the Sangamon River in Illinois. The canoe was equipped with a stereo camera, an IMU, and a GPS device, which provide visual data suitable for stereo or monocular applications, inertial measurements, and position data for ground truth. We recorded a canoe trip up and down the river for 44 minutes covering 2.7 km round trip. The dataset adds to those previously recorded in unstructured environments and is unique in that it is recorded on a river, which provides its own set of challenges and constraints that are described\r\nin this paper. The data is divided into subsets, which can be downloaded individually. \r\n\r\nVideo previews are available on Youtube:\r\nhttps://www.youtube.com/channel/UCOU9e7xxqmL_s4QX6jsGZSw\r\n\r\nThe information below can also be found in the README files provided in the 527 dataset and each of its subsets. The purpose of this document is to assist researchers in using this dataset.\r\n\r\nImages\r\n======\r\nRaw\r\n---\r\nThe raw images are stored in the cam0 and cam1 directories in bmp format. They are bayered images that need to be debayered and undistorted before they are used. The camera parameters for these images can be found in camchain-imucam.yaml. Note that the camera intrinsics describe a 1600x1200 resolution image, so the focal length and center pixel coordinates must be scaled by 0.5 before they are used. The distortion coefficients remain the same even for the scaled images. The camera to imu tranformation matrix is also in this file. cam0/ refers to the left camera, and cam1/ refers to the right camera.\r\n\r\nRectified\r\n---------\r\nStereo rectified, undistorted, row-aligned, debayered images are stored in the rectified/ directory in the same way as the raw images except that they are in png format. The params.yaml file contains the projection and rotation matrices necessary to use these images. The resolution of these parameters do not need to be scaled as is necessary for the raw images.\r\n\r\nparams.yml\r\n----------\r\nThe stereo rectification parameters. R0,R1,P0,P1, and Q correspond to the outputs of the OpenCV stereoRectify function except that 1s and 2s are replaced by 0s and 1s, respectively.\r\n\r\nR0: The rectifying rotation matrix of the left camera.\r\nR1: The rectifying rotation matrix of the right camera.\r\nP0: The projection matrix of the left camera.\r\nP1: The projection matrix of the right camera.\r\nQ: Disparity to depth mapping matrix\r\nT_cam_imu: Transformation matrix for a point in the IMU frame to the left camera frame.\r\n\r\ncamchain-imucam.yaml\r\n--------------------\r\nThe camera intrinsic and extrinsic parameters and the camera to IMU transformation usable with the raw images.\r\n\r\nT_cam_imu: Transformation matrix for a point in the IMU frame to the camera frame.\r\n\r\ndistortion_coeffs: lens distortion coefficients using the radial tangential model.\r\n\r\nintrinsics: focal length x, focal length y, principal point x, principal point y\r\n\r\nresolution: resolution of calibration. Scale the intrinsics for use with the raw 800x600 images. 
The distortion coefficients do not change when the image is scaled.\r\n\r\nT_cn_cnm1: Transformation matrix from the right camera to the left camera.\r\n\r\n\r\nSensors\r\n-------\r\nHere, each message in name.csv is described\r\n\r\n###rawimus###\r\ntime # GPS time in seconds\r\nmessage name # rawimus\r\nacceleration_z # m/s^2 IMU uses right-forward-up coordinates\r\n-acceleration_y # m/s^2\r\nacceleration_x # m/s^2\r\nangular_rate_z # rad/s IMU uses right-forward-up coordinates\r\n-angular_rate_y # rad/s\r\nangular_rate_x # rad/s\r\n\r\n###IMG###\r\ntime # GPS time in seconds\r\nmessage name # IMG\r\nleft image filename\r\nright image filename\r\n\r\n###inspvas###\r\ntime # GPS time in seconds\r\nmessage name # inspvas\r\nlatitude\r\nlongitude\r\naltitude # ellipsoidal height WGS84 in meters\r\nnorth velocity # m/s\r\neast velocity # m/s\r\nup velocity # m/s\r\nroll # right hand rotation about y axis in degrees\r\npitch # right hand rotation about x axis in degrees\r\nazimuth # left hand rotation about z axis in degrees clockwise from north\r\n\r\n###inscovs###\r\ntime # GPS time in seconds\r\nmessage name # inscovs\r\nposition covariance # 9 values xx,xy,xz,yx,yy,yz,zx,zy,zz m^2\r\nattitude covariance # 9 values xx,xy,xz,yx,yy,yz,zx,zy,zz deg^2\r\nvelocity covariance # 9 values xx,xy,xz,yx,yy,yz,zx,zy,zz (m/s)^2\r\n\r\n###bestutm###\r\ntime # GPS time in seconds\r\nmessage name # bestutm\r\nutm zone # numerical zone\r\nutm character # alphabetical zone\r\nnorthing # m\r\neasting # m\r\nheight # m above mean sea level\r\n\r\nCamera logs\r\n-----------\r\nThe files name.cam0 and name.cam1 are text files that correspond to cameras 0 and 1, respectively. The columns are defined by:\r\n\r\nunused: The first column is all 1s and can be ignored.\r\n\r\nsoftware frame number: This number increments at the end of every iteration of the software loop.\r\n\r\ncamera frame number: This number is generated by the camera and increments each time the shutter is triggered. The software and camera frame numbers do not have to start at the same value, but if the difference between the initial and final values is not the same, it suggests that frames may have been dropped.\r\n\r\ncamera timestamp: This is the cameras internal timestamp of the frame capture in units of 100 milliseconds.\r\n\r\nPC timestamp: This is the PC time of arrival of the image.\r\n\r\nname.kml\r\n--------\r\nThe kml file is a mapping file that can be read by software such as Google Earth. It contains the recorded GPS trajectory.\r\n\r\nname.unicsv\r\n-----------\r\nThis is a csv file of the GPS trajectory in UTM coordinates that can be read by gpsbabel, software for manipulating GPS paths.", "If you use this dataset, please cite the IJRR data paper using the above citation info.\r\n\r\nWe present a dataset collected from a canoe along the Sangamon River in Illinois. The canoe was equipped with a stereo camera, an IMU, and a GPS device, which provide visual data suitable for stereo or monocular applications, inertial measurements, and position data for ground truth. We recorded a canoe trip up and down the river for 44 minutes covering 2.7 km round trip. The dataset adds to those previously recorded in unstructured environments and is unique in that it is recorded on a river, which provides its own set of challenges and constraints that are described\r\nin this paper. The data is divided into subsets, which can be downloaded individually. 
\r\n\r\nVideo previews are available on Youtube:\r\nhttps://www.youtube.com/channel/UCOU9e7xxqmL_s4QX6jsGZSw\r\n\r\nThe information below can also be found in the README files provided in the 527 dataset and each of its subsets. The purpose of this document is to assist researchers in using this dataset.\r\n\r\nImages\r\n======\r\nRaw\r\n---\r\nThe raw images are stored in the cam0 and cam1 directories in bmp format. They are bayered images that need to be debayered and undistorted before they are used. The camera parameters for these images can be found in camchain-imucam.yaml. Note that the camera intrinsics describe a 1600x1200 resolution image, so the focal length and center pixel coordinates must be scaled by 0.5 before they are used. The distortion coefficients remain the same even for the scaled images. The camera to imu tranformation matrix is also in this file. cam0/ refers to the left camera, and cam1/ refers to the right camera.\r\n\r\nRectified\r\n---------\r\nStereo rectified, undistorted, row-aligned, debayered images are stored in the rectified/ directory in the same way as the raw images except that they are in png format. The params.yaml file contains the projection and rotation matrices necessary to use these images. The resolution of these parameters do not need to be scaled as is necessary for the raw images.\r\n\r\nparams.yml\r\n----------\r\nThe stereo rectification parameters. R0,R1,P0,P1, and Q correspond to the outputs of the OpenCV stereoRectify function except that 1s and 2s are replaced by 0s and 1s, respectively.\r\n\r\nR0: The rectifying rotation matrix of the left camera.\r\nR1: The rectifying rotation matrix of the right camera.\r\nP0: The projection matrix of the left camera.\r\nP1: The projection matrix of the right camera.\r\nQ: Disparity to depth mapping matrix\r\nT_cam_imu: Transformation matrix for a point in the IMU frame to the left camera frame.\r\n\r\ncamchain-imucam.yaml\r\n--------------------\r\nThe camera intrinsic and extrinsic parameters and the camera to IMU transformation usable with the raw images.\r\n\r\nT_cam_imu: Transformation matrix for a point in the IMU frame to the camera frame.\r\n\r\ndistortion_coeffs: lens distortion coefficients using the radial tangential model.\r\n\r\nintrinsics: focal length x, focal length y, principal point x, principal point y\r\n\r\nresolution: resolution of calibration. Scale the intrinsics for use with the raw 800x600 images. 
The distortion coefficients do not change when the image is scaled.\r\n\r\nT_cn_cnm1: Transformation matrix from the right camera to the left camera.\r\n\r\n\r\nSensors\r\n-------\r\nHere, each message in name.csv is described\r\n\r\n###rawimus###\r\ntime # GPS time in seconds\r\nmessage name # rawimus\r\nacceleration_z # m/s^2 IMU uses right-forward-up coordinates\r\n-acceleration_y # m/s^2\r\nacceleration_x # m/s^2\r\nangular_rate_z # rad/s IMU uses right-forward-up coordinates\r\n-angular_rate_y # rad/s\r\nangular_rate_x # rad/s\r\n\r\n###IMG###\r\ntime # GPS time in seconds\r\nmessage name # IMG\r\nleft image filename\r\nright image filename\r\n\r\n###inspvas###\r\ntime # GPS time in seconds\r\nmessage name # inspvas\r\nlatitude\r\nlongitude\r\naltitude # ellipsoidal height WGS84 in meters\r\nnorth velocity # m/s\r\neast velocity # m/s\r\nup velocity # m/s\r\nroll # right hand rotation about y axis in degrees\r\npitch # right hand rotation about x axis in degrees\r\nazimuth # left hand rotation about z axis in degrees clockwise from north\r\n\r\n###inscovs###\r\ntime # GPS time in seconds\r\nmessage name # inscovs\r\nposition covariance # 9 values xx,xy,xz,yx,yy,yz,zx,zy,zz m^2\r\nattitude covariance # 9 values xx,xy,xz,yx,yy,yz,zx,zy,zz deg^2\r\nvelocity covariance # 9 values xx,xy,xz,yx,yy,yz,zx,zy,zz (m/s)^2\r\n\r\n###bestutm###\r\ntime # GPS time in seconds\r\nmessage name # bestutm\r\nutm zone # numerical zone\r\nutm character # alphabetical zone\r\nnorthing # m\r\neasting # m\r\nheight # m above mean sea level\r\n\r\nCamera logs\r\n-----------\r\nThe files name.cam0 and name.cam1 are text files that correspond to cameras 0 and 1, respectively. The columns are defined by:\r\n\r\nunused: The first column is all 1s and can be ignored.\r\n\r\nsoftware frame number: This number increments at the end of every iteration of the software loop.\r\n\r\ncamera frame number: This number is generated by the camera and increments each time the shutter is triggered. The software and camera frame numbers do not have to start at the same value, but if the difference between the initial and final values is not the same, it suggests that frames may have been dropped.\r\n\r\ncamera timestamp: This is the cameras internal timestamp of the frame capture in units of 100 milliseconds.\r\n\r\nPC timestamp: This is the PC time of arrival of the image.\r\n\r\nname.kml\r\n--------\r\nThe kml file is a mapping file that can be read by software such as Google Earth. It contains the recorded GPS trajectory.\r\n\r\nname.unicsv\r\n-----------\r\nThis is a csv file of the GPS trajectory in UTM coordinates that can be read by gpsbabel, software for manipulating GPS paths."]}
|
2017-12-13T14:16:56Z
|
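
For the rectified PNGs, the params.yml matrices described above follow OpenCV's stereoRectify naming (with 0/1 in place of 1/2), so a rectified pair can be fed straight into a stereo matcher and the Q matrix turns disparity into depth. A minimal sketch, assuming params.yml parses as plain YAML with Q stored as a nested list, and using hypothetical frame names:

import cv2
import numpy as np
import yaml

with open("params.yml") as f:
    Q = np.array(yaml.safe_load(f)["Q"], dtype=np.float64)   # disparity-to-depth matrix

left = cv2.imread("rectified/cam0/000000.png", cv2.IMREAD_GRAYSCALE)    # hypothetical names
right = cv2.imread("rectified/cam1/000000.png", cv2.IMREAD_GRAYSCALE)

# Semi-global block matching on the row-aligned pair; SGBM returns
# fixed-point disparities scaled by 16.
matcher = cv2.StereoSGBM_create(minDisparity=0, numDisparities=128, blockSize=5)
disparity = matcher.compute(left, right).astype(np.float32) / 16.0
points = cv2.reprojectImageTo3D(disparity, Q)   # 3D points in the left rectified frame

If params.yml was written with cv::FileStorage rather than plain YAML, read Q with cv2.FileStorage instead.
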
RelatedMaterial
|
update: {"citation"=>["M. Miller, S.-J. Chung, and S. Hutchinson, \"The Visual-Inertial Canoe Dataset\". The International Journal of Robotics Research, conditionally accepted, 2017.", "M. Miller, S.-J. Chung, and S. Hutchinson, \"The Visual-Inertial Canoe Dataset\". The International Journal of Robotics Research, to appear, 2017."]}
|
2017-12-13T14:15:37Z
|
RelatedMaterial
|
create: {"material_type"=>"Thesis", "availability"=>nil, "link"=>"", "uri"=>nil, "uri_type"=>nil, "citation"=>"M. Miller. \"Hardware and Software Considerations for Monocular SLAM in a Riverine Environment.\" Master's thesis, University of Illinois at Urbana-Champaign, 2017.", "dataset_id"=>341, "selected_type"=>"Thesis", "datacite_list"=>nil}
|
2017-12-12T00:08:27Z
|
RelatedMaterial
|
update: {"citation"=>["citation placeholder", "M. Miller, S.-J. Chung, and S. Hutchinson, \"The Visual-Inertial Canoe Dataset\". The International Journal of Robotics Research, conditionally accepted, 2017."], "datacite_list"=>["", "IsSupplementTo"]}
|
2017-11-16T21:40:23Z
|
Dataset
|
update: {"description"=>["There is a data paper associated with this dataset. If you use the data here, \r\nplease cite the data paper:\r\n\r\n@article{miller2018VisualICD,\r\n author = {Miller, Martin and Chung, Soon-Jo and Hutchinson, Seth},\r\n title = {The Visual-Inertial Canoe Dataset},\r\n journal = {International Journal of Robotics Research},\r\n note = {accepted},\r\n}\r\n\r\nWe present a dataset collected from a canoe along the Sangamon River in\r\nIllinois. The canoe was equipped with a stereo camera, an IMU, and a GPS\r\ndevice, which provide visual data suitable for stereo or monocular\r\napplications, inertial measurements, and position data for ground truth. We\r\nrecorded a canoe trip up and down the river for 44 minutes covering 2.7\r\nkm round trip. The dataset adds to those previously recorded in\r\nunstructured environments and is unique in that it is recorded on a river,\r\nwhich provides its own set of challenges and constraints that are described\r\nin this paper. The data is divided into subsets, which can be downloaded individually. \r\n\r\nVideo previews are available on Youtube:\r\nhttps://www.youtube.com/channel/UCOU9e7xxqmL_s4QX6jsGZSw\r\n\r\nThe information below can also be found in the README files provided in the 527\r\ndataset and each of its subsets. The purpose of this document is to assist researchers\r\nin using this dataset.\r\n\r\nImages\r\n======\r\nRaw\r\n---\r\nThe raw images are stored in the cam0 and cam1 directories in bmp format. They\r\nare bayered images that need to be debayered and undistorted before they are\r\nused. The camera parameters for these images can be found in\r\ncamchain-imucam.yaml. Note that the camera intrinsics describe a 1600x1200\r\nresolution image, so the focal length and center pixel coordinates must be\r\nscaled by 0.5 before they are used. The distortion coefficients remain the same\r\neven for the scaled images. The camera to imu tranformation matrix is also in\r\nthis file. cam0/ refers to the left camera, and cam1/ refers to the right\r\ncamera.\r\n\r\nRectified\r\n---------\r\nStereo rectified, undistorted, row-aligned, debayered images are stored in the\r\nrectified/ directory in the same way as the raw images except that they are in\r\npng format. The params.yaml file contains the projection and rotation matrices\r\nnecessary to use these images. The resolution of these parameters do not need to\r\nbe scaled as is necessary for the raw images.\r\n\r\nparams.yml\r\n----------\r\nThe stereo rectification parameters. R0,R1,P0,P1, and Q correspond to the\r\noutputs of the OpenCV stereoRectify function except that 1s and 2s are replaced\r\nby 0s and 1s, respectively.\r\n\r\nR0: The rectifying rotation matrix of the left camera.\r\nR1: The rectifying rotation matrix of the right camera.\r\nP0: The projection matrix of the left camera.\r\nP1: The projection matrix of the right camera.\r\nQ: Disparity to depth mapping matrix\r\nT_cam_imu: Transformation matrix for a point in the IMU frame to the left camera frame.\r\n\r\ncamchain-imucam.yaml\r\n--------------------\r\nThe camera intrinsic and extrinsic parameters and the camera to IMU\r\ntransformation usable with the raw images.\r\n\r\nT_cam_imu: Transformation matrix for a point in the IMU frame to the camera frame.\r\n\r\ndistortion_coeffs: lens distortion coefficients using the radial tangential model.\r\n\r\nintrinsics: focal length x, focal length y, principal point x, principal point y\r\n\r\nresolution: resolution of calibration. 
Scale the intrinsics for use with the raw\r\n800x600 images. The distortion coefficients do not change when the image is scaled.\r\n\r\nT_cn_cnm1: Transformation matrix from the right camera to the left camera.\r\n\r\n\r\nSensors\r\n-------\r\nHere, each message in name.csv is described\r\n\r\n###rawimus###\r\ntime # GPS time in seconds\r\nmessage name # rawimus\r\nacceleration_z # m/s^2 IMU uses right-forward-up coordinates\r\n-acceleration_y # m/s^2\r\nacceleration_x # m/s^2\r\nangular_rate_z # rad/s IMU uses right-forward-up coordinates\r\n-angular_rate_y # rad/s\r\nangular_rate_x # rad/s\r\n\r\n###IMG###\r\ntime # GPS time in seconds\r\nmessage name # IMG\r\nleft image filename\r\nright image filename\r\n\r\n###inspvas###\r\ntime # GPS time in seconds\r\nmessage name # inspvas\r\nlatitude\r\nlongitude\r\naltitude # ellipsoidal height WGS84 in meters\r\nnorth velocity # m/s\r\neast velocity # m/s\r\nup velocity # m/s\r\nroll # right hand rotation about y axis in degrees\r\npitch # right hand rotation about x axis in degrees\r\nazimuth # left hand rotation about z axis in degrees clockwise from north\r\n\r\n###inscovs###\r\ntime # GPS time in seconds\r\nmessage name # inscovs\r\nposition covariance # 9 values xx,xy,xz,yx,yy,yz,zx,zy,zz m^2\r\nattitude covariance # 9 values xx,xy,xz,yx,yy,yz,zx,zy,zz deg^2\r\nvelocity covariance # 9 values xx,xy,xz,yx,yy,yz,zx,zy,zz (m/s)^2\r\n\r\n###bestutm###\r\ntime # GPS time in seconds\r\nmessage name # bestutm\r\nutm zone # numerical zone\r\nutm character # alphabetical zone\r\nnorthing # m\r\neasting # m\r\nheight # m above mean sea level\r\n\r\nCamera logs\r\n-----------\r\nThe files name.cam0 and name.cam1 are text files that correspond to cameras 0\r\nand 1, respectively. The columns are defined by:\r\n\r\nunused: The first column is all 1s and can be ignored.\r\n\r\nsoftware frame number: This number increments at the end of every iteration of\r\nthe software loop.\r\n\r\ncamera frame number: This number is generated by the camera and increments each\r\ntime the shutter is triggered. The software and camera frame numbers do not have\r\nto start at the same value, but if the difference between the initial and final\r\nvalues is not the same, it suggests that frames may have been dropped.\r\n\r\ncamera timestamp: This is the cameras internal timestamp of the frame capture in\r\nunits of 100 milliseconds.\r\n\r\nPC timestamp: This is the PC time of arrival of the image.\r\n\r\nname.kml\r\n--------\r\nThe kml file is a mapping file that can be read by software such as Google\r\nEarth. It contains the recorded GPS trajectory.\r\n\r\nname.unicsv\r\n-----------\r\nThis is a csv file of the GPS trajectory in UTM coordinates that can be read by\r\ngpsbabel, software for manipulating GPS paths.\r\n\r\n", "If you use this dataset, please cite the data paper using the above citation info.\r\n\r\nWe present a dataset collected from a canoe along the Sangamon River in Illinois. The canoe was equipped with a stereo camera, an IMU, and a GPS device, which provide visual data suitable for stereo or monocular applications, inertial measurements, and position data for ground truth. We recorded a canoe trip up and down the river for 44 minutes covering 2.7 km round trip. The dataset adds to those previously recorded in unstructured environments and is unique in that it is recorded on a river, which provides its own set of challenges and constraints that are described\r\nin this paper. The data is divided into subsets, which can be downloaded individually. 
\r\n\r\nVideo previews are available on Youtube:\r\nhttps://www.youtube.com/channel/UCOU9e7xxqmL_s4QX6jsGZSw\r\n\r\nThe information below can also be found in the README files provided in the 527 dataset and each of its subsets. The purpose of this document is to assist researchers in using this dataset.\r\n\r\nImages\r\n======\r\nRaw\r\n---\r\nThe raw images are stored in the cam0 and cam1 directories in bmp format. They are bayered images that need to be debayered and undistorted before they are used. The camera parameters for these images can be found in camchain-imucam.yaml. Note that the camera intrinsics describe a 1600x1200 resolution image, so the focal length and center pixel coordinates must be scaled by 0.5 before they are used. The distortion coefficients remain the same even for the scaled images. The camera to imu tranformation matrix is also in this file. cam0/ refers to the left camera, and cam1/ refers to the right camera.\r\n\r\nRectified\r\n---------\r\nStereo rectified, undistorted, row-aligned, debayered images are stored in the rectified/ directory in the same way as the raw images except that they are in png format. The params.yaml file contains the projection and rotation matrices necessary to use these images. The resolution of these parameters do not need to be scaled as is necessary for the raw images.\r\n\r\nparams.yml\r\n----------\r\nThe stereo rectification parameters. R0,R1,P0,P1, and Q correspond to the outputs of the OpenCV stereoRectify function except that 1s and 2s are replaced by 0s and 1s, respectively.\r\n\r\nR0: The rectifying rotation matrix of the left camera.\r\nR1: The rectifying rotation matrix of the right camera.\r\nP0: The projection matrix of the left camera.\r\nP1: The projection matrix of the right camera.\r\nQ: Disparity to depth mapping matrix\r\nT_cam_imu: Transformation matrix for a point in the IMU frame to the left camera frame.\r\n\r\ncamchain-imucam.yaml\r\n--------------------\r\nThe camera intrinsic and extrinsic parameters and the camera to IMU transformation usable with the raw images.\r\n\r\nT_cam_imu: Transformation matrix for a point in the IMU frame to the camera frame.\r\n\r\ndistortion_coeffs: lens distortion coefficients using the radial tangential model.\r\n\r\nintrinsics: focal length x, focal length y, principal point x, principal point y\r\n\r\nresolution: resolution of calibration. Scale the intrinsics for use with the raw 800x600 images. 
The distortion coefficients do not change when the image is scaled.\r\n\r\nT_cn_cnm1: Transformation matrix from the right camera to the left camera.\r\n\r\n\r\nSensors\r\n-------\r\nHere, each message in name.csv is described\r\n\r\n###rawimus###\r\ntime # GPS time in seconds\r\nmessage name # rawimus\r\nacceleration_z # m/s^2 IMU uses right-forward-up coordinates\r\n-acceleration_y # m/s^2\r\nacceleration_x # m/s^2\r\nangular_rate_z # rad/s IMU uses right-forward-up coordinates\r\n-angular_rate_y # rad/s\r\nangular_rate_x # rad/s\r\n\r\n###IMG###\r\ntime # GPS time in seconds\r\nmessage name # IMG\r\nleft image filename\r\nright image filename\r\n\r\n###inspvas###\r\ntime # GPS time in seconds\r\nmessage name # inspvas\r\nlatitude\r\nlongitude\r\naltitude # ellipsoidal height WGS84 in meters\r\nnorth velocity # m/s\r\neast velocity # m/s\r\nup velocity # m/s\r\nroll # right hand rotation about y axis in degrees\r\npitch # right hand rotation about x axis in degrees\r\nazimuth # left hand rotation about z axis in degrees clockwise from north\r\n\r\n###inscovs###\r\ntime # GPS time in seconds\r\nmessage name # inscovs\r\nposition covariance # 9 values xx,xy,xz,yx,yy,yz,zx,zy,zz m^2\r\nattitude covariance # 9 values xx,xy,xz,yx,yy,yz,zx,zy,zz deg^2\r\nvelocity covariance # 9 values xx,xy,xz,yx,yy,yz,zx,zy,zz (m/s)^2\r\n\r\n###bestutm###\r\ntime # GPS time in seconds\r\nmessage name # bestutm\r\nutm zone # numerical zone\r\nutm character # alphabetical zone\r\nnorthing # m\r\neasting # m\r\nheight # m above mean sea level\r\n\r\nCamera logs\r\n-----------\r\nThe files name.cam0 and name.cam1 are text files that correspond to cameras 0 and 1, respectively. The columns are defined by:\r\n\r\nunused: The first column is all 1s and can be ignored.\r\n\r\nsoftware frame number: This number increments at the end of every iteration of the software loop.\r\n\r\ncamera frame number: This number is generated by the camera and increments each time the shutter is triggered. The software and camera frame numbers do not have to start at the same value, but if the difference between the initial and final values is not the same, it suggests that frames may have been dropped.\r\n\r\ncamera timestamp: This is the cameras internal timestamp of the frame capture in units of 100 milliseconds.\r\n\r\nPC timestamp: This is the PC time of arrival of the image.\r\n\r\nname.kml\r\n--------\r\nThe kml file is a mapping file that can be read by software such as Google Earth. It contains the recorded GPS trajectory.\r\n\r\nname.unicsv\r\n-----------\r\nThis is a csv file of the GPS trajectory in UTM coordinates that can be read by gpsbabel, software for manipulating GPS paths."]}
|
2017-11-16T21:40:23Z
|
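
Every row of the name.csv logs described above begins with a GPS time and a message name, so the interleaved rawimus, IMG, inspvas, inscovs, and bestutm streams can be separated with a simple dispatch on the second column. A minimal sketch, using a hypothetical subset file name:

import csv
from collections import defaultdict

streams = defaultdict(list)
with open("0527.csv") as f:                    # hypothetical subset name
    for row in csv.reader(f):
        if len(row) < 2:
            continue
        t, name = float(row[0]), row[1].strip()
        streams[name].append((t, row[2:]))

# IMU samples as floats (right-forward-up axes, signs per the README above) and
# stereo image pairs as (time, left filename, right filename).
imu = [(t, [float(v) for v in rest]) for t, rest in streams["rawimus"]]
images = [(t, rest[0].strip(), rest[1].strip()) for t, rest in streams["IMG"]]
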
RelatedMaterial
|
create: {"material_type"=>"Data Paper", "availability"=>nil, "link"=>"", "uri"=>"", "uri_type"=>"", "citation"=>"citation placeholder", "dataset_id"=>341, "selected_type"=>"Other", "datacite_list"=>""}
|
2017-11-16T21:30:25Z
|
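
The name.cam0 and name.cam1 logs described in the dataset description above pair a software frame counter with the camera's own frame counter, and the README notes that unequal spans between the first and last rows suggest dropped frames. A minimal sketch of that check, assuming whitespace-separated columns and a hypothetical file name:

def check_dropped_frames(path="0527.cam0"):    # hypothetical file name
    # Columns per the README above: unused, software frame number,
    # camera frame number, camera timestamp, PC timestamp.
    software, camera = [], []
    with open(path) as f:
        for line in f:
            cols = line.split()
            if len(cols) < 3:
                continue
            software.append(int(cols[1]))
            camera.append(int(cols[2]))
    sw_span = software[-1] - software[0]
    cam_span = camera[-1] - camera[0]
    if sw_span != cam_span:
        print(f"possible dropped frames: software span {sw_span}, camera span {cam_span}")
    else:
        print("frame counters agree over the log; no drops indicated")

check_dropped_frames()
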
RelatedMaterial
|
update: {"uri"=>[nil, ""], "uri_type"=>[nil, ""], "datacite_list"=>[nil, ""]}
|
2017-11-16T21:30:25Z
|
Dataset
|
update: {"version_comment"=>[nil, ""]}
|
2017-11-16T21:30:25Z
|
Creator
|
update: {"is_contact"=>[false, true]}
|
2017-11-15T01:33:42Z
|
Creator
|
update: {"is_contact"=>[true, false]}
|
2017-11-15T01:33:42Z
|
Dataset
|
update: {"corresponding_creator_name"=>["Soon-Jo Chung", "Seth Hutchinson"], "corresponding_creator_email"=>["sjchung@caltech.edu", "seth@illinois.edu"]}
|
2017-11-15T01:33:42Z
|