I opened this issue to solicit ideas and begin planning for what will replace the circular buffer. @nicost, had you already opened an issue related to this? I did a quick search, but nothing turned up.
@marktsuchida @nicost @nenada @jondaniels @bls337 @fadero @mattersoflight @ieivanov @edyoshikun @dpshepherd @carlkesselman (+ anyone else interested) would love to hear your thoughts and ideas.
A disorganized, partial list of current limitations and desired features for improvement:
Multi-camera support
Currently we use the same buffer for multiple cameras. As a result, determining which image came from which camera is a nightmare, and the image sizes/types of the cameras need to be the same. It may be possible in some cases to run multiple instances of the core, but this comes with other (device-specific) pitfalls and requires hacks in higher-level code (i.e. the acquisition engine).
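
A minimal sketch of what per-frame source tagging could look like; these names are hypothetical and do not exist in MMCore today. Tagging each frame with its producing device would let consumers demultiplex a shared buffer, and carrying size/type per frame would lift the same-shape restriction:

```cpp
#include <cstdint>
#include <string>

// Hypothetical descriptor attached to every frame in the buffer.
struct FrameDescriptor {
    std::string deviceLabel;     // which camera produced this frame
    uint32_t width = 0;
    uint32_t height = 0;
    uint32_t bytesPerPixel = 0;  // may differ between cameras
    uint64_t frameNumber = 0;    // per-device sequence counter
};
```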
Forced copies
Currently camera adapters copy into the buffer with InsertImage. Higher-level code (Java/Python) then copies the data again to get access to it. A new buffer should ideally be able to hand out pointers to image data, so that Java/Python code can trigger writes to disk (in the C++ layer) without unnecessary copies. This would make the API more complex and require explicit memory management, which would ideally be hidden from most users by the acquisition engine.
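
One possible shape for a zero-copy insert/claim API, sketched with illustrative names (this is not the existing InsertImage() interface). The adapter writes directly into buffer-owned memory, and consumers borrow a read-only pointer that they must explicitly release, so a C++ disk writer never has to copy pixels:

```cpp
#include <cstddef>
#include <string>

class DataBuffer {
public:
    virtual ~DataBuffer() = default;

    // A camera adapter requests a writable slot instead of copying a
    // finished frame in; ReleaseWriteSlot() marks the frame ready.
    virtual void* AcquireWriteSlot(std::size_t nBytes,
                                   const std::string& deviceLabel) = 0;
    virtual void ReleaseWriteSlot(void* slot) = 0;

    // Consumers read the frame in place; the slot is only recycled
    // after ReleaseReadSlot(). This explicit release is exactly the
    // memory-management burden the acquisition engine would hide.
    virtual const void* PeekNextSlot(std::size_t* nBytes) = 0;
    virtual void ReleaseReadSlot(const void* slot) = 0;
};
```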
Metadata handling
Metadata is not always necessary for every image in a stream, and handling it can hurt performance in some situations (e.g. when images are small relative to their metadata). Flexibility to turn off metadata in certain situations would be a welcome feature.
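
A minimal sketch of how a per-stream metadata policy might look, with hypothetical names. A camera streaming small images at high rate could run with reduced metadata, while slow acquisitions keep full per-frame metadata:

```cpp
// Hypothetical per-stream metadata policy.
enum class MetadataPolicy {
    PerFrame,        // current behavior: full metadata with every image
    FirstFrameOnly,  // static metadata once, then pixels only
    None             // raw pixels only
};
```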
Streaming directly to special locations
I can imagine scenarios where one might want to stream data directly to a GPU for real-time processing, or directly to a file or memory-mapped region with no intermediate processing. How feasible would it be to set up a general mechanism for this type of thing? What pitfalls might we run into?
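
One general mechanism could be a pluggable sink interface, sketched below with hypothetical names (reusing the FrameDescriptor sketched above). The core would hand each frame to the configured sink instead of (or in addition to) the buffer: a memory-mapped file sink writes straight to disk, while a GPU sink could issue an async copy into device memory for real-time processing:

```cpp
#include <cstddef>

// Hypothetical destination for frames that bypass the buffer.
class FrameSink {
public:
    virtual ~FrameSink() = default;
    virtual void Consume(const void* pixels, std::size_t nBytes,
                         const FrameDescriptor& desc) = 0;
};
```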
Exotic camera types
There are a variety of new types of cameras. For example:
- Cameras with polarization-sensitive pixels
- Cameras with multispectral sensors
- Event cameras
- Light-field cameras
- Cameras that are actually many cameras in one (1, 2)
How can we make our memory model generic enough to support things like this, without making it overly complex? Could the same model also be made compatible with a point-scanning system in the future? We currently have the concepts of multi-component and multi-channel cameras; there may be ways to generalize and improve these.
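
As a starting point for discussion, here is a minimal sketch of a generic frame layout with hypothetical names, generalizing the existing multi-component/multi-channel concepts into named axes with strides. A polarization camera would add a 4-long "polarization" axis, a multispectral sensor a "band" axis, and a point scanner could stream 1-D lines with a "line" axis; event cameras emit sparse (x, y, t, polarity) tuples and would likely need a separate, non-array record type:

```cpp
#include <cstdint>
#include <string>
#include <vector>

// Hypothetical named axis within a frame's memory layout.
struct AxisDimension {
    std::string name;       // "x", "y", "component", "channel", ...
    uint32_t size = 0;
    int64_t strideBytes = 0;
};

// Hypothetical generic layout: arbitrary dimensionality instead of
// the fixed width/height/component model.
struct GenericFrameLayout {
    std::vector<AxisDimension> axes;
    uint32_t bytesPerSample = 0;
};
```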