Video/object track aggregation event #4

Open
wants to merge 21 commits into base: development

Commits
660da53
Initial version Aggregated object track event proposal
bsriramprasad Jun 18, 2024
33791b5
additional changes
bsriramprasad Jun 20, 2024
e287219
Added parent object ID
bsriramprasad Jun 24, 2024
ec20b93
removed parent id as its included in tt:Object
bsriramprasad Jun 24, 2024
e4152ab
Additional changes based on internal review
bsriramprasad Jun 26, 2024
4dbfcd1
Address review feedback
bsriramprasad Jun 27, 2024
6894d3c
Merge branch 'onvif:development' into video/object-track-aggregation-…
bsriramprasad Aug 15, 2024
18df87d
Merge branch 'onvif:development' into video/object-track-aggregation-…
bsriramprasad Aug 23, 2024
255f872
Addressed VEWG members feedback
bsriramprasad Aug 23, 2024
6072b38
Addressed below changes based on the Bangkok F2F discussions
bsriramprasad Sep 10, 2024
9d7b4dc
Merge branch 'onvif:development' into video/object-track-aggregation-…
bsriramprasad Sep 10, 2024
8152bbf
fixed documentation issues
bsriramprasad Sep 10, 2024
9e129bc
Merge branch 'onvif:development' into video/object-track-aggregation-…
bsriramprasad Sep 30, 2024
98c8542
fixed inconsistent rule type QName
bsriramprasad Oct 1, 2024
9ad4037
Addressed Bosch feedback
bsriramprasad Dec 19, 2024
5e714aa
Merge branch 'onvif:development' into video/object-track-aggregation-…
bsriramprasad Dec 19, 2024
13fcd6a
Added recommendation to enable object track data for sparsely crowded…
bsriramprasad Dec 20, 2024
18b6cdc
Merge branch 'onvif:development' into video/object-track-aggregation-…
bsriramprasad Jan 27, 2025
106aaac
update based on IST F2F discussion
bsriramprasad Mar 11, 2025
b0cf6a7
addressed feedback from IST meetings
bsriramprasad Mar 18, 2025
8751ef8
schema fix and clarifications
bsriramprasad Apr 8, 2025
125 changes: 125 additions & 0 deletions doc/Analytics.xml
@@ -4452,6 +4452,131 @@ xmlns:acme="http://www.acme.com/schema">
</tt:ParentTopic>
</tt:Messages>
</tt:RuleDescription>
]]></programlisting>
</section>
<section>
<title>Object Track Aggregation</title>
<para>This event is intended to describe an object throughout its visible duration within a scene.
The description should include aggregated or summarized information along with contextual
data, such as images or bounding boxes, to assist clients in visualizing the object's
trajectory. This event can complement the per-frame, per-object scene description by providing a
higher-level representation of the object's trajectory, enhancing forensic analysis while
optimizing bandwidth consumption.</para>
<para>It is recommended to enable ObjectTrack data in sparsely crowded scenes to optimize the
data produced by the device.</para>
<para>The process by which a device aggregates object track data for a given object is out of
scope of this specification.</para>
<para>The Object Track Aggregation rule generates an object track aggregation event for the
scenarios listed below:</para>
<itemizedlist>
<listitem>
<para>Optionally, an initial aggregation after an object appears, or an intermediate
aggregation while the object is present in the field of view.</para>
</listitem>
<listitem>
<para>A final aggregation when the object is no longer visible in the field of view.</para>
</listitem>
</itemizedlist>
Collaborator Author:
We think it would be beneficial to add more context describing this feature.

Things to consider are:

  • Description of the general feature and how it relates to the scene description data described in 5 Scene Description.
  • Description of intended use.
  • Describe limitations
  • Give examples, some of interest are
    • Given a set of scene description frames, what is a possible ObjectTrackAggregation event?
    • Given a set of "unfiltered" events and a set of rule parameters, what is the expected output of "filtered" ObjectTrackAggregation events?

We are not sure if this is the correct place to add it given how other rules are described in Appendix A, but we suggest making room for it in the specification.

As an example we could minimally add something like here:

<para>
Object track aggregation can be used as an alternative to the scene description. <Insert description of what it is and how it can be used as an alternative to the scene description>. Benefits can include reduction of transferred data and spending fewer resources aggregating data in the client.
</para>

@dstafx, Feb 12, 2025:
If you want to push this further, can you please prepare the minimal example by resolving the <Insert description..> part?

As for the other parts, since you are not sure what the correct place is, I suggest not doing that now. If you have important details in the other parts, such as a limitation, they can be added to the minimal suggestion.

Collaborator Author @dfahlen, Feb 12, 2025:
In general I think this feature is unclear in its scope and intention. I have not been part of formulating this feature and I cannot guess the thought process or intentions of those who did. Therefore I think it is better if someone involved with designing this feature as it currently stands tries to add some more context with the above considerations. Don't you agree?

Owner:
I will propose adding the below text to the spec right at the start of the section:

This event may be used to describe an object in the scene, as an alternative or complement to the scene description generated per frame, by aggregating data collected per object for an object track. The process by which devices aggregate object track data for a given object is out of scope of this specification.

@dstafx, Feb 28, 2025:
I promised @dfahlen to comment on these in general (the comments shall not be added to the standard):

  • Description of the general feature and how it relates to the scene description data described in 5 Scene Description.
    Object track aggregation can be used as an alternative to the scene description. The object track contains an object appearance aggregated and consolidated over all frames that can be used in forensic search. An image of the object can be provided, to be used for subsequent automated analysis or manual operator verification on the client. Benefits can also include reduction of transferred data and spending fewer resources aggregating data in the client.
    Scene description datatypes are reused but describe general appearance features rather than per-frame appearance.

  • Description of intended use.
    The main use case is forensic search. See above.

  • Describe limitations.
    When consolidating a track, details are lost. Selected details can be offered by vendors in the object track attribute where needed.

Collaborator Author @dfahlen, Mar 5, 2025:
Could you describe in more detail what you mean by "object appearance aggregated and consolidated over all frames"? How does the data structure included in the ObjectTrack/Aggregation message support the representation of this?

Additionally, tt:Object includes fields for more information than just the visual features of an object, such as bounding box, geo position, and speed. Do those fields lack meaning in this context? If they have no clear interpretation, this should be mentioned in the standard to avoid confusion.

<para>Optionally, a device may include additional object track data, e.g. a snapshot image, as
configured in the ObjectTrackDataFilter parameter.</para>
<para>If an initial or intermediate aggregation is sent for an object, a final aggregation event
for that object shall be sent even if the final classification does not meet the class or
confidence threshold filters in the rule configuration.</para>
<para>The device should fill in the aggregated tt:Object structure with data qualified as
classifications, e.g. Vehicle or LicensePlate, and should not fill in temporal data such as
speed, as it is not meaningful in the aggregation context.</para>
<para>To reinforce the aggregation data, the device should use the ObjectTrack tt:Object
structures to provide interesting observations captured at various time instants within the
track duration, such as a cropped object image.</para>
<variablelist role="op">
<varlistentry>
<term>Parameters</term>
Collaborator Author:
Clarification of "A.1.2.1 Generic parameters" in https://www.onvif.org/specs/srv/analytics/ONVIF-Analytics-Service-Spec.pdf

It mentions some parameters for events defined in Appendix A. How should these be handled in the context of the Object Track Aggregation rule?

  • Is this a region-based detection rule? I.e., should the Field parameter be supported?
  • Should this rule support the Armed parameter?

Owner:
Ideally yes, region should be supported (not a MUST); otherwise we are opening up this event to include a lot more objects if visible in the scene, so it is an additional filter, like class, to minimize what we send to the client.

For the PoC, we can say we did not have time to implement it; let's see if the WG objects to that.

Same for Armed: some rules, even after creation, need an explicit 'enable/disable' to actually trigger events, and if our rule falls under such an implementation pattern we may need to signal that support in the options; for now we don't need to.

Collaborator Author:
If this is a region-based rule, this should be clarified, as it is not obvious. In other "region-based rules", e.g. "A.4 Loitering Detector", the Field parameter has been explicitly added to the parameter list.

If this is intended to support the Armed parameter, this needs to be clarified as well. Also, a general description of the workflow regarding armed/disarmed would be beneficial, as it is not obvious what a client is expected to do with regard to the Armed parameter.

Personally, I do not see a reason why the Armed parameter is needed, and I am not sure the region parameter is needed, but that is beside the point. The point is that, regardless of whether the region and Armed parameters should be included in the rule or not, it should at least be clear from the spec whether they are or not.

Do you agree?

As a note: neither "Field" nor "Armed" is currently supported in the PoC.

Owner @bsriramprasad, Feb 13, 2025:
Personally, I do not see a reason why the Armed parameter is needed, and I am not sure the region parameter is needed, but that is beside the point. The point is that, regardless of whether the region and Armed parameters should be included in the rule or not, it should at least be clear from the spec whether they are or not.

This can be implementation dependent.

If a vendor wants to provide the client with the additional option of configuring the field, they would respond accordingly in GetRuleOptions for the client to leverage that option in CreateRules.

If a vendor implements the rule without any field support (the field is missing in GetRuleOptions), the device may trigger more events as it is viewing more of the FOV.

As for some rules having Field explicitly: many of those specs were written years ago, while the "Generic parameters" section with Field/Armed is a somewhat newer update, added to avoid adding Field/Armed to every new rule that gets included in the spec in the future; hence it also says 'rule should contain a parameter 'Field' ElementItem.' (Note the should and not SHALL, to keep it open for vendor implementations/flavors.)

<listitem>
Collaborator Author:
General comment:

It is not clear from reading the document what the meaning of mandatory/optional parameters in this context should be.

What is the interpretation of mandatory parameters?

mandatory to support by a device but optional for a user to set

or

mandatory to support by a device and mandatory for a user to set

Similarly, what is the interpretation of optional parameters?

Mandatory for a device to support and optional for a user to set.

or

Optional to implement and optional for a user to set.

Given the text in 6.2.3.3 CreateRules

"The device shall accept adding of analytics rules with an empty Parameter definition"

The interpretation should probably be,

Mandatory - Mandatory for a device to support and optional for a user to set.
Optional - Optional to implement and optional for a user to set.

It may still be confusing when parameters that are optional for a device to support specify default behavior. Does a device not supporting a specific parameter need to consider the default behavior or not? If not, it may be confusing that a device may function differently depending on whether an optional parameter is supported by the device or not.

This is exemplified by the similar comment regarding the parameter ReportTimeInterval.

It would be great to clarify this!

Owner @bsriramprasad, Feb 11, 2025:
Yes, the interpretation should be:

  • Mandatory - Mandatory for a device to support and optional for a user to set.
  • Optional - Optional to implement and optional for a user to set.

This observation is true for the rule parameters + the payload description in the spec.

Collaborator Author @dfahlen, Feb 12, 2025:
Do you agree this may be confusing? Do you think this can be clarified in the spec?

Owner:
As a general guideline, the ONVIF spec is always written from the device perspective - unless explicitly stated otherwise.

  • So if a parameter (in the rule configuration) is mentioned as optional, it is up to the device to support that option, and hence it is optional for the client to configure (subject to availability from the device).

    • Ref: 6.1.3.3 Occurrence
      • Configuration parameters may signal their allowed minimal occurrence using the minOccurs attribute. Setting the value to zero signals that the parameter is optional.
  • So if a field in the event payload is mentioned as optional, it is up to the device to include/exclude it from the event payload.
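
For concreteness, a CreateRules request configuring this rule could look roughly as follows. This is an illustrative sketch only: the configuration token and rule name are hypothetical, and per 6.2.3.3 a rule with an empty <tt:Parameters/> element would also have to be accepted, in which case the device applies its defaults.

<tan:CreateRules>
  <tan:ConfigurationToken>VideoAnalyticsConfig_1</tan:ConfigurationToken>
  <tan:Rule Name="MyObjectTrackAggregation" Type="tt:ObjectTrackAggregation">
    <tt:Parameters>
      <!-- Only the parameters the client cares about are set; omitted
           optional parameters fall back to the device defaults. -->
      <tt:SimpleItem Name="ClassFilter" Value="Person Vehicle"/>
      <tt:SimpleItem Name="ConfidenceLevel" Value="50"/>
    </tt:Parameters>
  </tan:Rule>
</tan:CreateRules>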

<para role="param">ClassFilter [tt:StringList]</para>
<para role="text">List of classes to be aggregated.</para>
<para role="param">ConfidenceLevel - optional [xs:float]</para>
<para role="text">Minimum confidence level of object classification for generating an
aggregation event.</para>
Collaborator Author:
General comment on how to handle multiple parameters specifying a filter for a set of classification candidates.

How do the class filter and confidence level work in combination with multiple classification candidates?

Are they independent, or do they work in conjunction with each other?

Independent interpretation (feels wrong):

  • Is any of the candidates of a class listed in the class filter?
  • Is any of the candidates' confidence levels above the configured confidence level?

Combined interpretation (feels more correct):

  • Is there any candidate with a class specified in the class filter and a likelihood above the specified confidence level?

Example, "Independent interpretation":

Data:
{Human: 0.9, Animal: 0.7}

Filter:
class filter = [Animal]
confidence = 0.8

Output:
The message {Human: 0.9, Animal: 0.7} passes the filter:

  • one candidate is Animal, and
  • one candidate (Human: 0.9) is above 0.8.

Example, "Combined interpretation":

Data:
{Human: 0.9, Animal: 0.7}

Filter:
class filter = [Animal]
confidence = 0.8

Output:
The message does not pass the filter:

No candidate is both Animal and above 0.8 in confidence.
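
For reference, the candidate set in the examples above could be carried in the object's class descriptor roughly like this (illustrative only; this assumes the tt:Type/Likelihood form of tt:ClassDescriptor from the metadata schema):

<tt:Appearance>
  <tt:Class>
    <tt:Type Likelihood="0.9">Human</tt:Type>
    <tt:Type Likelihood="0.7">Animal</tt:Type>
  </tt:Class>
</tt:Appearance>

Under the combined interpretation, this object would not produce an aggregation event with class filter = [Animal] and confidence = 0.8, since no single candidate satisfies both conditions.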

Collaborator Author:
General comment:

In a class filter, is it possible to specify unclassified objects?

Example:

A consumer is interested in all ObjectTrackAggregation events with Unclassified objects and Humans but not Vehicles.

How to specify this class filter?

Collaborator Author:
If a class filter or confidence filter is used, it is possible to get an update event but no final event.

This makes it impossible for the user to distinguish between never receiving a final event and the object still being in the scene.

Example:
Confidence Level = 0.5
Update event has {Human: 0.7} and is sent to the user.
Final event has {Human: 0.4} and is not sent to the user.

When does the user know that the track is over?

Owner:
We need to add a clarification that says that, irrespective of the filter, the final aggregation has to be sent when the track ends; we can bring this up before 25.06.

Collaborator Author @dfahlen, Feb 12, 2025:
Ok, great, let's add this comment to the public PR.

Owner:
If an initial or intermediate aggregation is sent for an object, a final aggregation event for that object shall be sent even if the final classification does not meet the class or confidence threshold filters in the rule configuration.

Update pushed into the PR.

Collaborator Author @dfahlen, Mar 19, 2025:
We need to update the PoC to support this in order to follow the updated spec.

<para role="param">ReportTimeInterval - optional [xs:duration]</para>
<para role="text">Optional time interval to control the update frequency. If omitted, the
final aggregation shall be provided and intermediate aggregations may optionally be
provided.</para>
<para role="param">ObjectTrackDataFilter - optional [tt:StringList]</para>
<para role="text">Optional list of object tree elements, expressed as XPath filters,
e.g. "Object/Appearance/Image", to be included in the object track data. If omitted,
the ObjectTrack data shall not be provided.</para>
</listitem>
</varlistentry>
<varlistentry>
<term>Topic</term>
<listitem>
<para role="param">tns1:RuleEngine/ObjectTrack/Aggregation</para>
</listitem>
</varlistentry>
<varlistentry>
<term>Source</term>
<listitem>
<para role="text">See <xref linkend="_msgSource"/></para>
</listitem>
</varlistentry>
<varlistentry>
<term>Data</term>
<listitem>
<para role="param">AggregationStart - mandatory [xs:dateTime] </para>
<para role="text">All aggregation events shall set the object track start time as the
AggregationStart time.</para>
<para role="param">Object - mandatory [tt:Object]</para>
<para role="text">Track aggregation data of an object.</para>
Collaborator Author:
How is this element intended to be interpreted and used?

tt:Object as described in "5.3.1 Objects" is intended to be used to describe the state of an object at a single point in time.

In this context the element description suggests that tt:Object should be used to describe an "aggregation" of data.
It is not clear from the description of the element or the surrounding context what type of "aggregation" is to be expected. Furthermore, since tt:Object was originally designed to describe the state of an object at a single point in time, most of the elements in the data structure have no natural interpretation when representing some sort of "aggregate".

From this I am not convinced tt:Object is the correct data structure to use for the purpose of "aggregation".

Regardless of what data structure is ultimately used, I think it needs to be further described what type of "aggregation" is intended here, as the word "aggregation" has a very broad interpretation and in isolation it does not describe the meaning or interpretation of some piece of data.

Owner @bsriramprasad, Feb 11, 2025:
It is not clear from the description of the element nor the surrounding context what type of "aggregation" is to be expected.

  • We tried to get into the details of what aggregation is and how it is done, and it did not fly; hence it is purposely left open to individual device capabilities how thorough/compact/precise the aggregation is.
    • Axis, when it implements this on its products, will have to explain what it means.

I am open to a recommendation/proposal text from you/your team along the lines of "It is recommended to enable ObjectTrack data in sparsely crowded scenes to optimize data produced from the device." to explain what could be part of the aggregation; we don't want to go back to the WG for details that they care less about.

Here is my proposal:

  • It is recommended that the methods by which devices construct and process object track aggregation data for a given object in a scene be left to the discretion of individual implementations.

Comment:
I also see it as a benefit not to lock down the implementation details, in any large system, not just ONVIF, since it allows us to change it using more advanced technology in the future. As long as the scope of the consolidation is known (time interval + object id + source/producer), it's sufficiently defined.

Owner:
I will collect all clarifications into a separate PR.

Collaborator Author:
I am not talking about implementation details. I am talking about the interface of this feature; that is not a detail, nor is it technology dependent.

The tt:Object data structure is part of the interface. As that is the case, it needs a clear interpretation and meaning. As I pointed out above, I do not think it does, as the original intention of the data structure is to represent the state of an object at a single point in time.

If tt:Object has no clear meaning that can be described in the standard, I think it is better to leave it out and instead make clear that this kind of data needs to be delivered as vendor extensions.

If we really want to standardize a data structure it needs a standard interpretation, otherwise there is no point in standardizing it.

I am asking for one of the following:

  • Some effort to be put into describing what the interpretation of tt:Object should be in this context.
  • If it is not possible to describe what the interpretation of tt:Object should be in this context, change the data structure into something that it is possible to describe the interpretation of.
  • Remove tt:Object and make it clear that data of that kind is outside the scope of the standard and something that needs to be added as vendor extensions.

Owner:
Numerous discussions in the WG did not yield much progress, and tt:Object is the only option available for now. Though we all know it's not perfect, without it the whole event/feature has absolutely no base at all.

I don't see ONVIF inventing anything complementing tt:Object in the near future (next 3-4 years).

In that context, I would say let's add more text about the "interpretation of tt:Object in this context" in order to render what we did in the last year a little more meaningful.

Owner:
Proposed text:

This event may be used to describe an object in the scene, as an alternative or complement to the scene description generated per frame, by aggregating data collected per object for an object track. The process by which devices aggregate object track data for a given object is out of scope of this specification.

The Object Track Aggregation rule generates an object track aggregation event for the scenarios listed below:

  • Optionally, an initial aggregation after an object appears, or an intermediate aggregation while the object is present in the field of view.
  • A final aggregation when the object is no longer visible in the field of view.

Optionally, a device may include additional object track data, e.g. a snapshot image, as configured in the ObjectTrackDataFilter parameter.

It is recommended to enable ObjectTrack data in sparsely crowded scenes to optimize the data produced by the device.

Collaborator Author @dfahlen, Feb 28, 2025:
Ok, this is a start, but I still think we need additional clarification.

The main issue is not that it is up to the vendor how "aggregation" is done; the specification does not need to describe that. This is the same for other features. E.g. the standard describes how to represent the state of a tracked object in a video frame with tt:Frame and tt:Object and how that data should be interpreted, but it says nothing about how the tracker should be implemented.

So, what is needed here is a description of how tt:Object should be interpreted in the context of "aggregation", not the process by which the aggregation is done by a vendor.

Additionally, the above suggested text mentions "object track". This is the first time in the standard that that term is used. I think it needs to be defined what "object track" means in the context of the standard.

Collaborator Author @dfahlen, Mar 5, 2025:
A benchmark for a "good" description is that a human should be able to read the description of how the data should be represented and interpreted, and then be able to annotate a video using that representation.

This way we can ensure the representation is meaningful and possible to explain. It is also possible to develop benchmarks that measure how well a machine can perform the task by comparing the machine output to a human annotation.

<para role="param">ObjectTrack - optional [tt:ObjectTrack]</para>
<para role="text">Optional additional description of an object to provide enriched context.</para>
<para role="param">Final - optional [xs:boolean]</para>
<para role="text">Signals TRUE for the final aggregation event for the detected
object.</para>
</listitem>
</varlistentry>
</variablelist>
<para>See the example below for an Object Track Aggregation rule definition:</para>
<programlisting><![CDATA[<tt:RuleDescription Name="tt:ObjectTrackAggregation">
<tt:Parameters>
<tt:SimpleItemDescription Name="ClassFilter" Type="tt:StringList"/>
<tt:SimpleItemDescription Name="ConfidenceLevel" Type="xs:float"/>
<tt:SimpleItemDescription Name="ReportTimeInterval" Type="xs:duration"/>
<tt:SimpleItemDescription Name="ObjectTrackDataFilter" Type="tt:StringList"/>
</tt:Parameters>
<tt:Messages>
<tt:Source>
...
</tt:Source>
<tt:Data>
<tt:SimpleItemDescription Name="AggregationStart" Type="xs:dateTime"/>
<tt:SimpleItemDescription Name="Final" Type="xs:boolean"/>
<tt:ElementItemDescription Name="Object" Type="tt:Object"/>
<tt:ElementItemDescription Name="ObjectTrack" Type="tt:ObjectTrack"/>
</tt:Data>
<tt:ParentTopic>tns1:RuleEngine/ObjectTrack/Aggregation</tt:ParentTopic>
</tt:Messages>
</tt:RuleDescription>
]]></programlisting>
<para>The object types may differ between devices. See below for example rule options, based on the description language parameter options defined in <xref linkend="_Toc529533198" />.</para>
<programlisting><![CDATA[<tan:GetRuleOptionsResponse>
<tan:RuleOptions Name="ClassFilter" RuleType="tt:ObjectTrackAggregation">
<tt:StringList>Person Vehicle Face LicensePlate</tt:StringList>
</tan:RuleOptions>
<tan:RuleOptions Name="ConfidenceLevel" RuleType="tt:ObjectTrackAggregation">
<tt:FloatRange>
<tt:Min>0.0</tt:Min>
<tt:Max>100.0</tt:Max>
</tt:FloatRange>
</tan:RuleOptions>
<tan:RuleOptions Name="ReportTimeInterval" RuleType="tt:ObjectTrackAggregation">
<tt:DurationRange>
<tt:Min>PT1S</tt:Min>
<tt:Max>PT1M</tt:Max>
</tt:DurationRange>
</tan:RuleOptions>
<tan:RuleOptions Name="ObjectTrackDataFilter" RuleType="tt:ObjectTrackAggregation">
<tt:StringList>Object/Appearance/Image Object/Appearance/Shape/BoundingBox</tt:StringList>
</tan:RuleOptions>
</tan:GetRuleOptionsResponse>
]]></programlisting>
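<para>For illustration only, a possible final aggregation notification produced by such a rule is
sketched below. The sketch is non-normative: the actual payload depends on device capabilities
and the configured ObjectTrackDataFilter, the classification is shown using the tt:Type form of
tt:ClassDescriptor, and all values are examples.</para>
<programlisting><![CDATA[<tt:Message UtcTime="2025-04-08T10:15:30Z">
  <tt:Source>
    ...
  </tt:Source>
  <tt:Data>
    <tt:SimpleItem Name="AggregationStart" Value="2025-04-08T10:14:00Z"/>
    <tt:SimpleItem Name="Final" Value="true"/>
    <tt:ElementItem Name="Object">
      <tt:Object ObjectId="12">
        <tt:Appearance>
          <tt:Class>
            <tt:Type Likelihood="0.92">Vehicle</tt:Type>
          </tt:Class>
        </tt:Appearance>
      </tt:Object>
    </tt:ElementItem>
    <tt:ElementItem Name="ObjectTrack">
      <tt:ObjectTrack>
        <tt:ObjectState ObjectId="12" CaptureTime="2025-04-08T10:14:05Z">
          <tt:Appearance>
            <tt:Image>/9j/4AAQSkZJRg...</tt:Image>
          </tt:Appearance>
        </tt:ObjectState>
      </tt:ObjectTrack>
    </tt:ElementItem>
  </tt:Data>
</tt:Message>
]]></programlisting>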
</section>
</appendix>
22 changes: 22 additions & 0 deletions wsdl/ver10/schema/metadatastream.xsd
@@ -411,6 +411,28 @@ IN NO EVENT WILL THE CORPORATION OR ITS MEMBERS OR THEIR AFFILIATES BE LIABLE FO
</xs:attribute>
<xs:anyAttribute processContents="lax"/>
</xs:complexType>

<xs:complexType name="ObjectTrack">
<xs:annotation>
<xs:documentation>An Object track includes a sequence of object states representing how an object's state changes over time.</xs:documentation>
</xs:annotation>
<xs:sequence>
<xs:element name="ObjectState" type="tt:ObjectState" minOccurs="1" maxOccurs="unbounded"/>
</xs:sequence>
<xs:anyAttribute processContents="lax"/>
</xs:complexType>

<xs:complexType name="ObjectState">
<xs:annotation>
<xs:documentation>An object state describes an object's condition, e.g. position, speed and appearance, at a certain time instant.</xs:documentation>
</xs:annotation>
<xs:complexContent>
<xs:extension base="tt:Object">
<xs:attribute name="CaptureTime" type="xs:dateTime" use="required"/>
<xs:anyAttribute processContents="lax"/>
</xs:extension>
</xs:complexContent>
</xs:complexType>
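<!-- Non-normative illustration of an ObjectTrack instance carrying two sampled
     states of the same object; element and attribute values are examples only:
     <tt:ObjectTrack>
       <tt:ObjectState ObjectId="12" CaptureTime="2025-04-08T10:14:05Z">
         <tt:Appearance>
           <tt:Shape>
             <tt:BoundingBox left="0.20" top="0.80" right="0.45" bottom="0.30"/>
             <tt:CenterOfGravity x="0.32" y="0.55"/>
           </tt:Shape>
         </tt:Appearance>
       </tt:ObjectState>
       <tt:ObjectState ObjectId="12" CaptureTime="2025-04-08T10:14:35Z">
         <tt:Appearance>
           <tt:Shape>
             <tt:BoundingBox left="0.50" top="0.78" right="0.74" bottom="0.28"/>
             <tt:CenterOfGravity x="0.62" y="0.53"/>
           </tt:Shape>
         </tt:Appearance>
       </tt:ObjectState>
     </tt:ObjectTrack>
-->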
<!--===============================-->
<!-- Metadata Streaming Types -->
<!--===============================-->