世聯(lián)翻譯公司 Completes the English Translation of a Software Experiment Introduction
發(fā)布時(shí)間:2020-06-23 11:07 點(diǎn)擊:
…to analyze, for example, the time from the start of tracking to the time the animal enters a particular zone.

This chapter describes the Trial Control functions of the EthoVision XT Base version only. For a detailed overview of conditions, creating sub-rules and controlling hardware devices, see the EthoVision XT Trial and Hardware Control Manual, which you can find on your installation DVD.

7.1 Introduction to Trial Control

Why use Trial Control?
Trial Control allows you to automate your experiment. For example:
- You want to set a maximum duration for your trials. See page 182.
- You want to automate the start and/or stop of data acquisition. A few examples:
  - Start recording when the rat is first detected in the open field.
  - Stop recording when the rat has reached the platform in the Morris water maze.
  - Start recording at exactly 12:30:00.
  - Stop recording after the animal has been in the closed arms of the plus maze for 5 minutes. See page 185.

To use Trial Control:
1. Open the Trial Control screen (see page 163).
2. Define the conditions that, when met during your trial, trigger specific actions. Organize conditions and actions in a sequence (see page 171).
3. Before starting data acquisition, make sure that those Trial Control Settings are active. See also page 663 for instructions on how to manage Trial Control Settings.

Your EthoVision XT license and Trial Control
Your EthoVision XT license determines which type of Trial Control you can use.
- EthoVision XT Base license – You can define a rule to start and stop data recording (the Start-Stop trial rule; see page 185). You cannot control hardware devices.
- EthoVision XT Base + Trial and Hardware Control Module – You can define a Start-Stop trial rule and, in addition, sub-rules. Moreover, you can control hardware devices. To acquire data in an experiment made with the Trial and Hardware Control Module, you must have a hardware key enabled for Trial and Hardware Control plugged into your computer.
The EthoVision Trial and Hardware Control Manual, which you can find on your installation DVD, includes extensive information on the functions available with the Trial and Hardware Control Module.

Conditions and actions
A Condition is a statement that EthoVision evaluates. An Action is a command executed on a variable or a hardware device. You can therefore control your experiment by linking conditions with actions.
Example – In a Morris water maze test, stop tracking when the rat is detected on the platform (provided that the platform has been defined as a zone). The action is Stop tracking and the condition is Rat detected on the platform.
You define and link conditions with actions in graphical form. The example above can be represented as in Figure 7.1.

The Start-Stop trial rule
Conditions and actions are organized in a logical sequence called the Start-Stop trial rule. This can be viewed as a set of instructions executed for starting and stopping data recording. For more information on the Start-Stop trial rule, see page 185.
The Trial Control function also allows you to analyze events that occurred during the trial, or the time between two specific events; for example, the time from condition A becoming active to action X being taken. For the detailed procedure, see page 193.

Figure 7.1 A condition is followed by an action. The condition checks that the animal is in the zone named "Platform". The action "Stop track" is taken when the condition is met.
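In programming terms, the pairing in Figure 7.1 amounts to a check followed by a command. The sketch below shows the same idea in Python (illustrative only; the zone name "Platform" comes from the example above, while the sample data and function names are invented, since EthoVision is configured graphically rather than in code):

    # A condition followed by an action, evaluated at each sample time.
    samples = ["Open water", "Open water", "Platform"]  # zone of the subject per sample

    def rat_on_platform(zone: str) -> bool:  # the Condition
        return zone == "Platform"

    def stop_track() -> None:                # the Action
        print("Stop track")

    for zone in samples:
        if rat_on_platform(zone):
            stop_track()
            break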
With the Trial and Hardware Control add-on, you can also define subroutines called Sub-rules. Sub-rules are meant to carry out specific actions; they can start at specific times and be repeated according to user-specified conditions. For more information, see the EthoVision XT Trial and Hardware Control Manual on the EthoVision XT installation DVD.

How Trial Control instructions are executed
The instructions contained in the Trial Control Settings are carried out from the moment you start a trial to the moment the trial is stopped. Only the instructions in the currently active Trial Control Settings (that is, the profile highlighted in blue in the Experiment Explorer) are carried out.
The program evaluates the Trial Control sequence at each sample time. The rate at which this happens depends on your chosen sample rate, not on the video frame rate.
The program remembers which Trial Control box was evaluated (active) in the previous sample. Depending on the type of this box:
- For a Condition box – EthoVision XT checks whether the condition is met. If it is not, the condition is false and the program waits until the condition is met. When this happens (the condition becomes true; see 3 in Figure 7.2), the program passes control to the next box in the sequence. The condition then becomes inactive (see 4 in Figure 7.2).
- For an Action box – EthoVision XT carries out the action (see 4 in Figure 7.2) and passes control to the next box, which becomes active. The Action box then becomes inactive (see 5 in Figure 7.2).
- For Sub-rules and their References, see the EthoVision XT Trial and Hardware Control Manual.
Note the following:
- When a box becomes active, the previous one becomes inactive.
- Boxes combined in parallel using operators (see page 178) are evaluated at the same time, in unspecified order. This means that you cannot establish which condition is evaluated, or which action is taken, first.
- Actions on Trial Control variables are executed immediately. Actions on hardware devices are executed when all boxes that must be evaluated at that sample time have been evaluated.
- If a box being evaluated contains a condition that is immediately true, the program passes control to the next box. Therefore, within one sample time the program can pass control to two or more boxes to the right.
- When you stop the trial or the Maximum trial duration has been reached, all Trial Control boxes are deactivated.
- When the Rule End box of the Start-Stop trial rule is evaluated, data recording stops. From that moment, Trial Control is deactivated, even in sub-rules that were still ongoing.

Figure 7.2 Schematic representation of how Trial Control instructions are executed. The scheme shows an example of a Start-Stop trial rule (see page 185). 1 - Tracking starts, either manually or because a previous condition has been met. 2 - Control passes to a Condition box (for example, "Is mouse on top of Shelter?"), which becomes active. The condition is evaluated; since it is not met immediately, it is false. 3 - The condition is met. 4 - Control passes to the next box. In this case it is an Action; actions are taken immediately. 5 - The Action box becomes inactive, and the next box becomes active. For clarity, steps 3 and 4 have been drawn separately. In reality, when a condition is met it becomes inactive at the same time, and control passes to the next box. Hatched outlines - Condition box becomes active. Dark outlines - Condition becomes true or Action is taken. Pale outlines - Box becomes inactive.
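The cycle in Figure 7.2 can be paraphrased in code. The sketch below (invented names; not EthoVision's actual implementation) keeps one 'active' box per rule and shows how, within a single sample time, control can pass several boxes to the right when conditions are immediately true:

    from dataclasses import dataclass
    from typing import Callable, List

    @dataclass
    class Box:
        kind: str                 # "condition" or "action"
        run: Callable[[], bool]   # condition test, or action (return value ignored)

    def evaluate(rule: List[Box], n_samples: int) -> None:
        active = 0                          # index of the currently active box
        for _ in range(n_samples):          # the rule is evaluated at each sample time
            while active < len(rule):
                box = rule[active]
                if box.kind == "condition":
                    if not box.run():       # condition not met: stay active and wait
                        break
                else:
                    box.run()               # actions are carried out immediately
                active += 1                 # this box becomes inactive; control passes on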
Trial Control in multiple arenas
If your experimental setup includes two or more arenas, Trial Control is applied to each arena separately. This means that if a condition is met in one arena, EthoVision XT takes the corresponding action in that arena, not in the others.
In the following example, a setup includes four cages, each defined as an arena. A Trial Control In zone condition (see page 173) has been defined so that tracking starts when the animal is first detected in the arena. When you first put an animal in Arena 2, the condition is met in this arena and tracking starts for that arena. When you release the second animal in Arena 4, 2 seconds later, tracking in that arena starts 2 seconds later than in Arena 2 (see Figure 7.3).
The advantage of Trial Control in multiple arenas is that you can put one animal at a time into the arenas, and EthoVision XT will start tracking in each arena at the appropriate moment.
If your setup includes multiple arenas, you cannot define a condition or action specific to one arena. This means that the zone on which a condition is based must be present in all arenas and have the same name.
- If a zone is not present in an arena, and a condition is based on that zone, Trial Control cannot progress for that arena. Therefore, tracking does not stop unless you set a Maximum trial duration or tracking reaches the end of the video.
- At any sample time, Trial Control carries out the instructions for each arena. However, you cannot establish in which order the arenas are evaluated at a specific sample time.

Figure 7.3 Trial Control in multiple arenas. The time values displayed on the monitor are the times elapsed since the start of tracking in a particular arena. Tracking started earlier in Arena 2 than in Arena 4 (see text); therefore, at any time the Elapsed time (duration of tracking) is longer in Arena 2 than in Arena 4.

7.2 The Trial Control screen
To access the Trial Control screen, click Trial Control Settings 1 in the Experiment Explorer, or from the Setup menu, select Trial Control Settings, then click Open, select Trial Control Settings 1 and click OK. The Trial Control screen appears, showing the default Trial Control settings. You can also create a new Trial Control Settings profile, or open one other than Trial Control Settings 1 (see page 663).
The screen contains the following components (see Figure 7.4):
- The Components pane, listing the conditions on which you can base your actions and the operators with which you can combine conditions. See below.
- The Trial Control Settings window, showing the Trial Control Settings that are active. It contains a sequence of boxes connected by arrows. See page 166.
- The Maximum trial duration pane, which enables you to define a maximum duration of the trial. See page 171.

Figure 7.4 The Trial Control Settings screen. A - Components pane. B - Maximum Trial duration pane. C - Trial Control Settings window.

You can show or hide the Components pane and the Maximum trial duration pane by clicking the Show/Hide button on the component tool bar and selecting/deselecting the corresponding option in the menu.

The Components pane
With the Components pane (see Figure 7.5) you choose the blocks that build up your trial control rules.
Not all the components listed below may be available on your screen, depending on which EthoVision XT license you have on your computer (see page 158). If you do not see the Components pane, click the Show/Hide button on the components tool bar and select Components.

Figure 7.5 The Components pane for Trial Control.

- Structures:
  - Sub-rule – To define a subroutine that can be called from a specific point in the Trial Control sequence.
  - Reference – To insert a call to a sub-rule within a sequence of instructions.
  - Operator – To combine two or more conditions in such a way that an action is taken when All, Any or "N of All" conditions are met. See page 178.
- Conditions (see page 172):
  - Time – To define a condition based on time.
  - Time interval – To define a condition based on a time interval.
  - Trial Control variable – To define a condition based on a Trial Control variable.
  - Dependent variables – To define a condition based on a variable that describes the animal's behavior, for example velocity, presence in a zone, or movement. Under Dependent variables, you can view the list of variables available.
  - Hardware – To define a condition based on the state of a hardware device (only with the Trial and Hardware Control add-on).
- Actions:
  - Trial Control variable – To define an action on a Trial Control variable. See page 175.
  - Hardware – To define an action on a hardware device (only with the Trial and Hardware Control add-on).
  - External command – To control external applications. With an External command action you can, for example, start an external application or run a batch file.

How to use the Components pane
To define a sub-rule, condition, action or operator, do one of the following:
- Double-click its name.
- Click the button next to it.
- Drag the name from the Components pane to the Trial Control window.
A new Trial Control box appears in the top-left corner of the Trial Control window. Insert the new box in the sequence of boxes (see page 169). For the complete procedure for programming Trial Control, see page 171.
For more information on sub-rules, references to sub-rules and hardware devices, see the EthoVision XT Trial and Hardware Control Manual that you can find on your installation DVD.

The Trial Control Settings window
The Trial Control Settings window contains the sequences of instructions (rules) currently present in the Trial Control Settings. When you create a new Trial Control Settings profile, the Trial Control window contains the default Start-Stop trial rule (see page 185). You can then define your own conditions in the Start-Stop trial rule that determine the start and stop of data recording.
For more information:
- About programming Trial Control – see page 171.
- About the Start-Stop trial rule – see page 185.

Grid
The trial control boxes automatically snap to a grid. You can change this by clicking the Show/Hide button on the component tool bar and selecting/deselecting the two Grid options (Snap to Grid and Show Grid).

Zoom
The component tool bar of the Trial Control Settings shows three zoom icons:
- Zoom in – You can keep zooming in until the trial control boxes have reached their original full size.
- Zoom out – You can keep zooming out until all trial control boxes fit in the window.
- Fit all – Clicking this button fits all trial control boxes into the window.

Figure 7.6 The Trial Control window, with the default Start-Stop trial rule.

Working with trial control boxes
A trial control box shows the following information:
- A - Type of control (Rule Begin/End, Action, Condition, Operator, Reference). You cannot change this text.
- B - Name – Text describing the control. To change this text, click the Settings button and enter the text under Name, for example Drop one food item. You can also add a longer description under Comment (this is not shown). Names of Trial Control boxes must be unique, unless you make a copy of an existing box (see page 180).
- C - Properties – Depending on the type of control, this shows the option chosen, the formula or the command to be given, or the sub-rule that a reference refers to.
The Trial Control window is 'dynamic': it expands when you move trial control boxes to the right. You can then navigate from left to right in the window by using the scroll bar at the bottom. Use the Zoom to fit button in the component tool bar to make all trial control boxes visible.

Figure 7.7 An example of a Trial Control box.

Colors
Trial control boxes have different colors:
- Blue – for the Start-Stop trial rule, sub-rules and sub-rule references.
- Olive green – for conditions.
- Light green – for actions.
- Grey – for operators.

Moving a box
1. Hover the mouse over the margin or the colored area of the box. The mouse cursor changes to a four-headed arrow.
2. Drag the box to the position you require.

Moving a group of boxes
1. Draw a box around the boxes you want to move, or click the boxes you want to select while holding the Ctrl key. The selected boxes get a dark grey border.
2. Hover the mouse over the margin or the colored area of one of the selected boxes. The mouse cursor changes to a four-headed arrow.
3. Drag the group of boxes to the position you require.

Inserting a box in a sequence
1. Drag the Trial Control box between two boxes until the connecting arrow turns white.
2. Release the mouse button. The new box is inserted.

Connecting two boxes
1. Point the mouse at the center of the first box, press and hold the left mouse button and drag toward the center of the other box.
2. Release the mouse button when the pointer has reached the center of the other box. The two boxes are connected.
Note the following restrictions:
- You cannot create connections from the Rule End box to any other box, nor from any box to the Rule Begin box.
- Operator boxes can have one, two or more input arrows; all other boxes have no more than one input arrow.
- All boxes can have one or more output arrows, pointing to different boxes.
- You cannot create a circular sequence of Trial Control boxes.

Modifying the settings in a box
Follow the instructions below when you have inserted a Trial Control box and want to modify its properties.
1. Locate the Trial Control box that specifies the condition or operator you want to change. You can find the name of the condition/operator in the upper green/grey area of the box.
2. Click the Settings button in the lower part of the box.
3. Make the appropriate settings in the window that appears (see the corresponding section above for defining conditions and operators).

Deleting a box
1. Click the title of the box. The box border is highlighted.
2. Press Delete.
Deleting a group of boxes
1. Draw a box around the boxes you want to delete, or click the boxes you want to select while holding the Ctrl key.
2. Press Delete.
You cannot delete the Rule Begin box, the Rule End box, the Start track box or the Stop track box.

Deleting a connecting arrow
1. Click the arrow you want to delete. The arrow turns bold to show it is selected.
2. Press Delete.
You cannot delete the arrow connecting the Stop track box and the Rule End box.

Exporting Trial Control Settings
You can export an image of the Trial Control Settings:
1. Click the Export image button in the component tool bar.
2. Select a location to save the image to, type in the File name or accept the default one, and select an image type from the Save as type list.
3. Click Save.
The complete Trial Control window is exported, irrespective of the zoom factor.

Maximum trial duration pane
In the Maximum Trial Duration pane you define the maximum duration of the trials. For further information, see page 182. If you do not see this pane, click the Show/Hide button on the component tool bar and select Maximum Trial Duration. If the text in this pane is greyed out, the Trial Control Settings are read-only.

Figure 7.8 The Maximum Trial Duration pane.

If you just want to record data for a specific time, you can do so by setting the Maximum trial duration (page 27).

7.3 Programming Trial Control

Procedure
1. Before defining Trial Control in the program, it is helpful to draw your experimental procedure as a flow diagram, where each block represents an action or a condition which, when met, triggers other actions or conditions.
2. From the Setup menu, select Trial Control Settings, select New, enter a name for the new Trial Control Settings or accept the suggested one, and click OK. The default Start-Stop trial rule appears on the screen.
3. Build the Trial Control sequence outlined in step 1, using the components available:
   - To define a Condition, click one of the buttons under Conditions. See page 172.
   - To define an Action, click the button under Actions. See page 174.
   Insert each box in the appropriate place in the sequence.
4. Test the Trial Control sequence. See page 183.
5. Apply Trial Control to your trials. See page 183.
Notes:
- When you create a new action or condition, and another of the same type has already been defined in this or another Trial Control Settings profile, a message asks whether you want to create a new element or make a copy of the existing element. For more information, see page 180.
- You can also combine multiple conditions. To combine multiple boxes, see page 178.

Using conditions
A Condition is a statement that EthoVision checks during the trial. When the condition is met (true), the program evaluates the next Trial Control element (another condition, an action or a reference to a sub-rule).
Examples of conditions (in italics):
- When the rat reaches the platform, stop tracking.
- When the mouse is detected in the open field, start tracking.
- When the animal has visited zone A ten times, stop tracking.

How to define a condition
1. In the Components pane under Conditions, locate the type of condition you want to define.
2. Double-click the condition name or click the button next to it.
3. If the Add a condition window appears, there is at least one condition of the same type in your experiment, and you are asked to choose between creating a new condition and re-using an existing one (see page 180). Choose the option you require and click OK. If this window does not appear, skip this step.
4. Next to Condition name, type in the name you want to give to the condition, or accept the default name.
5. Specify the condition properties.
6. Enter a Comment (optional), then click OK.
7. Insert the condition box in the sequence.
Notes:
- If the condition is complex (for example, "stop the trial either if the rat has reached the platform or if it has been swimming for 60 seconds"), you must define separate conditions and combine them (see page 178). See also the examples on page 189.
- For a detailed overview of conditions, see "Overview of conditions" in the EthoVision XT Trial and Hardware Control Manual on your installation DVD.

Types of condition
- Time – Helps you define a time interval that must elapse before an action is taken. Example – Start tracking after a delay of 2 seconds, or start tracking at 12h00.
- Time interval – This condition makes sense when it is combined with another condition. Example – Stop tracking when the animal is found in Zone A (In zone condition) between 5 and 10 minutes (Time interval condition).
- Trial Control variable – Helps you compare a Trial Control variable with a value, another variable or a formula at the time the condition becomes active (for the meaning of becomes active, see page 160). Example – Stop tracking when the variable Counter has reached 10.
- Dependent variables – To define a condition based on the behavior of the subject; choose one of the dependent variables to create the condition. Example 1 – Stop tracking when the subject has visited the Target zone 10 times (In zone condition). Example 2 – Stop tracking when the subject has been walking for more than 5 minutes (Movement condition). Note – You cannot create a Trial Control condition based on one of the behaviors detected with the Automatic Behavior Recognition function.
- Hardware – To define a condition based on the signal given by a hardware device. To use hardware devices with EthoVision, you must have the Trial and Hardware Control add-on. See the EthoVision XT Trial and Hardware Control Manual on your installation DVD.

Using actions
An Action is a command that EthoVision carries out during acquisition and that influences the trial.
Examples of actions (in italics):
- When the animal is detected in the arena, start tracking. This is an example of a system action (start tracking and stop tracking).
- When the animal enters the maze's left arm, do C = C + 1. This is an example of an action taken on a Trial Control variable. See page 175.
- When the animal comes out of the shelter, start video recording with Media Recorder.
The actions Start tracking and Stop tracking are already defined in the Start-Stop trial rule. Besides these, you can define actions on Trial Control variables.
- You cannot create additional actions of the Start track and Stop track type, nor can you delete the existing ones.
- If your EthoVision license includes the Trial and Hardware Control add-on module, you can also define actions on hardware devices. See the EthoVision XT Trial and Hardware Control Manual on your installation DVD.

How to define a Trial Control variable
1. In the Components pane, click the button next to Trial Control variable under Conditions or Actions. Next, click the Variables button.
2. The Trial Control Variables window lists the variables currently in the experiment (including those defined in other Trial Control Settings).
To add a new variable, click Add variable.
3. A new row is appended to the table. Under Name, type in the name you want to give to the variable. Under Initial Value, enter the value of this variable at the start of the trial (default: 0).
4. Click OK. In the TC-variable action/condition window, define the action or condition you require. Click Cancel if you do not want to create a condition or action based on this variable at this point.
Notes:
- To delete a variable, click the variable name in the Trial Control Variables window and click the Delete variable button.
- To rename a variable, click the variable name in the Trial Control Variables window and edit the name.
- The default name of a new Trial Control variable is VarN, where N is a progressive number.
- The variable name cannot contain blank spaces.
- If you have inserted Condition boxes based on Activity continuous in your Trial Control rule and then deselect Activity analysis in the Experiment Settings (see page 100), your rule becomes invalid: the Condition boxes based on Activity continuous are removed from your sequence, together with the connecting arrows. Redesign your Trial Control rule and re-connect the arrows between the boxes (see page 169).

How to define an Action on a Trial Control variable
1. In the Components pane, under Actions, click the button next to Trial Control variable.
2. If the Add an action window appears, there is at least one action of the same type in your experiment, and you are asked to choose between creating a new action and re-using an existing one (see page 180).
3. Next to Action Name, enter the name of the action (for example, Increment Counter) or accept the default name.
4. Under Action to perform, select the variable from the list. You can also create the variable by clicking Variables if you have not yet done so.
5. Next to the = symbol, do one of the following:
   - To assign the value of another variable (for example, A = B), select the other variable (B) from the second list.
   - To enter a formula, click the double-arrow button, select the operator from the list and specify the formula in the second and third lists. For example, A = A + 1.
   - To assign a random value, select Random from the second list, and select the Minimum and Maximum limits (integer values only, 0 up to 999) within which the random value must lie.
6. Enter a Comment (optional), then click OK.
7. Insert the resulting Action box in the Trial Control rule.
Notes:
- If your setup includes multiple arenas, each arena receives an instance of the variable. Thus, a variable can have different values in different arenas.
- You cannot combine Random with a formula (for example, to compute A = Random + 1). The equivalent solution is to first define an action B = Random, then a second action A = B + 1, and place the two resulting Action boxes in sequence (see the sketch below).
- To generate a random value, the maximum limit must be greater than the minimum.
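In ordinary code, the Random workaround described in the notes above is just two sequential assignments (a hedged equivalent, not EthoVision syntax):

    import random

    # Action 1: B = Random, an integer between the Minimum and Maximum limits (here 0-999)
    B = random.randint(0, 999)
    # Action 2: A = B + 1; the formula is applied to B rather than to Random directly
    A = B + 1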
How to define an External command
1. In the Components pane, under Actions, click the button next to External command.
2. Next to Action Name, enter the name of the action (for example, start recording) or accept the default name.
3. Under Actions to perform, select which file you want to run by clicking the ellipsis button.
4. Next, select one of the file types from the list:
   - Executables (*.exe)
   - Batch Files (*.bat)
   - All Files (*.*)
5. Locate the file and click Open.
6. Optionally, enter a Command line option.
Click the Information button for additional information about defining an External command.
Example – You carry out live tracking during a 24-hour period and you want to make a recording in Media Recorder, but only when the animal leaves the shelter (defined as a Hidden Zone, where it spends most of its time). First, start up Media Recorder using an External command box: select MRCmd.exe as the Executable to run and enter /E as a Command line option to start Media Recorder. Next, insert an Out of shelter condition and combine it with a Time condition to make sure that Media Recorder is started before recording starts (see Figure 7.9 for an example). Then, insert an External command box: select MRCmd.exe as the Executable to run and enter /R as a Command line option to start recording with Media Recorder. Similarly, you can stop recording (Command line option: /S) when the animal enters the shelter again.
There may be a delay between the Start Recording command and the moment Media Recorder actually starts recording. Run a test recording to check how long this delay is.

Figure 7.9 Example of the External command action to start a recording with Media Recorder when the animal leaves a shelter. The left Start MR action box starts up Media Recorder. The Start recording MR action box on the right starts the recording when both the Out of Shelter and Time(1) conditions are true, that is, when the center-point of the animal has left the shelter at least 5 seconds after Media Recorder was started.
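Any executable or batch file can serve as an External command. As an illustration, the hypothetical script below (log_event.py, not part of EthoVision or Media Recorder) could be selected as the file to run, with a label such as out_of_shelter passed as the Command line option; it appends a timestamped line to a log file each time the action fires:

    import sys
    from datetime import datetime

    # The label comes from the Command line option configured in the action.
    label = sys.argv[1] if len(sys.argv) > 1 else "event"
    with open("trial_events.log", "a") as log:
        log.write(f"{datetime.now().isoformat()} {label}\n")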
Using operators
Operators help you combine actions, conditions and sub-rules in various ways. For example:
- When at least one of the two conditions A and B is met, then do … This is an example of conditions combined by an operator of the "Any" type (OR logic).
- When two conditions are met at the same time, then do … This is an example of conditions combined by an operator of the "All" type (AND logic).
- When at least/at most/exactly 4 of 8 conditions are met, then do … This is an example of conditions combined by an operator of the "N of All" type.
To combine conditions, actions or rules:
1. Define the conditions/actions/rules that you want to combine. Place them in your Trial Control sequence as parallel branches. The connecting arrows must originate from the condition/action that precedes the combination of elements you want to define.
2. In the Components pane under Structures, double-click Operator or click the button next to it.
3. If the Add an operator window appears, there is at least one operator of the same type in your experiment, and you are asked to choose between creating a new operator and re-using an existing one. If this window does not appear, skip this step.
   - Create a new operator – A new operator is created.
   - Reuse an existing operator – Select the name of an operator already present in your experiment. See page 180 for more information.
   Click OK. The Operator window appears.
4. Under Name, enter the Operator name or accept the default name Operator (n), where n is a progressive number.
5. Under Operator triggers when, select the option that applies:
   - Any (at least one) of the inputs is 'true'.
   - All inputs are simultaneously 'true'.
   - N of All inputs are simultaneously 'true'.
   Here 'true' means a condition met, an action carried out, or a sub-rule finished (depending on the elements you want to combine). If you choose the third option, specify how many inputs must be 'true': = (exactly equal to), not= (not equal to), >= (at least), <= (at most), etc., and specify the number in the box.
6. Enter a Comment (optional) to describe the operator, and click OK.
7. A new Operator box appears in the Trial Control window. Place the box to the right of the elements defined in step 1, and connect each element (or the last element, in the case of a sequence) to the operator.
8. Connect the operator to the next element that should be activated.
Notes:
- Names of operators must be unique in your experiment. You cannot define two operators with the same Operator name, even if they are defined in two different Trial Control Settings.
- An operator can also have just one input box. In that case the operator is of no use, because control passes immediately to the next box as soon as the input condition becomes true or the input action is carried out. EthoVision informs you about this.
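The three trigger types reduce to simple boolean combinations over the truth values of the connected boxes, as in this sketch (illustrative values):

    inputs = [True, False, True, True]   # e.g. four conditions, three currently met

    any_type = any(inputs)               # "Any" operator (OR logic)
    all_type = all(inputs)               # "All" operator (AND logic)
    n_of_all = sum(inputs) >= 3          # "N of All", here "at least 3 of all inputs"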
Re-using Trial Control elements
All elements of Trial Control (conditions, actions, operators, sub-rules and sub-rule references) that you have defined in other Trial Control Settings can be duplicated and re-used in your current Trial Control Settings, to reduce the time you spend editing.
To re-use all the elements defined in your current Trial Control Settings profile, make a copy of it: right-click the profile in the Experiment Explorer and select Duplicate.

How to re-use a Trial Control element
1. Click the button next to the category of element that you want to re-use.
2. The Add window appears. Select Reuse an existing condition/action. This window does not appear when the experiment contains only one Trial Control Settings profile, or when the experiment contains more Trial Control Settings profiles but none of them contains an element of the same type as the one you have chosen.
3. Select the name of the existing element from the list next to the option. The second list shows the Trial Control Settings profile that contains that element. If the element is present in multiple Trial Control Settings, choose the appropriate one from the list.
4. Click OK.
5. A window appears for the type of element chosen. The Name and settings specified here are the same as in the element chosen in step 3.
   - To create an identical copy of the element, click OK and go to step 7.
   - In all other cases, edit the settings and click OK, then go to step 6.
6. If you have changed any property of the new element (including name and comment), a window appears with two options:
   - Apply the new settings only in the current trial control profile.
   - Apply the new settings in all writable Trial Control profiles.
   The program asks whether you want to apply the properties only to the new copy, or to extend the changes to the original elements in all Trial Control Settings that are writable (that is, not locked after acquisition). Choose the option you require and click OK.
7. Insert the resulting box in the Trial Control sequence.
Notes:
- If you choose Apply the new settings in all writable trial control profiles, changes are not made in profiles that became read-only after data acquisition.
- You cannot re-use a Trial Control element from the same Trial Control Settings. This is because Trial Control elements must be unique for analysis to be carried out correctly.

Defining a maximum trial duration
If the conditions to stop the trial (see page 185) are never met, EthoVision XT waits indefinitely and the trial never ends. To prevent this, you can define a maximum trial duration. For example, in a novel object test with the condition 'stop the track when the mouse enters the zone with the familiar object', it may happen that the mouse completely ignores the familiar object and only pays attention to the novel object.
- Use a maximum trial duration – Select this check box to define a maximum trial duration and enter the maximum duration of the trial (in hours, minutes or seconds). When you set a Maximum trial duration, the trial stops when that time has been reached, regardless of whether one or more rules are being evaluated.
Instead of using a Maximum trial duration, you can also define a condition based on time and place it immediately before the Stop track box (see page 185). However, there are two important differences (see the sketch below):
- If you use Maximum trial duration, the program counts the time from the start of the trial (indicated by the Start-Stop trial box). A condition placed immediately before the Stop track box instead counts the time from the start of data recording (indicated by the Start track box). The two starting points may differ if a condition between Start-Stop trial and Start track makes data recording start some time later than the trial.
- With a multi-arena setup, a Maximum trial duration stops the trial (and thus data recording) in all the arenas simultaneously, even when data recording started at different times. A time condition placed between the Start track and Stop track boxes instead stops data recording in one arena when the condition is met in that arena, so data recording can stop at different times in different arenas. For example, suppose data recording starts when the animal is first detected (In zone condition), and a delay condition of 5 minutes is placed immediately before the Stop track box. If the animals are detected for the first time at different times in different arenas, data recording also stops at different times, because the delay is the same for all arenas. The trial ends when the recording stops in the last arena.
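The first difference can be made concrete with assumed numbers (illustrative only): if the trial starts at t = 0 s and tracking in a given arena starts at t = 38.5 s, the two stop mechanisms fire at different times:

    trial_start = 0.0     # s; counted from the Start-Stop trial box
    track_start = 38.5    # s; the Start track box fired after the animal was detected

    stop_by_max_duration = trial_start + 600.0  # Maximum trial duration of 10 min
    stop_by_delay        = track_start + 300.0  # "After a delay of 5 min" before Stop track

    print(stop_by_max_duration, stop_by_delay)  # 600.0 vs. 338.5 s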
Testing the Trial Control sequence
It is not easy to make a complex Trial Control sequence work right the first time. To check that Trial Control works as expected, see "Testing the trial control sequence" in the EthoVision XT Trial and Hardware Control Manual on your installation DVD.

Applying Trial Control to your trials
To apply Trial Control to your trials, make sure that the appropriate Trial Control Settings profile is highlighted in blue in the Experiment Explorer. Test your setup thoroughly before carrying out the actual trials (see above).
- For setups with multiple arenas – Trial Control is applied to each arena independently.
- For batch data acquisition – In the Trial List, you can specify which Trial Control Settings you want to use for a specific trial. For more information, see page 270.
- Locked Trial Control Settings – When a Trial Control Settings profile has been used for acquiring at least one trial, it becomes locked. Locked settings are indicated by a lock symbol in the Experiment Explorer and cannot be edited. To edit a locked Trial Control Settings profile, make a copy of it and edit the copy. See page 663.
- Tracking from video files – When you track from video files, Trial Control checks conditions using video time instead of real time.
  - Conditions based on delays – If you select the Detection Determines Speed option, Trial Control is carried out at the speed set by EthoVision in order not to skip video images (see page 280). This results in the video playing faster or slower than normal (1x), depending on the processor load necessary to detect subjects. For example, if detection requires little processor work, the program tracks the subject faster than normal; a Delay condition (for example, Delay 60 s) is therefore met earlier than in real time.
  - Using clock time – If you define a condition based on clock time, or schedule a sub-rule with Clock time, this is translated relative to the video start time, that is, the date and time the video file used for tracking was created.
    Example 1 – You set a Time condition to start tracking After clock time 11:30. The video file was created on March 6, 2008 at 11:00. Once you start the trial, the condition is met half an hour into the video (see the calculation after this list). If you had set tracking to start After clock time 10:30, tracking would start immediately after starting the trial.
    Example 2 – You set a sub-rule to start at 10:00 (1st day). The video file was created on March 6, 2008 at 11:00. Once you start the trial, the sub-rule never starts, because the planned start occurs before the initial time of the video. To make a sub-rule start when tracking from that video, set the start time between 11:00 and the video end time.
- Recording video, then tracking – If you choose to record video first and then acquire data from the resulting video file (see page 297):
  - When recording video only, Trial Control is turned off. You get an appropriate message when selecting the Save video file only option in the Acquisition window.
  - When you track from that video, Trial Control for Start-Stop is activated, but you cannot control hardware devices.
- Re-doing a trial – For video files recorded with EthoVision, you can re-do the corresponding trial (see Redo trials in Chapter 9). However, if you re-do a trial, the Trial Control log files recorded with the previous instance of the trial are deleted.
- Stopping a trial – When you stop the trial, all rules active in the Trial Control Settings are ended immediately, and hardware devices are reset.
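Example 1 can be verified with a quick calculation (dates taken from the example above; the arithmetic is illustrative, not EthoVision code):

    from datetime import datetime

    video_start = datetime(2008, 3, 6, 11, 0)   # when the video file was created
    clock_time  = datetime(2008, 3, 6, 11, 30)  # "After clock time 11:30"

    offset = (clock_time - video_start).total_seconds()
    print(offset)  # 1800.0 s: the condition is met half an hour into the video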
7.4 The Start-Stop trial rule
The Start-Stop trial rule is displayed on your screen when you create or open Trial Control Settings. With this rule, you control the start and stop of data acquisition (tracking). You can only modify the initial Start-Stop trial rule.

The default Start-Stop trial rule
The default Start-Stop trial rule is a sequence of six boxes (but see the exceptions described on page 186):
- Rule Begin - Start-Stop trial – Activated when you start the trial (from the Acquisition menu, select Start Trial; or click the Start Trial button; or press Ctrl+F5). Once you start the trial, control passes to the next box.
- Condition - In zone - Cumulative duration >= 1.00 s when Center-point is in Arena – This is the default Start track condition. It is fulfilled when the center point of the subject (or of any subject, in the case of an arena with multiple subjects) has been detected in the arena for 1 second after you started the trial. If you start the trial and the animal has not been detected yet, the program waits until it has detected the animal for 1 second, then starts tracking. The condition is applied separately to each arena, so tracking can start at different times in different arenas in the same trial.
- Action - Start track – Activated when the condition on its left side is met. Once this box is activated, data recording (tracking) starts. If the condition placed between the Start-Stop trial box and this box is not met immediately, tracking starts later than the time you start the trial.
- Condition - Time - Infinite delay (condition never met) – This is the default Stop track condition. It is never met: the trial stops when you give the Stop command or when the time exceeds the Maximum trial duration (if this has been set).
- Action - Stop track – Marks the end of all tracks (and of the trial).
- Rule End - Start-Stop trial – This box is just the delimiter of the rule; it does not take any action.

Figure 7.10 The default Start-Stop trial rule. See the explanation in the text.
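Per arena, the default rule can be paraphrased as follows (a hedged sketch with invented names; the real rule is the graphical sequence in Figure 7.10):

    # Start tracking once the center point has been in the arena for a
    # cumulative 1 s; the default Stop track condition is never met.
    def default_rule(in_arena_per_sample, sample_interval_s):
        cumulative, tracking = 0.0, False
        for in_arena in in_arena_per_sample:   # one boolean per sample time
            if not tracking:
                if in_arena:
                    cumulative += sample_interval_s
                if cumulative >= 1.0:          # default Start track condition
                    tracking = True            # Action: Start track
            # Stop track: infinite delay; the trial ends only on a manual Stop
            # or when the Maximum trial duration is reached.
        return tracking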
Trial Control with Activity analysis
If you selected Activity analysis in the Experiment Settings, the Condition - In zone box is removed from the default Start-Stop rule. To carry out tracking and activity analysis simultaneously, and start tracking when your subject has been detected in the arena for a specific time, insert a new In zone condition box in the Start-Stop rule. For more information on Activity analysis, see page 100.
Note – If you also select Behavior recognition in the Experiment Settings, the Start-Stop rule is as described below.

Trial Control with Rat behavior recognition
When you select Behavior recognition under Analysis Options in the Experiment Settings (page 101), a Time condition is added between the Condition - In zone box and the Action - Start track box. This means that EthoVision XT waits 20 seconds after detecting the animal for the first time before starting the actual tracking. This is done because the behavior recognition algorithms need video frames equivalent to about 18 seconds before the current frame to recognize behavior. Without this additional condition, the first 18 seconds of the track would contain no behavior data (see Figure 5.3 on page 104).

Figure 7.11 Part of the Start-Stop trial rule of the Trial Control Settings when Behavior recognition is selected in the Experiment Settings.

The condition "After a delay of 20 seconds" is removed automatically from a Trial Control rule if you de-select Behavior recognition in the Experiment Settings.

An important distinction: trial vs. track
- Trial – A trial can be viewed as a container for the data collected in one recording session. It starts when you give the Start command in acquisition and stops when the tracks for all arenas and subjects have stopped.
- Track – A track corresponds to the actual recording of a subject's position and behavior. The start of a track may or may not coincide with the start of the trial; this depends on your Trial Control Settings. With the default Trial Control Settings, the track starts 1 second after the animal has been detected in the arena and stops when you stop the trial.
A trial may contain one or more tracks. For example, if you track two subjects simultaneously, each trial includes two tracks, one per subject. Similarly, if your setup contains four arenas with two subjects each, each trial includes 4 arenas x 2 subjects = 8 tracks. In a multiple-arena setup, the end of a track does not necessarily mean the end of the trial: the trial ends when all tracks have come to an end.

Customizing the Start-Stop trial rule
You cannot delete the Rule Begin, Rule End, Start track and Stop track boxes. Furthermore, you cannot define an additional Start-Stop trial rule in the same Trial Control Settings; to create a new rule, create new Trial Control Settings (see page 171).

Modifying the Start track condition
The default Start track condition is an In zone condition.
- To modify that condition, click the Settings button. In the window that appears:
  - Click Settings and specify the zone in which the animal should be.
  - From the Statistic list, specify the time the animal should be in the zone (Cumulative Duration), or how many times it should visit the zone (Frequency), in order for EthoVision XT to start tracking.
- To use another condition (for example, start recording exactly 1 minute after starting the trial), first delete the current condition (click that box and press Delete), then insert the new one.
- To start recording as soon as you start the trial, delete the Start track condition: click the box immediately before the Start track box and press Delete.
- For an overview of conditions, see page 173.

Modifying the Stop track condition
The default Stop track condition is a Time condition.
- To modify that condition, click the Settings button and choose the option you require.
- To use another condition, first delete the current condition (click that box and press Delete), insert the new one (see page 172) and re-connect all the boxes (page 169).

7.5 Examples of Start-Stop trial rules

General

Starting data recording at a specific time
You want to start recording at a time when you are not in the lab, for example at 23:00 h. Delete the default Start track condition (see page 187). Define a Time condition (see page 172): select After clock time and enter 23:00:00. Click OK and place the resulting box before the Start track box. Before leaving the lab, click the green button to start the trial. The program waits until 23:00 to start data recording. If you want to stop tracking when a specific time has elapsed, see page 27.
Note – Keep at least one condition between Start track and Stop track. If you do not, tracking stops immediately after it starts, resulting in no data.
For more information on conditions, see "Overview of conditions" in the EthoVision XT Trial and Hardware Control Manual.
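The After clock time condition behaves like waiting for the next occurrence of a wall-clock time, roughly as sketched below (illustrative only):

    from datetime import datetime, timedelta

    def seconds_until(hour: int, minute: int) -> float:
        """Seconds from now until the next occurrence of hour:minute."""
        now = datetime.now()
        target = now.replace(hour=hour, minute=minute, second=0, microsecond=0)
        if target <= now:              # that time has passed today: use tomorrow
            target += timedelta(days=1)
        return (target - now).total_seconds()

    # e.g. seconds_until(23, 0) elapse before data recording starts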
Stopping data recording after a maximum time has elapsed
Click Settings in the Condition box immediately before the Stop track box. Select After a delay of and enter the maximum time. Instead of using a Time condition, you can also use the Maximum trial duration option (see page 182).

Open field (multiple arenas)

Starting data recording when the animal has been detected in the open field
The start command is given to each arena independently. In this setup, four open fields are treated as separate arenas. You want to start acquisition when the animal is detected in the open field, independently of what happens in the other arenas. This can be achieved with the default Start-Stop trial rule: as soon as the subject is detected in an arena, tracking starts for that arena, not the others. This way you do not have to release all the animals at the same time.

Morris water maze

Stopping the trial when the animal has found the platform
In the Arena Settings, make sure that the platform has been defined as a zone. In the Trial Control Settings, delete the default Stop track condition (see page 187). Next, define an In zone condition (see page 172).
- If you want the program to stop recording as soon as the animal is over the platform, select Frequency as the Statistic and choose >= 1. Click Settings and select the platform zone.
- Sometimes the animal swims over the platform but does not stop there; the program would then stop recording although the animal has not 'found' the platform. In that case, instead of Frequency, choose Current duration and the minimum time the animal must stay on the platform (for example, 3 s). Click Settings and select the platform zone.
Click OK and place the resulting box before the Stop track box.

Stopping the trial either when the rat has found the platform or when it has been swimming in the water maze for 60 seconds
The Arena Settings and the condition "the rat has found the platform" are similar to those in the example above. The condition "rat swimming for 60 s" can be translated to "delay from start of tracking >= 60 s". The track stops when either condition is met: the two conditions are combined with OR logic (see Figure 7.12). This solution results in tracks of different durations: less than 60 s for the animals that found the platform, and 60 s for the others. Instead of the two Condition boxes, you can also define only the In zone condition box and set a Maximum trial duration (see page 182).

Figure 7.12 Example of a Start-Stop trial rule for a water maze. The trial stops when the animal has been in the platform zone for at least 3 s without a break, or when the time since the start of tracking is 60 s. A - In zone condition specifying that the animal must be over the Platform zone for at least 3 seconds (select Current duration >= 3 s). B - Time condition specifying a delay of 60 s since the track started. C - 'Any' operator box.

Eight-arm radial maze

Stopping the trial when the animal has been in four arms within 10 minutes
This can be done by combining eight conditions, each specifying that the animal must have been in one arm, and requiring that at least four of them are met, no matter which arms the animal visits (see the sketch after Figure 7.13).
1. Create an In zone condition (see page 173) and specify that the Frequency for Arm 1 must be >= 1; that is, the animal must have visited Arm 1 at least once. Do the same for each of the other arms.
2. Connect the resulting eight condition boxes in parallel using the N of All operator (see Figure 7.13).
3. Set the Maximum trial duration (see page 182) to 10 minutes to stop tracking in case the animal fails to visit four arms within that time.
For more information on "N of All" operators, see page 178.

Figure 7.13 Trial Control sequence for an eight-arm radial maze. The trial must stop when the animal has visited four of the arms at least once. 1, 2, ... 8 - In zone condition boxes for Arm 1, 2, ... 8 respectively. A condition is met when the Frequency of In zone for that arm is greater than or equal to 1. A - Operator that checks that at least four of the eight conditions are met. B - Stop track box. When four conditions are met, the trial is stopped.
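The radial-maze rule is essentially a counting check over eight per-arm visit frequencies, as in this sketch (hypothetical data structure):

    # Eight In zone conditions (Frequency >= 1 per arm) combined with an
    # "N of All" operator requiring at least four of them to be met.
    visit_frequency = {f"Arm {i}": 0 for i in range(1, 9)}  # updated during the trial

    def stop_condition_met() -> bool:
        arms_visited = sum(1 for f in visit_frequency.values() if f >= 1)
        return arms_visited >= 4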
7.6 Analysis of Trial Control data
With the EthoVision analysis function you can analyze the events that occur during a trial by means of statistics or time plots:
- Trial Control events – For example, when exactly does a condition become true?
- Trial Control states – To analyze the time between two Trial Control events. For example, how much time elapsed from the moment a condition became active to the moment it became true?
Analysis of Trial Control data is generally carried out for testing purposes, or to analyze the subject's response to the presentation of stimuli (for instance, in conditioning tests).
To analyze Trial Control data, in the Analysis Profile choose Trial Control event to analyze simple events, or Trial Control state to analyze time intervals between specific events. Next, calculate statistics (from the Analyze menu, select Calculate Statistics) or visualize the data (from the Visualize menu, select Plot Integrated Data).
- If you want to analyze the behavior of your subjects, see Chapter 14.
- If you want to calculate statistics or visualize data of dependent variables in portions of a track defined by Trial Control events, you must first define the Nesting intervals in the Data Profile. See page 473.

Exporting Trial Control data
You can export Trial Control events (for example, Action becomes active, or Condition becomes true) and Trial Control states (for example, From Action becomes active To Condition becomes true). For more information, see page 654.
For more information on the analysis of Trial Control data, see "Analysis of Trial Control data" in the EthoVision XT Trial and Hardware Control Manual, which you can find on your installation DVD.
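With exported event timestamps, a Trial Control state is simply the difference between its two bounding events (illustrative numbers):

    # Time from "condition becomes active" to "condition becomes true".
    events = {"condition_active": 12.4, "condition_true": 47.9}  # s since trial start
    state_duration = events["condition_true"] - events["condition_active"]
    print(state_duration)  # 35.5 s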
Chapter 8 - Configuring Detection Settings
8.1 Why configure detection settings (page 196) – Short introduction to the Detection Settings.
8.2 General procedure (page 198).
8.3 Method settings (page 201) – To specify how EthoVision XT detects the subject(s) and body points.
8.4 Subject Identification settings (page 203) – To specify how EthoVision recognizes color-marked individuals.
8.5 Video settings (page 208) – Sample rate, image adjustments and Activity analysis settings.
8.6 Detection settings (detection methods) (page 219) – To specify how EthoVision separates the subject from the background.
8.7 Subject contour settings (page 234) – Pixel erosion and dilation to smooth the subject contour.
8.8 Subject size settings (page 237) – To specify the apparent size of the subjects; includes settings for rat behavior recognition.
8.9 Working with Nose-tail base detection (page 241) – To optimize detection of the nose-point and tail-base of rodents.
8.10 Detection settings for Rat behavior recognition (page 246).
8.11 Customizing the Detection Settings screen (page 249).
See also Managing Settings and Profiles (page 663).

8.1 Why configure detection settings
EthoVision XT needs a few criteria to track moving subjects. For example, you need to specify how different the subject is from the background in terms of gray-scale or color values, select a method to distinguish the subject from the background, specify how many images per second you want EthoVision XT to analyze, and set the average subject size. Such criteria make up your Detection Settings.
You can define different Detection Settings in the same experiment. For example, you can have one set for detecting white animals and another for detecting dark ones. For more information, see page 663.
Which settings are available in the Detection Settings window depends first of all on the version of EthoVision XT:
- EthoVision XT Base version – In this version, you can track the center-point of the body of a single animal. For the detection of the animal's body, four detection methods are available. The Base version also allows tracking of a color marker on a single animal; in this case the color marker is treated as the center-point of the animal.
- Multiple Body Points module – With this add-on module, you can track the center-point, nose-point and tail-base of a single animal. For the detection of multiple body points, three detection methods are available.
- Social Interaction module – This add-on allows you to track two or more animals in one arena. You can use Color marker tracking or Marker assisted tracking. You can use this add-on in combination with the Multiple Body Points module to study social interactions in detail.
- Rat Behavior Recognition module – For detecting a number of behaviors automatically, including rearing, grooming and sniffing. In the Detection Settings, the Behavior Settings are enabled.
Tracking multiple subjects requires that you carefully adjust the Detection Settings. Make sure you follow the general procedure for configuring Detection Settings in the order described below (see General procedure, page 198).
We recommend using Tracking from video files only if you use the Multiple Body Points module in combination with the Social Interaction module.

Opening the Detection Settings
Before opening the Detection Settings, make sure that you have valid Arena Settings. To open the Detection Settings, do one of the following:
- In the Experiment Explorer, click the Detection Settings folder to expand it and click one of the Detection Settings profiles to open the Detection Settings screen.
- From the Setup menu, select Detection Settings. Select Open, select one of the Detection Settings from the list and click OK.
Result – The Detection Settings screen opens. By default, the Detection Settings window, the Video Source window and the Playback Control window are displayed. You can use the Show/Hide button on the component tool bar to change the view settings.

The Detection Settings window
Depending on the number of subjects per arena and the tracked features selected in the Experiment Settings (see page 91), the layout of the Detection Settings window differs.

Figure 8.1 The Detection Settings window. See the text for an explanation of the letters.
See the text for an explanation of the letters.
The Detection Settings window contains the following sections (see also Figure 8.1):
- Method (A) – This section contains the methods for subject detection, nose-tail base detection (if applicable), and options to use a scan window and to apply marker-assisted tracking.
- Detection (B) – In this section you configure the Subject Detection settings.
- Subject Identification (C) – This section is only available when you have multiple animals.
- Video (D) – In this section you can select your video if you track from video, adjust video settings if you track live, set the Sample rate and Smoothing settings, and select settings for Activity analysis.
- Subject Size (E) – In this section you set the subject size for one or more animals. You also set important parameters for rat behavior recognition (when enabled).
- Subject Contour (F) – In this section you can erode and dilate the detected body to optimize detection.

8.2 General procedure
Subject detection works well if there is good contrast between the subject and the background in the video image, for the whole duration of the trials. Increasing the contrast (for example, by changing the background so that it differs as much as possible in color from the subject) is far more effective than any detection setting.
You can use a pre-defined template to automatically configure detection settings for commonly used experimental setups (see "Creating a new experiment based on a pre-defined template" on page 90). After you have done this, you must still adjust the detection settings (as described in this chapter) before you can track any animal correctly.
Make sure you carefully follow the order of the steps as described below. If a particular step does not apply to your setup, proceed to the next step.

Experiment Settings
In the Experiment Settings window (see also page 91):
1. Select the Number of Subjects per Arena.
2. Select one of the options from Detected features.

Method section - 1
Which methods and options are available in the Method section depends on the Experiment Settings.
3. Make the following selection:
- Use scan window – Make sure this option is NOT selected while you are configuring Detection Settings.
- Marker assisted tracking – Select this option when you want to track more than one animal in the same arena. In all other cases, go to step 5. (See page 202.)

Subject Identification section
4. You can use Subject Identification if you have multiple subjects per arena and you have either selected Color marker tracking (treat marker as center-point) in the Experiment Settings or Marker assisted tracking in the Detection Settings. (See page 203.)

Video section
5. In the Video section, you have the following options:
- Select video (only if you track from video) – Click this button and browse to your video if it is not automatically selected.
- Image (only if you track live) – Click this button to adjust the settings of your camera. Depending on your camera or frame grabber board, some options may be greyed out.
- Sample rate – The sample rate is the number of video images per second you want EthoVision XT to analyze, among those available.
- Smoothing – Select the option you require.
- Activity (only if you selected Activity analysis in the Experiment Settings) – Click this button to create and view settings for Activity analysis.
(See page 208.)

Method section - 2
Which methods and options are available in the Method section depends on the Experiment Settings.
6. Select one of the following:
- Method – The subject detection methods (Gray scaling: page 220, Static subtraction: page 221, Dynamic subtraction: page 226, Differencing: page 230). One of these methods must always be selected.
- Nose-Tail detection – The nose-tail detection methods (Shape-based (XT4), Model-based (XT5), Advanced Model-based (XT6)) are only available when you have selected Center-point, nose-point and tail-base detection for a single animal in the Experiment Settings.
(See page 219 for detection methods and page 241 for nose-tail detection methods.)
If you select Center-point, nose-point and tail-base detection with 2 or more Subjects per Arena in the Experiment Settings, the Nose-Tail detection in the Detection Settings is automatically set to Advanced Model-based (XT6), and therefore the nose-tail detection methods are not displayed.

Detection section
7. In the Detection section, configure the subject detection method (Gray scaling: page 220, Static subtraction: page 221, Dynamic subtraction: page 226, Differencing: page 230) you selected in the previous step. (See page 220.)

Subject Contour
8. In the Subject Contour section, set the level of Erosion and Dilation. (See page 234.)

Subject Size
9. In the Subject Size section, click the Edit button to set:
- Detected subject size – Here you can set the Minimum and Maximum subject size.
- Modeled subject size – Here you model the subject size when you have multiple subjects, or when you use the Nose-tail detection method Advanced Model-based (XT6) for one or more subjects.
- Advanced Subject Size settings – Here you can set Maximum noise size, Shape stability and Modelling effort in case you have multiple subjects, or when you use the Nose-tail detection method Advanced Model-based (XT6) for one or more subjects.
Click the Behavior button (when present) to acquire the size and shape parameters for rat behavior recognition. (See page 237.)
10. Once the subject is detected well, in the Method section, select Use scan window (see page 202) and click OK.
You are now ready to acquire data (see Chapter 9).

Notes
- Every time you apply changes in the Detection Settings window, you can see the consequences in the Video Source window.
- To save the detection settings, click the Save Changes button at the bottom of the window. If you have made more changes and you want to return to the last saved settings, click the Undo Changes button.
- EthoVision XT offers a number of real-time statistics on the quality of detection that you can check while you adjust the detection settings.
- Keep in mind that detection in the Detection Settings is real-time, whereas with Detection determines speed (page 280) during acquisition the quality of detection can be better.

8.3 Method settings
For the detection methods, see page 219.

marker assisted tracking
When do I use Marker assisted tracking?
You use Marker assisted tracking when you have more than one subject per arena and you have NOT selected Color marker tracking in the Experiment Settings (see page 91). Marker assisted tracking is optimized for use with rodents.
How to use Marker assisted tracking?
In the Method section of the Detection Settings window, select the Marker assisted tracking check box. The Identification button in the Subject Identification section now becomes enabled. Follow the steps in the Subject Identification section below to set up Marker assisted tracking. See also Tips for marker tracking on page 207.
When you do NOT select the Marker assisted tracking check box, you carry out unmarked tracking. You can carry out unmarked tracking when you analyze the variables at group level (so the identity of the animals is not important), or when the animals cannot touch.

What is the difference between Marker assisted tracking and Color marker tracking?
- With Marker assisted tracking, EthoVision tracks the animal's body and uses the marker to determine the animal's identity. With Color marker tracking, EthoVision tracks just the marker.
- With Color marker tracking, you can track any species (that can be marked), whereas Marker assisted tracking is optimized for rodents only. With Color marker tracking, only the position of the marker is recorded; the actual shape and size of the animal are ignored.
To use Color marker tracking, select Color marker tracking (treat marker as center-point) in the Experiment Settings (see page 100). Next, in the Detection Settings window, adjust the Subject Identification and Video settings (page 203 and page 208). See also Tips for marker tracking on page 207.

use scan window
When Use scan window is selected, EthoVision XT finds the subject, 'follows' it, and searches only the area immediately around it in the following video image. The scan window therefore moves with the subject.
Only select Use scan window after you have finished configuring the Detection Settings. The scan window should not be selected while you configure the Detection Settings.

Why use a scan window?
Use a scan window for two purposes:
- To reduce problems with reflections – If a reflection occurs outside the scan window (for example, waves in a water maze), it is ignored, resulting in fewer detection errors. However, make an effort to improve lighting to eliminate reflections (see page 58).
- To increase the sample rate without missed samples – With a scan window, your computer processes data from a small proportion of the video image. This reduces the average processor load, so you can increase the sample rate, if necessary, without missed samples (remember that the higher the processor load, the more likely samples are skipped).
Losing the subject – When the subject disappears from the scan window, EthoVision XT scans the whole arena to find the subject again, and then re-positions the scan window over that new location.
For users of previous EthoVision versions – The size of the scan window is automatically determined by the program and changes during acquisition according to the subject size. Therefore, you do not need to specify it.
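The scan-window behavior described above amounts to a simple fall-back search loop. A minimal sketch of that logic (an illustration, not EthoVision's internals), with hypothetical detect_in() and window_around() helpers passed in as parameters:

def track_with_scan_window(frames, full_arena, detect_in, window_around):
    """Search a local window per frame; rescan the whole arena when lost."""
    window = full_arena                      # no position known yet
    for frame in frames:
        position = detect_in(frame, window)  # returns None when not found
        if position is None:
            position = detect_in(frame, full_arena)  # subject lost: rescan arena
        # re-center the window on the new position for the next frame
        window = window_around(position) if position is not None else full_arena
        yield position

The point of the design is visible in the loop: in the common case only the small window is searched, which is what reduces the processor load.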
8.4 Subject Identification settings

subject identification
You carry out the procedure described below for either Marker assisted tracking or Color marker tracking.
1. Put the marked animals in the arena, or play the video. Optimize the camera setup (see page 55), the lighting conditions (see page 58) and the marker characteristics (page 207).
Make sure you select a point in the video where the animals do not touch each other!
If you use multiple body point detection, it is normal that the nose is not correctly identified at this point.
2. In the Subject Identification section, click the name of one of the subjects and click the Identification button.
Result – The Identification of Subject # window and the Marker Detection window open. You can enlarge the Marker Detection window by dragging its bottom-right corner.
3. Move the mouse pointer to the Marker Detection window so that the pointer becomes an eyedropper.
4. Move the eyedropper on top of the color marker of the subject you want to identify (see the figure below) and click the left mouse button.
The Identification window now displays the color you just picked, and the pixels with the initial color are highlighted in the Marker Detection window. In the Identification window, you can change the following (see also Figure 8.2):
- Hue – Hue is the predominant wavelength of the marker color and represents what is usually referred to as color in everyday life (red, green, blue, etc.). The range of values for Hue of the picked color is shown, and this range is represented by the box on the vertical color bar on the right.
- Saturation – Saturation represents the purity of a color. Saturation decreases when a pure color is mixed with white; "red" is saturated, "pink" is less saturated. The range of values for Saturation is shown, and this range is represented by the width of the box on the Color map.
- Brightness – Brightness (or Intensity) represents the amount of light reflected by the colored surface. The range of values for Brightness is shown, and this range is represented by the height of the box on the Color map. If you set this range too broad, you will not be able to separate the colors well.
If the marker is not detected completely, or not detected in all areas of the arena, expand the range of Hue, Saturation and Brightness slightly.
The detected marker can be eroded or dilated to compensate for specific scenarios. For example, you can dilate the marker if it is partly masked by cage bars, or you can use erosion to round the marker, which prevents the center-point from jittering. See Fine-tuning color settings on page 205.

Fine-tuning color settings
When you first pick a marker color in the Marker Detection window, EthoVision selects all pixels in the video image with the same initial color. Groups of pixels with this initial color are highlighted by an outline in the opposite color. Because a marker in the video image can consist of different shades of the same color, it is possible that initially not the complete marker is selected (see Figure 8.3).
Figure 8.2 The Identification window and its relation to the HSI color model. A = Color bar: the box represents Hue, which corresponds to an angle on the circle in the HSI color model (for example, 0 degrees means red, 240 degrees means blue). B = Color map: the height of the box represents the Brightness (or Intensity) range, which corresponds to the vertical position of the color circle. The width of the box represents the Saturation range, which corresponds to the horizontal position on the circle between the center and the edge.
Figure 8.3 shows part of the Marker Detection window and part of the Identification window. You can fine-tune the color settings by adjusting the Hue, Saturation and Brightness in the Identification window.
5. Change the range of color settings by changing the numbers, by resizing the Hue box on the vertical color bar, or by resizing/moving the box in the color map (horizontally to adjust Saturation, vertically to adjust Brightness). As a result, the outline covers (almost) the complete marker (see Figure 8.4).
Figure 8.3 The initial color that is picked in the Marker Detection window (left picture) and the corresponding range for the color settings Hue, Saturation and Brightness in the Identification window. The arrows indicate how changing the boxes changes the corresponding color setting.
6. Next, play the video to check in the Marker Detection window whether the marker is detected correctly in different parts of the arena.
If the marker 'dances', your color settings are too sensitive. Go back to step 5 and make the box larger.
7. Continue with setting the following:
- Marker erosion – Set the number of pixels to erode. By selecting Erode first, then dilate, you can make the marker more round, to prevent the center-point of the marker from jittering.
- Marker dilation – Set the number of pixels to dilate. By selecting Dilate first, then erode, you can prevent the marker from being masked, or divided into two separate markers, by, for instance, a grid on top of the arena.
- Minimal marker size – Set the Minimal marker size to prevent noise from being detected as the marker. First, increase the Minimal marker size until noise is no longer detected. Next, enter a value for the Minimal marker size that is somewhere in between this lower threshold and the value of the Current marker size.
- Marker pointer – Select a Marker pointer from the list. With relatively small markers it is useful to select Cross lines.
8. Click OK when you are done.
Repeat steps 2-8 for all subjects you want to identify.
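The pick-a-color-then-widen-the-ranges workflow above is, in image-processing terms, a threshold in HSV space followed by erosion and dilation. A sketch of that idea using OpenCV (an illustration under assumed range values, not EthoVision's implementation; note that OpenCV's hue axis runs 0-179, not 0-360):

import cv2
import numpy as np

def detect_marker(frame_bgr, hue_range, sat_range, val_range,
                  erode_px=1, dilate_px=2):
    """Threshold a frame in HSV, then clean the mask: erode first, then dilate."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    lower = np.array([hue_range[0], sat_range[0], val_range[0]], dtype=np.uint8)
    upper = np.array([hue_range[1], sat_range[1], val_range[1]], dtype=np.uint8)
    mask = cv2.inRange(hsv, lower, upper)          # pixels within the picked ranges
    kernel = np.ones((3, 3), np.uint8)
    mask = cv2.erode(mask, kernel, iterations=erode_px)   # "Erode first, ..."
    mask = cv2.dilate(mask, kernel, iterations=dilate_px) # "... then dilate"
    return mask

# Example with hypothetical ranges for a red-ish marker:
# mask = detect_marker(frame, hue_range=(0, 10), sat_range=(120, 255), val_range=(80, 255))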
tips for marker tracking
Color characteristics
- Use a color scale (for example, from a paint company) to find out which colors are most easily recognized by EthoVision in your setup and lighting conditions. Do this before applying color markers to your animals.
- Use colors that have different hue values. For example, use red and green, not red and orange.
- It may be wise to avoid using red for marking, since it looks like blood.
- Note that marking your animals may stress them, and therefore affect their behavior. If necessary, select a marking method that lasts for a longer period of time.
Figure 8.4 The color of the marker after fine-tuning the color settings. Most of the marker is now selected, as indicated by the white outline (see also Figure 8.3).

Marker characteristics
- Make sure that the marker is as round as possible. This ensures that the relative movement of the center of gravity of the marker is the same in all directions when the edges of the marker change due to posture changes or otherwise. For Color marker tracking, this helps to prevent jitter of the marker.
- When you use Marker assisted tracking, make sure the marker is not too big; the marker can interfere with proper detection of the body contour. For example, make sure that a dark marker on a white animal does not cover the complete width of the animal, because that can cause the body to be split in two.

Lighting conditions
- Use a sensitive camera if possible. A low light intensity makes it difficult to separate different colors. When it is not possible to use a sensitive camera or strong illumination in your setup, try using fluorescent marker colors with UV lighting.
- For optimal color separation, illuminate your setup with lamps that approximate daylight in color temperature, that is, have a wide spectrum range.

Subject roles
The names under Subjects in the Subject Identification section are the Subject roles entered in the Experiment Settings (see page 91). You can use the Subject roles "Control" and "Treated", for instance, if you plan to give the control animals the blue marker in some trials and the treated animals the blue marker in other trials. To do this, define multiple sets of Detection Settings, one for each combination of marker color and treatment level. Before acquiring the data, make sure that you use the Detection Settings that correspond to the current animals.

8.5 Video settings

sample rate
The Sample rate is the rate at which EthoVision analyzes the images to find the subject. It is expressed in samples per second.
Selecting a certain sample rate does not mean that the program can always analyze data at that rate. If the computer processor load is too high, EthoVision XT may skip a sample and analyze the next one. Skipped samples result in missed samples (see below).
The maximum sample rate is the frame rate set by the TV standard of your video. For PAL video, the frame rate is 25 frames/s, so the maximum sample rate is 25 samples per second. For NTSC video, the maximum sample rate is 29.97 samples per second.
The sample rate you set in EthoVision XT can only be the frame rate divided by an integer. For example, for PAL video it can be 25, 12.5, 8.33, etc.
Some digital cameras support very high frame rates. However, this requires a lot of processor capacity. To prevent EthoVision XT from discarding samples while tracking live, do not set the frame rate and sample rate too high. Check the percentage of missed samples in the Trial list (see page 263) after tracking, to make sure that EthoVision XT can handle the selected frame rate.
If you selected both Nose-tail tracking and Marker assisted tracking, we recommend a sample rate of 12.5 samples per second. For Rat behavior recognition, select a sample rate between 25 and 31 frames per second.

What is the optimal sample rate?
Setting the correct sample rate is very important. If the rate is too high, the noise caused by small movements of your animal is picked up, giving an overestimate of dependent variables such as distance moved. If the sample rate is too low, you lose data and get an underestimate of the distance moved.
The table below gives some general recommendations taken from the published literature. These sample rates have successfully been used to track animals with previous EthoVision versions. However, we strongly recommend that you determine the optimum sample rate for your specific setup and animals (see below). Note that if, for instance, your treatment causes hyperactivity, you will need a higher sample rate for hyperactive animals than for somnolent animals.

Animal                                  Sample rate (samples/second)
Damselfish                              5
Goldfish                                0.5
Zebrafish larvae (analog camera)        25
Zebrafish larvae (FireWire camera)      30 or 60*
Mites                                   1
Mouse                                   12
Parasitic wasps                         2
Rat                                     5
Rodent's nose                           25 (PAL), 30 (NTSC)
Tick                                    3
Tree-shrew (Tupaia)                     6-12

* For rapid movements you may want to track with a higher sample rate. Whether that is possible depends on the number of tracked subjects, the video resolution, the camera settings and the processor speed of your computer.
The optimal sample rate is the minimum sample rate that provides an accurate estimation of the dependent variables (distance, velocity, etc.) without including redundant information due to phenomena other than the 'real' locomotion. For example, for an animal walking in a straight line, the data points will never lie on a straight line, because the center-point of the subject shifts laterally with each step. To distinguish between 'real' movement and effects like this one, you can calculate dependent variables like distance moved using the maximum and lower sample rates:
1. Create new Detection Settings (see page 198) and specify the maximum sample rate (25 or 29.97, depending on your TV standard). With a FireWire camera this sample rate may be higher. However, whether this is possible depends on the performance of your computer, the number of animals you track, and the video resolution.
2. Start Acquisition and acquire data with those Detection Settings (see Chapter 10).
3. Calculate the dependent variable you are interested in (see Chapter 19). Export the data, for example to Excel (see page 653), and plot the dependent variable values against the sample rate. In the example below, distance moved is used.
4. Repeat steps 1 to 3, selecting smaller sample rates.
Once the data are plotted as in Figure 8.5, there should be a range of sample rates for which the dependent variable value does not change much (a plateau). This means that slight changes in the sample rate do not result in loss of information, or in the addition of redundant information (noise and movements like body wobble).
Low sample rates result in loss of useful information, because the sinuosity of the original path is removed. Therefore, the total distance moved is usually decreased (see the figure below). High sample rates result in the acquisition of redundant information. In the case of body wobbling, and assuming that the animal is moving along a straight line, the lateral shift of the body center causes the total distance moved to be longer than the 'real' one. With Track Smoothing (see page 401) you can filter out 'noise' caused by body wobble.
Figure 8.5 Detecting the optimal sample rate from a collection of distance moved values recorded with different sample rates.
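The plateau check above can also be mimicked offline on a single exported track. A sketch, assuming a list of (x, y) coordinates sampled at the maximum rate; sub-sampling by an integer factor imitates the lower sample rates EthoVision allows (frame rate divided by an integer):

import math

def distance_moved(points):
    """Total path length of a sequence of (x, y) points."""
    return sum(math.dist(a, b) for a, b in zip(points, points[1:]))

def plateau_table(points, max_rate=25.0, factors=(1, 2, 3, 5, 10)):
    """Distance moved at max_rate/factor for each sub-sampling factor."""
    return {max_rate / f: distance_moved(points[::f]) for f in factors}

# Plot these values against the sample rate and look for the plateau:
# print(plateau_table(track_xy))   # track_xy: exported (x, y) samples (hypothetical)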
Missed samples
The actual sample rate may be lower than the maximum you set, because an image cannot be captured until the previous one has been processed. If the sample rate you define is too high, EthoVision will miss samples (up to 1% is acceptable) and the processor load will be high. The percentage of missed samples is shown in the Analysis Results and Scoring pane (see page 251) and in the Trial List as a System Variable (page 262). You can calculate the number of missed samples in acquired tracks with the Number statistic of continuous variables (e.g., velocity). After acquisition, you can also see the proportion of missed samples in the Trial list (see Chapter 9) as one of the System Variables. If your processor load is higher than 100% and there are large numbers of missed samples, you will have to lower the sample rate. The following factors may cause the processor load to be too high:
- Computer memory, processor speed and video card capacity – See the system requirements on page 38. In general, a computer with a dual-core CPU lets you work with higher sample rates than other computers do.
- Other programs installed – Do not install other video software (for example, video editing programs or DVD burning software), because this can interfere with EthoVision's video processing and cause a reduction in performance.
- Other programs running – Make sure you shut down all other programs, including those running in the background, such as e-mail programs and virus scanners. These are usually shown in the System Tray in the bottom-right corner of your screen.
- Windows Classic – Performance increases considerably if you set the Windows Theme to Windows Classic when using Windows 7.
- Image resolution – For live video tracking, you can choose the resolution of your video image in the Experiment Settings (see page 95).
- Size of arenas – Make arenas as small as possible (but include the entire area the animal can be in).
- Number of arenas – If you track live and use more than four arenas in a trial, check first that no samples are missed. If the number of missed samples is too high, first make an MPEG-4 file (provided that you have the Picolo Diligent board installed on your PC), then track from that. More generally, if you track from video files, the number of arenas is never a problem as long as you select Detection determines speed (see page 280). When making detection settings, you can start with an arena definition containing only one arena, which speeds up the detection process. After you have finished configuring detection settings for one arena, you can add the others to the arena definition.
- Display options – You can decrease the processor load by minimizing the number of Track Features to be displayed (see page 250) and by closing the Analysis Results and Scoring pane (see page 285).
- Real-time analysis – Hiding the Analysis Results and Scoring pane saves processor power.
- Detection method – If possible, use the Gray scaling method, which requires less processor load than Static subtraction. Static subtraction requires less processor load than Dynamic subtraction and Differencing.
- Area to search for subjects – If you cannot achieve the optimum sample rate, make sure that you select Use scan window (see page 202), but only after you have finished configuring the detection settings.

Tracking from video files
You can switch the speed at which EthoVision acquires data from real time (1x) to the highest achievable by the computer, by selecting Detection determines speed (see page 280). This option allows you to:
- Ensure that you do not lose any frames when the video frame rate is faster than your processor can handle. The video is played slower than real time, without missed samples.
- Acquire data faster than in real time, when the video frame rate is slower than the processor can handle.

select video
If you track from video, you may want to acquire data from a video that differs from the one you used to create the Arena Settings. By default, the Detection Settings use the video you grabbed a background image from in the Arena Settings. If you want to track from another video file, click Select Video under Video. Browse to the location of your video and click Open. This option is only available if you chose to track from video files in the Experiment Settings.
image settings
If you track live, you can adjust the live video signal before EthoVision XT analyzes it for detection. For example, you can adjust contrast and brightness.
Click the Image button under Video. In the window that appears, adjust the properties you require. Contrast enhances the lighter and darker parts of the image, Brightness makes the image lighter, and Saturation increases the color intensity. The Image Settings also affect the image that you can save to a video file (see page 213). If you click the Default button, the settings are reset to the defaults of the camera driver.
The Image button is only available if your experiment is set to Live tracking. Depending on the camera, some settings may be greyed out.
Always try adjusting the lighting and the camera aperture settings before changing the Image Settings. If you change these settings, you need to redefine your detection thresholds (see above) and make a new reference image.

smoothing
In some cases you may want to adjust the quality of the video image before acquiring data. If your video contains fine-grained noise, this may be improved by using Video pixel smoothing. If the detected body contour is 'flickering', using Track noise reduction may improve the quality of the track. Click the Smoothing button and adjust one of the options below.

Video pixel smoothing
Select a Video pixel smoothing value to reduce the influence of fine-grained noise on detection. Because of fine-grained noise, adjacent pixels that are expected to have the same (or similar) gray scale values may have very different values. In such cases, EthoVision XT may occasionally detect groups of pixels as irrelevant subjects.
The Video pixel smoothing option reduces the difference between adjacent pixels prior to detection by smudging the image, that is, replacing the gray scale value of each pixel with the median of the surrounding pixels.
Pixel smoothing does not affect Color marker tracking. It does affect detection of the body contour in Marker assisted tracking.
Choose one of the values:
- None (default) – No pixel smoothing. The video image is analyzed for subject detection as it is.
- Low – Each pixel is blended with the 8 nearest pixels (pixel distance = 1).
- Medium – Each pixel is blended with the 24 nearest pixels (pixel distance 1 or 2).
- High – Each pixel is blended with the 48 nearest pixels (pixel distance 1, 2 or 3).
Example – A bright pixel (gray value = 240) is surrounded by dark pixels. If you select Video pixel smoothing = Low, that pixel gets the median value calculated among the 8 nearest pixels plus the pixel itself; if that median is 150, the pixel will look darker. If you specify Video pixel smoothing = Medium, the median is calculated over the 24 nearest pixels plus the pixel itself. If you specify Video pixel smoothing = High, an even bigger group of surrounding pixels is considered. (See the sketch at the end of this subsection.)
A high Video pixel smoothing level requires a significant amount of processor capacity. Also, Video pixel smoothing may result in losing information in the video image that is important for detection, for example, sharp borders of subjects.
Why use the Video pixel smoothing option?
- Select a moderate Video pixel smoothing value, or leave None selected, if adjacent pixels in the background are relatively constant. Using more surrounding pixels for the smoothing effect does not give better results.
- Select a high Video pixel smoothing value if adjacent pixels in the background are on average very different, for example, when the cage's bedding material looks grainy. In such cases you need to smooth each pixel using more surrounding pixels to compensate for this variation.
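As an illustration of the median-based smudging described above (not EthoVision's actual code), here is a sketch using NumPy; 'distance' maps to the pixel distance of the setting (1 = Low, 2 = Medium, 3 = High):

import numpy as np

def pixel_smooth(image, distance=1):
    """Replace each pixel with the median of its (2*distance+1)^2 neighborhood."""
    padded = np.pad(image, distance, mode="edge")
    h, w = image.shape
    size = 2 * distance + 1
    # collect the shifted copies of the image that cover the neighborhood
    stack = np.stack([padded[dy:dy + h, dx:dx + w]
                      for dy in range(size) for dx in range(size)])
    return np.median(stack, axis=0).astype(image.dtype)

# The example from the text: a bright pixel (240) among 150-valued neighbors
# takes the neighborhood median (150) and becomes darker:
# patch = np.array([[150, 150, 150], [150, 240, 150], [150, 150, 150]])
# print(pixel_smooth(patch)[1, 1])   # -> 150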
Track noise reduction
If the detected center point of your animal is continuously moving while your animal is in fact sitting still, the total distance moved will be overestimated. You can use track smoothing to correct for this after you have acquired your data (see page 401 for more information).
In some cases, better quality tracking can be obtained by reducing track noise during acquisition. This may especially be the case if you use Trial and Hardware Control. As an example, if the center point of an animal is detected in a zone, you want the pellet dispenser to drop a pellet. If the detected center point moves rapidly because of noise, this may result in a number of consecutive pellets being dropped, every time the center point crosses the border of the zone. Track noise reduction may solve this problem.
With Track noise reduction, rapid changes in the distance moved are compensated for and the path is smoothed. Using Track noise reduction in the Detection Settings influences the acquired track, and it is therefore not possible to change it back after acquisition. This is in contrast to post-acquisition smoothing (see page 401), where you can use profiles to calculate analysis results with and without those filters applied. Also, do not use Track noise reduction if you are particularly interested in rapid movements of your animal, for example, if you study the startle response of zebrafish larvae.
Figure 8.6 shows the effect of Track noise reduction on the walking path of a subject. In this example, the effect on the X-coordinates of the animal is shown.
Figure 8.6 The effect of Track noise reduction on the walking path of a subject.
Track noise reduction makes use of the Gaussian Process Regression method. It is applied during acquisition; hence it alters the acquired tracks, and this cannot be undone afterwards. With Gaussian Process Regression, the sample points are smoothed using the x-y coordinates of the previous 12 sample points. This differs from the Lowess post-acquisition smoothing method (see page 403), which uses samples both before and after the sample point to be smoothed. That is not possible during acquisition, because the x-y coordinates of future samples are not yet known.
If you use nose-tail tracking, the paths of the nose point and tail base are smoothed independently of the path of the center point.
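EthoVision's filter is a Gaussian Process Regression over the previous 12 samples; the exact model is not documented here. As a rough stand-in that shares the key property (it uses past samples only, never future ones), here is a causal weighted-average smoother; the window size and decay are assumptions for illustration:

import numpy as np

def causal_smooth(xy, window=12, decay=0.7):
    """Smooth each sample from the current and up to `window` previous samples.
    Weights fall off exponentially with age; only past data is used, because
    during acquisition future samples are not yet known."""
    xy = np.asarray(xy, dtype=float)
    out = np.empty_like(xy)
    for i in range(len(xy)):
        lo = max(0, i - window)
        chunk = xy[lo:i + 1]                            # past samples + current one
        w = decay ** np.arange(len(chunk) - 1, -1, -1)  # newest sample weighs most
        out[i] = (chunk * w[:, None]).sum(axis=0) / w.sum()
    return out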
activity settings
If you selected Activity analysis in the Experiment Settings, you must create settings for this analysis. To make it easier to judge whether the settings are correct, make sure the detected Body fill of your subject is not shown in the video window. Click the Show/Hide button in the top-right corner of your window, select Detection Features and de-select Body fill and Noise. Then select Activity. Close the Detection Features window and play the video. The detected pixel change between samples is shown in purple.
Click the Activity button in the Detection Settings window. The Activity Settings window opens (see Figure 8.7).
Figure 8.7 The Activity Settings window.
- Activity threshold – This value gives the threshold for the difference in gray scale values between a sample and the previous sample.
- Background noise filter – Use this filter to remove noise in the video or camera image. With the background noise filter, a pixel change is only counted as a change if the surrounding pixels have also changed. The pixels that are not fully surrounded by changed pixels are removed, and around the remaining pixels a layer of changed pixels is added. The higher the setting for the background noise filter, the more surrounding pixels are used. See Figure 8.8 for an explanation, and the sketch below.
- Compression artifacts filter – Use the compression artifacts filter to compensate for video artifacts that recur regularly. With the compression artifacts filter, only the changes that occur in a number of consecutive frames are taken into account. We recommend leaving this setting at its default value: Off if you track live, or On if you track from video or select Redo tracking. However, if you are interested in very brief or very fast changes, leave the Compression artifacts filter set to Off.
Create settings in such a way that all activity of your animal is detected and some noise is left. Also try whether lowering the sample rate (see page 208) and using Video pixel smoothing (see page 214) improves Activity detection. Then click the Show/Hide button once more and select Detection Features. De-select Activity and select Body fill. Then create detection settings for your subject. Or, if you need different sample rates for activity analysis and tracking, create separate detection settings for tracking.
It is also possible to only carry out activity analysis and not create detection settings for tracking. However, if you do so, EthoVision XT may have so much difficulty detecting the animal that the performance of acquisition decreases. This may result in many missed samples. Therefore, while creating activity settings, check that the proportion of missed samples does not become too high (see also "Missed samples" on page 212).
Figure 8.8 Background noise filter with the value 1. The black squares represent pixels that have changed in two consecutive samples. First, all pixels that are not completely surrounded by one layer of changed pixels are removed (red squares). Then, one layer of changed pixels is added around the remaining pixels. The thin red hairline shows the original changed pixels.
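The remove-then-add behavior of the background noise filter corresponds to a morphological opening of the changed-pixel mask. A sketch of that interpretation (an assumption about the filter, not Noldus code) using SciPy, where `level` plays the role of the filter value:

import numpy as np
from scipy import ndimage

def background_noise_filter(changed, level=1):
    """Morphological opening: erode `level` layers, then dilate them back.
    Isolated changed pixels disappear; larger regions survive, slightly smoothed."""
    changed = np.asarray(changed, dtype=bool)
    eroded = ndimage.binary_erosion(changed, iterations=level)
    return ndimage.binary_dilation(eroded, iterations=level)

# A lone changed pixel is removed, while a 3x3 block of changes survives:
# mask = np.zeros((8, 8), bool); mask[1, 1] = True; mask[4:7, 4:7] = True
# print(background_noise_filter(mask).astype(int))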
8.6 Detection settings (detection methods)

which detection method should i use?
There are four methods available to distinguish the animal from the background:
Use Gray scaling when:
- The animal's grayness differs from the background in all places that can be visited.
- The background does not change during a trial.
- Lighting is even (minimal shadows and reflections) during the trial.
Example – Tracking a white rat in a uniform black open field with no bright objects.
Use Static subtraction when:
- The Gray scaling method does not work (because other objects in the arena have a similar color to the animal).
- The background does not change over time.
- The light is constant during the trial.
Example – Tracking a white rat in an open field with unavoidable reflections or bright objects.
Use Dynamic subtraction when:
- During trials, light conditions gradually change or the background changes (bedding material is kicked around, food pellets are dropped, droppings appear, etc.).
Example – Tracking a mouse in a home cage provided with bedding material. The activity of the mouse causes the bedding to change appearance in the video image.
Use Differencing when:
- There is a lot of variation in contrast between the subject and the background within an arena. Variation in contrast can be caused, for example, by a gradient in light intensity in the arena or in the fur of the animal, e.g., hooded rats.

detection method: gray scaling
How does the Gray scaling method work?
The video image is converted to monochrome. Each pixel in the image has a gray scale value, ranging from 0 (black) to 255 (white). With Gray scaling, you define which range of gray scale values should be considered as the subject. The remaining gray scale values are considered as background.

Procedure
1. Select Gray scaling in the Method section of the Detection Settings window.
2. Insert the subject in the arena, or position the media file at a point where the subject is moving.
With the Gray scaling method selected in the Detection Settings window, it is not possible to grab a frame or to select another video file, because the Gray scaling method does not use a reference image.
3. In the Detection section, move the two sliders next to Select range, or type the values in the corresponding fields, to define the lower and upper limits of the gray scale values (range from 0 = black to 255 = white) of the animal. The background must not contain gray scale values within these limits.
4. In the Video window, check the quality of detection resulting from the current gray scale range. The detected subject shows the features and colors you have chosen in the Track Features window (see page 250).
- If the detected area is too small relative to the real subject, increase the range (at least in one direction – brighter or darker).
- Areas marked as Noise (by default shown in orange; see page 250) indicate that the gray scale range is too wide – narrow it in at least one direction.
5. Move the sliders until the subject (or the part which is of interest) is detected fully and the noise is minimized. Check that the subject is properly detected in all parts of the arena by moving the video slider, or by waiting for the live animal to move.
It is important that the complete animal's body is detected for optimal tracking. Proceed with the Contour adjustments (see page 234) to optimize body detection.
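In pixel terms, Gray scaling is a band threshold on the gray values, and the subject's position is the center of gravity of the pixels that pass it. A sketch of the idea (illustrative, not EthoVision code), assuming an 8-bit grayscale frame as a NumPy array:

import numpy as np

def gray_scaling(frame, lower, upper):
    """Classify as subject every pixel whose gray value lies in [lower, upper]."""
    return (frame >= lower) & (frame <= upper)

def center_point(mask):
    """Center of gravity (row, col) of the detected pixels, or None if empty."""
    pts = np.argwhere(mask)
    return pts.mean(axis=0) if len(pts) else None

# Example: a white rat on a dark floor might use a range like 180-255:
# subject_mask = gray_scaling(frame, 180, 255)
# print(center_point(subject_mask))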
detection method: static subtraction
How does the Static subtraction method work?
The video image is converted to monochrome. Each pixel in the image has a gray scale value, ranging from 0 (black) to 255 (white). With the Static subtraction method, you choose an image of the arena without the subject, named the Reference Image. When analyzing the images, EthoVision XT subtracts the gray scale value of each pixel in the reference image from the gray scale value of the corresponding pixel in the current image (live or from video). The pixels with a non-zero difference are considered the subject. You can remove small non-zero differences by defining the contrast between the current image and the background that must be considered as the subject (see the procedure below). The remaining pixels are considered as the background (see Figure 8.9).
Figure 8.9 An example of how the Static subtraction detection method works. The gray scale value of each pixel of the reference image is subtracted from the gray scale value of the corresponding pixel of the live image. Where the two images are identical, the result is '0'; pixels for which the difference is greater than '0' and within the contrast range you have set are considered to be the subject. So, with this method your task is to specify the contrast that optimizes the detection of the subject.

Procedure
1. Select Static subtraction in the Method section of the Detection Settings window.
2. Under Detection, click the Settings button next to Reference Image. The image on the left is the Reference Image that is used at the start of the track. The options on the right of this window are greyed out.
Figure 8.10 The Reference Image window for static subtraction and live tracking. If you track from a video file, the text in this window is slightly different but the options are the same. Follow the procedure in consecutive order until the left image is without animals.
The aim is to obtain a reference image that does not contain images of the animals you want to track. To do so, follow the instructions below in consecutive order: if A fails, move on to B; if that fails, move on to C.
1. Grab Current (A) – Scroll through the video until you find an image without animals. If you track live, make sure that there are no animals in the arena. Click Grab Current (A). This image will be the initial reference image. Skip steps 2 and 3 and click Close.
If your video does not contain images without animals, continue with option 2. Also continue with option 2 if you track live and you cannot start with an empty arena.
2. Grab from Other (B) – You may have a video with an identical background to the one you use for tracking, but without animals. Or you may have an image of the background without animals. If this is the case, click Grab from Other (B) and select this video file or image file. If you select a video file, the first frame of this file is used as the initial reference image. If you select an image file, it must have the same resolution as the video file you use for tracking. Browse to this file and click Open. Skip step 3 and click Close. If you do not have such a video or image, proceed with option 3.
By default, the reference images are stored in the folder Bitmap Files of your experiment. If the background has not changed, you can use these images as reference images in other experiments.
3. Start Learning (C) – With this option, an average image of the entire video is made. If the animals are moving, learning averages out the pixels of the animals, resulting in an initial reference image without animals. If you track live, click Start Learning, and click Stop Learning as soon as you have obtained an initial reference image without animals. Click Close.
Figure 8.11 The Learning process in the Reference Image window. A - The video image, in which the animal is in view at all times. B - The result of applying Learn: the moving animal has been removed from the background.
4. Click Close when you are finished grabbing a reference image.
5. From the Subject is … than background list, select one of the following, depending on the color of the subject you want to track:
- Brighter than background – For example, to track a Wistar rat in a black open field.
- Darker than background – For example, to track a C57BL/6 mouse in an open field with white bedding.
- Brighter and darker than background – For example, to track a DBA/2 mouse in a home cage with a white background and a black shelter, or a hooded (black and white) rat in a uniform gray open field.
Result – Depending on the selection above, different contrast sliders become available:
- For Brighter than background – the Bright Contrast slider.
- For Darker than background – the Dark Contrast slider.
- For Brighter and darker than background – both sliders.
For each slider, the contrast varies from 0 (no contrast) to 255 (full contrast). Unlike with Gray scaling, the values selected with the sliders represent the difference between the current and the reference image, not absolute gray scale values.
When the subject is both brighter and darker than the background, detection only works well when there is enough contrast between the areas of different brightness and the background. For example, tracking a hooded rat works well when the background is intermediate between black and white.
6. Release the subject in the arena, or position the media file at a point where the subject is moving.
7. Move the appropriate slider, or type the values in the corresponding fields, to define the lower and upper limits of the contrast that corresponds to the subject. In the Video window, check the quality of detection.
Example 1 – The subject is brighter than the background, and only the whiter area of the subject is detected. Move the Bright Contrast slider to the left to extend the range towards lower contrast between subject and background.
Example 2 – The subject is darker than the background, and its body is detected only partially in the areas of lower contrast. Move the Dark Contrast slider to the left to extend the range towards lower contrast between subject and background.
Example 3 – The subject is both brighter and darker than the background, and only the darker areas of the black fur are detected. Move the Bright Contrast slider to the left to extend the range towards lower contrast between the subject's white areas and the gray background. Then move the Dark Contrast slider to the left to extend the range towards lower contrast between the subject's black areas and the background.
8. Move the sliders until the subject (or the part which is of interest) is detected fully and the noise is minimized. Check that the subject is properly detected in all parts of the arena by playing back different parts of the video file, or by waiting for the live animal to move.
It is important that the complete animal's body is detected for optimal tracking. Proceed with the Contour adjustments (see page 234) to optimize body detection.
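A pixel-level sketch of the subtraction logic described above (an illustration, not EthoVision's implementation): the sign of the difference corresponds to 'brighter' or 'darker' than the background, and the contrast sliders correspond to a band on the difference values:

import numpy as np

def static_subtraction(frame, reference, dark=None, bright=None):
    """Subject = pixels whose contrast with the reference falls in the given bands.
    `dark` selects pixels darker than the reference, `bright` pixels brighter;
    pass both bands for a subject that is brighter and darker than the background."""
    diff = frame.astype(np.int16) - reference.astype(np.int16)
    mask = np.zeros(frame.shape, dtype=bool)
    if dark is not None:
        mask |= (-diff >= dark[0]) & (-diff <= dark[1])
    if bright is not None:
        mask |= (diff >= bright[0]) & (diff <= bright[1])
    return mask

# Example: a dark mouse on light bedding, with an assumed contrast band of 30-255:
# mask = static_subtraction(frame, reference, dark=(30, 255))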
detection method: dynamic subtraction
How does the Dynamic subtraction method work?
As with Static subtraction (see page 221), the program compares each sampled image with a reference image, with the important difference that the reference image is updated regularly. This compensates for temporal changes in the background. With Dynamic subtraction, the reference image is updated at every sample. You specify the percentage contribution of the current video image to the reference image.
Figure 8.12 In the Dynamic subtraction detection method, the reference image is updated at each sample. The starting reference image is the one you specify by clicking the Grab from Video, Grab from Camera, or Grab from Other button in the Reference Image window; otherwise it is the first frame analyzed (not shown in the picture). For the general sample n, the reference image is obtained by summing the reference image of the previous sample n-1 and the current image n, in which the area around the subject estimated from the previous sample has been removed. The current image with the subject removed is given the weight α that you specify (see the procedure), while the previous reference image is given the weight (1-α). Because of the way it is determined, each reference image contains information on a number of past images, depending on the value of α. See the text for more information.

Procedure
1. In the Method section of the Detection Settings window, select Dynamic subtraction.
2. In the Detection section, click the Reference Image Settings button. Create reference images without animals, following the procedure under "Reference image" on page 228.
3. From the Subject is … than background list, select one of the options, depending on the color of the subject you want to track (see step 5 on page 223 for details).
4. Move the slider next to Current frame weight, or enter the value in the appropriate field, to specify how the reference image is updated (range 0-100%):
- In typical situations, a value between 1 and 5 gives a good result.
- Select a low value if you want a large number of past images to contribute to each reference image. As a result, changes in the background are diluted over many images. Choose a low value when the background changes slowly.
- Select a high value if you want a small number of past images to contribute to each reference image. As a result, changes in the background are captured over a short time. Choose a high value when the background changes rapidly, for example, when the subject is very active and moves the bedding material around.
- If you select 0, the reference image is not updated. This is the same as using Static subtraction.
- If you select 100, each sample gets its own reference image, with no contribution from past images.
- Changing the Current frame weight does not affect the processor load significantly.
To find the optimal Current frame weight, set a value and carry out one or more trials. Evaluate whether the tracking was satisfactory. If not, increase or decrease the setting by 20% and try again.
It is important that as much of the animal's body as possible is detected for good tracking. Proceed with the Contour adjustments (see page 234) to optimize body detection.

Reference image
Under Detection, click the Settings button next to Reference Image. You now see two video images. The image on the left is the Reference Image that is used at the start of the track. The image on the right is the Reference Image that is continuously updated during tracking.
Figure 8.13 The Reference Image window for dynamic subtraction and tracking from a video file. If you track live, the text in this window is slightly different but the options are the same. Follow the procedure in consecutive order until both images are without animals.
The aim is to obtain reference images that do not contain images of the animals you want to track. To do so, follow the instructions below in consecutive order: if A fails, move on to B; if that fails, move on to C, etc.
1. Grab Current (A) – Scroll through the video until you find an image without animals. If you track live, make sure that there are no animals in the arena. Click Grab Current (A). This image will be the initial reference image. Skip steps 2-4 and click Close.
If your video does not contain images without animals, continue with option 2. Also continue with option 2 if you track live and you cannot start with an empty arena.
2. Grab from Other (B) – You may have a video with an identical background to the one in the video you track from, but without animals. Or you may have an image of the background without animals. If this is the case, click Grab from Other (B) and select this video file or image file. If you select a video file, the first frame of this file is used as the initial reference image. If you select an image file, it must have the same resolution as the video file you use for tracking. Browse to this file and click Open. Skip steps 3 and 4 and click Close. If you do not have such a video or image, proceed with option 3.
By default, the reference images are stored in the folder Bitmap Files of your experiment. If the background has not changed, you can use these images as reference images in other experiments.
3. Start Learning (C) – With this option, an average image of the entire video is made. If the animals are moving, learning averages out the pixels of the animals, resulting in an initial reference image without animals. If you track live, click Start Learning, and click Stop Learning as soon as you have obtained an initial reference image without animals. If this step results in a satisfactory initial reference image, skip step 4 and click Close. If not, proceed with step 4.
4. Grab Dynamic Image (D) – If options 1 to 3 do not result in a satisfactory initial reference image, using the currently updated reference image as the initial reference image may solve the problem. Click Grab Dynamic Image (D) below the dynamic reference image.
Acquisition settings – If you run a number of consecutive trials, you may want to choose which image to use as the initial reference image:
- Use saved reference image – Use this option if the background remains constant between the different trials.
- Use dynamic reference image – Use this option if the background changes between the different trials.
Grabbing the reference image is optional with the Dynamic subtraction method. If you do not grab one, EthoVision XT takes the first sample or video frame available and considers that the first reference image.
If you are tracking from video files, you must play the video forward while making Dynamic subtraction settings. This is because the program needs to update the reference image. Do not skip through the file, since the reference image will then not be made correctly.

How is the reference image updated?
A video stream is composed of a number of video images (frames). During data acquisition, EthoVision XT analyzes one every x images, according to the sample rate specified (see page 208). When analyzing sample (image) n, the reference image is obtained by summing up the gray scale values of each pixel from two images:
- The reference image, whose pixels hold an average value of previous images.
- The current image, in which a square area around the subject detected in the previous sample has been removed. This provides a rough estimate of the current background.
The gray scale values are summed up according to the formula:
Reference(i,n) = (1-α) * Reference(i,n-1) + α * Current(i,n)
for each pixel i, where:
- Reference(i,n) = gray scale value of pixel i in the reference image of sample n.
- Reference(i,n-1) = gray scale value of pixel i in the reference image of sample n-1.
- Current(i,n) = gray scale value of pixel i in sample n, in which a square area around the previously detected subject has been removed.
- α = Current frame weight.
The Current frame weight determines the relative weight of the two components of the new reference image.
Because the above formula is recursive, that is, each value of Reference(i,n) is also a function of the previous sample, the value of α determines the number of past images that contribute to the reference image for sample n. The lower α, the more past images contribute at least partially to the current reference image.
The extent to which each past image contributes to the current reference image is a power function of (1-α): the older an image relative to the current one, the smaller its contribution to the reference image.
Example – If α = 20%, then 1-α = 80%. The first video image contributes 80% to the second sample, 80%² = 64% to the third sample, 80%³ ≈ 51% to the fourth sample, and so on. By around the 21st sample, the contribution of the first image has dropped to about 1%.
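The recursion above is an exponential moving average per pixel. A sketch of one update step (illustrative, not EthoVision's code), which also reproduces the contribution arithmetic of the example:

import numpy as np

def update_reference(reference, current, alpha=0.05):
    """One Dynamic subtraction update: blend the current (subject-removed)
    image into the running reference with Current frame weight alpha."""
    return (1.0 - alpha) * reference + alpha * current

# Contribution of the first image to later reference images (alpha = 20%):
# for n in (2, 3, 4):
#     print(n, round(0.8 ** (n - 1) * 100, 1), "%")   # 80.0, 64.0, 51.2 %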
Reference image
Under Detection, click the Settings button next to Reference Image. You now see two video images. The image on the left is the reference image that is used at the start of the track. The image on the right is the reference image that is continuously updated during tracking. The aim is to obtain reference images that do not contain images of the animals you want to track. To do so, follow the instructions below in consecutive order. If A fails, move on to B; if that fails, move on to C, etc.

It is important that as much as possible of the animal's body is detected for good tracking. Adjust the Subject Contour settings (see page 234) to optimize body detection.

1. Grab Current (A) – Scroll through the video until you find an image without animals. If you track live, make sure that there are no animals in the arena. Click Grab Current (A). This image will be the initial reference image. Skip steps 2-4 and click Close.
If your video does not contain images without animals, continue with option 2. Also continue with option 2 if you track live and you cannot start with an empty arena.
2. Grab from other (B) – You may have a video with the same background as the one in the video you track from, but without animals. Or you may have an image of a background without animals. If this is the case, click Grab from Other and select this video file or image file. If you select a video file, the first frame of this file will be used as the initial reference image. If you select an image file, it must have the same resolution as the video file you use for tracking. Browse to this file and click Open. Skip steps 3 and 4 and click Close. If you do not have such a video or image, proceed with option 3.
By default, the reference images are stored in the folder Bitmap Files of your experiment. If the background has not changed, you can use these images as reference images in other experiments.

Figure 8.14 The Reference Image window for differencing and tracking from video file. If you track live, the text in this window is slightly different but the options are the same. Follow the procedure in consecutive order until both images are without animals.

3. Start learning (C) – With this option, an average image of the entire video is made. If the animals are moving, learning averages out the pixels of the animals, resulting in an initial reference image without animals.
If you track live, click Start Learning, and then click Stop Learning as soon as you have obtained an initial reference image without animals.
If this step results in a satisfactory initial reference image, skip step 4 and click Close. If not, proceed with step 4.
4. Grab Dynamic Image (D) – If options 1 to 3 do not result in a satisfactory initial reference image, using the currently updated reference image as the initial reference image may solve the problem. Click Grab Dynamic Image (D) below the dynamic reference image.

Acquisition settings
If you run a number of consecutive trials, you may want to choose which image to use as the initial reference image.
- Use saved reference image – Use this option if the background remains constant between the different trials.
- Use dynamic reference image – Use this option if the background changes between the different trials.

Grabbing the reference image is optional with the Dynamic Subtraction method. If you do not grab one, EthoVision XT takes the first sample or video frame available and considers that the first reference image.

If you are tracking from video files, you must play the video forward while making the dynamic subtraction settings. This is because the program needs to update the reference image. Do not skip through the file, since the reference image will then not be made correctly.
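For intuition, the Start Learning option amounts to averaging the frames of the video so that anything that moves blurs into the background. A minimal sketch, assuming an OpenCV-readable video file; learn_reference_image is a hypothetical helper, and EthoVision's own learning procedure may differ in detail:

```python
import cv2
import numpy as np

def learn_reference_image(video_path):
    """Average all frames of a video so that moving animals
    are blended out, leaving an animal-free background."""
    cap = cv2.VideoCapture(video_path)
    total, count = None, 0
    while True:
        ok, frame = cap.read()
        if not ok:                     # end of file
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY).astype(np.float64)
        total = gray if total is None else total + gray
        count += 1
    cap.release()
    return (total / count).astype(np.uint8)  # the learned reference image
```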
How is the reference image updated?
The Differencing method uses a Gaussian distribution of all pixels in a frame. EthoVision XT keeps a running average of the mean and the variance σ² of the gray value of each pixel in order to detect unlikely pixels. These pixels are considered to be the subject.

The mean of the gray values is updated according to the same formula as for Dynamic subtraction (see page 229).

The variance of the gray values is updated according to the following formula:

Variance(i,n) = (1 − α) × Variance(i,n−1) + α × (Current(i,n) − Reference(i,n−1))²

for each pixel i, where:
- Variance(i,n) = variance of the gray scale value of pixel i in the reference image of sample n.
- Current(i,n) = mean gray scale value of pixel i in sample n, where a square area around the previously detected subject has been removed.
- Reference(i,n−1) = mean gray scale value of pixel i in the reference image of sample n−1.
- α = Current Frame weight.

The Current Frame weight α determines the relative weight of the two components of the new reference image (see the example on page 230).
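A short sketch of the two running updates, and of how a per-pixel Gaussian model can flag 'unlikely' pixels. This is an illustrative re-implementation, assuming gray scale frames as float arrays; the threshold k is hypothetical and only stands in for the role of the Sensitivity slider:

```python
import numpy as np

def update_pixel_statistics(mean, var, current, alpha=0.20):
    """Running per-pixel mean and variance, following the two recursive
    formulas above: variance is updated against the previous mean
    (Reference(i,n-1)), then the mean itself is updated."""
    var = (1.0 - alpha) * var + alpha * (current - mean) ** 2
    mean = (1.0 - alpha) * mean + alpha * current
    return mean, var

def subject_pixels(mean, var, current, k=3.0):
    """Flag pixels whose gray value is improbable under the per-pixel
    Gaussian model; k is a threshold in standard deviations."""
    sigma = np.sqrt(var) + 1e-9        # guard against zero variance
    return np.abs(current - mean) / sigma > k
```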
8.7 Subject contour settings

contour erosion and dilation

Before you start setting the Contour adjustments
It is important that the complete body of the animal is detected (indicated by the 'noise' color in the video window). If even after setting the Contour adjustments you do not achieve this, go back to the appropriate Detection method and adjust the contrast to improve body detection.

Figure 8.15 The picture on the left shows a sub-optimal result of body detection (part of the right side of the body is not detected). The picture on the right shows the result when the contrast settings are optimized; now the complete body is detected. The color of the body contour at this stage is orange (= noise) because the model parameters have not been configured yet.

Why use Contour Adjustments?
- To give a smooth contour for accurate modeling and to remove individual pixels of noise – For this purpose, Erode first, then dilate is selected by default.
- To eliminate the detection of thin objects such as the rat's tail – Select Erode first, then dilate. A reason why you may want to eliminate the animal's tail is that when the animal sits still and its tail moves, this adds to the distance moved.
- To remove indentations in the shape of the subject, such as those caused by the cage bars, or to 'join up' the stripes on the animal's body (for wasps, fish, etc.) – Select Dilation and Erosion, and Dilate first, then erode. This removes indentations in the shape of the subject, giving a smoother outline, or ensures that EthoVision XT detects them as one animal.
- To deal with occlusions of the animal's body – If you use nose-tail tracking (Advanced Model-based) with rodents, optimize the Shape stability (see page 243).
- To deal with two animals touching – When two animals touch, EthoVision loses the separate shapes. By optimizing the Modeling effort (see page 243), EthoVision can determine which part of the large body fill belongs to which animal.

Figure 8.16 A – An example of a rat detected by EthoVision XT without any filtering applied. B – The same animal, after applying the Erosion filter. C – The layer of pixels removed by Erosion. D – The same animal when first Erosion and then Dilation are applied. E – The net result of Erode first, then dilate: the pixels corresponding to the rat's tail are removed.

Contour erosion
The Contour erosion function reduces the subject's area by setting the contour pixels of the subject to the background value. The detected subject appears smaller in the Video window.
To apply erosion, select Contour erosion and, from the list, select the thickness of the layer of pixels to be removed, expressed in number of pixels (minimum = 1, maximum = 10).
Figure 8.16A shows the subject as detected by EthoVision with no filtering. After applying erosion, a layer of pixels is removed from the contour (Figure 8.16B). Figure 8.16C shows the pixels that were removed.

Contour dilation
The Contour dilation function increases the subject's surface area by setting the background pixels adjacent to the subject's contour to the subject value. Therefore, the detected subject appears larger in the Video window.
To apply dilation, select Contour dilation and, from the list, select the thickness of the layer of pixels to be added, expressed in number of pixels (minimum = 1, maximum = 10).
Figure 8.16A shows the subject as detected by EthoVision XT with no filtering. After removing the rat's tail with the erosion function (Figure 8.16B), a layer of pixels is added back using dilation (Figure 8.16D), restoring the original size of the subject.

Combining dilation and erosion
Select both Dilation and Erosion if you want to apply the two filters together. From the Order list, select one of the following:
- Erode first, then dilate – A layer of pixels is removed from the contour, then added back.
- Dilate first, then erode – A layer of pixels is added to the contour, then removed.

Use Erode first, then dilate when you use either the Model-based (XT 5) or the Advanced Model-based (XT 6) nose-tail tracking method, because in this case the tail can negatively affect tracking. When you use the Shape-based (XT 4) method, make sure the tail is fully detected as part of the subject.
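In image-processing terms, Erode first, then dilate is morphological opening and Dilate first, then erode is morphological closing. A minimal sketch with SciPy, assuming a Boolean subject mask; the function names are illustrative, not EthoVision's own:

```python
from scipy import ndimage

def erode_then_dilate(subject_mask, pixels=2):
    """Morphological opening: removes thin structures such as the tail,
    then restores the remaining body to roughly its original outline.
    `pixels` corresponds to the layer thickness (1-10)."""
    opened = ndimage.binary_erosion(subject_mask, iterations=pixels)
    return ndimage.binary_dilation(opened, iterations=pixels)

def dilate_then_erode(subject_mask, pixels=2):
    """Morphological closing: fills indentations, such as those caused
    by cage bars, and joins up stripes on the animal's body."""
    closed = ndimage.binary_dilation(subject_mask, iterations=pixels)
    return ndimage.binary_erosion(closed, iterations=pixels)
```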
8.8 Subject size settings

subject size
The Subject size settings use the result of the body detection to model the body size of the animals. This prevents objects like droppings or large reflections from being detected during tracking. Please note that the term size here means surface area in video pixels, not length or screen pixels. Enlarging the Video window does not change the subject's size in video pixels.

Setting the Subject size for a single animal
- Set the Detected subject size, using the Minimum and Maximum subject size, when you want to carry out Center-point detection or Nose-tail detection with either the Shape-based (XT 4) or Model-based (XT 5) detection method. The Detected subject size sets the absolute limits of the size that can be detected as a subject.
- Set the Modeled subject size when you want to carry out Nose-tail detection using the Advanced Model-based (XT 6) detection method. The Modeled subject size is the size of the model that the program tries to fit to the detected subject.

Setting the Subject size for multiple animals
Set the Modeled subject size when you want to track multiple animals.

Before you set the Subject size, make sure all animal body contours are detected properly and, for multiple animals, that the animals do not touch each other.
If you selected Behavior recognition in the Experiment Settings, see page 246.

To set the Subject size:
1. In the Subject size section, click the Edit button.
In the Subject Size window, in the figure at the top, the thin red contour represents the current size of what EthoVision XT assumes is the animal shape.
- If you want to set the Detected subject size, proceed with step 2.
- If you want to set the Modeled subject size, proceed with step 3.
Click the info button for more information about setting the subject size.
2. Set the Minimum and Maximum subject size (represented by a green contour):
- Maximum subject size – The largest surface area (in pixels) that is detected as the subject. Objects bigger than the Maximum subject size, for example, the experimenter's arm, are detected as noise and not tracked. Decrease the Maximum subject size until its thick green contour surrounds the thin red contour by a fair margin.
- Minimum subject size – The smallest surface area (in pixels) that is detected as the subject. Objects smaller than the Minimum subject size, such as droppings or disturbed sawdust, are detected as noise and not tracked. Increase the Minimum subject size until its thick green contour is smaller than the thin red contour by a fair margin.
The two sliders are interdependent. So, after you have set the Minimum subject size, when you next change the Maximum subject size, the slider for the Minimum subject size also moves (although the size in pixels stays the same).

Figure 8.17 The Subject size window with the current detected subject size, Minimum and Maximum subject size.

3. In the Modeled subject size group, select Apply settings to all subjects if your multiple animals have similar sizes.
The Modeled subject size settings are only available when you use multiple subjects or the Advanced Model-based (XT 6) nose-tail detection.
4. Select one of the subjects to model the subject size for, by clicking the name of the subject.
5. Next, adjust the modeled subject size (under Average - pixels) to the detected subject size (under Current - pixels). You do this by clicking the Grab button. Keep clicking the Grab button until the modeled (Average) subject size equals the detected (Current) subject size.
When the modeled (Average) subject size equals the detected (Current) subject size, this becomes visible:
- In the Modeled subject size group: the Average subject size now equals or is larger than the Current subject size (see the table in Figure 8.18).
- In the Video window: the modeled subject size now completely overlaps the current subject size (see the Video window in Figure 8.18).
- In the picture at the top of the Subject size window: the bold yellow contour represents the modeled subject size. This now coincides with the detected subject size indicated by the thin red contour (see Figure 8.17).

Figure 8.18 Part of the Modeled subject size group in the Subject size window (left) and the Video window. In the table, Current shows the current detected subject size in pixels, Average shows the modeled subject size in pixels. The arrows point to the visual feedback you get about the current and average subject size in the Video window.
6. You can now set the Tolerance. Click the corresponding cell and enter a value.
The Tolerance determines the allowed deviation from the average subject size. When the Current detected size deviates more from the Average subject size than the Tolerance, the object is no longer considered to be the subject and EthoVision starts making an educated statistical guess about the body contour of the animal.
This is visible in the Video window as a wobbling marker-color area. When this happens while the animals do not touch, you should increase the Tolerance.
7. Select the Fix check box for each subject.
8. You can now proceed to set the Maximum noise size, Shape stability and Modeling effort.

Tips for setting the Subject Size
- Make sure you do not set the Tolerance too small; it is better to get a wrong body size/shape than a wrong location of the animal.
- It is better to set your Average subject size slightly bigger than the actual subject size, especially when you carry out nose-tail tracking.
- If you want to carry out live tracking with multiple similarly-sized animals, it is recommended to first introduce one animal into the arena and make the Subject Size settings for this animal.
- If the subject size changes a lot between trials, it is recommended to create new Detection Settings for the new size.

Figure 8.19 Part of the Modeled subject size group in the Subject size window (left) and the Video window. The modeled (Average) subject size is now adjusted to the detected (Current) subject size. Compare the table and video window in this figure with those in Figure 8.18.
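Conceptually, the Minimum and Maximum subject size act as an area filter on the detected blobs. A hedged sketch of that idea, assuming a Boolean detection mask and sizes in pixels; filter_by_size is a hypothetical helper, not EthoVision's actual implementation:

```python
import numpy as np
from scipy import ndimage

def filter_by_size(detection_mask, min_size, max_size):
    """Keep only connected blobs whose surface area (in video pixels)
    lies between the Minimum and Maximum subject size; smaller blobs
    (droppings, sawdust) and larger ones (an experimenter's arm) are
    treated as noise."""
    labels, _ = ndimage.label(detection_mask)
    sizes = np.bincount(labels.ravel())     # sizes[0] is the background
    keep = (sizes >= min_size) & (sizes <= max_size)
    keep[0] = False                         # never keep the background
    return keep[labels]                     # per-pixel subject mask
```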
8.9 Working with Nose-tail base detection

overview
When you set an experiment for Nose-tail base detection, EthoVision XT analyzes the contour of the area detected as subject at each sample, and assigns the nose-point and tail-base to two specific pixels of the contour. Furthermore, it determines the direction the animal is supposed to point to (Head direction).
- Nose- and tail-base points – The two points are detected independently through one of two complex algorithms. The nose-point is found in all cases, except when the center-point is not found either. The tail-base may not be found in a few cases even if detection is good.
Note:
- You can have EthoVision detect the nose- and tail-base points of your subjects when you have upgraded to the Multiple Body Point Module. To do so, upgrade your hardware key (see page 51). To set an experiment to Nose-tail base detection, in the Experiment Settings select Center-point, nose-point and tail-base detection (see page 100).
- Reliable tracking of the nose and tail-base is limited by the size of the video image. You can mix four camera images, as in the case of a group of PhenoTypers, with good results. Mixing 16 camera images makes the subjects too small for reliable nose and tail-base tracking.
- Head direction – Once the nose-point has been found, two points are determined along the contour, lying at a specific distance from the nose-point. The Head direction is the line that divides equally the angle formed by the center-point and those two additional points.
The Head direction to zone is quantified as a dependent variable and is expressed in units of rotation (see page 610).
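Read literally, the Head direction is the bisector of the angle that the two contour points form with the center-point. A sketch of just that geometric step, assuming the two contour points have already been found (the point-picking along the contour is not shown here):

```python
import numpy as np

def head_direction(center, p1, p2):
    """Bisector of the angle at `center` formed by contour points p1 and
    p2: normalize both center-to-point vectors, sum them, and normalize
    the result."""
    v1 = (p1 - center) / np.linalg.norm(p1 - center)
    v2 = (p2 - center) / np.linalg.norm(p2 - center)
    bisector = v1 + v2
    return bisector / np.linalg.norm(bisector)

# Example: contour points on either side of the nose.
d = head_direction(np.array([0.0, 0.0]), np.array([3.0, 1.0]), np.array([3.0, -1.0]))
print(d)  # [1. 0.]: the head points along the positive x-axis
```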
methods of nose-tail detection
In EthoVision XT, three methods for nose-tail base detection are available:
- Shape-based (XT 4) – This detection method analyzes the contour of the area detected as subject at each sample to assign the nose-point and tail-base. Make sure in the detection settings that the tail is fully detected. With this method it may be possible to track 'non-rodent' shapes, but the method is not designed for that.
- Model-based (XT 5) – This detection method analyzes the varying shape of the contour of the area detected as subject and builds up a 'rodent model'. It is more robust than the Shape-based method because it does not require the nose and tail to be visible: it can 'predict' the position of the nose and the tail based on previous samples. Make sure in the detection settings that the tail is removed from the body contour with Erode and Dilate (see page 236).
- Advanced Model-based (XT 6) – This detection method learns the animal's shape and how it moves in the first 15 frames and continually updates its statistics. Therefore, it can handle severe shape distortions, for example, when the animal's body is occluded or when multiple animals touch. However, it requires a lot of computer performance.
This is the only method available when you track multiple animals with nose-tail detection. It is the preferred method for rodents. Make sure in the detection settings that the tail is removed from the body contour with Erode and Dilate (see page 236).

Which of the three methods should I use?
- When you want to track animals other than rodents, we recommend the Shape-based (XT 4) method.
- When you want to track a single rodent without occlusions or difficult tracking conditions, we recommend the Model-based (XT 5) method.
- When you track rodents that can be occluded, for example, by bars or other objects in the cage, we recommend the Advanced Model-based method, and to track From video file.
- When you want to track multiple rodents using Marker assisted tracking, EthoVision automatically selects the Advanced Model-based (XT 6) method. In this case, we recommend you track From video file.

maximum noise size
Maximum noise size is only available if you have chosen the Advanced Model-based (XT 6) nose-tail detection method.
You set the Maximum noise size in the Subject size window:
1. Go to the Advanced section by clicking the little down-arrow at the bottom-right of the Subject size window.
2. Set the Maximum subject noise. The value should be lower than the minimum subject size, but high enough to remove noise from the video image.

shape stability
The Shape stability setting is only available if you have chosen the Advanced Model-based (XT 6) nose-tail detection method.
The Shape stability setting is used when you track animals whose body can be occluded by, for example, cage bars or part of the body of another animal. When this happens, the animal's body consists of two separate objects that are close together.
You set the Shape stability in the Subject size window:
1. Go to the Advanced section by clicking the little down-arrow at the bottom-right of the Subject size window.
2. The Shape stability optimized for slider has two extreme settings:
- Occlusions – When you set the slider close to Occlusions, EthoVision considers separate objects that are close together to be part of one animal.
- Noise – When you set the slider close to Noise, EthoVision considers separate smaller parts not to be part of the animal.
The figure below shows the animal model as a result of applying the two extreme Shape stability settings.

Figure 8.20 An example of the result of the two extreme Shape stability settings. 'Noise' shows that the front of the animal, on the other side of the bar, is not considered to be part of the animal. 'Occlusion' displays the animal body as a whole.

If you are not sure which setting to select, leave Shape stability at the default value of 620.

modeling effort
The Modeling effort setting is used when two animals touch and EthoVision loses the separate shapes. At this point, EthoVision tries to determine which part of the big 'merged' body fill belongs to which animal. This costs a lot of processing load.
The Modeling effort optimized for slider has two extreme settings:
- Performance – When you set the slider close to Performance, EthoVision is only allowed a short time to determine which part of the 'merged' body fill belongs to which animal. Therefore, modeling quality is low.
- Modelling – When you set the slider close to Modelling, EthoVision is allowed a longer time per frame to determine which part of the 'merged' body fill belongs to which animal. Therefore, modeling quality is good, but this costs a lot of processor load.
We recommend selecting Modelling only when you have a computer that exceeds the minimum system requirements.
When you are not sure which setting to select, leave Modeling effort at the default value of 500.

how to optimize nose-tail detection
Because of the way the nose- and tail-base points are found, nose-tail base detection depends greatly on the quality of the video image and the experimental setup. Before using this feature, please check the following guidelines:

Conditions related to the Arenas
- Light – Light conditions must be equal across the arena. Try to remove shadows, light spots and reflections. For proper detection, the subject's body contour must be kept as constant as possible across the whole arena.
- Subject/background contrast – The color of the subject and that of the background must contrast enough to get a full and clear body contour.
- Video quality – Noise and interference reduce the proportion of samples in which the subject is correctly detected.
- Noise reduction – The Video Pixel smoothing function (see page 214) can sometimes help to get a more appropriate body contour. However, this is of little use if the video has too much noise or too little contrast.
- Areas hidden from the camera view – When the animal enters or exits areas hidden from the camera (for instance, a shelter), the nose-point and tail-base are wrongly assigned.
- Number of arenas – Reliable tracking of the nose and tail-base is limited by the size of the video image. You can mix at most four camera images, as in the case of a group of PhenoTypers, with good results.
Conditions related to the Subjects
- Subject's apparent size – The subject must be large enough to get a constant body contour. Small animals and large arenas pose detection problems with nose- and tail-base points. When you mix the images of multiple cameras with a quad unit, as in the case of a group of PhenoTypers, a group of 4 cameras gives good results. When mixing 16 PhenoTypers, the apparent size of the subject is generally too small.
- Subject's color variation – For hooded rats, the light flanks and dark head must contrast with the background, otherwise detection of the body contour is sub-optimal, although the Differencing detection method (see page 230) can help.
- Water maze – Tracking nose- and tail-base points in a water maze is impossible, because the tail-base is under the water and it is not possible to obtain a proper body contour.
- Subject's behavior – Immobile animals are hard to track because their body contour differs from that of a mobile animal. Nose-points are therefore hard to detect.

Experiment Settings
- Detection methods – We recommend tracking from video files if you use the Advanced Model-based (XT 6) method.
- Sample rate – As high as possible (25 or 29.97 samples/s). For nose-tail tracking in combination with Marker assisted tracking, you should use a sample rate of 12.5 or 14.98 samples/s.
- Tracking live – When tracking requires a high processor load, it may result in many missing points. Tracking from video files is preferred (see below), especially when you use the Advanced Model-based (XT 6) method.
- Tracking from video files – Keep the Detection Determines Speed option selected.
- Missing tail-base points – A high percentage of missing tail-base points is an indication of poor detection. The higher this percentage, the greater the probability that the nose-point is not placed in the correct location. To estimate the proportion of missing tail-base points, run some test trials and visualize the Sample list (see Chapter 12). You can quantify this by selecting Number of samples as a statistic for a dependent variable such as Velocity for the nose-point.

In practice…
The contour of the blob detected as subject is crucial for proper detection of the nose- and tail-base points. If only part of the subject is detected, EthoVision may swap the pixels assigned as nose-point and tail-base, or the nose-point is not placed on the subject's nose tip (for clarity, the nose-point is shown together with the Head direction; see page 250 for how to view this on the screen).
Select a wider range of gray scale values (see page 220 or page 224) or adjust the sensitivity (see page 231) to increase the number of pixels detected as subject. As a result, the nose- and tail-base points are detected correctly.
- When you use the Shape-based (XT 4) method, make sure that the tail is fully detected.
- When you use the Model-based (XT 5) or the Advanced Model-based (XT 6) method, remove the tail from the detected subject using the Erode and Dilate filters (see page 234).

8.10 Detection settings for Rat behavior recognition

Nose-tail detection method
Rat behavior recognition works when nose-tail base detection is enabled.
In the Detection Settings window, under Method select:
- Model-based (XT 5) (default) – This is selected automatically when you select Rat behavior recognition under Analysis Options in the Experiment Settings.
- Advanced Model-based (XT 6) – Use this only when there are occlusions in the arena that make the subject's apparent size smaller, or when the Model-based (XT 5) detection method does not provide good results.

Sample rate settings
In the Detection Settings window, under Video, select a sample rate between 25 and 31 samples/second.
Subject size settings
In the Detection Settings window, under Subject Size:
1. Click the Behavior button. The Behavior Recognition Settings window opens.
2. If you work with video, play the video up to a frame where:
- The subject is walking normally, and its hind limbs can be partially seen; see the figure below. It is important that the animal's body is not contracted or stretched.
- The nose- and tail-base points are correctly detected.
If you track live, wait until the animal shows a posture like in the figure below.

Figure 8.21 Play the video until the subject walks normally, and the nose- and tail-base points are correctly detected.

3. In the Behavior Recognition Settings window, click the Grab button.
4. In the Behavior Recognition Settings window a still image appears, showing the detected subject's contour and the detected body points.

Figure 8.22 The Behavior Settings window.

You can update the grabbed image at any time:
- If you track from video files, position the video at another frame, and click Grab.
- If you track live, wait until the posture of the animal is optimal and click Grab.
EthoVision XT only stores the image that you grabbed last.
5. In the Behavior Recognition Settings window, make sure that the calculated Subject length is greater than 60 pixels, and that the Posture index is between 70 and 90.
If the Subject length is smaller than 60 pixels, move the camera closer to the animal, or use a higher video resolution.
6. Click OK to close the Behavior Recognition Settings window.

Entering specific size values – If you know specific size values (for example, from a previous experiment using the same animal size, camera, lighting, camera-arena distance and the same calibration), click Manual in the Behavior Recognition Settings window and, in the Manual Settings window, enter the following values:
- Subject area (in square distance units)
- Center-nose length (in distance units)
- Center-tail length (in distance units)
- Posture (between 70 and 90)
Then click OK. The Behavior Recognition Settings window says No image saved: Size settings were manually set.
- Subject size is expressed in the unit selected in the Experiment Settings.
- The value of Subject length (min. 60 pixels) in the Behavior Recognition Settings window is the sum of the Center-nose length and the Center-tail length, expressed in pixels. If this value is lower than 60, an error message appears when you open the Data acquisition screen. To increase the subject length, move the camera closer to the animal, or use a higher video resolution.

Making size-dependent detection settings
Accurate recognition of behavior is based on the subject size settings. Since the apparent size increases with the subject's age, all else being equal, we advise you to create detection settings specific to a certain age class. Each Detection Settings profile can then only be used for a limited time. For example, for Wistar rats, create a Detection Settings profile for rats that are 3-5 weeks old, which can be used for about one week, and a Detection Settings profile for rats older than 5 weeks, which can be used for two weeks.
Subjects should not vary in size by more than 10%. If they do, create more Detection Settings (for example, one for smaller animals and one for larger animals).

Subject contour settings
For optimal results, we recommend selecting Erode first, then dilate (see page 234 for details) to remove the tail from the detected body.

Warnings
EthoVision XT shows a warning message in the following cases:
- When the sample rate set is lower than 25 or higher than 31 samples/s.
- When the Subject length is smaller than 60 pixels.
- When the animal is larger than the arena.
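The thresholds above are easy to collect in one place. A hedged sketch that merely restates the stated limits; check_behavior_settings and pixels_per_cm (the calibration factor from arena calibration) are hypothetical names:

```python
def check_behavior_settings(center_nose_cm, center_tail_cm, posture_index,
                            sample_rate, pixels_per_cm):
    """Validate the limits the manual states for Rat behavior
    recognition; returns a list of warning strings."""
    warnings = []
    # Subject length = Center-nose length + Center-tail length, in pixels.
    subject_length_px = (center_nose_cm + center_tail_cm) * pixels_per_cm
    if subject_length_px < 60:
        warnings.append("Subject length below 60 pixels: move the camera "
                        "closer or use a higher video resolution.")
    if not 70 <= posture_index <= 90:
        warnings.append("Posture index should be between 70 and 90.")
    if not 25 <= sample_rate <= 31:
        warnings.append("Sample rate should be between 25 and 31 samples/s.")
    return warnings
```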
8.11 Customizing the Detection Settings screen

To achieve optimal subject detection, you need proper feedback about the effect of your settings on the quality of detection. EthoVision offers you a number of statistics for this purpose.

Customizing the detection features
1. Open the Detection Settings (see page 197).
2. Click the Show/Hide button on the component tool bar and select Track Features.
3. Select View for each feature you want to view. Choose the color and (for body points) the trail for the features you want to view.
- Nose-point – To check that the nose tip is detected correctly (see page 241 for details).
- Center-point – To check that the center-point of the subject is detected correctly. The center-point is the point whose X,Y coordinates are the arithmetic mean of the X,Y coordinates of all pixels detected as subject. For more information on how the nose- and tail-base points are calculated, see page 241.
- Tail-base – To check that the base of the tail is detected correctly (see page 241 for details).
- Head direction – To estimate what the subject is sniffing at. Select this option especially with novel object and orientation tests.
- Body contour – To check that the subject's contour (or the part that should be found) is detected.
- Body fill – To check that the subject's body (or part of it) is detected. For example, in a test where it is important to measure the change in the animal's shape to estimate its mobility. If you do not select a color for Body fill, the body contour will be shown as noise.
- Noise – To view the pixels that match the criteria for subject detection (depending on the detection method), other than those detected as subject. We recommend keeping Noise selected. This way you can see which parts of the video image have gray scale values similar to those of the subject(s) to be detected.
- Activity – To view the pixels that match the criteria for activity detection (see page 217). This setting is only available if you selected Activity analysis in the Experiment Settings.
Some of the options above are not available if your experiment is set to Only center-point detection in the Experiment Settings (see page 91).
4. If you have selected to view the body points' trail, choose the number of Samples you want to be shown at a time.
5. Check in the Video window the appearance of the detection features. When you are satisfied with the options selected, close the Detection Features window. Next, continue with the procedure below.
Note that displaying detection features can use a lot of processor power and reduce the maximum possible sample rate if you are tracking live.

viewing the detection statistics
The detection statistics are displayed in the Analysis Results and Scoring pane, which is, by default, displayed at the bottom of the screen. If the Analysis Results and Scoring pane is not displayed, click the Show/Hide button on the component tool bar and select Analysis Results and Scoring.
The Trial Status tab shows immediate feedback when you change detection settings. The tabs Independent Variables, Dependent Variables and Manual Scoring show no feedback in the Detection Settings, but they do when you acquire tracks (see page 285 and page 314).

Detection statistics
- Missed samples – The percentage and number of samples that were skipped due to lack of processor time. This information is useful to check whether the specified sample rate (see page 208) can be handled by your computer. See page 212 for tips on how to increase the maximum sample rate handled by the PC. When you select another video file, or click Save changes in the Detection Settings window, the value for Missed samples is reset to zero.
- Subject not found – The percentage and number of samples in which the subject was not found. This information is useful to check the quality of detection in general. When a subject is not found, it means that EthoVision XT processed the image but did not find anything matching the current Detection Settings. Use Subject not found to assess the quality of tracking. When you select another video file, or click Save changes, the value for Subject not found is reset to zero.

Warning thresholds
The percentages of missed samples and samples where the subject is not found are usually displayed in green for each arena and subject. When the values are above the set threshold, they are highlighted in red.
To change the thresholds, click the button under Missed samples or Subject not found and change the value next to 'Missed samples' alert above.

After acquisition, you can view the proportion of missed samples and samples in which the subject was not found in different parts of the software:
- In the Trial list, click Show/Hide on the tool bar, select Variables, and select Missed samples and/or Subject not found.
- In the Statistics and Charts screen, click Show/Hide on the tool bar, select Independent Variables, and select Missed samples and/or Subject not found.
- In the Track Visualization or the Heatmaps screen, click Show/Hide on the tool bar and select Layout. Under Available, drag Missed samples and/or Subject not found to the On Columns or On Rows box.