IEEE Standards - draft standard template



P™/D

Draft for <HMD-Based VR Sickness Reducing Technology>

Sponsor

of the

IEEE

Approved

IEEE-SA Standards Board

Copyright © 2018 by The Institute of Electrical and Electronics Engineers, Inc.

Three Park Avenue

New York, New York 10016-5997, USA

All rights reserved.

This document is an unapproved draft of a proposed IEEE Standard. As such, this document is subject to change. USE AT YOUR OWN RISK! IEEE copyright statements SHALL NOT BE REMOVED from draft or approved IEEE standards, or modified in any way. Because this is an unapproved draft, this document must not be utilized for any conformance/compliance purposes. Permission is hereby granted for officers from each IEEE Standards Working Group or Committee to reproduce the draft document developed by that Working Group for purposes of international standardization consideration. IEEE Standards Department must be informed of the submission for consideration prior to any reproduction for international standardization consideration (stds.ipr@). Prior to adoption of this document, in whole or in part, by another standards development organization, permission must first be obtained from the IEEE Standards Department (stds.ipr@). When requesting permission, IEEE Standards Department will require a copy of the standard development organization's document highlighting the use of IEEE content. Other entities seeking permission to reproduce this document, in whole or in part, must also obtain permission from the IEEE Standards Department.

IEEE Standards Department

445 Hoes Lane

Piscataway, NJ 08854, USA

Abstract:

Keywords:


Important Notices and Disclaimers Concerning IEEE Standards Documents

IEEE documents are made available for use subject to important notices and legal disclaimers. These notices and disclaimers, or a reference to this page, appear in all standards and may be found under the heading “Important Notices and Disclaimers Concerning IEEE Standards Documents.” They can also be obtained on request from IEEE or viewed at .

Notice and Disclaimer of Liability Concerning the Use of IEEE Standards Documents

IEEE Standards documents (standards, recommended practices, and guides), both full-use and trial-use, are developed within IEEE Societies and the Standards Coordinating Committees of the IEEE Standards Association (“IEEE-SA”) Standards Board. IEEE (“the Institute”) develops its standards through a consensus development process, approved by the American National Standards Institute (“ANSI”), which brings together volunteers representing varied viewpoints and interests to achieve the final product. IEEE Standards are documents developed through scientific, academic, and industry-based technical working groups. Volunteers in IEEE working groups are not necessarily members of the Institute and participate without compensation from IEEE. While IEEE administers the process and establishes rules to promote fairness in the consensus development process, IEEE does not independently evaluate, test, or verify the accuracy of any of the information or the soundness of any judgments contained in its standards.

IEEE Standards do not guarantee or ensure safety, security, health, or environmental protection, or ensure against interference with or from other devices or networks. Implementers and users of IEEE Standards documents are responsible for determining and complying with all appropriate safety, security, environmental, health, and interference protection practices and all applicable laws and regulations.

IEEE does not warrant or represent the accuracy or content of the material contained in its standards, and expressly disclaims all warranties (express, implied and statutory) not included in this or any other document relating to the standard, including, but not limited to, the warranties of: merchantability; fitness for a particular purpose; non-infringement; and quality, accuracy, effectiveness, currency, or completeness of material. In addition, IEEE disclaims any and all conditions relating to: results; and workmanlike effort. IEEE standards documents are supplied “AS IS” and “WITH ALL FAULTS.”

Use of an IEEE standard is wholly voluntary. The existence of an IEEE standard does not imply that there are no other ways to produce, test, measure, purchase, market, or provide other goods and services related to the scope of the IEEE standard. Furthermore, the viewpoint expressed at the time a standard is approved and issued is subject to change brought about through developments in the state of the art and comments received from users of the standard.

In publishing and making its standards available, IEEE is not suggesting or rendering professional or other services for, or on behalf of, any person or entity nor is IEEE undertaking to perform any duty owed by any other person or entity to another. Any person utilizing any IEEE Standards document, should rely upon his or her own independent judgment in the exercise of reasonable care in any given circumstances or, as appropriate, seek the advice of a competent professional in determining the appropriateness of a given IEEE standard.

IN NO EVENT SHALL IEEE BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO: PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE PUBLICATION, USE OF, OR RELIANCE UPON ANY STANDARD, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE AND REGARDLESS OF WHETHER SUCH DAMAGE WAS FORESEEABLE.

Translations

The IEEE consensus development process involves the review of documents in English only. In the event that an IEEE standard is translated, only the English version published by IEEE should be considered the approved IEEE standard.

Official statements

A statement, written or oral, that is not processed in accordance with the IEEE-SA Standards Board Operations Manual shall not be considered or inferred to be the official position of IEEE or any of its committees and shall not be considered to be, or be relied upon as, a formal position of IEEE. At lectures, symposia, seminars, or educational courses, an individual presenting information on IEEE standards shall make it clear that his or her views should be considered the personal views of that individual rather than the formal position of IEEE.

Comments on standards

Comments for revision of IEEE Standards documents are welcome from any interested party, regardless of membership affiliation with IEEE. However, IEEE does not provide consulting information or advice pertaining to IEEE Standards documents. Suggestions for changes in documents should be in the form of a proposed change of text, together with appropriate supporting comments. Since IEEE standards represent a consensus of concerned interests, it is important that any responses to comments and questions also receive the concurrence of a balance of interests. For this reason, IEEE and the members of its societies and Standards Coordinating Committees are not able to provide an instant response to comments or questions except in those cases where the matter has previously been addressed. For the same reason, IEEE does not respond to interpretation requests. Any person who would like to participate in revisions to an IEEE standard is welcome to join the relevant IEEE working group.

Comments on standards should be submitted to the following address:

Secretary, IEEE-SA Standards Board

445 Hoes Lane

Piscataway, NJ 08854 USA

Laws and regulations

Users of IEEE Standards documents should consult all applicable laws and regulations. Compliance with the provisions of any IEEE Standards document does not imply compliance to any applicable regulatory requirements. Implementers of the standard are responsible for observing or referring to the applicable regulatory requirements. IEEE does not, by the publication of its standards, intend to urge action that is not in compliance with applicable laws, and these documents may not be construed as doing so.

Copyrights

IEEE draft and approved standards are copyrighted by IEEE under U.S. and international copyright laws. They are made available by IEEE and are adopted for a wide variety of both public and private uses. These include both use, by reference, in laws and regulations, and use in private self-regulation, standardization, and the promotion of engineering practices and methods. By making these documents available for use and adoption by public authorities and private users, IEEE does not waive any rights in copyright to the documents.

Photocopies

Subject to payment of the appropriate fee, IEEE will grant users a limited, non-exclusive license to photocopy portions of any individual standard for company or organizational internal use or individual, non-commercial use only. To arrange for payment of licensing fees, please contact Copyright Clearance Center, Customer Service, 222 Rosewood Drive, Danvers, MA 01923 USA; +1 978 750 8400. Permission to photocopy portions of any individual standard for educational classroom use can also be obtained through the Copyright Clearance Center.

Updating of IEEE Standards documents

Users of IEEE Standards documents should be aware that these documents may be superseded at any time by the issuance of new editions or may be amended from time to time through the issuance of amendments, corrigenda, or errata. A current IEEE document at any point in time consists of the current edition of the document together with any amendments, corrigenda, or errata then in effect.

Every IEEE standard is subjected to review at least every ten years. When a document is more than ten years old and has not undergone a revision process, it is reasonable to conclude that its contents, although still of some value, do not wholly reflect the present state of the art. Users are cautioned to check to determine that they have the latest edition of any IEEE standard.

In order to determine whether a given document is the current edition and whether it has been amended through the issuance of amendments, corrigenda, or errata, visit IEEE Xplore at or contact IEEE at the address listed previously. For more information about the IEEE-SA or IEEE’s standards development process, visit the IEEE-SA Website at .

Errata

Errata, if any, for all IEEE standards can be accessed on the IEEE-SA Website at the following URL: . Users are encouraged to check this URL for errata periodically.

Patents

Attention is called to the possibility that implementation of this standard may require use of subject matter covered by patent rights. By publication of this standard, no position is taken by the IEEE with respect to the existence or validity of any patent rights in connection therewith. If a patent holder or patent applicant has filed a statement of assurance via an Accepted Letter of Assurance, then the statement is listed on the IEEE-SA Website at . Letters of Assurance may indicate whether the Submitter is willing or unwilling to grant licenses under patent rights without compensation or under reasonable rates, with reasonable terms and conditions that are demonstrably free of any unfair discrimination to applicants desiring to obtain such licenses.

Essential Patent Claims may exist for which a Letter of Assurance has not been received. The IEEE is not responsible for identifying Essential Patent Claims for which a license may be required, for conducting inquiries into the legal validity or scope of Patents Claims, or determining whether any licensing terms or conditions provided in connection with submission of a Letter of Assurance, if any, or in any licensing agreements are reasonable or non-discriminatory. Users of this standard are expressly advised that determination of the validity of any patent rights, and the risk of infringement of such rights, is entirely their own responsibility. Further information may be obtained from the IEEE Standards Association.

Participants

At the time this draft was completed, the Working Group had the following membership:

, Chair

, Vice Chair

1. Suk-Ju Kang

2. Keon-Woo Kang

Display

1. General

Movies and games using virtual reality (VR) have taken center stage because they provide greater immersion and realism, and the VR market is therefore expected to expand rapidly [1]. In particular, the VR environment using a head-mounted display (HMD) is in the spotlight as a new growth market because of its reasonable price and accessibility compared with other VR equipment [2]. However, HMD devices may exhibit several problems, such as the screen-door effect [3] caused by low spatial resolution, frame rate drops caused by low computing performance, and blurring artifacts caused by low temporal resolution. Among these problems, motion-to-photon latency is the most significant [4] because it results in motion sickness and dizziness caused by the mismatch between physical motion and visual perception. Specifically, motion-to-photon latency refers to the interval between the start of a head motion toward a new orientation and the moment the corresponding image is generated on the display of the HMD system [5]. Figure 1 shows the overall image-rendering process and the motion-to-photon latency in an HMD system. First, a physical head movement occurs, and the head position is measured using an inertial measurement unit (IMU) sensor. The HMD device then transmits the measurement data to a PC via a USB connection. The PC generates the changed image in the virtual space from the measured physical position using the graphics processing unit (GPU). Finally, the new image is output to the display of the HMD system. Each module in this chain contributes a latency, and the sum of the latencies over the whole process is called the motion-to-photon latency [6,7].


Figure 1. Overall process and motion-to-photon latency of image rendering in an HMD system.
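For illustration, the additive nature of this latency can be sketched in a few lines of Python; the stage names follow Figure 1, and the millisecond values are hypothetical placeholders rather than measurements.

# Motion-to-photon latency as the sum of per-module latencies (Figure 1).
# The stage values below are hypothetical placeholders, not measured data.
PIPELINE_LATENCIES_MS = {
    "imu_sampling": 2.0,      # head position measured by the IMU sensor
    "usb_transfer": 1.0,      # HMD -> PC transmission
    "gpu_rendering": 11.0,    # image generated for the new orientation
    "display_scanout": 8.0,   # image output on the HMD panel
}

def motion_to_photon_latency_ms(stages):
    """The total latency is the sum of the per-module latencies."""
    return sum(stages.values())

print(motion_to_photon_latency_ms(PIPELINE_LATENCIES_MS))  # 22.0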

Reducing the latency requires an accurate measurement system that can consider the human physical movement [8]. The following section explains the conventional approaches and their problems.

The Oculus latency tester [9] is the only conventional, commercially available measurement device. Figure 2 shows the tester installed for the latency measurement, and Figure 3 shows its measurement procedure. The tester sends a starting signal to a PC via a USB connection when a button is pushed. After receiving that signal, the PC generates a square-patterned image, combines it with the current output image, and outputs the result to the display of the HMD system. As shown in Figure 2, a photosensor of the tester, mounted on the left side of the fish-eye lens in the HMD, continuously checks whether the luminance value of a particular color at a specific location of the display exceeds a threshold value predetermined by the Oculus tester. If this condition is satisfied, the difference between the starting time and the time measured at the display of the HMD system, which is the motion-to-photon latency, is calculated.

Figure 2. Conventional latency measurement device: the Oculus latency tester (a) before and (b) after installation.

Figure 3. Measurement procedure of the Oculus latency tester.
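As described above, the tester's procedure amounts to detecting the first threshold crossing of the sensed luminance and subtracting the starting time. The following Python sketch illustrates that logic with a synthetic photosensor trace; the threshold and timing values are hypothetical.

import numpy as np

def first_threshold_crossing(t, luminance, threshold):
    """Return the first time at which the sensed luminance exceeds the
    predetermined threshold, or None if it never does."""
    idx = np.argmax(luminance > threshold)
    return t[idx] if luminance[idx] > threshold else None

# Hypothetical 1 kHz trace: the square pattern appears on screen at 45 ms.
t = np.arange(0.0, 100.0, 1.0)            # time in ms
lum = np.where(t < 45.0, 0.05, 0.9)       # normalized luminance at the sensor
t_start = 0.0                             # button press starts the clock
print(first_threshold_crossing(t, lum, 0.5) - t_start)   # 45.0 ms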

However, the existing measurement system has three problems. First, it does not consider a physical head movement; it simply measures the luminance change between the input and output signals on the screen. The second problem is compatibility. The Oculus latency tester depends exclusively on the Oculus Rift DK1 hardware, so it cannot be used with other HMD devices, nor can it serve as a reference system for measuring latency. The third problem is low accuracy. The conventional method starts measuring latency when the button on top of the tester device is pressed. Using the signal activated by the tester device, the PC renders the specific patterns and sends them to the display of the HMD system. Therefore, it does not consider the physical movement of the HMD system (which may contribute several milliseconds of latency). In addition, it cannot measure how the latency changes when the workload of the rendered image changes.

In previous research [10], a new measurement system that solves the abovementioned problems was proposed. However, it cannot ensure accuracy, owing to the use of a low-performance servo motor and because the output image uses the mirroring mode, a method that outputs the image to a monitor rather than to the display of the HMD system. In addition, it cannot precisely model the head movement. Among other methods, a pendulum-based system [11] can measure the latency using a camera, but the pendulum motion cannot model the head movement, and its low accuracy makes it inappropriate for recent HMDs with small latency changes. Similarly, there is a conventional method that calculates the latency from the phase difference between motion measured in the virtual space and in the real space. This method achieves high precision by using a photosensor and a rotary potentiometer, but it cannot provide accurate latency because it contains many approximations in the motion generation and mathematical modeling [12]. The end-to-end measuring instrument [13] cannot consider the head motion because only one axis is considered. In addition, HMDs use small OLED panels, and hence the measurement equipment must be attached in front of the panel to detect the pixel luminance change; however, this system uses a monitor instead of the HMD panel. The method of [7] can measure the latency of an HMD system with a high sampling rate, but it cannot be used with the various types of commercially available HMDs.

Table 1. Comparison of conventional and proposed measurement methods

|Previous Methods     |Measurement Equipment             |Considerations 1 |Measurement Coverage (Reference to Figure 1) |
|Choi & Seo [10]      |Photosensor                       |ⓐ, ⓑ, ⓓ, ⓕ   |②~⑦                                        |
|Steed [11]           |Video camera                      |ⓐ, ⓓ           |④~⑦                                        |
|Zhao [12]            |Photosensor, rotary potentiometer |ⓑ, ⓔ, ⓕ       |①~⑦                                        |
|Giorgos & Leigh [13] |Photosensor, rotary motor         |ⓑ, ⓒ, ⓓ       |①~⑦ (except ④)                            |
|Lincoln [7]          |Photosensor, rotary encoder       |ⓑ, ⓒ, ⓔ       |①~⑦                                        |
|Proposed method      |Photosensor, rotary encoder       |ⓑ, ⓒ, ⓔ, ⓕ   |①~⑦                                        |

Considerations 1: ⓐ simple implementation, ⓑ high sampling rate, ⓒ physical movement consideration, ⓓ use of PC monitor (mirroring), ⓔ use of display panel (direct method), and ⓕ applicable to latency measurement of various types of commercially available HMDs.

In this document, a novel latency measurement system with high accuracy that solves these problems is proposed. The proposed system can generate an accurate movement of the HMD and measure that movement using high-accuracy encoders and motors. A photodetector system then detects the luminance of the changed image for multiple movement directions, and the time difference between the two events is calculated. The system therefore has high accuracy and reliability and can accurately measure the motion-to-photon latency, which is a critical performance indicator of an HMD system. In addition, the proposed method is easy to apply to various kinds of HMD systems because it is designed around the characteristics of common HMDs. Table 1 summarizes the considerations of the conventional and proposed latency measurement methods; the rightmost column shows where in the entire HMD system each method can measure the latency, with reference to Figure 1.

2. Scope

This document describes, from the display point of view, two main problems that cause cybersickness: motion-to-photon latency and motion blur. In particular, it describes methods for quantifying these problems and introduces related technologies that can be used in various HMD devices.

3. Use cases

3.1 Motion-to-Photon Latency Measurement System

Figure 4 shows the conceptual architecture of the proposed latency measurement system, which consists mainly of a control PC, a head position model-based rotary platform, a pixel luminance change detector, and a digital oscilloscope.


Figure 4. Conceptual architecture of the proposed latency measurement system.

The control PC controls each module and analyzes the measured signals. The rotary platform is a physical device that models a head movement and measures that movement using high-accuracy encoders and motors. The pixel luminance change detector measures the luminance change in the display of the HMD system and converts it into a voltage value, and the oscilloscope measures and displays the voltages of the measured signals. Figure 5 shows the overall procedure of the proposed system shown in Figure 4. First, the head position model-based rotary platform generates a precise head movement as commanded by the control PC. Second, the photosensors of the pixel luminance change detector measure the changes in pixel luminance in the display of the HMD system caused by rotating the platform. The oscilloscope then displays the measured voltages for the physical movement of the platform, measured by the encoders, and for the luminance change, measured by the photosensors. Finally, the control PC calculates the motion-to-photon latency. Detailed explanations are given in the following subsections.


Figure 5. Overall procedure of the proposed system.

Head Position Model-Based Rotary Platform

Figure 6 shows the overall architecture of the proposed photosensor-based latency measurement system. The rotary platform fixes the HMD system to the circular top plate and uses two motors and encoders for the rotation and position detection. Specifically, it can rotate in two axes of the yaw and pitch directions for modeling the head movement, as shown in Figure 7.


Figure 6. Overall architecture of the photosensor-based latency measurement system: (a) a yaw-direction encoder, (b) a pitch-direction encoder, (c) an HMD system, and (d) a plate holding the display of the HMD system.


Figure 7. Euler angles and coordinates in the HMD system.

In this case, the typical range of head movement is up to ±50° of rotation in the yaw direction and up to ±40° in the pitch direction. The maximum angular velocity of the head movement is up to 780°/s in the yaw direction and up to 380°/s in the pitch direction [14]. The proposed system operates within these human constraints. It also uses high-performance, high-resolution encoders to improve the measurement accuracy and to measure the rotation angles precisely. Specifically, it has an accuracy of 0.018°/step based on an optical incremental rotary encoder (a step is a unit slit width of the rotating disk in the encoder). Figure 6a shows the first encoder, which measures the rotation angle in the yaw direction, and Figure 6b shows the second encoder, which measures the rotation angle in the pitch direction. Figure 6c shows the circular top plate to which the HMD system is fixed, and Figure 6d shows the plate holding the display of the HMD system, which performs the pixel luminance change detection. The detailed operation of the proposed head position model-based rotary platform is as follows. First, the head movement scenario defined by the user is input to the control PC, and the platform drives the DC motors to perform the head movements. The movements in the pitch and yaw directions are then performed, and the HMD system attached to the top platform moves accordingly. At the same time, each axis encoder generates pulses with different phases according to the movement (the phase difference is 90°), which identifies the rotation direction and prevents interference between the two movements. Therefore, the physical movement can be detected accurately.
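For reference, the sketch below shows how the two 90°-phase-shifted pulse trains (A/B) of an incremental encoder can be decoded into a signed step count and converted to an angle at the stated resolution of 0.018°/step; the transition table is the standard quadrature sequence, and the sample data are synthetic.

import numpy as np

DEG_PER_STEP = 0.018   # 5000 steps/turn optical incremental encoder

def quadrature_decode(a, b):
    """Decode two 90-degree phase-shifted pulse trains into a signed
    step count; the sign of each step encodes the rotation direction."""
    state = (np.asarray(a, dtype=int) << 1) | np.asarray(b, dtype=int)
    # valid transitions of the quadrature sequence 00 -> 01 -> 11 -> 10 -> 00
    delta = {(0, 1): 1, (1, 3): 1, (3, 2): 1, (2, 0): 1,
             (0, 2): -1, (2, 3): -1, (3, 1): -1, (1, 0): -1}
    steps, angle = 0, [0.0]
    for prev, cur in zip(state[:-1], state[1:]):
        steps += delta.get((int(prev), int(cur)), 0)  # 0: no change
        angle.append(steps * DEG_PER_STEP)
    return np.array(angle)

# Three forward quadrature transitions -> 3 steps = 0.054 degrees.
print(quadrature_decode([0, 0, 1, 1], [0, 1, 1, 0])[-1])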

Pixel Luminance Change Detector

Figure 8 shows the overall architecture of the pixel luminance change detector used in the proposed system. It is placed on the rotary platform. The upper-deck supporter of the detector holds the display panel of the HMD system, which outputs the rendered image. Four separate photosensors are used to recognize the movement direction; they are located in front of the HMD panel, as shown in Figure 8a. Figure 8b shows the HMD panel, which outputs the images, and Figure 8c shows a chamber that connects the photosensors and the display panel. Figure 9 shows a cross-sectional diagram of an individual pixel luminance change detector. The chamber blocks external light from entering between the panel and the photosensor and transfers only the light emitted from the panel to the photosensor. To measure the low-level change in pixel luminance at the desired position, a small slit admits light from the panel into the chamber, creating a darkroom environment.


Figure 8. Architecture of the pixel luminance change detector: (a) a photosensor, (b) an HMD panel, and (c) a chamber.


Figure 9. Cross-sectional diagram of an individual pixel luminance change detector.

The operation process of the pixel luminance change detection is as follows. The output image of the HMD panel changes according to the HMD movement, as shown in Figure 10 [15]. The HMD system outputs an image corresponding to the gaze of the user in the virtual space; for example, if a user looks to the front, the HMD system outputs an image like that shown in Figure 10a. Figure 11 illustrates the concept of the pixel luminance change measurement in the display. A display consists of multiple pixels, and hence the image movement is represented by on-off pixel transitions. If an object in the display panel moves toward the sensing position of the photosensor, as shown in Figure 11, the on-off state of the pixel changes from the (n-3)-th frame to the n-th frame. The changed luminance is converted into a voltage by the photosensor, and this voltage is measured using the oscilloscope. This permits the luminance change on the screen to be measured.


Figure 10. Examples of image changes according to the physical movement of the HMD system: (a) an initial image, (b) an image generated when a user moves down, (c) an image generated when a user moves right, and (d) an image generated when a user moves left.


Figure 11. Operation process of the pixel luminance change measurement method.
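As a simple illustration of this detection principle, the sketch below samples a synthetic frame sequence at the sensing position and converts the local pixel luminance into a voltage; the gain value is hypothetical.

import numpy as np

def photosensor_voltage(frames, x, y, gain=0.8):
    """Voltage read out at the sensing position (x, y) for each frame,
    proportional to the local pixel luminance (hypothetical gain)."""
    return gain * np.asarray(frames, dtype=float)[:, y, x]

# The pixel at the sensing position switches on two frames before frame n.
frames = np.zeros((4, 8, 8))      # frames (n-3) .. n of an 8x8 panel
frames[2:, 3, 5] = 1.0            # object reaches the sensing position
print(photosensor_voltage(frames, x=5, y=3))   # [0.  0.  0.8 0.8]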

In this case, the luminance change in the virtual space should be considered instead of the pixel luminance change in the display panel, because the change occurs in the image of the virtual space according to the head movement. In some cases, the luminance change cannot be measured because the pixel luminance change is not sufficient to be detected by the photosensor. To solve this problem, a virtual lens technique is proposed, as shown in Figure 12. This technique enlarges the pixel luminance as if using a magnifying glass and hence can measure a low-level luminance change in a pixel. The typical coordinate mapping between the 3D virtual space and the 2D display space is as follows:

[pic]        (1)

where I3Dout and I2Dout denote the luminance in the 3D virtual space and in the 2D mapped space of the display, respectively; xsq and ysq denote the sampled and quantized horizontal and vertical indices, respectively; and x and y denote the horizontal and vertical indices in the 2D space, respectively.

Using (1), the proposed virtual lens method is defined as follows:

[pic]        (2)

where p denotes the magnification of the virtual lens and Ifout denotes the final output luminance. The proposed system uses p = 5, which was selected experimentally; a higher p yields a higher output luminance, and the value can be changed.


Figure 12. Proposed virtual lens technique for improving the accuracy of the measurement.
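One way to realize the virtual lens is as a p-times nearest-neighbour magnification about the sensing position, so that the light of a single changed pixel covers a p × p area in front of the photosensor. The sketch below illustrates this reading; the actual rendering-side implementation may differ.

import numpy as np

def virtual_lens(image, cx, cy, p=5):
    """Magnify the image p times about the sensing position (cx, cy)
    using nearest-neighbour sampling."""
    h, w = image.shape
    ys, xs = np.indices((h, w))
    # map each output pixel back to its unmagnified source coordinate
    src_x = np.clip(cx + (xs - cx) / p, 0, w - 1).astype(int)
    src_y = np.clip(cy + (ys - cy) / p, 0, h - 1).astype(int)
    return image[src_y, src_x]

# A single changed pixel becomes a 5x5 bright region after magnification,
# which is much easier for the photosensor to detect.
img = np.zeros((9, 9))
img[4, 4] = 1.0
print(virtual_lens(img, 4, 4).sum())   # 25.0 instead of 1.0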

Signal Analysis and Calculation

The signal analysis and calculation module finally computes the motion-to-photon latency using two signals: a pulse from the encoder representing the physical movement and a pulse from the photosensor measuring the luminance change. In this case, it is important to separate the original signal exactly from the noise signal by considering the sensitivity of the signal change. The proposed system uses a thresholding technique to remove noise; the threshold is set from the variation of the measured signals. Specifically, the system takes N samples of the signal during a specific quiet period in which the signal does not change, as follows:

Vm = (1/N) Σt=1..N Vin(t);    Nc(t) = Vin(t) − Vm;    Ne = maxt |Nc(t)|        (3)

where Vin(t) denotes the input signal at time t, and N denotes the total number of input samples during the specific period. Vm denotes the average voltage of the input signal including noise, Nc(t) denotes the candidate noise at time t, and Ne denotes the estimated noise, which is the largest absolute value among the candidate noises. Ne is set as the threshold value, and the proposed method selects only voltages beyond this value. Finally, the time difference between the two signals, namely, the pulse from the encoder and the pulse from the photosensor measuring the luminance change in the display of the HMD system, is measured. It is defined as follows:

Δt = tphoto − tencoder        (4)

where tphoto denotes the time point when the luminance changes in the HMD display, tencoder denotes the time point when the physical movement of the HMD occurs, and Δt denotes the final motion-to-photon latency.
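Equations (3) and (4) can be combined into a short analysis routine: estimate the noise threshold Ne from a quiet period of each signal, find the first sample that departs from the idle level by more than Ne, and subtract the two onset times. The Python sketch below uses synthetic traces and a hypothetical sampling rate.

import numpy as np

def estimate_noise_threshold(v_quiet):
    """Equation (3): Ne is the largest absolute deviation of the idle
    signal from its average Vm during a quiet period."""
    v_m = v_quiet.mean()              # Vm: average voltage including noise
    n_c = v_quiet - v_m               # Nc(t): candidate noise
    return v_m, np.abs(n_c).max()     # Ne: estimated noise (threshold)

def onset_time(t, v, v_m, n_e):
    """First time at which the signal departs from Vm by more than Ne."""
    idx = np.flatnonzero(np.abs(v - v_m) > n_e)
    return t[idx[0]] if idx.size else None

def motion_to_photon_latency(t, v_encoder, v_photo, quiet_samples=50):
    """Equation (4): delta_t = t_photo - t_encoder."""
    onsets = []
    for v in (v_encoder, v_photo):
        v_m, n_e = estimate_noise_threshold(v[:quiet_samples])
        onsets.append(onset_time(t, v, v_m, n_e))
    return onsets[1] - onsets[0]

# Synthetic 10 kHz traces: encoder pulse at 10 ms, luminance change at 55 ms.
t = np.arange(0.0, 100.0, 0.1)                                # time in ms
noise = 0.01 * np.sin(2 * np.pi * (np.arange(t.size) % 10) / 10)
v_enc = (t > 10.0).astype(float) + noise                      # encoder channel
v_pho = 0.8 * (t > 55.0).astype(float) + noise                # photosensor channel
print(motion_to_photon_latency(t, v_enc, v_pho))              # ~45.0 ms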

3.1.1 Implementation

Figure 13 shows an implementation of the photosensor-based latency measurement system. The rotary platform was designed to generate the movement, as shown in Figure 13a, and the detector was placed on this platform to measure the luminance change, as shown in Figure 13b. The oscilloscope and the amplifiers, shown in Figure 13c, were used to calculate and analyze the output signals of each part. Specifically, the Oculus Rift DK2, one of the most popular VR systems, was used as the target HMD. A rotary DC motor (RE40, Maxon, Sachseln, Switzerland) [16] was used to rotate the platform, and a controller (EPOS2 50/5, Maxon, Sachseln, Switzerland) was used to control it [17]. Incremental encoders (EIL580, Baumer, Southington, USA) [18] were used to generate pulses based on the movement of the HMD; their maximum output frequency was 300 kHz and their resolution was 5000 steps/turn (0.018°/step). A photosensor (SM05PD2B, Thorlabs, Newton, USA) was used to measure the luminance change [10]; its spectral range was 200 nm to 1000 nm. The PC-based oscilloscope was a PicoScope 4824 (Pico Technology, St Neots, UK) [19]. For rendering the virtual space and analyzing the signals, a PC with an Intel i7-6700K 4.4-GHz CPU and an NVIDIA GeForce GTX 1080 GPU was used. The time-warp technique was not used when generating the VR patterns, so that the actual motion-to-photon latency of the HMD was measured.


Figure 13. Prototype of the proposed latency measurement system: (a) a head position model-based rotary platform, (b) a pixel luminance change detector, and (c) an oscilloscope and an amplifier.

3.1.2 Experimental Results

Two different experiments for the latency measurement were performed using the proposed system. First, the motion-to-photon latency for the commercial HMD was measured. Second, this latency was evaluated by changing the graphic rendering workload for the HMD system.

First, motion-to-photon latencies were measured by rotating in the yaw and pitch directions. Table 2 shows the statistical results of the experiment, repeated 20 times, for rotation in the yaw direction. The measured average latencies were almost constant, and the standard deviations were low, which means that the proposed instrument can measure the latency accurately with little deviation. Figure 14 shows the motion-to-photon latency for a specific individual experiment. The blue pulse was generated by the high-resolution encoder, which measured the physical movement, and the red pulse was generated by the luminance change of the display measured at the photosensor. The rotation angles were 20°, 40°, and 60°, and the measured latencies were 44.61, 46.83, and 46.46 ms, respectively; the standard deviation was at most 1.45 ms. The latency was measured accurately regardless of the rotation angle in the yaw direction. Next, Table 3 shows the statistical results for rotation in the pitch direction. In this case, the rotation angles were 10°, 20°, and 30°, and the measured latencies were 46.48, 46.79, and 47.05 ms, respectively; the standard deviation was at most 1.09 ms. Figure 15 shows the latency in the pitch direction for a specific individual experiment. These results show that the proposed measurement system can measure the latency precisely, and the similarity of the results demonstrates the reliability of the measurement.

Table 2. Average latencies and standard deviations for different rotation angles (yaw rotation).

|Max Rotation Angle (°) |Angular Velocity (°/s) |Average Latency (ms) |Standard Deviation (ms) |
|20                     |42.82                  |44.61                |1.45                    |
|40                     |42.82                  |46.83                |0.74                    |
|60                     |42.82                  |46.46                |0.90                    |


Figure 14. Latency measurement results when the yaw rotation angle was changed: (a) 20°, (b) 40°, and (c) 60°.

Table 3. Average latencies and standard deviations for different rotation angles (pitch rotation).

|Max Rotation Angle (°) |Angular Velocity (°/s) |Average Latency (ms) |Standard Deviation (ms) |
|10                     |42.82                  |46.48                |1.09                    |
|20                     |42.82                  |46.79                |0.98                    |
|30                     |42.82                  |47.05                |0.89                    |


Figure 15. Latency measurement results when the pitch rotation angle was changed: (a) 10°, (b) 20°, and (c) 30°.

3.2 Motion Blur Measurement System

The motion blur that humans perceive is different from the motion blur that simply appears in a moving picture. A perceived image is the result of temporally continuous image integration combined with the eye movement that is synchronized with the moving picture [23]. Reflecting this, the moving picture response time (MPRT) metric [24] is widely used to measure motion blur. It can also be used to measure the motion blur of HMDs, but the physical rotation applied to the HMD must be considered. Figure 16 shows that the output image exists as a part of a virtual space and changes with the rotation angle of the HMD. The test pattern in the virtual space stays in place, but the test pattern shown to users moves in the direction opposite to the rotation. The rotation angle of the HMD depends on the head movement; therefore, a system that simulates the head movement is necessary for motion blur measurement.


Figure 16. Conceptual image of the test pattern in a virtual space and an HMD.


Figure 17. Conceptual image of the MPRT difference according to the display frame rate.

In addition, one of the most significant variables in the MPRT measurement is the display frame rate. Figure 17 shows that the display frame rate has a significant effect on the measurement: the MPRT measured at a low frame rate is large compared with that at a high frame rate. In the simulation, the display frame rate is adjusted so that the MPRT can be measured as a function of the frame rate. In the proposed system, we simulate the head movement using the head position model-based rotary platform developed for the motion-to-photon latency measurement [25], and we use the MPRT metric to measure motion blur on HMDs.


Figure 18. Overall block diagram of the proposed motion blur measurement system for HMDs: (a) head movement simulation with the head position model-based rotary platform, (b) test pattern rendered on the HMD and the half-QHD panel, (c) image acquisition with a high-speed camera, (d) integration of the photographed images, and (e) MPRT calculation and analysis.

Head Movement Simulation

Basically, the head movement is simulated by appropriately rotating the servo motor of the rotary platform. The rotation angle, expressed as Euler angles, is composed of yaw, pitch, and roll [26]. We use only one angle signal because the moving direction of an object is not related to the amount of motion blur. Figure 18 (a) shows that the control PC is used to command the servo motor of the rotary platform. In addition, three devices (the HMD, the panel for the mirroring mode, and the high-speed camera) are attached to the platform to obtain the output image. These devices must move together with the platform because the output image should not be moved by external factors.
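As an illustration, a single-axis yaw command for the platform can be generated as follows; the sinusoidal profile is an assumption, with amplitude and rate chosen to stay within the human head-movement limits cited earlier (about ±50° and 780°/s in yaw).

import numpy as np

def yaw_profile(duration_s, amplitude_deg=50.0, rate_hz=1.0, fs=1000):
    """Sinusoidal single-axis yaw command for the rotary platform, kept
    inside the human yaw range and peak angular velocity."""
    t = np.arange(0.0, duration_s, 1.0 / fs)
    angle = amplitude_deg * np.sin(2.0 * np.pi * rate_hz * t)
    peak_velocity = 2.0 * np.pi * rate_hz * amplitude_deg   # deg/s
    assert peak_velocity <= 780.0, "profile exceeds the human yaw velocity"
    return t, angle

t, angle = yaw_profile(2.0)   # 2 s sweep with a peak velocity of ~314 deg/s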

Figure 18 (b) shows that a bar-shaped white test pattern is generated on the HMD. This shape is chosen so that the horizontally blurred edges in the output image are well observed. In addition, the size of the test pattern on the HMD should be constant. If the test pattern is created in the virtual space and is not directly matched to the rotation angle, its size is not constant, as shown in Figures 16 (a) and (b). Instead, the rotation angle should be mapped directly to the test pattern so that the width of the test pattern does not change.

Motion Blur Measurement

The MPRT measurement of motion blur on an HMD differs from that on an existing display system. In the case of an existing display, the measurement is performed in a constant environment in which the scroll velocity of the test pattern is constant. In contrast, on an HMD a non-constant scroll velocity must be considered because the image is rendered according to the rotation angle of the head movement simulation. The motion blur measurement consists of three parts: image acquisition, image integration, and MPRT calculation.

Image Acquisition

The movement of the test pattern on the HMD should be captured using a high-speed camera because the pixel-to-pixel variation of the display is measured as a sequence of consecutive images. Barrel distortion is applied in HMD rendering, and chromatic aberration occurs when the camera directly photographs the display. Because of the barrel distortion, the position of the test pattern can change its apparent width, making it impossible to measure the blurred edge accurately. In addition, when chromatic aberration occurs on the HMD, unexpected luminance appears, and this luminance is recognized as blur, affecting the MPRT measurement. Figure 18 (b) shows that, to eliminate these two factors, the test pattern is rendered on a separate panel through the mirroring mode, and the high-speed camera photographs that panel. Figure 18 (c) shows that the camera is operated through an infrared (IR) receiver, which prevents the image from being shaken by manual camera operation while the rotary platform is working.

Image Integration

For motion blur measurements that involve human cognitive characteristics, temporally continuous image integration is required, and the integration should be synchronized with the display frame rate. Figure 18 (d) shows that the captured images are integrated into one image over one display frame. For example, at a display frame rate of 30 Hz and a camera frame rate of 960 Hz, an integrated image is obtained by synthesizing 32 captured images. A higher camera frame rate yields more continuous integrated images.
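A minimal sketch of this integration step, assuming the captured frames are already aligned: the camera frames belonging to one display frame are averaged, which emulates the temporal integration of the eye following the moving pattern.

import numpy as np

def integrate_display_frame(camera_frames, camera_fps=960, display_fps=30):
    """Average the high-speed camera frames captured during one display
    frame (960 Hz / 30 Hz = 32 frames per integrated image)."""
    n = camera_fps // display_fps
    return np.asarray(camera_frames[:n], dtype=float).mean(axis=0)

# 32 synthetic 8x8 frames in which a bright column shifts by one pixel
# every 8 camera frames, producing a blurred edge after integration.
frames = np.zeros((32, 8, 8))
for i in range(32):
    frames[i, :, i // 8] = 1.0
print(integrate_display_frame(frames)[0])   # [0.25 0.25 0.25 0.25 0. 0. 0. 0.]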

MPRT Calculation

We calculate the final MPRT from the luminance of the integrated image obtained above. Figure 18 (d) shows that the luminance of an entire row is extracted along the A-B line at the center of the integrated image [27]. Figure 18 (e) plots the luminance values of these pixels on the y-axis; the resulting curve shows the change in pixel values caused by the physical motion. From this curve, the blurred-edge width (BEW) is extracted as the 10%-to-90% pixel range relative to the maximum luminance. In the conventional display system, the MPRT is calculated from the BEW and the scroll velocity in pixels per frame (PPF), as follows:

NBEW = BEW / PPF;    MPRT = NBET = NBEW × Tframe        (5)

where NBEW denotes the BEW normalized by the scroll velocity (in frames), NBET denotes the normalized blurred edge time, and Tframe denotes the display frame period. However, in the case of the rotary platform, the HMD rotates at a non-constant velocity; thus, the PPF of the test pattern also changes with the rotation angle. The angle data are obtained from the servo motor of the rotary platform, and the velocity is calculated from these angle data. Figure 19 shows that the test pattern moves according to the rotation angle, relative to the center of the screen at zero degrees. The moving distance is represented by the following equation, and the scroll velocity is obtained by differentiating it:

[pic]        (6)

where Npixel denotes the number of pixels the test pattern has moved relative to the center of the screen, θ denotes the angle by which the HMD rotates about the yaw axis, and allpixels denotes the horizontal resolution of the display. By converting the rotation angle data to a scroll velocity in this way, the MPRT measurement can be used at any motion speed.
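The calculation chain can be sketched as follows: extract the BEW from the 10%-to-90% range of the edge profile, derive the pixel position of the pattern from the yaw angle, differentiate to obtain the scroll velocity, and normalize the BEW to obtain the MPRT. The angle-to-pixel mapping below is a hypothetical linear one, since the exact form of equation (6) is not reproduced here.

import numpy as np

def blurred_edge_width(profile):
    """BEW: pixel distance between the 10% and 90% luminance points of a
    monotonically increasing edge profile along the A-B line."""
    x = np.arange(profile.size, dtype=float)
    x10 = np.interp(0.1 * profile.max(), profile, x)
    x90 = np.interp(0.9 * profile.max(), profile, x)
    return x90 - x10

def pattern_position_pixels(yaw_deg, pixels_per_degree=12.0):
    """Hypothetical linear angle-to-pixel mapping standing in for
    equation (6)."""
    return np.asarray(yaw_deg, dtype=float) * pixels_per_degree

def scroll_velocity_ppf(yaw_deg_per_frame):
    """Scroll velocity (pixels/frame): differentiate the per-frame pixel
    position of the test pattern obtained from the yaw angle data."""
    return np.gradient(pattern_position_pixels(yaw_deg_per_frame))

def mprt_ms(bew_pixels, velocity_ppf, frame_period_ms):
    """Equation (5): NBEW = BEW / PPF (frames), MPRT = NBEW x Tframe."""
    return (bew_pixels / velocity_ppf) * frame_period_ms

# Synthetic example: a clean 11-pixel edge scrolled at 18 PPF on a 60 Hz display.
edge = np.linspace(0.0, 1.0, 12)             # strictly increasing edge profile
print(mprt_ms(blurred_edge_width(edge), 18.0, 1000.0 / 60.0))   # ~8.1 ms

yaw = np.array([0.0, 0.3, 0.7, 1.2])         # yaw angle sampled per display frame
print(scroll_velocity_ppf(yaw))              # non-constant velocity in PPF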


Figure 19. Relationship between the rotation angle and the scroll velocity.


Figure 20. Rotation angle versus time, with the corresponding scroll velocity.


Figure 21. Scroll velocity versus time, with the MPRT results.

3.2.1 Experimental Results

We selected the Oculus Rift DK2 [29] as the target HMD device. A SONY DSC-RX100M5 high-speed camera [30] was used to obtain the output image at a high frame rate. First, we made a physical movement on the platform and obtained the image rendered on the panel in correspondence with that movement. We conducted two experiments to verify the calculated MPRT: one measured the MPRT according to the scroll velocity of the test pattern, and the other measured the MPRT according to the frame rate of the HMD. When the head movement simulation of various physical rotations was performed, the rotation was converted into yaw angle data by equation (6). Figure 20 graphs the calculated yaw angle data, and Figure 21 shows these data transformed into scroll velocity. The MPRT was 22.98 ms, 22.85 ms, 20.74 ms, and 19.58 ms when the scroll velocity was 40 PPF, 30 PPF, 18 PPF, and 8 PPF, respectively (Table 4); all of the values were similar, near 20 ms. Since the BEW and the scroll velocity were proportional, the proposed system makes it possible to measure the motion blur even for high-speed head movement.

Table 4. MPRT according to scroll velocity

|Display Frame Rate (Hz) |Scroll Velocity (PPF) |MPRT (ms) |
|60                      |40                    |22.98     |
|60                      |30                    |22.85     |
|60                      |18                    |20.74     |
|60                      |8                     |19.58     |

Table 5. MPRT according to display frame rate

|Display Frame Rate (Hz) |Scroll Velocity (pixels/s) |MPRT (ms) |
|60                      |1080                       |20.74     |
|30                      |1080                       |29.26     |
|10                      |1080                       |71.94     |

Table 5 shows the experimental results obtained with the same scroll velocity and different display frame rates. The display frame rates were set to 60 Hz, 30 Hz, and 10 Hz. Because the scroll velocity expressed in PPF varies with the display frame rate, the velocity at which the actual test pattern was scrolled was fixed at 1080 pixels/s. When the display frame rates were 60 Hz, 30 Hz, and 10 Hz, the scroll velocities were 18 PPF, 36 PPF, and 108 PPF, and the MPRT was 20.74 ms, 29.26 ms, and 71.94 ms, respectively. The higher the frame rate, the smaller the MPRT.
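The conversion behind these numbers is simple: for a fixed physical scroll speed in pixels per second, the per-frame velocity in PPF is inversely proportional to the display frame rate, as the short sketch below shows.

def ppf(pixels_per_sec, display_fps):
    """Pixels per frame = pixels per second / frames per second."""
    return pixels_per_sec / display_fps

for fps in (60, 30, 10):
    print(fps, "Hz ->", ppf(1080, fps), "PPF")   # 18, 36, 108 PPF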

Bibliography

Bibliographical references are resources that provide additional or helpful material but do not need to be understood or used to implement this standard. Reference to these resources is made for informational use only.

1. Milgram, P.; Kishino, F. A taxonomy of mixed reality visual displays. IEICE TRANS. Inf. Syst. 1994, E77-D, 1321–1329.

2. Azuma, R.; Baillot, Y.; Behringer, R.; Feiner, S. Recent advances in augmented reality. IEEE Comput. Graph. Appl. 2001, 21, 34–47.

3. Goradia, I.; Doshi, J.; Kurup, L. A review paper on oculus rift & project morpheus. Int. J. Curr. Eng. Technol. 2014, 4, 3196–3200.

4. Pohl, D.; Johnson, G.S.; Bolkart, T. Improved pre-warping for wide angle, head mounted displays. In Proceedings of the 19th ACM symposium on Virtual Reality Software and Technology, New York, NY, USA, 6–9 October 2013; pp. 259–262

5. Abrash, M. What VR Could, Should, and Almost Certainly Will Be within Two Years. In Proceedings of the Steam Dev Days, Seattle, WA, USA, 15–16 June 2014.

6. Kanter, D. Graphics Processing Requirements for Enabling Immersive VR. In AMD White Paper; AMD: Sunnyvale, CA, USA, 2015.

7. Lincoln, P.; Blate, A.; Singh, M.; Whitted, T.; State, A.; Lastra, A.; Fuchs, H. From Motion to Photons in 80 Microseconds: Towards Minimal Latency for Virtual and Augmented Reality. IEEE Trans. Visualiz. Comput. Graph. 2016, 22, 1367–1376.

8. Heim, M. The design of virtual reality. Body Soc. 1995, 1, 65–77.

9. De la Rubia, E. One More Step in Virtual Reality Interaction. In New Trends in Interaction, Virtual Reality and Modeling; Springer: London, UK, 2013; pp. 45–61.

10. Choi, S.; Seo, M.; Lee, S.; Park, J.; Oh, E.; Baek, J.; Kang, S. Head position model-based Latency Measurement System for Virtual Reality Head Mounted Display. SID 2016, 47, 1381–1384.

11. Steed, A. A simple method for estimating the latency of interactive, real-time graphics simulations. In Proceedings of the 2008 ACM symposium on Virtual reality software and technology, Bordeaux, France, 27–29 October 2008; pp. 123–129.

12. Zhao, J.; Allison, R.S.; Vinnikov, M.; Jennings, S. Estimating the motion-to-photon latency in head mounted displays. In Proceedings of the 2017 IEEE Virtual Reality (VR), Los Angeles, CA, USA, 18–22 March 2017; pp. 313–314.

13. Giorgos, P.; Katerina, M.; Eftichios, K. A system to measure, control and minimize end-to-end head tracking latency in immersive simulations. In Proceedings of the 10th International Conference on Virtual Reality Continuum and Its Applications in Industry, Hong Kong, China, 11–12 December 2011; pp. 581–584.

14. Grossman, G.E.; Leigh, R.J.; Abel, L.A.; Lanska, D.J.; Thurston, S.E. Frequency and velocity of rotational head perturbations during locomotion. Exp. Brain Res. 1988, 70, 470–476.

15. Peek, E.; Lutteroth, C.; Wunsche, B. More for less: Fast image warping for improving the appearance of head tracking on hmds. Image Vis. Comput. N. Z. 2013, 41–46, doi:10.1109/IVCNZ.2013.6726990.

16. Indiveri, G.; Zanoli, S.M.; Parlangeli, G. DC motor control issues for UUVs. In Proceedings of the 2006 IEEE 14th Mediterranean Conference on Control and Automation, MED ’06, Ancona, Italy, 28–30 June 2006; pp. 1–5.

17. Gajamohan, M.; Merz, M.; Thommen, I.; D’Andrea, R. The cubli: A cube that can jump up and balance. Intell. Robot. Syst. 2012, 3722–3727, doi:10.1109/IROS.2012.6385896.

18. EIL580 Mounted Optical Incremental Encoders Specification. Available online: (accessed on 10 January 2017).

19. PicoScope 4824 Data Sheet. Available online: (accessed on 10 January 2017).

20. Angel, E. Interactive Computer Graphics, 5th ed.; Addison-Wesley: Boston, MA, USA, 2007.

21. Merhi, O.; Faugloire, E.; Flanagan, M.; Stoffregen, T.A. Motion Sickness, Console Video Games, and Head-Mounted Displays. Hum. Factors 2007, 49, 920–934.

22. Zheng, F.; Whitted, T.; Lastra, A.; Lincoln, P.; State, A.; Maimone, A.; Fuchs, H. Minimizing latency for augmented reality displays: Frames considered harmful. In Proceedings of the IEEE International Symposium on Mixed and Augmented Reality (ISMAR), 2014; pp. 195–200.

23. Yamamoto, T.; Aono, Y.; Tsumura, M. Guiding Principles for High Quality Motion Picture in AMLCDs Applicable to TV Monitors. SID Symp. Dig. Tech. Pap. 2000, 31, 456–459.

24. Igarashi, Y.; Yamamoto, T.; Tanaka, Y.; Someya, J.; Nakakura, Y.; Yamakawa, M.; Hasegawa, S.; Nishida, Y.; Kurita, T. Proposal of the Perceptive Parameter Motion Picture Response Time (MPRT). SID Symp. Dig. Tech. Pap. 2003, 34, 1038–1041.

25. Seo, M.W.; Choi, S.W.; Lee, S.L.; Oh, E.Y.; Baek, J.S.; Kang, S.J. Photosensor-Based Latency Measurement System for Head-Mounted Displays. Sensors 2017, 17, 1112.

26. Mahony, R.; Hamel, T.; Pflimlin, J.M. Complementary filter design on the special orthogonal group SO(3). In Proceedings of the 44th IEEE Conference on Decision and Control, 2005; pp. 1477–1484.

27. Oka, K.; Enami, Y. Moving Picture Response Time (MPRT) Measurement System. SID Symp. Dig. Tech. Pap. 2004, 35, 1266–1269.

28. Igarashi, Y.; Yamamoto, T.; Tanaka, Y.; Someya, J.; Nakakura, Y.; Yamakawa, M.; Nishida, Y.; Kurita, T. Summary of Moving Picture Response Time (MPRT) and Futures. SID Symp. Dig. Tech. Pap. 2004, 35, 1262–1265.

29. Oculus Rift DK2 Developer Introduction. Available online: .

30. SONY DSC-RX100M5 Full Specifications and Features. Available online: .

The Institute of Electrical and Electronics Engineers, Inc.

3 Park Avenue, New York, NY 10016-5997, USA

Copyright © 2018 by The Institute of Electrical and Electronics Engineers, Inc.

All rights reserved. Published . Printed in the United States of America.

IEEE is a registered trademark in the U.S. Patent & Trademark Office, owned by The Institute of Electrical and Electronics

Engineers, Incorporated.

PDF: ISBN 978-0-XXXX-XXXX-X STDXXXXX

Print: ISBN 978-0-XXXX-XXXX-X STDPDXXXXX

IEEE prohibits discrimination, harassment, and bullying.

For more information, visit .

No part of this publication may be reproduced in any form, in an electronic retrieval system or otherwise, without the prior written permission of the publisher.
