Realistic AR Makeup over Diverse Skin Tones on Mobile

Bruno Evangelista, Anaelisa Aburto, Houman Meshkin, Ben Max Rubinstein, Helen Kim, Andrea Ho

Instagram

Figure 1: (Left) Results of AR makeup application. (Right) Different materials (Glitter, Gloss, Matte, Metallic) and their respective rendering results.

CCS CONCEPTS

• Computing methodologies → Mixed / augmented reality;

KEYWORDS

Augmented Reality, Makeup, Cosmetic Rendering, Skin Rendering

ACM Reference Format: Bruno Evangelista, Houman Meshkin, Helen Kim, Anaelisa Aburto, Ben Max Rubinstein, and Andrea Ho. 2018. Realistic AR Makeup over Diverse Skin Tones on Mobile. In Proceedings of SA '18 Posters. ACM, New York, NY, USA, 2 pages.

1 INTRODUCTION

We propose a novel approach to the application of realistic makeup over a diverse set of skin tones on mobile phones using augmented reality. The method we developed mimics the real-world layering techniques that makeup artists use. We can accurately represent the five most commonly used materials found in commercial makeup products: Matte, Velvet, Glossy, Glitter, and Metallic. We apply skin smoothing to even out the natural skin tone and tone mapping to further blend source and synthetic layers.

2 OUR APPROACH

Our makeup pipeline relies on a real-time mobile face-tracker, which allows us to run GPU shaders over a face-aligned mesh for each frame of a live video stream, as shown in figure 2. We


Figure 2: Our pipeline - makeup is applied to live video frames. Accessories, such as eyelashes, are rendered. The final image goes through skin smoothing and tone mapping.

provide, as input, constructed maps that define face regions, such as lips, eyes, and cheeks, as well as makeup properties. Our light-responsive makeup is then applied, generating our target image. Makeup accessories, such as eyelashes, are optionally rigged to the face and rendered on top. Lastly, we apply skin smoothing and tone mapping to further blend source and synthetic layers, increasing realism.
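To make the stage ordering concrete, the sketch below lays out one possible per-frame loop. The function names are placeholders standing in for the GPU shader passes described in this paper, not the authors' actual API; each stage is detailed in the following subsections.

```python
# Hypothetical sketch of the per-frame stage ordering; each placeholder
# stands in for a GPU shader pass over the face-aligned mesh.

def apply_makeup(img, region_masks, params):   # light-responsive makeup (Sec. 2.1)
    return img                                 # placeholder

def render_accessories(img, face_mesh, params):  # e.g. rigged eyelashes (Sec. 2.2)
    return img                                   # placeholder

def skin_smoothing(img):                       # edge-preserving blur (Sec. 2.3)
    return img                                 # placeholder

def tone_mapping(img):                         # localized tone mapping (Sec. 2.4)
    return img                                 # placeholder

def process_frame(frame, face_mesh, region_masks, params):
    out = apply_makeup(frame, region_masks, params)
    out = render_accessories(out, face_mesh, params)
    out = skin_smoothing(out)
    return tone_mapping(out)
```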

2.1 Light Responsive Makeup

Our algorithm works in the RGB and LAB color spaces: the base albedo color is applied in RGB and shading is done in LAB. Previous works have applied the makeup base color in HSV [Kim and Choi 2008] or LAB color space [d. Campos and Morimoto 2014]. However,


Figure 3: Eyelashes attached to the face's UV coordinates and rendered.

those approaches don't retain a consistent color over diverse skin tones (hues) or lighting conditions.

In our algorithm, we first desaturate the input image and extract mid and low luminance frequencies. Then, we combine the source makeup color with the extracted frequencies, using the mid frequency to highlight (by screening it on top) and the low frequency to darken (by multiplying it on top). This process is shown in equation 1, which uses artist-provided frequencies.

\[
L(v, \mathit{min}, \mathit{max}) = \frac{\max(0,\, v - \mathit{min})}{\max(\epsilon,\, \mathit{max} - \mathit{min})}
\]
\[
F_1 = L(\mathit{luma}, 0.2, 0.8), \qquad F_2 = L(\mathit{luma}, 0.2, 0.5)
\]
\[
X_1 = \mathrm{Lerp}(\mathrm{Screen}(F_1, I_{\mathit{makeup}}),\, I_{\mathit{makeup}},\, \alpha_1)
\tag{1}
\]
\[
X_2 = \mathrm{Lerp}(\mathrm{Multiply}(X_1, F_2),\, F_2,\, \alpha_2)
\]
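A minimal NumPy sketch of this layering step, following the reconstructed form of equation 1 above. The blend weights `alpha1`/`alpha2` stand in for the artist-provided parameters that are not legible in the source, and `eps` guards the division; both are our own placeholders.

```python
import numpy as np

def remap(v, lo, hi, eps=1e-6):
    """L(v, min, max): remap v into [0, 1], clamped at 0."""
    return np.maximum(0.0, v - lo) / np.maximum(eps, hi - lo)

def screen(a, b):
    return 1.0 - (1.0 - a) * (1.0 - b)

def lerp(a, b, t):
    return a + (b - a) * t

def layer_makeup(luma, makeup_rgb, alpha1=0.5, alpha2=0.5):
    """Combine the makeup albedo with mid/low luminance frequencies.

    luma:       HxW desaturated input image in [0, 1]
    makeup_rgb: makeup base color in [0, 1], shape (3,) or HxWx3
    alpha1/2:   artist-provided blend weights (placeholder values here)
    """
    f1 = remap(luma, 0.2, 0.8)[..., None]     # mid frequency -> highlights
    f2 = remap(luma, 0.2, 0.5)[..., None]     # low frequency -> darkening
    x1 = lerp(screen(f1, makeup_rgb), makeup_rgb, alpha1)  # screen on top
    x2 = lerp(x1 * f2, f2, alpha2)                          # multiply on top
    return x2
```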

Our shading algorithm combines the makeup color with a precomputed ambient-occlusion map and converts the result to LAB space. To achieve the material looks shown in figure 1, we propose an empirical Gloss and Shine model, which works by transforming LAB's lightness; this model is shown in equation 2 below.

\[
\mathit{shine} = \frac{Lab_L^{\,\mathit{shinePower}}}{2} + \frac{Lab_L}{2}
\]
\[
\mathit{gloss} = \frac{H(0, 100, \mathit{shine})}{100}
\]
\[
Lab_L = \mathit{gloss} + \mathit{threshold} \cdot \mathit{glossAlpha} \cdot \mathit{luma}^{\,1 + \mathit{glossPower}}
\tag{2}
\]
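The sketch below transcribes the reconstruction of equation 2 above into NumPy. The exact form of H is not given in the source; it is assumed here to be a Hermite smooth-clamp into [0, 100], and the material parameters are illustrative placeholders rather than the authors' values.

```python
import numpy as np

def H(lo, hi, x):
    """Assumed form of H: smooth (Hermite) clamp of x into [lo, hi]."""
    t = np.clip((x - lo) / (hi - lo), 0.0, 1.0)
    return lo + (hi - lo) * t * t * (3.0 - 2.0 * t)

def gloss_shine(lab_l, luma, shine_power, threshold, gloss_alpha, gloss_power):
    """Empirical Gloss/Shine transform of LAB lightness (Eq. 2, reconstructed).

    lab_l: LAB lightness in [0, 100]; luma: image luma in [0, 1].
    Parameter ranges follow the reconstructed equation and may need rescaling.
    """
    shine = 0.5 * lab_l ** shine_power + 0.5 * lab_l
    gloss = H(0.0, 100.0, shine) / 100.0      # normalized to [0, 1]
    return gloss + threshold * gloss_alpha * luma ** (1.0 + gloss_power)
```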

Lastly, we use an environment map to simulate reflections over very bright makeup areas. Our map contains a low-frequency studio light setup and is oriented in 3D according to the user's mobile device.
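As an illustration of this step, the sketch below samples a low-frequency equirectangular environment map along a reflection vector rotated by the device orientation. The lat-long map layout and the rotation matrix coming from the phone's orientation sensor are our own assumptions; the paper does not specify these details.

```python
import numpy as np

def sample_equirect(env_map, direction):
    """Sample an equirectangular (lat-long) environment map along `direction`.

    env_map: HxWx3 image; direction: normalized 3-vector (y up).
    """
    h, w, _ = env_map.shape
    x, y, z = direction
    u = (np.arctan2(x, -z) / (2.0 * np.pi) + 0.5) * (w - 1)
    v = (np.arccos(np.clip(y, -1.0, 1.0)) / np.pi) * (h - 1)
    return env_map[int(v), int(u)]

def makeup_reflection(normal, view_dir, device_rotation, env_map):
    """Reflection color for bright makeup areas.

    device_rotation: 3x3 matrix from the phone's orientation sensor (assumed),
    so the studio-light map stays world-oriented as the device moves.
    """
    n = normal / np.linalg.norm(normal)
    v = view_dir / np.linalg.norm(view_dir)
    r = v - 2.0 * np.dot(v, n) * n        # mirror reflection of the view ray
    r_world = device_rotation @ r          # orient lookup with the device
    return sample_equirect(env_map, r_world / np.linalg.norm(r_world))
```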

2.2 Eyelashes

For eyelashes, we use a strip-like mesh with joints and a texture for lash patterns. We use 4 joints to reliably attach our mesh to UV coordinates in the face tracker's face mesh, which is transformed at runtime via a series of blend shapes, mimicking the user's facial expressions as shown in figure 3. This allows us to control the length, curvature, and density of the eyelashes.
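One way to realize this UV attachment (a sketch under our own assumptions about the tracker's mesh data, not the authors' exact implementation): precompute, for each lash joint, the face-mesh triangle containing its target UV and the barycentric weights there, then re-evaluate the joint position every frame on the blendshape-deformed vertices.

```python
import numpy as np

def barycentric(p, a, b, c):
    """Barycentric coordinates of 2D point p in triangle (a, b, c)."""
    v0, v1, v2 = b - a, c - a, p - a
    d00, d01, d11 = v0 @ v0, v0 @ v1, v1 @ v1
    d20, d21 = v2 @ v0, v2 @ v1
    denom = d00 * d11 - d01 * d01
    v = (d11 * d20 - d01 * d21) / denom
    w = (d00 * d21 - d01 * d20) / denom
    return np.array([1.0 - v - w, v, w])

def bind_joint_to_uv(joint_uv, mesh_uvs, triangles):
    """One-time: find the triangle containing joint_uv and its weights."""
    for tri in triangles:
        bary = barycentric(joint_uv, *mesh_uvs[tri])
        if np.all(bary >= -1e-6):
            return tri, bary
    raise ValueError("UV not covered by the face mesh")

def evaluate_joint(tri, bary, deformed_vertices):
    """Per-frame: joint position on the blendshape-deformed face mesh."""
    return bary @ deformed_vertices[tri]
```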

2.3 Skin Smoothing

We apply an edge-preserving blur filter to even out the natural skin tone, mimicking foundation makeup products. We explored a few algorithms, including the Bilateral Filter [Barash and Comaniciu 2004], a Low-pass Filter, and the Guided Filter [He and Sun 2015]. The Bilateral Filter provided good visual results; however, its O(N²) complexity makes it computationally expensive for mobile devices, and although there is an approximate separable solution, it often generates artifacts [Yoshizawa et al. 2010]. The Low-pass Filter was computationally efficient but did not produce good visual results.

Figure 4: Tone mapping in different lighting environments with respective min, avg, and max log luma.

To achieve our desired visual look and performance, we used the Fast Guided Filter, which does the bulk of its computations in sub-sampled space, making it efficient. Our implementation further optimizes it by using the image's luma as the guiding image, allowing RGB and luma to be packed in a single texture.
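A compact NumPy/SciPy sketch of this idea follows. The window radius, subsampling factor, and epsilon are illustrative choices, not the production settings; in the shipped system these passes run as GPU shaders with RGB and luma packed into a single texture.

```python
import numpy as np
from scipy.ndimage import uniform_filter, zoom

def box(x, r):
    """Mean filter with a (2r+1) square window, applied per channel."""
    size = (2 * r + 1, 2 * r + 1) + (1,) * (x.ndim - 2)
    return uniform_filter(x, size=size, mode="nearest")

def fast_guided_filter(rgb, luma, r=8, eps=1e-3, s=4):
    """Edge-preserving smoothing guided by the image's own luma.

    rgb:  HxWx3 input in [0, 1]; luma: HxW guide derived from rgb itself,
    which is what allows RGB + luma to share one packed texture.
    r: window radius at full resolution; s: subsampling factor.
    """
    # Do the bulk of the work in sub-sampled space (the "fast" part).
    lo_rgb = zoom(rgb, (1.0 / s, 1.0 / s, 1), order=1)
    lo_l = zoom(luma, 1.0 / s, order=1)[..., None]
    rr = max(1, r // s)

    mean_i = box(lo_l, rr)
    mean_p = box(lo_rgb, rr)
    corr_ip = box(lo_l * lo_rgb, rr)
    var_i = box(lo_l * lo_l, rr) - mean_i * mean_i

    a = (corr_ip - mean_i * mean_p) / (var_i + eps)  # per-channel linear model
    b = mean_p - a * mean_i
    a, b = box(a, rr), box(b, rr)

    # Upsample the linear coefficients and apply them at full resolution.
    h, w = rgb.shape[:2]
    up = lambda x: zoom(x, (h / x.shape[0], w / x.shape[1], 1), order=1)
    return up(a) * luma[..., None] + up(b)
```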

2.4 Tone Mapping

We apply localized tone mapping similar to Reinhard's operator [Reinhard et al. 2002]. In our implementation, we first take advantage of GPU bilinear filtering to downsample the image to 1/16 of its size. To localize the tone mapping, we use a 4x8 grid and compute the min, max, and average luma in log space per region. This localization improves visual quality and better utilizes the GPU's parallelism. Finally, we remap the color of each pixel using an s-curve generated from the computed values. This results in rendered pixels that match the actual environment, as shown in figure 4.
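The sketch below illustrates one way such a pass could work: per-region min, average, and max log luma drive a per-pixel remapping curve. The 4x8 grid matches the paper, but the exact curve shape and the nearest-region lookup (with no blending between regions) are our own simplifications; the real pipeline also computes the statistics on the 1/16-downsampled image on the GPU.

```python
import numpy as np

def localized_tonemap(rgb, grid=(4, 8), eps=1e-4):
    """Localized tone mapping sketch driven by per-region log-luma statistics."""
    luma = rgb @ np.array([0.299, 0.587, 0.114])
    log_l = np.log(luma + eps)

    h, w = luma.shape
    gy, gx = grid
    ch, cw = h // gy, w // gx
    cells = log_l[: gy * ch, : gx * cw].reshape(gy, ch, gx, cw)
    lo_g = cells.min(axis=(1, 3))      # per-region min log luma
    mid_g = cells.mean(axis=(1, 3))    # per-region average log luma
    hi_g = cells.max(axis=(1, 3))      # per-region max log luma

    # Expand region statistics back to per-pixel maps (nearest region).
    yi = np.minimum(np.arange(h) // ch, gy - 1)
    xi = np.minimum(np.arange(w) // cw, gx - 1)
    lo, mid, hi = (g[yi][:, xi] for g in (lo_g, mid_g, hi_g))

    # Normalize within the region's range, place the region average at mid
    # gray, then apply a smooth s-curve.
    span = np.maximum(hi - lo, eps)
    t = np.clip((log_l - lo) / span, 0.0, 1.0)
    t_mid = np.clip((mid - lo) / span, eps, 1.0 - eps)
    t = t ** (np.log(0.5) / np.log(t_mid))
    new_luma = t * t * (3.0 - 2.0 * t)

    scale = new_luma / np.maximum(luma, eps)
    return np.clip(rgb * scale[..., None], 0.0, 1.0)
```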

3 RESULTS

Our method was tested on a range of mobile devices, achieving over 45 fps. The table below shows our makeup rendering time.

Device            Render (ms)
2013 Nexus 5      20
2014 Galaxy S5    14
2015 Pixel 1      16
2017 Pixel 2      7

We have shown that our method can realistically render materials commonly used by makeup artists (figure 1), and based on it, we developed a platform for mobile users to try on commercial makeup products free of charge.

REFERENCES

Danny Barash and Dorin Comaniciu. 2004. A common framework for nonlinear diffusion, adaptive smoothing, bilateral filtering and mean shift. Image and Vision Computing 22, 1 (2004), 73–81.

F. M. S. d. Campos and C. H. Morimoto. 2014. Virtual Makeup: Foundation, Eye Shadow and Lipstick Simulation. In 2014 XVI Symposium on Virtual and Augmented Reality. 181–189.

Kaiming He and Jian Sun. 2015. Fast Guided Filter. CoRR abs/1505.00996 (2015). arXiv:1505.00996

Jeong-Sik Kim and Soo-Mi Choi. 2008. Interactive Cosmetic Makeup of a 3D Point-Based Face Model. IEICE Transactions on Information and Systems E91.D, 6 (2008), 1673–1680.

Erik Reinhard, Michael Stark, Peter Shirley, and James Ferwerda. 2002. Photographic Tone Reproduction for Digital Images. In Proceedings of the 29th Annual Conference on Computer Graphics and Interactive Techniques (SIGGRAPH '02). ACM, New York, NY, USA, 267–276.

Shin Yoshizawa, Alexander Belyaev, and Hideo Yokota. 2010. Fast Gauss Bilateral Filtering. Computer Graphics Forum 29, 1 (2010), 60–74.
