Graphical Models 115 (2021) 101106

Learning a shared deformation space for efficient design-preserving garment transfer

Min Shi a,*, Yukun Wei a, Lan Chen b,c, Dengming Zhu d, Tianlu Mao d, Zhaoqi Wang d

a School of Control and Computer Engineering, North China Electric Power University, Beijing, China
b Institute of Automation, Chinese Academy of Sciences, Beijing, China
c School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing, China
d Institute of Computing Technology, Chinese Academy of Sciences, Beijing, China

* Corresponding author.
E-mail addresses: [email protected] (M. Shi), [email protected] (Y. Wei), [email protected] (L. Chen), [email protected] (D. Zhu), [email protected] (T. Mao), [email protected] (Z. Wang).
https://doi.org/10.1016/j.gmod.2021.101106
Received 5 February 2021; Received in revised form 1 April 2021; Accepted 6 April 2021; Available online 20 April 2021.
1524-0703/© 2021 Elsevier Inc. All rights reserved.

Keywords: Garment transfer; Cloth deformation; Shape analysis

ABSTRACT

Garment transfer from a source mannequin to a shape-varying individual is a vital technique in computer graphics. Existing garment transfer methods are either time consuming or lose designed details, especially for clothing with complex styles. In this paper, we propose a data-driven approach to efficiently transfer garments between two distinctive bodies while preserving the source design. Given two sets of simulated garments on a source body and a target body, we use deformation gradients as the representation. Since the garments in our dataset have various topologies, we embed the cloth deformation into the body. For garment transfer, the deformation is decomposed into two aspects: style and shape. An encoder-decoder network is proposed to learn a shared space that is invariant to garment style but related to the deformation of human bodies. For a new garment in a different style worn by the source human, our method can efficiently transfer it to the target body with the shared shape deformation while preserving the designed details. We qualitatively and quantitatively evaluate our method on a diverse set of 3D garments that showcase rich wrinkling patterns. Experiments show that the transferred garments preserve the source design even if the target body is quite different from the source one.

1. Introduction

A garment usually comes in different sizes to fit customers of various body shapes. Most ready-to-wear garments are designed with reference to mannequins with standard body shapes. However, for customers whose bodies have different proportions, especially those with distinctive somatotype characteristics, a base-size garment fails to provide a superior fit and cannot preserve the original design. When choosing which garment to buy, the customer's decision largely depends on the design of the sample (source) garment, and customers expect the garment draped onto their bodies to keep exactly the same design as the source garment. Generating 3D virtual garments according to the target body shape and then performing virtual try-on not only enables customers to preview the fitting effect before the garments are produced but also assists in the development of garments suitable for customers with distinctive body shapes, which reduces design cost and increases customer satisfaction. In addition, virtual fitting technology has attracted the interest of the entertainment industry [1–3], since it is a significant part of movies, video games, virtual- and augmented-reality applications, etc.

A few techniques [4,5] have been proposed to automatically perform design-preserving garment transfer from a source to a target body.
However, Brouet et al. [4] is sensitive to the modeling setup, and Wang [5] struggles when the target body is very different from the source. Furthermore, both are computationally expensive, making them unsuitable for online applications. Other established workflows [6–11] mainly focus on how to dress a given garment onto a target body with a fixed pose or shape, regardless of whether the source design is preserved. Moreover, existing methods either support garment retargeting only for a garment with a fixed style [6,8,9], which has limited use, or remesh the garment using the body topology [10,11], which is hard to apply to garments with complex styles or garments whose topology differs from the body's. What is lacking is a garment transfer workflow that is design-preserving, handles distinctive body shapes and new garment types, is efficient in time performance, and supports garment meshes with different topologies.

To that end, we address the problem of efficient design-preserving garment transfer for mannequins with distinctive body shapes (see Fig. 1). In this paper, we select a banana-shaped body as the source body and a pear-shaped body as the target body to perform garment transfer. We first stitch sewing patterns onto the source and target bodies using a physical simulator, and we use deformation gradients to describe the cloth deformation from the source to the target garment. Then, we define garment transfer as a decomposition of a style-dependent term and a shape-dependent term from the cloth deformation (see Section 3). To handle garments with different topologies, we propose embedding the deformation gradients into the body (see Section 4). By learning a shared deformation space with an encoder-decoder network, we separate the shape-dependent deformation from the embedded cloth deformation (see Section 5). Finally, by simply applying the learned shape-related cloth deformation to the source garment, our transfer method can deform the garment from the source to the target body within a short time period while preserving the source design. We qualitatively and quantitatively evaluate our method from different aspects in Section 6.

In summary, the key contributions of our approach are as follows:

- Problem formulation. We propose factoring cloth deformation into shape-dependent deformation and style-dependent deformation. We then define garment transfer as deforming the source garment with the shape-dependent deformation to generate the garment on the target body shape. Once the shape-related cloth deformation is learned, we can use it to deform new garments with arbitrary topology.
- Feature description. By learning a shared deformation space, we separate shape-dependent deformation from the embedded cloth deformation. Our simple but efficient garment transfer workflow is suitable for characters with distinctive body shapes and garments that showcase very rich wrinkle details.
- Data representation. A network usually takes fixed-size data as input, and unaligned 3D meshes would most likely fail in this setting, while remeshing garments with different topologies is difficult. We propose embedding deformation gradients into the body, providing a dimensionally consistent deformation representation that enables unaligned 3D mesh data to be used for learning tasks (e.g., shape analysis, learning latent representations).
2. Related work

2.1. Physics-based cloth simulation

Although various professional cloth simulation software packages are available [12,13], physics-based cloth simulation is still a popular research topic in computer graphics. To simulate the movement of real cloth as faithfully as possible, physics-based simulation (PBS) models different types of clothing deformation. Based on force analysis of particles, the mass-spring model [14,15] resolves cloth deformation with low computational complexity. Yarn-based methods [16,17] have been used to simulate woven clothes. Because simulating high-precision clothes brings a heavy computational burden, adaptive cloth simulation [18,19] dynamically adjusts the accuracy of the cloth to increase calculation speed. In addition, many researchers have aimed to replace elastic forces with position-based constraints [20]. To eliminate collisions during cloth simulation, many collision handling strategies [21,22] have been proposed. As the most traditional method in clothing animation, PBS obtains realistic and physics-compliant cloth dynamics, but it requires intensive computation [23,24], making real-time performance difficult to guarantee.

2.2. Data-driven cloth deformation

Data-driven cloth deformation methods are designed to reuse cloth deformation statistics. Compared with physics-based methods, a data-driven approach better ensures efficiency. By determining potential collision areas between garments and bodies, Cordier et al. [25] produced cloth deformation effects that are visually satisfactory. DrivenShape [26] exploits the known correspondence between two sets of shape deformations to drive the deformation of a secondary object. Zhou et al. [27] proposed an image-based virtual fitting method that synthesizes the motion and deformation of a garment model by capturing the skeletal structure of the character. As a garment generation method for various body shapes and postures, the DRAPE model [6] can quickly put a garment onto mannequins with specified body shapes and postures with the help of the SCAPE model [28]. Given a specified pose and precomputed garment shape examples as input, Xu et al. [29] presented a sensitivity-based method to construct a pose-dependent rigging solution that can synthesize cloth deformation in real time. A learning-based garment animation pipeline with deep neural networks [9] enables virtual fitting for characters with different body shapes and poses, producing realistic dynamics and wrinkle details. By learning a motion-invariant encoding network, Wang et al. [30] learned intrinsic properties that are independent of body motion, providing a semi-automatic solution for authoring garment animation. Recently, with the rise of geometric deep learning [31], data-driven cloth deformation technology has gained new opportunities. ACAP [32] enables large-scale mesh deformation representation with both accuracy and efficiency. Tan et al. [33] proposed a mesh-based autoencoder for localized deformation component analysis. Mesh variational autoencoders [34,35] provide a new tool for analyzing deforming 3D meshes and are widely used for tasks such as deformation transfer [36] and shape generation [37].
Fig. 1. Given a base-size sewing pattern and an instance of its corresponding physics-simulated garment, our efficient solution can transfer the garment from a standard body to mannequins with distinctive body shapes, preserving the original garment design. Each pair of garments shows the transfer result from the source (left) to the target (right) body.

2.3. Garment retargeting

The easiest way to retarget a garment from a source body to a target body is to simply apply PBS (middle in Fig. 2) or to perform direct deformation transfer using the deformation gradients between the source and target bodies (right in Fig. 2). However, these solutions usually fail to preserve the garment design and wrinkle details, making the result look like a person wearing the wrong size. For this reason, many researchers [38–40] have focused on how to automatically adjust garment patterns. Design-preserving garment transfer [4] can transfer garment models onto mannequins whose body shapes and proportions are obviously different, but this approach needs parameters set according to the garment type and body shape. Direct garment editing [41] enables users with no experience in garment design to mix existing garment patterns in an interactive 3D editor; automatically computed 2D sewing patterns that match the desired 3D form are then generated. In addition, image-based virtual try-on networks [42–45] have attracted the attention of many researchers because they allow garment recovery and transfer from 2D images. Wang [5] regarded garment pattern adjustment as a nonlinear optimization problem and directly minimized an objective function that evaluates the fitting quality. Although the system performs sewing pattern adjustment with efficiency and precision, it cannot modify garment patterns properly if the difference between the body shapes is significant.

Fig. 2. Basic fitting strategies. Left: simulated garment on the source body; middle: simulated garment on the target body; right: direct deformation transfer from the source to the target body using the deformation gradients between the source and target bodies.

3. Overview

3.1. Problem formulation

Because the garment style of different sewing patterns varies markedly, garments stitched onto the human body present folds or wrinkle details of various forms. In addition, garments also present an overall drape commonly caused by variations in body shape. Fig. 3 illustrates the cloth deformation of garment instances from the source to the target body. For the four garments shown in Fig. 3, due to the change in body shape, the abdomen and hip region of each garment undergoes significant deformation (encoded in hot colors), while the other parts largely retain their original shape. Our method starts from an assumption that cloth deformation is composed of two components: high-frequency details (e.g., folds and wrinkles), which vary with the type of garment, are called style-dependent deformation; low-frequency characteristics (e.g., the overall drape of the garment), which are shared by different kinds of garments, are called shape-dependent deformation.

Fig. 3. Visualizing cloth deformation from the source to the target body. Left: simulated garment on the source and the target body (denoted G^Src and G̃^Tar, respectively); right: front and back views of the garment examples. Per-vertex deformations are illustrated with hot/cold colors, representing large/small distance variations.

Following the formulation and notation proposed in DRAPE, we use deformation gradients [6,28,46] to represent deformations between garment meshes. This allows our model to decouple deformations due to body shape from deformations induced by garment style. Deformation gradients are linear transformations that align triangular faces between a source garment G^Src and its simulated counterpart on the target body, G̃^Tar, sharing the same topology.
Suppose that G^Src is a mesh with T triangles. The pair (G^Src, G̃^Tar) can be written as

\[
G^{Src} = \bigcup_{t=1}^{T} \left( \vec{x}_{t,1}, \vec{x}_{t,2}, \vec{x}_{t,3} \right), \qquad
\tilde{G}^{Tar} = \bigcup_{t=1}^{T} \left( \vec{y}_{t,1}, \vec{y}_{t,2}, \vec{y}_{t,3} \right),
\tag{1}
\]

where (\vec{x}_{t,1}, \vec{x}_{t,2}, \vec{x}_{t,3}) is the face of triangle t in G^Src, \vec{x}_{t,k} (k = 1, 2, 3) are the vertices of triangle t, and (\vec{y}_{t,1}, \vec{y}_{t,2}, \vec{y}_{t,3}) and \vec{y}_{t,k} are the corresponding face and vertices in G̃^Tar. Our goal is to solve the following equation:

\[
\left[ \Delta\vec{y}_{t,2}, \Delta\vec{y}_{t,3}, \Delta\vec{y}_{t,4} \right]
= Q_t \left[ \Delta\vec{x}_{t,2}, \Delta\vec{x}_{t,3}, \Delta\vec{x}_{t,4} \right],
\tag{2}
\]

where Q_t is a 3×3 linear transformation of triangle t and \Delta\vec{x}_{t,k} (k = 2, 3, 4) is

\[
\Delta\vec{x}_{t,k} = \vec{x}_{t,k} - \vec{x}_{t,1}, \; k = 2, 3, \qquad
\Delta\vec{x}_{t,4} = \frac{\left( \vec{x}_{t,2} - \vec{x}_{t,1} \right) \times \left( \vec{x}_{t,3} - \vec{x}_{t,1} \right)}
{\sqrt{\left| \left( \vec{x}_{t,2} - \vec{x}_{t,1} \right) \times \left( \vec{x}_{t,3} - \vec{x}_{t,1} \right) \right|}}.
\tag{3}
\]

Because Q_t is applied to edge vectors, it is translation-invariant, and each linear transformation encodes the change in orientation, scale, and skew of triangle t. The virtual edge \Delta\vec{x}_{t,4} introduced in [46] adds the directional information of the triangular face, making the problem well constrained.
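To make the representation concrete, the following is a minimal numpy sketch of Eqs. (1)–(3): it assembles the two edge vectors and the virtual edge of each triangle and solves Eq. (2) for Q_t. The function names and array layout are our own illustration, not the authors' code; degenerate (zero-area) triangles would need special handling in practice.

```python
import numpy as np

def edge_matrix(v1, v2, v3):
    """Columns are the two edge vectors of a triangle plus the scaled
    normal, i.e. the virtual edge of Eq. (3)."""
    e2, e3 = v2 - v1, v3 - v1
    n = np.cross(e2, e3)
    e4 = n / np.sqrt(np.linalg.norm(n))  # virtual edge [46]
    return np.column_stack([e2, e3, e4])

def deformation_gradient(src_tri, tgt_tri):
    """Per-triangle 3x3 transform Q_t solving Eq. (2): D_y = Q_t D_x."""
    return edge_matrix(*tgt_tri) @ np.linalg.inv(edge_matrix(*src_tri))

def cloth_deformation(verts_src, verts_tgt, faces):
    """Stack Q_t for all T triangles into a (T, 3, 3) array, i.e. the
    cloth deformation matrix C used later in Algorithm 1."""
    return np.stack([deformation_gradient(verts_src[f], verts_tgt[f])
                     for f in faces])
```

Given vertex arrays of shape (V, 3) and the shared (T, 3) face index array of G^Src and G̃^Tar, cloth_deformation returns the per-triangle transforms that the rest of the pipeline consumes.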
The key idea of our method is to learn a common deformation space that is independent of garment style but shared by different types of garments. To do so, we define Q_t as a combination of linear transformations, each corresponding to a different aspect of the model. We factor Q_t into a style-dependent deformation and a shape-dependent deformation:

\[
Q_t = W_t \cdot S_t,
\tag{4}
\]

where W_t is the style-dependent term, which is garment-type specific, and S_t is the shape-dependent term, which is shared by different garments. Our goal is to separate the shape-dependent deformation from the cloth deformation, so that W_t is the identity and, for a given mesh, we can use Q_t = S_t to generate a 3D draped garment that fits the target body but retains the original wrinkles. Since the deformation is represented as per-triangle transformations, triangles may separate after the cloth deformation is applied to G^Src. We solve for the vertex coordinates that best match the deformed triangles in a least-squares sense to ensure a consistent mesh:

\[
\underset{\vec{y}_1, \ldots, \vec{y}_V}{\arg\min} \;
\sum_{t=1}^{T} \sum_{k=2,3} \left\| W_t \cdot S_t \cdot \Delta\vec{x}_{t,k} - \Delta\vec{y}_{t,k} \right\|^2.
\tag{5}
\]
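Eq. (5) is a sparse linear least-squares problem: each triangle contributes the edge constraints \vec{y}_{t,k} - \vec{y}_{t,1} ≈ Q_t \Delta\vec{x}_{t,k} for k = 2, 3. The paper does not say how it solves this system, so the following scipy sketch, including the choice to pin one vertex, is our assumption.

```python
import numpy as np
from scipy.sparse import coo_matrix
from scipy.sparse.linalg import lsqr

def reconstruct_vertices(faces, verts_src, Q, anchor=0):
    """Least-squares solve of Eq. (5): find vertex positions whose edges
    best match the deformed edges Q_t @ (x_k - x_1)."""
    T, V = len(faces), len(verts_src)
    rows, cols, vals, rhs = [], [], [], []
    r = 0
    for t, (i1, i2, i3) in enumerate(faces):
        for ik in (i2, i3):
            rhs.append(Q[t] @ (verts_src[ik] - verts_src[i1]))  # deformed edge
            rows += [r, r]; cols += [ik, i1]; vals += [1.0, -1.0]
            r += 1
    # pin one vertex to remove the translational null space (our choice)
    rows.append(r); cols.append(anchor); vals.append(1.0)
    rhs.append(verts_src[anchor])
    A = coo_matrix((vals, (rows, cols)), shape=(r + 1, V)).tocsr()
    b = np.asarray(rhs)                                   # (r+1, 3)
    # solve the same sparse system once per coordinate axis
    return np.column_stack([lsqr(A, b[:, c])[0] for c in range(3)])
```

Because Eq. (5) only constrains edge differences, any global translation is unobservable; pinning a single vertex (or subtracting centroids) makes the solution unique.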
3.2. Technical framework

Fig. 4 shows our technical architecture; each part is explained as follows:

(a) Data generation: We select a banana-shaped body as the source body and a pear-shaped body as the target body to perform garment transfer (see source and target body in Fig. 4). Taking the source body as the base size, we make various garment patterns following the industrial garment-making process, and we use a physical simulator to stitch the garment patterns onto the source and target bodies, obtaining draped garment instances for the source and target bodies (see source and target garment in Fig. 4). Fig. 6 shows our constructed garment dataset. Then, we use deformation gradients to represent the cloth deformation from G^Src to G̃^Tar. Since different garments do not share the same topology, we propose embedding the cloth deformation into the body to obtain a dimensionally consistent deformation representation, which is achieved by matching the shortest distance between garment triangles and body triangles. The embedded cloth deformation is then fed into the garment transfer network for learning. For more details, please refer to Section 4.

(b) Garment transfer network: We separate the shape-dependent deformation from the embedded cloth deformation using an encoder-decoder network. Given the embedded cloth deformation as input, the encoder generates a compressed representation of the cloth deformation. In this process, we expect the network to encode shape-dependent deformation only. Then, the decoder reconstructs the cloth deformation from the shared deformation space to obtain shape-related cloth deformation. For details about how this is achieved, please refer to Section 5. Once the shape-related cloth deformation is learned, we can use it to deform a new source garment with arbitrary topology. The transferred garment both fits the target body shape and preserves the source design. In Section 6, we qualitatively and quantitatively evaluate our method.

Fig. 4. Given a source body and its corresponding simulated garment, we present a method to transfer the source garment to a target body with a distinctive body shape, preserving the source design. To handle garments with different topologies, we propose embedding the cloth deformation into the body. At the heart of our method is a decomposition of a style-dependent term and a shape-dependent term from the embedded cloth deformation using an encoder-decoder network. By learning a shared deformation space, we eliminate the style-dependent deformation so that the reconstructed cloth deformation is shape-related only. At the garment transfer phase, we use the learned shape-related cloth deformation to generate a garment that fits the target body but still preserves the source design. Once the shape-related deformation is learned, our method can transfer new garments with arbitrary topologies.

4. Deformation gradient embedding

In Section 3.1 we used deformation gradients to represent the deformation from G^Src to G̃^Tar. Let the number of garment patterns in the training set be N; then, the training set G^Sample can be written as

\[
G^{Sample} = \bigcup_{i=1}^{N} \left( G^{(i)}_{Src}, \tilde{G}^{(i)}_{Tar} \right),
\tag{6}
\]

where (G^{(i)}_{Src}, G̃^{(i)}_{Tar}) is the ith group of garment instances sharing the same topology. Suppose that (G^{(i)}_{Src}, G̃^{(i)}_{Tar}) has T_i triangles; the corresponding cloth deformation is a 3×3×T_i-dimensional matrix, which is not suitable for learning tasks because the network usually takes fixed-size data as input. One possible solution is to use aligned 3D garment meshes to obtain a dimensionally consistent data space, but this approach limits the versatility of the model because most garment meshes are not homogeneous in actual application scenarios. To handle garments with different topologies, we propose embedding the deformation gradients into the body. This is achieved by matching the shortest distance between body triangles and garment triangles. Let |M| be the number of triangles of the body; then, for a given garment G, we build two maps of triangle indices, from body to cloth and from cloth to body, respectively:

\[
\mathrm{BodyToCloth} = \bigcup_{\alpha=1}^{|M|} \arg\min_{\beta} \left| \overrightarrow{P_{M,\alpha} P_{G,\beta}} \right|, \qquad
\mathrm{ClothToBody} = \bigcup_{\beta=1}^{T} \arg\min_{\alpha} \left| \overrightarrow{P_{G,\beta} P_{M,\alpha}} \right|,
\tag{7}
\]

where BodyToCloth records, for each body triangle, the index of the nearest cloth triangle for the garment instance group (G^Src, G̃^Tar) and is used for building the embedded cloth deformation; ClothToBody records, for each cloth triangle, the index of the nearest body triangle and is used for building the recovered cloth deformation; P_{M,α} is the centroid of triangle α in M, and P_{G,β} is the centroid of triangle β in G.

Algorithm 1 lists the steps of deformation gradient embedding. For each garment type in G^Sample, we perform deformation gradient embedding.

Algorithm 1. Deformation gradient embedding.
    Input: source body M, a garment instance group (G^Src, G̃^Tar).
    Output: embedded cloth deformation, denoted B.
    Compute the cloth deformation matrix C ∈ R^{T×3×3} from G^Src to G̃^Tar;
    Initialize an |M|×3×3-dimensional matrix B as the embedded cloth deformation;
    Compute BodyToCloth using M and G^Src;
    for each body_triangle in M do
        Look up (body_triangle, cloth_triangle) in BodyToCloth;
        Update B[body_triangle] with C[cloth_triangle];
    end

The embedded cloth deformation B is an |M|×3×3-dimensional matrix over all body triangles, which is then flattened into a single column vector γ_i ∈ R^{|M|·3·3×1}. Finally, all γ_i (i = 1, …, N) are collected into a matrix Γ = [γ_1, …, γ_N] as the network input.
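Eq. (7) and Algorithm 1 reduce to nearest-neighbor queries between triangle centroids. The authors implemented this step in C++; the KD-tree realization below is our own sketch of the same idea, and it also covers the reverse lookup used by the inverse embedding (Algorithm 2 in Section 5.1).

```python
import numpy as np
from scipy.spatial import cKDTree

def centroids(verts, faces):
    """Triangle centroids P of a mesh, one per face: (num_faces, 3)."""
    return verts[faces].mean(axis=1)

def embed_deformation(body_verts, body_faces, cloth_verts, cloth_faces, C):
    """Algorithm 1: copy each cloth triangle's gradient onto its
    nearest body triangle, giving the embedded deformation B."""
    P_body = centroids(body_verts, body_faces)
    P_cloth = centroids(cloth_verts, cloth_faces)
    _, body_to_cloth = cKDTree(P_cloth).query(P_body)   # Eq. (7), BodyToCloth
    B = C[body_to_cloth]                                # (|M|, 3, 3)
    return B, B.reshape(-1)                             # gamma_i = flattened column

def inverse_embed(body_verts, body_faces, cloth_verts, cloth_faces, B_tilde):
    """Algorithm 2: recover a garment-specific deformation C~ from
    the learned body-space deformation B~."""
    P_body = centroids(body_verts, body_faces)
    P_cloth = centroids(cloth_verts, cloth_faces)
    _, cloth_to_body = cKDTree(P_body).query(P_cloth)   # Eq. (7), ClothToBody
    return B_tilde[cloth_to_body]                       # (T, 3, 3)
```

Because the recovered C~ indexes the body mesh, inverse_embed works for any garment topology, which is what lets the method transfer unseen garments after training.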
5. Shape feature encoding

5.1. Feature representation

In Section 3.1 we factored the cloth deformation into a style-dependent term W_t (considered high-frequency deformation) and a shape-dependent term S_t (considered low-frequency deformation). The key idea of our model is that high-frequency deformations are garment-type specific, while low-frequency deformations are shared by different garments, and we expect the reconstructed cloth deformation to contain low-frequency deformations only. We implicitly eliminate W_t from Q_t = W_t · S_t so that Q_t = S_t. This is achieved by learning a shared deformation space with an encoder-decoder network. The network aims to learn a function f_{W,b}(Γ) = Γ̃ ≈ Γ so that Γ̃ constantly approximates Γ, where Γ is the original cloth deformation and Γ̃ is the shape-related cloth deformation. The task of the encoder is to learn a compressed representation of the cloth deformation, building a shared feature space of body shapes. The decoder is then trained to replicate the original cloth deformation from the latent space. We impose constraints on the size of the latent space so that the autoencoder cannot reconstruct all of the deformation; more specifically, we expect it to lose the high-frequency details. The loss function of our network is a simple mean-squared error (MSE) term:

\[
\left\| \Gamma - \tilde{\Gamma} \right\|_F^2,
\tag{8}
\]

where Γ̃ is the learned shape-related cloth deformation. As mentioned above, by learning f_{W,b}(Γ) = Γ̃ ≈ Γ, our garment transfer network tries to reconstruct the cloth deformation with low-frequency deformations only. The reconstructed cloth deformation Γ̃ is an |M|·3·3×N-dimensional matrix. At the garment transfer phase, each γ̃_i (i = 1, …, N) in Γ̃ is reshaped back to an |M|×3×3-dimensional matrix B̃_i, which is then applied to G^Src to generate a new garment G^Tar that fits the target body but retains the original high-frequency details. This step transmits the embedded cloth deformation back to a garment-specific space. Algorithm 2 describes the inverse embedding process.

Algorithm 2. Inverse embedding.
    Input: source body M, learned cloth deformation B̃ ∈ R^{|M|×3×3}, an arbitrary source garment with T triangles.
    Output: deformed garment mesh, denoted G^Tar.
    Initialize a T×3×3-dimensional matrix C̃ as the recovered cloth deformation;
    Compute ClothToBody using M and G^Src;
    for each cloth_triangle in T do
        Look up (cloth_triangle, body_triangle) in ClothToBody;
        Update C̃[cloth_triangle] with B̃[body_triangle];
    end
    Apply the linear transformations in C̃ and solve for the vertex coordinates with Eq. (5).
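The paper specifies this network only loosely: fully connected layers with sigmoid activations, a 30-dimensional latent space, and scaled conjugate gradient training for about 1000 epochs (see Section 6.1). The PyTorch sketch below follows that outline, but the hidden width and depth are our guesses, and Adam stands in for scaled conjugate gradient, which is not a stock PyTorch optimizer.

```python
import torch
import torch.nn as nn

class DeformationAE(nn.Module):
    """Undercomplete autoencoder over embedded cloth deformations (Section 5.1).
    The narrow latent layer is what forces high-frequency (style) detail to be lost."""
    def __init__(self, in_dim, latent_dim=30, hidden=512):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, hidden), nn.Sigmoid(),
                                     nn.Linear(hidden, latent_dim), nn.Sigmoid())
        self.decoder = nn.Sequential(nn.Linear(latent_dim, hidden), nn.Sigmoid(),
                                     nn.Linear(hidden, in_dim))

    def forward(self, gamma):
        return self.decoder(self.encoder(gamma))

def train(Gamma, epochs=1000, lr=1e-3):
    """Gamma: (N, |M|*9) tensor, one flattened embedded deformation per row
    (the paper stores garments as columns; rows are the torch convention)."""
    model = DeformationAE(in_dim=Gamma.shape[1])
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        opt.zero_grad()
        loss = nn.functional.mse_loss(model(Gamma), Gamma)  # Eq. (8)
        loss.backward()
        opt.step()
    return model
```

At transfer time, each reconstructed row of model(Gamma) is reshaped back to (|M|, 3, 3) and pulled onto the garment with the inverse embedding sketched earlier.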
5.2. Extension to multiple target bodies

Although our workflow supports garment transfer for different body shapes, the shared deformation space has to be retrained per body shape. When performing garment transfer, it is often desirable to control the type of body shape to be generated. To handle the situation in which there are multiple target bodies, we impose conditional constraints on the input data. Since the shape types are discrete by nature, we represent them using one-hot labels. More specifically, shape conditions are incorporated as additional input alongside Γ. More details are provided in Section 6.4.

6. Evaluation

In Section 4, we proposed embedding deformation gradients into the body to represent the per-triangle deformation from G^Src to G̃^Tar using M. Then, in Section 5, by decoupling shape-dependent and style-dependent deformation from the embedded cloth deformation, our method learns a shared deformation space that is invariant to garment style. We now qualitatively and quantitatively evaluate the effectiveness of our method. We performed the experiments on a consumer laptop with an Intel Core i7-8750H 2.2 GHz processor, 16 GB of RAM, and an NVIDIA GeForce RTX 2070 with Max-Q Design graphics card.

6.1. Experimental settings

We first provide the details of how the data are generated and of the network structure.

Data generation. We used the DAZ Studio software [47] to generate A-pose mannequins of different body shapes. We built a banana-shaped body ((a) in Fig. 5) and a pear-shaped body ((b) in Fig. 5) as the source and target body, respectively. Our garment dataset consists of 15 basic-style garments (left in Fig. 6) for training and 5 complex-style garments (right in Fig. 6) for evaluation. We selected different types of sewing patterns to make the training examples, aiming to cover the kinds of clothing commonly seen in daily life. All garment patterns are designed and simulated using the Marvelous Designer software [12]. The garment meshes in the dataset do not share the same topology (e.g., the number of triangles is 10 628 in a sweater and 28 044 in a shift dress).

Fig. 5. We construct (a) a banana-shaped body as source and (b) a pear-shaped body as target for evaluation. Each body is shown from a front view and a side view. Compared with the source body, the local shape (abdomen and hip) of the target body has changed significantly.

Fig. 6. Composition of the garment dataset. Left: garments used for training; right: garments used for testing.

Network architecture. We implemented the deformation gradient embedding in C++, and the cloth deformation of each garment was aligned to an |M|·9-dimensional vector (|M| = 37 744 in our experiment). Our shape feature encoding network is composed of linear layers with sigmoid activation functions. The encoder takes an |M|·9×N-dimensional matrix as input and translates it to a 30×N-dimensional latent space representing the shape feature descriptor. Then, the decoder tries to replicate the original cloth deformation from the shared deformation space. We use scaled conjugate gradient descent for the network back-propagation. It takes about 35 minutes for our network to finish training with a setting of 1000 epochs.

6.2. Qualitative evaluation

Each column in Γ̃ represents the reconstructed cloth deformation of a specific garment. By simply applying the reconstructed shape-related cloth deformation to its corresponding garment mesh using Algorithm 2, our workflow can generate garments that fit the target body but preserve the source design. Fig. 7 demonstrates the garment transfer results on the training set. Our method successfully decouples style-dependent and shape-dependent deformation.

Fig. 7. Garment transfer on the training set. (a) simulated garment on the source body (G^Src); (b) our transfer result from the source to the target body (G^Tar); (c) simulated garment on the target body (G̃^Tar); (d) transferred garment using the decomposed high-frequency deformation term. In contrast to G^Tar, G̃^Tar reveals the outline of the target body while losing the original garment design. Our transferred results G^Tar not only have the somatotype characteristics of the target body but also preserve the source design.

One possible application of our garment transfer workflow is virtual fitting. As mentioned above, basic-style garments are used for learning the shape-related cloth deformation. Once the training process for a given body shape is finished, we can apply the learned cloth deformation to a complex-style garment to perform garment transfer. Fig. 8 shows the garment transfer results on the testing set; from left to right: Shift Dress, Formal Dress, Wrap Dress, Conjoined Shorts, T-shirt.

Fig. 8. Garment transfer on the testing set. (a) simulated garment on the source body (G^Src), which showcases very rich wrinkling patterns; (b) our transfer result from the source to the target body (G^Tar); (c) simulated garment on the target body (G̃^Tar). With our garment transfer workflow, all wrinkles on G^Src are correctly transferred to the target body shape without noticeable artifacts.

We now evaluate our shape feature encoding network. We can deform a specific source garment ((a) in Fig. 9) onto the target body ((b) in Fig. 9) using its corresponding cloth deformation, and we can also deform garments of other designs ((c)-(d) in Fig. 9). However, since the source designs of different garments vary markedly, it is difficult to obtain the desired appearance by directly applying the cloth deformation of one garment to other garments ((c) in Fig. 9). Our shape feature encoding network learns a shared deformation space from the cloth deformation, which enables garment transfer among garments of different designs ((d) in Fig. 9).

Fig. 9. Validation of the shape feature encoding network. (a) simulated garment on the source body; (b) transferred garment on the target body; (c) garment transfer before shape feature encoding; (d) garment transfer after shape feature encoding. Details in (c) and (d) are enlarged to show the difference.

We also invited a professional fashion design studio to manually transfer a garment from the source to the target body. Taking the wrap dress as reference, the pattern grader took more than ten hours to make 2D graded sewing patterns and restore the wrinkle details of the 3D garment on the target body (right in Fig. 10). We asked the pattern grader to do this because customers usually want the transferred garment to keep exactly the same design as the source garment. According to the survey, most of the time was spent handling the wrinkle elements, because the designer needed to repeatedly adjust the sewing pattern until the draped garment matched the desired shape. Comparatively, it took less than ten seconds for our garment transfer workflow to generate the draped garment (middle in Fig. 10). With the help of our garment transfer workflow, customers can quickly preview the fitting effects before real garments are designed and manufactured, saving time and design cost.

Fig. 10. Manual vs. automatic. Left: simulated garment on the source body; middle: our transferred result; right: manually graded garment produced by a garment designer.
6.3. Quantitative evaluation

To quantitatively measure the difference in wrinkle details before and after garment transfer, we computed the mean discrete curvature between the source garment and both our transferred garment G^Tar and the simulated garment G̃^Tar on the target body. The mean discrete curvature measures the variation between the source and target mesh: the lower it is, the less the garment details have changed. The mean discrete curvature between G^Src and G^Tar is

\[
\mathrm{MDC}\left( G^{Src}, G^{Tar} \right)
= \frac{1}{T} \sum_{s=1}^{T} \left| r_{G^{Src}}(s) - r_{G^{Tar}}(s) \right|,
\tag{9}
\]

where r_{G^{Src}}(s) and r_{G^{Tar}}(s) are the discrete curvatures of the sth vertex in G^Src and G^Tar, respectively. r_G(s) can be expressed as

\[
r_G(s) = \min_{i,j} \; \vec{n}_i \cdot \vec{n}_j, \qquad i, j \in p(s), \; 1 \le i < j \le |p(s)|,
\tag{10}
\]

where p(s) is the set of triangles adjacent to the sth vertex, \vec{n}_i and \vec{n}_j are the unit normals of the ith and jth triangles in p(s), and |p(s)| is the number of triangles in p(s).
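A direct numpy reading of Eqs. (9) and (10) might look like the following; the adjacency construction is ours, and a production version would precompute p(s) once per topology rather than rebuild it per call.

```python
import numpy as np
from collections import defaultdict

def face_normals(verts, faces):
    """Unit normal of every triangle."""
    e1 = verts[faces[:, 1]] - verts[faces[:, 0]]
    e2 = verts[faces[:, 2]] - verts[faces[:, 0]]
    n = np.cross(e1, e2)
    return n / np.linalg.norm(n, axis=1, keepdims=True)

def discrete_curvature(verts, faces):
    """Eq. (10): per vertex, the minimum dot product between the
    normals of its adjacent faces."""
    normals = face_normals(verts, faces)
    p = defaultdict(list)               # p(s): faces adjacent to vertex s
    for f, tri in enumerate(faces):
        for v in tri:
            p[v].append(f)
    r = np.ones(len(verts))             # flat by default (all normals agree)
    for s, adj in p.items():
        dots = normals[adj] @ normals[adj].T
        iu = np.triu_indices(len(adj), k=1)   # index pairs with i < j
        if iu[0].size:
            r[s] = dots[iu].min()
    return r

def mean_discrete_curvature(verts_a, verts_b, faces):
    """Eq. (9): mean absolute curvature difference between two meshes
    that share the same topology."""
    ra = discrete_curvature(verts_a, faces)
    rb = discrete_curvature(verts_b, faces)
    return np.abs(ra - rb).mean()
```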
Since our ultimate goal is to make the transferred garment look similar to the source, we also ran a user study on the visual similarity between the source and the transferred / simulated garments. Eighteen subjects were asked to score the degree of visual similarity (0–9) between the source and the deformed / simulated garment. Taking G^Src as reference, 0 means the wrinkle details on G̃^Tar (or G^Tar) look completely different from the reference, and 9 means the wrinkle details on G̃^Tar (or G^Tar) look exactly the same as the reference.

Table 1 summarizes the statistics of the garment examples in the test set. The mean discrete curvature of the simulated garments on the target body is much greater than ours, and our garment transfer finishes within a short time period even on a consumer laptop, which indicates that our workflow deforms garments onto the target body efficiently while preserving the source design. In the visual similarity study, Score_Trans for each garment is much higher than Score_Sim, which indicates that G^Tar looks more similar to G^Src.

Table 1. Statistics of the examples. #Vert: number of vertices in the example. #Tri: number of triangles in the example. Score_Trans: score of visual similarity between G^Src and G^Tar. MDC_Trans: mean discrete curvature between G^Src and G^Tar. Score_Sim: score of visual similarity between G^Src and G̃^Tar. MDC_Sim: mean discrete curvature between G^Src and G̃^Tar. T: total time for deforming G^Src to G^Tar.

Example Name      #Vert    #Tri     Score_Trans↑  MDC_Trans↓  Score_Sim↑  MDC_Sim↓  T (s)
Shift Dress       14 149   28 004   6.0           0.0203      2.6         0.0541    0.764
Formal Dress      23 262   46 081   6.3           0.0223      4.0         0.0640    1.301
Wrap Dress        78 169   156 054  6.8           0.0108      2.7         0.1156    5.928
Conjoined Shorts  13 289   26 288   6.2           0.0224      1.4         0.1939    0.711
T-shirt           41 970   83 814   6.5           0.0119      2.3         0.0797    2.822

Previous works [4,5] formulated garment transfer as a constrained optimization problem and solved it through iterative quadratic minimization, which takes hundreds of seconds. Comparatively, our learning-based garment transfer workflow takes only seconds even for a complex-style garment with 156K faces (the Wrap Dress in our experiment). Besides, Brouet et al. [4] is sensitive to the setting of the tight-region tolerance, which varies with the input body and garment model; once our shared deformation space is learned, we can use it to generate transferred garments that fit the target body and preserve the source design without additional modeling setup. Wang [5] is developed for human bodies with limited differences from the source body and struggles when the target body is very different. Our workflow enables design-preserving garment transfer even for mannequins with distinctive body shapes.

6.4. Multiple target shapes

Given a sewing pattern and its corresponding simulated garment on a source body, our workflow can perform garment transfer from the source body to mannequins with distinctive body shapes. The experimental results show that our pipeline works well in the single-body-shape case. We now evaluate our method on bodies with more shape variations. The bodies are represented using SMPL [48], a generative model that factors the body into shape (denoted β) and pose (denoted θ) parameters. We obtained the A-pose θ from the BMLmovi dataset [49,50]. Then, we sampled β_{-1}, …, β_4 from the range [-4, 1] to generate bodies with 6 different shape variations (denoted M(θ, β_{-1}), …, M(θ, β_4)). Shape conditions are encoded as one-hot labels from 000000 to 100000. The generated body meshes share the same topology (|M| = 13 776). We take M(θ, β_{-1}) as the source body and the others as targets to perform garment transfer. We trained the network using the method proposed in Section 5.2. Since the deformed garment reflects only the general outline of the target body, it is difficult to guarantee that the transferred garment is always collision-free. To eliminate penetration between the garment mesh and the body surface, we iteratively update the vertex positions until the garment vertices lie completely outside the target body with the Marvelous Designer software [12]. Fig. 11 demonstrates the fitting effect from M(θ, β_{-1}) to M(θ, β_0), …, M(θ, β_4).

Fig. 11. Retargeting a source garment to mannequins with different body shapes. The blue dress is the simulated garment on the source body (M(θ, β_{-1})), and the yellow dresses are transferred garments on different target bodies (from left to right: M(θ, β_0), …, M(θ, β_4)). Even if the target body is very different from the source body, our solution can still preserve the source design and wrinkle details. (For interpretation of the references to colour in this figure legend, the reader is referred to the web version of this article.)
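Sections 5.2 and 6.4 describe the conditioning only as appending a one-hot shape label to the network input. A minimal sketch of that preprocessing step follows; the function name and the reuse alongside the earlier hypothetical DeformationAE are our own illustration.

```python
import torch

def one_hot_condition(gamma, shape_id, num_shapes=6):
    """Section 5.2: append a one-hot body-shape label to the flattened
    embedded deformation before feeding it to the encoder."""
    label = torch.zeros(num_shapes, dtype=gamma.dtype)
    label[shape_id] = 1.0
    return torch.cat([gamma, label])

# Usage: gamma has length |M|*9, so the conditioned encoder input has
# length |M|*9 + 6, while the decoder still reconstructs only the |M|*9
# deformation vector.
gamma = torch.randn(13776 * 9)          # |M| = 13 776 in the SMPL experiment
x = one_hot_condition(gamma, shape_id=3)
```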
7. Limitation and future work

In this paper, we presented an automatic, design-preserving, efficient garment transfer workflow that enables unaligned garment retargeting between characters with significant differences in body shape. This is achieved by learning a shared deformation space from the embedded cloth deformation. Unlike existing methods, our method can generate design-preserving garments that showcase very rich wrinkle details with both accuracy and efficiency. However, our system has several limitations: (i) We embed the cloth deformation into the body by matching triangles with the shortest distance, which depends on the mesh quality. If the garment or body has a low resolution, the deformation of some triangles is lost. Even though we use very high-resolution garment and body meshes, the garment transfer can still be done in seconds on a consumer laptop. (ii) Our goal is to generate design-preserving 3D draped garments for distinctive body shapes before the garments are made; we do not consider the inverse pattern design problem. How to accurately translate these draped garments into graded sewing patterns needs to be studied further.

CRediT authorship contribution statement

Min Shi: Conceptualization, Methodology. Yukun Wei: Software, Validation, Writing - original draft. Lan Chen: Data curation, Validation. Dengming Zhu: Writing - review & editing, Resources. Tianlu Mao: Visualization, Investigation. Zhaoqi Wang: Supervision, Funding acquisition.

Declaration of Competing Interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Acknowledgment

This work was supported by the National Natural Science Foundation of China [Grant number 61972379].

References

[1] J. Liang, M.C. Lin, Machine learning for digital try-on: challenges and progress, Comput. Vis. Media (2020) 1–9, https://doi.org/10.1007/s41095-020-0189-1.
[2] M. Wang, X.-Q. Lyu, Y.-J. Li, F.-L. Zhang, VR content creation and exploration with deep learning: a survey, Comput. Vis. Media 6 (1) (2020) 3–28, https://doi.org/10.1007/s41095-020-0162-z.
[3] M.Z. Lifkooee, C. Liu, Y. Liang, Y. Zhu, X. Li, Real-time avatar pose transfer and motion generation using locally encoded Laplacian offsets, J. Comput. Sci. Technol. 34 (2) (2019) 256–271, https://doi.org/10.1007/s11390-019-1909-9.
[4] R. Brouet, A. Sheffer, L. Boissieux, M.-P. Cani, Design preserving garment transfer, ACM Trans. Graph. 31 (4) (2012) 36:1–36:11, https://doi.org/10.1145/2185520.2185532.
[5] H. Wang, Rule-free sewing pattern adjustment with precision and efficiency, ACM Trans. Graph. 37 (4) (2018) 53:1–53:13, https://doi.org/10.1145/3197517.3201320.
[6] P. Guan, L. Reiss, D.A. Hirshberg, A. Weiss, M.J. Black, DRAPE: dressing any person, ACM Trans. Graph. 31 (4) (2012) 35:1–35:10, https://doi.org/10.1145/2185520.2185531.
[7] G. Pons-Moll, S. Pujades, S. Hu, M.J. Black, ClothCap: seamless 4D clothing capture and retargeting, ACM Trans. Graph. 36 (4) (2017) 73:1–73:15, https://doi.org/10.1145/3072959.3073711.
[8] Z. Lahner, D. Cremers, T. Tung, DeepWrinkles: accurate and realistic clothing modeling, in: Proceedings of the European Conference on Computer Vision (ECCV), Munich, Germany, 2018, pp. 667–684.
[9] I. Santesteban, M.A. Otaduy, D. Casas, Learning-based animation of clothing for virtual try-on, Comput. Graph. Forum 38 (2) (2019) 355–366, https://doi.org/10.1111/cgf.13643.
[10] Q. Ma, J. Yang, A. Ranjan, S. Pujades, G. Pons-Moll, S. Tang, M.J. Black, Learning to dress 3D people in generative clothing, in: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2020, pp. 6469–6478.
[11] C. Patel, Z. Liao, G. Pons-Moll, TailorNet: predicting clothing in 3D as a function of human pose, shape and garment style, in: IEEE Conference on Computer Vision and Pattern Recognition (CVPR), IEEE, 2020, pp. 7365–7375.
[12] Marvelous Designer software, 2020, https://www.marvelousdesigner.com/.
[13] Optitex fashion design software, 2020, https://optitex.com/.
[14] K.-J. Choi, H.-S. Ko, Stable but responsive cloth, ACM Trans. Graph. 21 (3) (2002) 604–611, https://doi.org/10.1145/566654.566624.
[15] T. Liu, A.W. Bargteil, J.F. O'Brien, L. Kavan, Fast simulation of mass-spring systems, ACM Trans. Graph. 32 (6) (2013) 214:1–214:7, https://doi.org/10.1145/2508363.2508406.
[16] J.M. Kaldor, D.L. James, S. Marschner, Simulating knitted cloth at the yarn level, ACM Trans. Graph. 27 (3) (2008) 65:1–65:9, https://doi.org/10.1145/1360612.1360664.
[17] G. Cirio, J. Lopez-Moreno, D. Miraut, M.A. Otaduy, Yarn-level simulation of woven cloth, ACM Trans. Graph. 33 (6) (2014) 207:1–207:11, https://doi.org/10.1145/2661229.2661279.
[18] R. Narain, A. Samii, J.F. O'Brien, Adaptive anisotropic remeshing for cloth simulation, ACM Trans. Graph. 31 (6) (2012) 152:1–152:10, https://doi.org/10.1145/2366145.2366171.
[19] J. Li, G. Daviet, R. Narain, F. Bertails-Descoubes, M. Overby, G.E. Brown, L. Boissieux, An implicit frictional contact solver for adaptive cloth simulation, ACM Trans. Graph. 37 (4) (2018) 52:1–52:15, https://doi.org/10.1145/3197517.3201308.
[20] M. Macklin, M. Müller, N. Chentanez, T.-Y. Kim, Unified particle physics for real-time applications, ACM Trans. Graph. 33 (4) (2014) 153:1–153:12, https://doi.org/10.1145/2601097.2601152.
[21] M. Tang, T. Wang, Z. Liu, R. Tong, D. Manocha, I-Cloth: incremental collision handling for GPU-based interactive cloth simulation, ACM Trans. Graph. 37 (6) (2018) 204:1–204:10, https://doi.org/10.1145/3272127.3275005.
[22] L. Jiang, J. Ye, L. Sun, J. Li, Transferring and fitting fixed-sized garments onto bodies of various dimensions and postures, Computer-Aided Des. 106 (2019) 30–42, https://doi.org/10.1016/j.cad.2018.08.002.
[23] N.J. Weidner, K. Piddington, D.I.W. Levin, S. Sueda, Eulerian-on-Lagrangian cloth simulation, ACM Trans. Graph. 37 (4) (2018) 50:1–50:11, https://doi.org/10.1145/3197517.3201281.
[24] Y.R. Fei, C. Batty, E. Grinspun, C. Zheng, A multi-scale model for simulating liquid-fabric interactions, ACM Trans. Graph. 37 (4) (2018) 51:1–51:16, https://doi.org/10.1145/3197517.3201392.
[25] F. Cordier, N. Magnenat-Thalmann, A data-driven approach for real-time clothes simulation, in: 12th Pacific Conference on Computer Graphics and Applications (PG 2004), Seoul, Korea (South), 2004, pp. 257–266.
[26] T.-Y. Kim, E. Vendrovsky, DrivenShape: a data-driven approach for shape deformation, in: Proceedings of the 2008 ACM SIGGRAPH/Eurographics Symposium on Computer Animation, Dublin, Ireland, 2008, pp. 49–55.
[27] Z. Zhou, B. Shu, S. Zhuo, X. Deng, P. Tan, S. Lin, Image-based clothes animation for virtual fitting, in: SIGGRAPH Asia 2012 Technical Briefs (SA '12), Association for Computing Machinery, New York, NY, USA, 2012, pp. 33:1–33:4, https://doi.org/10.1145/2407746.2407779.
[28] D. Anguelov, P. Srinivasan, D. Koller, S. Thrun, J. Rodgers, J. Davis, SCAPE: shape completion and animation of people, ACM Trans. Graph. 24 (3) (2005) 408–416, https://doi.org/10.1145/1073204.1073207.
[29] W. Xu, N. Umetani, Q. Chao, J. Mao, X. Jin, X. Tong, Sensitivity-optimized rigging for example-based real-time clothing synthesis, ACM Trans. Graph. 33 (4) (2014) 107:1–107:11, https://doi.org/10.1145/2601097.2601136.
[30] T.Y. Wang, T. Shao, K. Fu, N.J. Mitra, Learning an intrinsic garment space for interactive authoring of garment animation, ACM Trans. Graph. 38 (6) (2019) 220:1–220:12, https://doi.org/10.1145/3355089.3356512.
[31] Y. Xiao, Y. Lai, F. Zhang, C. Li, L. Gao, A survey on deep geometry learning: from a representation perspective, Comput. Vis. Media 6 (2) (2020) 113–133, https://doi.org/10.1007/s41095-020-0174-8.
[32] L. Gao, Y.K. Lai, J. Yang, L.X. Zhang, S. Xia, L. Kobbelt, Sparse data driven mesh deformation, IEEE Trans. Vis. Comput. Graph. 27 (3) (2021) 2085–2100, https://doi.org/10.1109/TVCG.2019.2941200.
[33] Q. Tan, L. Gao, Y. Lai, J. Yang, S. Xia, Mesh-based autoencoders for localized deformation component analysis, in: AAAI Conference on Artificial Intelligence, AAAI Press, New Orleans, Louisiana, USA, 2018, pp. 2452–2459.
[34] Q. Tan, L. Gao, Y. Lai, S. Xia, Variational autoencoders for deforming 3D mesh models, in: 2018 IEEE Conference on Computer Vision and Pattern Recognition, IEEE Computer Society, Salt Lake City, Utah, USA, 2018, pp. 5841–5850.
[35] Y. Yuan, Y. Lai, J. Yang, Q. Duan, H. Fu, L. Gao, Mesh variational autoencoders with edge contraction pooling, in: 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition, IEEE, 2020, pp. 1105–1112.
[36] L. Gao, J. Yang, Y.-L. Qiao, Y.-K. Lai, P.L. Rosin, W. Xu, S. Xia, Automatic unpaired shape deformation transfer, ACM Trans. Graph. 37 (6) (2018) 237:1–237:15, https://doi.org/10.1145/3272127.3275028.
[37] L. Gao, J. Yang, T. Wu, Y.-J. Yuan, H. Fu, Y.-K. Lai, H. Zhang, SDM-NET: deep generative network for structured deformable mesh, ACM Trans. Graph. 38 (6) (2019) 243:1–243:15, https://doi.org/10.1145/3355089.3356488.
[38] C.C. Wang, Y. Wang, M.M. Yuen, Design automation for customized apparel products, Computer-Aided Des. 37 (7) (2005) 675–691, https://doi.org/10.1016/j.cad.2004.08.007.
[39] N. Umetani, D.M. Kaufman, T. Igarashi, E. Grinspun, Sensitive couture for interactive garment modeling and editing, ACM Trans. Graph. 30 (4) (2011) 90:1–90:12, https://doi.org/10.1145/2010324.1964985.
[40] Y. Meng, C.C. Wang, X. Jin, Flexible shape control for automatic resizing of apparel products, Computer-Aided Des. 44 (1) (2012) 68–76, https://doi.org/10.1016/j.cad.2010.11.008.
[41] A. Bartle, A. Sheffer, V.G. Kim, D.M. Kaufman, N. Vining, F. Berthouzoz, Physics-driven pattern adjustment for direct 3D garment editing, ACM Trans. Graph. 35 (4) (2016) 50:1–50:11, https://doi.org/10.1145/2897824.2925896.
[42] T.Y. Wang, D. Ceylan, J. Popović, N.J. Mitra, Learning a shared shape space for multimodal garment design, ACM Trans. Graph. 37 (6) (2018) 203:1–203:13, https://doi.org/10.1145/3272127.3275074.
[43] X. Han, Z. Wu, Z. Wu, R. Yu, L.S. Davis, VITON: an image-based virtual try-on network, in: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Salt Lake City, Utah, USA, 2018, pp. 7543–7552.
[44] S. Yang, Z. Pan, T. Amert, K. Wang, L. Yu, T. Berg, M.C. Lin, Physics-inspired garment recovery from a single-view image, ACM Trans. Graph. 37 (5) (2018) 170:1–170:14, https://doi.org/10.1145/3026479.
[45] T. Kikuchi, Y. Endo, Y. Kanamori, T. Hashimoto, J. Mitani, Transferring pose and augmenting background for deep human-image parsing and its applications, Comput. Vis. Media 4 (1) (2018) 43–54, https://doi.org/10.1007/s41095-017-0098-0.
[46] R.W. Sumner, J. Popović, Deformation transfer for triangle meshes, ACM Trans. Graph. 23 (3) (2004) 399–405, https://doi.org/10.1145/1015706.1015736.
[47] DAZ Studio software, 2020, https://www.daz3d.com/.
[48] M. Loper, N. Mahmood, J. Romero, G. Pons-Moll, M.J. Black, SMPL: a skinned multi-person linear model, ACM Trans. Graph. 34 (6) (2015) 248:1–248:16, https://doi.org/10.1145/2816795.2818013.
[49] S. Ghorbani, K. Mahdaviani, A. Thaler, K. Kording, D.J. Cook, G. Blohm, N.F. Troje, MoVi: a large multipurpose motion and video dataset, 2020, arXiv:2003.01888.
[50] N. Mahmood, N. Ghorbani, N.F. Troje, G. Pons-Moll, M.J. Black, AMASS: archive of motion capture as surface shapes, in: International Conference on Computer Vision, Seoul, Korea, 2019, pp. 5442–5451.