Data augmentation with GAN

Jawad Rasheed, Alaa Ali Hameed, Chawki Djeddi, Akhtar Jamil, Fadi Al-Turjman

GANs have been extensively utilized in numerous image generation tasks to produce artificial data that closely resembles real data. This helps to overcome the problem of a small number of training samples and thereby mitigates class imbalance. The GAN architecture consists of two main elements, each implemented as a multilayer perceptron: a generator (G) and a discriminator (D) [30]. These two components compete with each other during training: the generator is trained to produce data similar to the original data, while the discriminator learns to distinguish fake data from real data.
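As an illustration of this two-network setup, the sketch below builds a small generator and discriminator as multilayer perceptrons with Keras. The layer sizes, noise dimension, and sample shape are assumptions made only for the sketch; the settings actually used in this study are those listed in Table 2.

```python
# Illustrative generator/discriminator pair built as multilayer perceptrons with Keras.
# Layer sizes, noise dimension, and image shape are assumptions, not the study's settings.
from tensorflow.keras import layers, models

NOISE_DIM = 100          # assumed length of the input noise vector z
IMG_SHAPE = (64, 64, 1)  # assumed sample shape; replace with the real data shape
FLAT_DIM = IMG_SHAPE[0] * IMG_SHAPE[1] * IMG_SHAPE[2]

def build_generator():
    """G(z; theta_g): maps a noise vector z into data space."""
    return models.Sequential([
        layers.Dense(256, activation="relu", input_shape=(NOISE_DIM,)),
        layers.Dense(512, activation="relu"),
        layers.Dense(FLAT_DIM, activation="tanh"),  # outputs scaled to [-1, 1]
        layers.Reshape(IMG_SHAPE),
    ])

def build_discriminator():
    """D(x; theta_d): outputs the probability that x is a real sample."""
    return models.Sequential([
        layers.Flatten(input_shape=IMG_SHAPE),
        layers.Dense(512, activation="relu"),
        layers.Dense(256, activation="relu"),
        layers.Dense(1, activation="sigmoid"),
    ])
```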

Following [30], let x be the input data. To learn the generator's distribution pg over the data, a prior is defined on input noise variables pz(z). The mapping to data space is represented by G(z; θg), where G is a differentiable function with network parameters θg, implemented as a multilayer perceptron. A second multilayer perceptron, D(x; θd), takes a sample x (either real or produced by G) and outputs a single scalar: D(x) represents the probability that x came from the real data rather than from the generator. D is trained to maximize the probability of assigning the correct label to both training instances and samples drawn from G. The objective function V(G, D) for both G and D can then be written as [30]:

$$\min_{G}\max_{D} V(D,G) = \mathbb{E}_{x\sim p_{\text{data}}(x)}\big[\log D(x)\big] + \mathbb{E}_{z\sim p_{z}(z)}\big[\log\big(1 - D(G(z))\big)\big] \tag{1}$$

where E denotes expectation. Equation (1) indicates that D is trained to maximize the objective, i.e., to separate real from generated samples as well as possible, while G is trained to minimize it so that, eventually, D can no longer distinguish the generated data from the real data.
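A hedged sketch of how this objective is typically optimized is given below: D is updated to maximize the probability of labelling real and generated samples correctly, while G is updated to fool D (using the common non-saturating form of the generator loss). The optimizer, learning rate, and noise dimension are assumptions, not the authors' reported settings; `generator` and `discriminator` are models such as those sketched above.

```python
import tensorflow as tf

NOISE_DIM = 100  # assumed noise dimension, matching the sketch above
bce = tf.keras.losses.BinaryCrossentropy()
g_opt = tf.keras.optimizers.Adam(1e-4)  # assumed optimizer and learning rate
d_opt = tf.keras.optimizers.Adam(1e-4)

def train_step(generator, discriminator, real_images, batch_size=32):
    """One alternating update of D and G on a batch of real images."""
    z = tf.random.normal([batch_size, NOISE_DIM])
    with tf.GradientTape() as g_tape, tf.GradientTape() as d_tape:
        fake_images = generator(z, training=True)
        d_real = discriminator(real_images, training=True)
        d_fake = discriminator(fake_images, training=True)
        # D maximizes log D(x) + log(1 - D(G(z))), i.e. minimizes this cross-entropy.
        d_loss = bce(tf.ones_like(d_real), d_real) + bce(tf.zeros_like(d_fake), d_fake)
        # G minimizes log(1 - D(G(z))); the non-saturating form maximizes log D(G(z)).
        g_loss = bce(tf.ones_like(d_fake), d_fake)
    d_grads = d_tape.gradient(d_loss, discriminator.trainable_variables)
    g_grads = g_tape.gradient(g_loss, generator.trainable_variables)
    d_opt.apply_gradients(zip(d_grads, discriminator.trainable_variables))
    g_opt.apply_gradients(zip(g_grads, generator.trainable_variables))
    return d_loss, g_loss
```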

In this study, the synthetic data were generated using the same architectural setup for both G and D, as summarized in Table 2. The data augmentation was also achieved using horizontal and vertical shifts, and random r.

Table 2. GAN parameter settings
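For the classical shift-based augmentation mentioned above, a minimal sketch using Keras' ImageDataGenerator is shown below. The shift ranges are assumed values, and the random rotation is only a hypothetical reading of the truncated "random r." in the text, not a confirmed setting from the study.

```python
from tensorflow.keras.preprocessing.image import ImageDataGenerator

# Illustrative augmentation pipeline: horizontal/vertical shifts plus a
# hypothetical random rotation (the exact transform is not specified above).
augmenter = ImageDataGenerator(
    width_shift_range=0.1,   # assumed horizontal shift, up to 10% of image width
    height_shift_range=0.1,  # assumed vertical shift, up to 10% of image height
    rotation_range=15,       # hypothetical random rotation range in degrees
)

# Example usage: stream augmented batches from an in-memory array `x_train`.
# for x_batch in augmenter.flow(x_train, batch_size=32):
#     ...  # feed x_batch to the model
```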
