Author: chen_h
WeChat & QQ: 862251340
WeChat official account: coderpai
Jianshu: https://www.jianshu.com/p/b7f...
關(guān)于生成對(duì)抗網(wǎng)絡(luò)(GAN)的新論文每周都會(huì)出現(xiàn)很多,跟蹤發(fā)現(xiàn)他們非常難,更不用說(shuō)去辨別那些研究人員對(duì) GAN 各種奇奇怪怪,令人難以置信的創(chuàng)造性的命名!當(dāng)然,你可以通過(guò)閱讀 OpanAI 的博客或者 KDNuggets 中的概述性閱讀教程,了解更多的有關(guān) GAN 的信息。
在這里匯總了一個(gè)現(xiàn)在和經(jīng)常使用的GAN論文,所有文章都鏈接到了 Arxiv 上面。
GAN – Generative Adversarial Networks
3D-GAN – Learning a Probabilistic Latent Space of Object Shapes via 3D Generative-Adversarial Modeling
AC-GAN – Conditional Image Synthesis With Auxiliary Classifier GANs
AdaGAN – AdaGAN: Boosting Generative Models
AffGAN – Amortised MAP Inference for Image Super-resolution
AL-CGAN – Learning to Generate Images of Outdoor Scenes from Attributes and Semantic Layouts
ALI – Adversarially Learned Inference
AMGAN – Generative Adversarial Nets with Labeled Data by Activation Maximization
AnoGAN – Unsupervised Anomaly Detection with Generative Adversarial Networks to Guide Marker Discovery
ArtGAN – ArtGAN: Artwork Synthesis with Conditional Categorial GANs
b-GAN – b-GAN: Unified Framework of Generative Adversarial Networks
Bayesian GAN – Deep and Hierarchical Implicit Models
BEGAN – BEGAN: Boundary Equilibrium Generative Adversarial Networks
BiGAN – Adversarial Feature Learning
BS-GAN – Boundary-Seeking Generative Adversarial Networks
CGAN – Conditional Generative Adversarial Nets
CCGAN – Semi-Supervised Learning with Context-Conditional Generative Adversarial Networks
CatGAN – Unsupervised and Semi-supervised Learning with Categorical Generative Adversarial Networks
CoGAN – Coupled Generative Adversarial Networks
Context-RNN-GAN – Contextual RNN-GANs for Abstract Reasoning Diagram Generation
C-RNN-GAN – C-RNN-GAN: Continuous recurrent neural networks with adversarial training
CVAE-GAN – CVAE-GAN: Fine-Grained Image Generation through Asymmetric Training
CycleGAN – Unpaired Image-to-Image Translation using Cycle-Consistent Adversarial Networks
DTN – Unsupervised Cross-Domain Image Generation
DCGAN – Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks
DiscoGAN – Learning to Discover Cross-Domain Relations with Generative Adversarial Networks
DR-GAN – Disentangled Representation Learning GAN for Pose-Invariant Face Recognition
DualGAN – DualGAN: Unsupervised Dual Learning for Image-to-Image Translation
EBGAN – Energy-based Generative Adversarial Network
f-GAN – f-GAN: Training Generative Neural Samplers using Variational Divergence Minimization
GAWWN – Learning What and Where to Draw
GoGAN – Gang of GANs: Generative Adversarial Networks with Maximum Margin Ranking
GP-GAN – GP-GAN: Towards Realistic High-Resolution Image Blending
IAN – Neural Photo Editing with Introspective Adversarial Networks
iGAN – Generative Visual Manipulation on the Natural Image Manifold
IcGAN – Invertible Conditional GANs for image editing
ID-CGAN – Image De-raining Using a Conditional Generative Adversarial Network
Improved GAN – Improved Techniques for Training GANs
InfoGAN – InfoGAN: Interpretable Representation Learning by Information Maximizing Generative Adversarial Nets
LAPGAN – Deep Generative Image Models using a Laplacian Pyramid of Adversarial Networks
LR-GAN – LR-GAN: Layered Recursive Generative Adversarial Networks for Image Generation
LSGAN – Least Squares Generative Adversarial Networks
LS-GAN – Loss-Sensitive Generative Adversarial Networks on Lipschitz Densities
MGAN – Precomputed Real-Time Texture Synthesis with Markovian Generative Adversarial Networks
MAGAN – MAGAN: Margin Adaptation for Generative Adversarial Networks
MAD-GAN – Multi-Agent Diverse Generative Adversarial Networks
MalGAN – Generating Adversarial Malware Examples for Black-Box Attacks Based on GAN
MARTA-GAN – Deep Unsupervised Representation Learning for Remote Sensing Images
McGAN – McGan: Mean and Covariance Feature Matching GAN
MedGAN – Generating Multi-label Discrete Electronic Health Records using Generative Adversarial Networks
MIX+GAN – Generalization and Equilibrium in Generative Adversarial Nets (GANs)
MPM-GAN – Message Passing Multi-Agent GANs
MV-BiGAN – Multi-view Generative Adversarial Networks
pix2pix – Image-to-Image Translation with Conditional Adversarial Networks
PPGN – Plug & Play Generative Networks: Conditional Iterative Generation of Images in Latent Space
PrGAN – 3D Shape Induction from 2D Views of Multiple Objects
RenderGAN – RenderGAN: Generating Realistic Labeled Data
RTT-GAN – Recurrent Topic-Transition GAN for Visual Paragraph Generation
SGAN – Stacked Generative Adversarial Networks
SGAN – Texture Synthesis with Spatial Generative Adversarial Networks
SAD-GAN – SAD-GAN: Synthetic Autonomous Driving using Generative Adversarial Networks
SalGAN – SalGAN: Visual Saliency Prediction with Generative Adversarial Networks
SEGAN – SEGAN: Speech Enhancement Generative Adversarial Network
SeGAN – SeGAN: Segmenting and Generating the Invisible
SeqGAN – SeqGAN: Sequence Generative Adversarial Nets with Policy Gradient
SketchGAN – Adversarial Training For Sketch Retrieval
SL-GAN – Semi-Latent GAN: Learning to generate and modify facial images from attributes
Softmax-GAN – Softmax GAN
SRGAN – Photo-Realistic Single Image Super-Resolution Using a Generative Adversarial Network
S2GAN – Generative Image Modeling using Style and Structure Adversarial Networks
SSL-GAN – Semi-Supervised Learning with Context-Conditional Generative Adversarial Networks
StackGAN – StackGAN: Text to Photo-realistic Image Synthesis with Stacked Generative Adversarial Networks
TGAN – Temporal Generative Adversarial Nets
TAC-GAN – TAC-GAN – Text Conditioned Auxiliary Classifier Generative Adversarial Network
TP-GAN – Beyond Face Rotation: Global and Local Perception GAN for Photorealistic and Identity Preserving Frontal View Synthesis
Triple-GAN – Triple Generative Adversarial Nets
Unrolled GAN – Unrolled Generative Adversarial Networks
VGAN – Generating Videos with Scene Dynamics
VGAN – Generative Adversarial Networks as Variational Training of Energy Based Models
VAE-GAN – Autoencoding beyond pixels using a learned similarity metric
VariGAN – Multi-View Image Generation from a Single-View
ViGAN – Image Generation and Editing with Variational Info Generative Adversarial Networks
WGAN – Wasserstein GAN
WGAN-GP – Improved Training of Wasserstein GANs
WaterGAN – WaterGAN: Unsupervised Generative Network to Enable Real-time Color Correction of Monocular Underwater Images
If you are interested in GANs, you can follow this topic. Feedback and discussion are welcome.
Author: chen_h
WeChat & QQ: 862251340
Jianshu: https://www.jianshu.com/p/b7f...
CoderPai 是一個(gè)專注于算法實(shí)戰(zhàn)的平臺(tái),從基礎(chǔ)的算法到人工智能算法都有設(shè)計(jì)。如果你對(duì)算法實(shí)戰(zhàn)感興趣,請(qǐng)快快關(guān)注我們吧。加入AI實(shí)戰(zhàn)微信群,AI實(shí)戰(zhàn)QQ群,ACM算法微信群,ACM算法QQ群。長(zhǎng)按或者掃描如下二維碼,關(guān)注 “CoderPai” 微信號(hào)(coderpai)
摘要:特征匹配改變了生成器的損失函數(shù),以最小化真實(shí)圖像的特征與生成的圖像之間的統(tǒng)計(jì)差異。我們建議讀者檢查上使用的損失函數(shù)和相應(yīng)的性能,并通過(guò)實(shí)驗(yàn)驗(yàn)證來(lái)設(shè)置。相反,我們可能會(huì)將注意力轉(zhuǎn)向?qū)ふ以谏善餍阅懿患褧r(shí)不具有接近零梯度的損失函數(shù)。 前 ?言GAN模型相比較于其他網(wǎng)絡(luò)一直受困于三個(gè)問(wèn)題的掣肘:?1. 不收斂;模型訓(xùn)練不穩(wěn)定,收斂的慢,甚至不收斂;?2. mode collapse; 生成器產(chǎn)生的...
摘要:元旦假期即將來(lái)臨,我們精心準(zhǔn)備了這本阿里巴巴機(jī)器智能計(jì)算機(jī)視覺技術(shù)精選,收錄了頂級(jí)會(huì)議阿里論文,送給計(jì)劃在假期充電的同學(xué)們,也希望能和更多學(xué)術(shù)界工業(yè)界同仁一起探討交流。 當(dāng)下計(jì)算機(jī)視覺技術(shù)無(wú)疑是AI浪潮中最火熱的議題之一。視覺技術(shù)的滲透,既可以對(duì)傳統(tǒng)商業(yè)進(jìn)行改造使之看到新的商業(yè)機(jī)會(huì),還可以創(chuàng)造全新的商業(yè)需求和市場(chǎng)。無(wú)論在電商、安防、娛樂(lè),還是在工業(yè)、醫(yī)療、自動(dòng)駕駛領(lǐng)域,計(jì)算機(jī)視覺技術(shù)都...
摘要:判別器勝利的條件則是很好地將真實(shí)圖像自編碼,以及很差地辨識(shí)生成的圖像。 先看一張圖:下圖左右兩端的兩欄是真實(shí)的圖像,其余的是計(jì)算機(jī)生成的。過(guò)渡自然,效果驚人。這是谷歌本周在 arXiv 發(fā)表的論文《BEGAN:邊界均衡生成對(duì)抗網(wǎng)絡(luò)》得到的結(jié)果。這項(xiàng)工作針對(duì) GAN 訓(xùn)練難、控制生成樣本多樣性難、平衡鑒別器和生成器收斂難等問(wèn)題,提出了改善。尤其值得注意的,是作者使用了很簡(jiǎn)單的結(jié)構(gòu),經(jīng)過(guò)常規(guī)訓(xùn)練...
摘要:另外,在損失函數(shù)中加入感知正則化則在一定程度上可緩解該問(wèn)題。替代損失函數(shù)修復(fù)缺陷的最流行的補(bǔ)丁是。的作者認(rèn)為傳統(tǒng)損失函數(shù)并不會(huì)使收集的數(shù)據(jù)分布接近于真實(shí)數(shù)據(jù)分布。原來(lái)?yè)p失函數(shù)中的對(duì)數(shù)損失并不影響生成數(shù)據(jù)與決策邊界的距離。 盡管 GAN 領(lǐng)域的進(jìn)步令人印象深刻,但其在應(yīng)用過(guò)程中仍然存在一些困難。本文梳理了 GAN 在應(yīng)用過(guò)程中存在的一些難題,并提出了的解決方法。使用 GAN 的缺陷眾所周知,G...
摘要:二是精度查全率和得分,用來(lái)衡量判別式模型的質(zhì)量。精度查全率和團(tuán)隊(duì)還用他們的三角形數(shù)據(jù)集,測(cè)試了樣本量為時(shí),大范圍搜索超參數(shù)來(lái)進(jìn)行計(jì)算的精度和查全率。 從2014年誕生至今,生成對(duì)抗網(wǎng)絡(luò)(GAN)熱度只增不減,各種各樣的變體層出不窮。有位名叫Avinash Hindupur的國(guó)際友人建立了一個(gè)GAN Zoo,他的動(dòng)物園里目前已經(jīng)收集了多達(dá)214種有名有姓的GAN。DeepMind研究員們甚至將...
閱讀 1890·2021-11-25 09:43
閱讀 3180·2021-11-15 11:38
閱讀 2723·2019-08-30 13:04
閱讀 497·2019-08-29 11:07
閱讀 1510·2019-08-26 18:37
閱讀 2748·2019-08-26 14:07
閱讀 598·2019-08-26 13:52
閱讀 2295·2019-08-26 12:09