

Essentials of Deep Learning: CapsuleNets in Theory and Practice (with Python Code)



Abstract: This article gives a brief, non-technical overview of capsule networks and analyzes two of their important properties, then benchmarks a multilayer perceptron, a convolutional neural network, and a capsule network on the MNIST handwritten-digit dataset.

Neural networks were first proposed in the 1950s, but only in the last decade have they developed rapidly, and they are now changing every aspect of our world. From image classification to natural language processing, researchers have built deep neural network models in one field after another and achieved breakthrough results. As deep learning matured, however, a new bottleneck appeared: much of the progress amounted to simply making established architectures deeper and wider. Recently, Geoffrey Hinton proposed a new idea, the capsule network (Capsule Network), which improves on both the effectiveness and the interpretability of traditional approaches.

This article explains why capsule networks have attracted so much attention, and uses working code to reinforce and consolidate the concepts.

Why are capsule networks attracting so much attention?

The MNIST handwritten-digit dataset is the usual benchmark for validating a network architecture. In the digit-recognition task, you are given a simple grayscale image and must predict the digit it shows. It is an unstructured image-recognition problem on which deep learning algorithms achieve the best performance. This article tests three deep learning models on this dataset: the multilayer perceptron (MLP), the convolutional neural network (CNN), and the capsule network (CapsNet).

Multilayer Perceptron (MLP)

Build the multilayer perceptron with Keras as follows:

# define variables
input_num_units = 784
hidden_num_units = 50
output_num_units = 10

epochs = 15
batch_size = 128

# create model
model = Sequential([
 Dense(units=hidden_num_units, input_dim=input_num_units, activation="relu"),
 Dense(units=output_num_units, input_dim=hidden_num_units, activation="softmax"),
])

# compile the model with necessary attributes
model.compile(loss="categorical_crossentropy", optimizer="adam", metrics=["accuracy"])

Print the model summary:
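
Even without the summary output reproduced here, the parameter count is easy to verify by hand:

# model.summary() lists trainable parameters per layer. For this MLP:
#   hidden layer: 784 * 50 weights + 50 biases = 39,250 parameters
#   output layer:  50 * 10 weights + 10 biases =    510 parameters
# giving 39,760 trainable parameters in total.
model.summary()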

After training for 15 epochs, the results are:

Epoch 14/15
34300/34300 [==============================] - 1s 41us/step - loss: 0.0597 - acc: 0.9834 - val_loss: 0.1227 - val_acc: 0.9635
Epoch 15/15
34300/34300 [==============================] - 1s 41us/step - loss: 0.0553 - acc: 0.9842 - val_loss: 0.1245 - val_acc: 0.9637

As you can see, the model is remarkably simple!

Convolutional Neural Network (CNN)

Convolutional neural networks are used very widely in deep learning and perform exceptionally well. Build the CNN model as follows:

# define variables
input_reshape = (28, 28, 1)
pool_size = (2, 2)

hidden_num_units = 50
output_num_units = 10

batch_size = 128

model = Sequential([
    InputLayer(input_shape=input_reshape),

    Convolution2D(25, (5, 5), activation="relu"),
    MaxPooling2D(pool_size=pool_size),

    Convolution2D(25, (5, 5), activation="relu"),
    MaxPooling2D(pool_size=pool_size),

    Convolution2D(25, (4, 4), activation="relu"),

    Flatten(),

    Dense(units=hidden_num_units, activation="relu"),

    Dense(units=output_num_units, activation="softmax"),
])

model.compile(loss="categorical_crossentropy", optimizer="adam", metrics=["accuracy"])

Print the model summary:


The summary shows that the CNN is a more complex model than the MLP. Now look at its performance:

Epoch 14/15
34/34 [==============================] - 4s 108ms/step - loss: 0.1278 - acc: 0.9604 - val_loss: 0.0820 - val_acc: 0.9757
Epoch 15/15
34/34 [==============================] - 4s 110ms/step - loss: 0.1256 - acc: 0.9626 - val_loss: 0.0827 - val_acc: 0.9746

The CNN takes noticeably longer to train, but its performance is excellent.

Capsule Network (CapsNet)

A capsule network is structurally more complex than a CNN. Build the model as follows (PrimaryCap, CapsuleLayer, Length, and Mask are helper layers from an external capsulelayers module, discussed further in the walkthrough later):

def CapsNet(input_shape, n_class, routings):
   x = layers.Input(shape=input_shape)

   # Layer 1: Just a conventional Conv2D layer
   conv1 = layers.Conv2D(filters=256, kernel_size=9, strides=1, padding="valid", activation="relu", name="conv1")(x)

   # Layer 2: Conv2D layer with `squash` activation, then reshape to [None, num_capsule, dim_capsule]
   primarycaps = PrimaryCap(conv1, dim_capsule=8, n_channels=32, kernel_size=9, strides=2, padding="valid")

   # Layer 3: Capsule layer. Routing algorithm works here.
   digitcaps = CapsuleLayer(num_capsule=n_class, dim_capsule=16, routings=routings,
   name="digitcaps")(primarycaps)

   # Layer 4: This is an auxiliary layer to replace each capsule with its length. Just to match the true label's shape.
   # If using tensorflow, this will not be necessary. :)
   out_caps = Length(name="capsnet")(digitcaps)

   # Decoder network.
   y = layers.Input(shape=(n_class,))
   masked_by_y = Mask()([digitcaps, y]) # The true label is used to mask the output of capsule layer. For training
   masked = Mask()(digitcaps) # Mask using the capsule with maximal length. For prediction

   # Shared Decoder model in training and prediction
   decoder = models.Sequential(name="decoder")
   decoder.add(layers.Dense(512, activation="relu", input_dim=16*n_class))
   decoder.add(layers.Dense(1024, activation="relu"))
   decoder.add(layers.Dense(np.prod(input_shape), activation="sigmoid"))
   decoder.add(layers.Reshape(target_shape=input_shape, name="out_recon"))

   # Models for training and evaluation (prediction)
   train_model = models.Model([x, y], [out_caps, decoder(masked_by_y)])
   eval_model = models.Model(x, [out_caps, decoder(masked)])

   # manipulate model
   noise = layers.Input(shape=(n_class, 16))
   noised_digitcaps = layers.Add()([digitcaps, noise])
   masked_noised_y = Mask()([noised_digitcaps, y])
   manipulate_model = models.Model([x, y, noise], decoder(masked_noised_y))

   return train_model, eval_model, manipulate_model

Print the model summary:

This model takes considerably longer to train; after training for a while, the results are:

Epoch 14/15
34/34 [==============================] - 108s 3s/step - loss: 0.0445 - capsnet_loss: 0.0218 - decoder_loss: 0.0579 - capsnet_acc: 0.9846 - val_loss: 0.0364 - val_capsnet_loss: 0.0159 - val_decoder_loss: 0.0522 - val_capsnet_acc: 0.9887
Epoch 15/15
34/34 [==============================] - 107s 3s/step - loss: 0.0423 - capsnet_loss: 0.0201 - decoder_loss: 0.0567 - capsnet_acc: 0.9859 - val_loss: 0.0362 - val_capsnet_loss: 0.0162 - val_decoder_loss: 0.0510 - val_capsnet_acc: 0.9880

As you can see, the capsule network outperforms the traditional models. The figure below summarizes the three experiments:

This experiment shows that capsule networks deserve deeper study and discussion.

The concepts behind capsule networks

To understand capsule networks, this article uses pictures of a cat to illustrate their potential, starting with a question: what animal is shown in the image below?

It is a cat, and you surely guessed right! But how did you know it is a cat? Now break the picture down:

Case 1: A simple image

How do you know it is a cat? A natural approach is to decompose the image into separate features such as eyes, nose, and ears, as shown below:

In essence, you decompose high-level features into lower-level ones. This can be written as:

P(face) = P(nose) & ( 2 x P(whiskers) ) & P(mouth) & ( 2 x P(eyes) ) & ( 2 x P(ears) )

where P(face) denotes the presence of a cat's face in the image. Iterating, you can define still lower-level features, such as shapes and edges, to simplify the process.

Case 2: A rotated image

Rotate the image by 30 degrees, as shown below:

With the same features defined as before, you can no longer recognize the cat. The orientations of the low-level features have changed, so the previously defined features no longer match.

To cope, the cat recognizer might end up looking like this:

More concretely:

P(face) = ( P(nose) & ( 2 x P(whiskers) ) & P(mouth) & ( 2 x P(eyes) ) & ( 2 x P(ears) ) ) OR

( P(rotated_nose) & ( 2 x P(rotated_whiskers) ) & P(rotated_mouth) & ( 2 x P(rotated_eyes) ) & ( 2 x P(rotated_ears) ) )

Case 3: A flipped image

To raise the difficulty, here is a completely flipped image:

One option is a brute-force search over every possible rotation of every low-level feature, but that costs far too much time and effort. Researchers therefore proposed attaching extra attributes to each low-level feature, such as its rotation angle. This way, the network detects not only whether a feature is present but also its orientation, as shown below:

More concretely:

P(face) = [ P(nose), R(nose) ] & [ P(whisker_1), R(whisker_1) ] & [ P(whisker_2), R(whisker_2) ] & [ P(mouth), R(mouth) ] & …

where the rotation attribute is written R(). This property is also known as rotational equivariance.
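
As a toy illustration of equivariance (a hypothetical sketch, not code from the original article), represent each detected part by a 2-D pose vector; rotating the input rotates every pose vector by the same angle, so the agreement between parts is preserved:

import numpy as np

def rotate(v, theta):
    # rotate a 2-D pose vector by angle theta
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]]) @ v

nose = np.array([0.0, 1.0])
ear = np.array([0.5, 1.0])

for theta in (0.0, np.pi / 6):  # upright vs. rotated 30 degrees
    n, e = rotate(nose, theta), rotate(ear, theta)
    # the angle between the nose and ear poses is unchanged by the rotation
    agreement = n @ e / (np.linalg.norm(n) * np.linalg.norm(e))
    print(round(float(agreement), 4))  # same value in both cases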

Extending this idea to capture more low-level attributes, such as scale and thickness, helps the network form a much clearer picture of an object. This is how capsule networks were designed to work.

The other distinctive feature of capsule networks is dynamic routing, explained below through a cat-vs-dog classification problem.


The two animals above look very similar, yet there are differences. Can you spot which one is the dog?

As before, define features in the image to find the differences.


As shown, define very low-level facial features, such as eyes and ears, and combine them to locate a face. Then combine the facial and body features to complete the task: deciding whether the animal is a cat or a dog.

Now suppose you have a new image together with its extracted low-level features and need to determine its class from this information alone. Pick one feature at random, say the eyes: can you classify the animal from the eyes alone?


No, because the eyes on their own are not a discriminating factor. The next step is to analyze more features, say the next randomly chosen feature, the nose.


Eyes and nose together are still not enough to settle the classification, so the next step is to take all the features and combine them to determine the class. As shown below, combining four features (eyes, nose, ears, and whiskers) is enough. Performing this step iteratively at every feature level routes the right information to the feature detectors that need it for classification.

In capsule terms, a lower-level capsule sends its output to whichever higher-level capsule agrees with its input. This is the essence of the dynamic routing algorithm.
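
To make routing-by-agreement concrete, here is a minimal NumPy sketch of the idea (an illustrative toy, not the CapsuleLayer implementation used later). Each lower capsule casts a prediction u_hat[i, j] for each higher capsule; coupling coefficients are softmaxed logits, and a logit grows whenever its prediction agrees with the combined output:

import numpy as np

def squash(s, axis=-1, eps=1e-8):
    # shrink short vectors toward 0 and long vectors toward unit length
    sq = np.sum(s ** 2, axis=axis, keepdims=True)
    return (sq / (1.0 + sq)) * s / np.sqrt(sq + eps)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def route(u_hat, iterations=3):
    n_lower, n_higher, _ = u_hat.shape
    b = np.zeros((n_lower, n_higher))            # routing logits
    for _ in range(iterations):
        c = softmax(b, axis=1)                   # coupling coefficients
        s = np.einsum("ij,ijd->jd", c, u_hat)    # weighted vote per higher capsule
        v = squash(s)                            # higher-capsule outputs
        b += np.einsum("ijd,jd->ij", u_hat, v)   # reward agreement
    return v

u_hat = np.random.randn(6, 2, 4)  # 6 lower capsules, 2 higher capsules, 4-D poses
print(route(u_hat).shape)         # (2, 4)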

Compared with traditional deep learning architectures, capsule networks are more robust to the orientation and pose of the data, and they can be trained on comparatively few data points. Their drawback is that they need more training time and resources.

Capsule networks on MNIST: a code walkthrough

First download the dataset from the digit-recognition practice problem. The task is to recognize the digit shown in a given 28x28 image. Before running the code, make sure Keras is installed.

Now open a Jupyter Notebook and enter the following code. First import the required modules:
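
The import cell is not reproduced in this repost; judging from the code that follows, it presumably looks roughly like this (the exact module choices are assumptions):

import os
import numpy as np
import pandas as pd
import pylab
import matplotlib.pyplot as plt

import keras
from keras import layers, models, optimizers
from keras import backend as K
from scipy.misc import imread  # flatten=True is used below; newer SciPy removed this, imageio.imread is the modern replacement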

Then fix the random seed to control randomness:

# To stop potential randomness
seed = 128
rng = np.random.RandomState(seed)

Next, set the directory paths:

root_dir = os.path.abspath(".")
data_dir = os.path.join(root_dir, "data")

Now load the dataset, which is stored in .csv format:

train = pd.read_csv(os.path.join(data_dir, "train.csv"))
test = pd.read_csv(os.path.join(data_dir, "test.csv"))

train.head()

Display the digit that a sample image represents:

img_name = rng.choice(train.filename)
filepath = os.path.join(data_dir, "train", img_name)

img = imread(filepath, flatten=True)

pylab.imshow(img, cmap="gray")
pylab.axis("off")
pylab.show()

Now store all the images as NumPy arrays:

temp = []
for img_name in train.filename:
   image_path = os.path.join(data_dir, "train", img_name)
   img = imread(image_path, flatten=True)
   img = img.astype("float32")
   temp.append(img)
 
train_x = np.stack(temp)

train_x /= 255.0
train_x = train_x.reshape(-1, 784).astype("float32")

temp = []
for img_name in test.filename:
   image_path = os.path.join(data_dir, "test", img_name)
   img = imread(image_path, flatten=True)
   img = img.astype("float32")
   temp.append(img)
 
test_x = np.stack(temp)

test_x /= 255.0
test_x = test_x.reshape(-1, 784).astype("float32")

train_y = keras.utils.np_utils.to_categorical(train.label.values)

As is typical in machine learning, split the dataset 7:3, with 70% for training and 30% for validation:

split_size = int(train_x.shape[0]*0.7)

train_x, val_x = train_x[:split_size], train_x[split_size:]
train_y, val_y = train_y[:split_size], train_y[split_size:]

Next, evaluate three different deep learning models on this data: the multilayer perceptron, the convolutional neural network, and the capsule network.

1. Multilayer perceptron

Define a three-layer network with one input, one hidden, and one output layer. The numbers of input and output neurons are fixed: the input is a 28x28 image and the output is a 10x1 vector of classes. The hidden layer has 50 neurons, and the model is optimized with Adam, a gradient-descent variant.

# define vars
input_num_units = 784
hidden_num_units = 50
output_num_units = 10

epochs = 15
batch_size = 128

# import keras modules

from keras.models import Sequential
from keras.layers import InputLayer, Convolution2D, MaxPooling2D, Flatten, Dense

# create model
model = Sequential([
 Dense(units=hidden_num_units, input_dim=input_num_units, activation="relu"),
 Dense(units=output_num_units, input_dim=hidden_num_units, activation="softmax"),
])

# compile the model with necessary attributes
model.compile(loss="categorical_crossentropy", optimizer="adam", metrics=["accuracy"])

Print the model summary, then train the model:

trained_model = model.fit(train_x, train_y, epochs=epochs, batch_size=batch_size, validation_data=(val_x, val_y))

After 15 epochs, the results are:

Epoch 14/15
34300/34300 [==============================] - 1s 41us/step - loss: 0.0597 - acc: 0.9834 - val_loss: 0.1227 - val_acc: 0.9635
Epoch 15/15
34300/34300 [==============================] - 1s 41us/step - loss: 0.0553 - acc: 0.9842 - val_loss: 0.1245 - val_acc: 0.9637

A decent result, but one that can still be improved.

2. Convolutional neural network

Reshape the flattened images back into 2-D grayscale form and feed them into the CNN:

# reshape data
train_x_temp = train_x.reshape(-1, 28, 28, 1)
val_x_temp = val_x.reshape(-1, 28, 28, 1)

# define vars
input_shape = (784,)
input_reshape = (28, 28, 1)


pool_size = (2, 2)

hidden_num_units = 50
output_num_units = 10

batch_size = 128

Next, define the CNN model:

model = Sequential([
    InputLayer(input_shape=input_reshape),

    Convolution2D(25, (5, 5), activation="relu"),
    MaxPooling2D(pool_size=pool_size),

    Convolution2D(25, (5, 5), activation="relu"),
    MaxPooling2D(pool_size=pool_size),

    Convolution2D(25, (4, 4), activation="relu"),

    Flatten(),

    Dense(units=hidden_num_units, activation="relu"),

    Dense(units=output_num_units, activation="softmax"),
])

model.compile(loss="categorical_crossentropy", optimizer="adam", metrics=["accuracy"])

# trained_model_conv = model.fit(train_x_temp, train_y, epochs=epochs, batch_size=batch_size, validation_data=(val_x_temp, val_y))
model.summary()

Print the model summary:

Now train with data augmentation to improve the results:

# Begin: Training with data augmentation ---------------------------------------------------------------------#
from keras.preprocessing.image import ImageDataGenerator

def train_generator(x, y, batch_size, shift_fraction=0.1):
    # shift up to 2 pixels for MNIST
    train_datagen = ImageDataGenerator(width_shift_range=shift_fraction,
                                       height_shift_range=shift_fraction)
    generator = train_datagen.flow(x, y, batch_size=batch_size)
    while 1:
        x_batch, y_batch = next(generator)
        yield (x_batch, y_batch)

# Training with data augmentation. If shift_fraction=0., there is no augmentation.
trained_model2 = model.fit_generator(generator=train_generator(train_x_temp, train_y, 1000, 0.1),
                                     steps_per_epoch=int(train_y.shape[0] / 1000),
                                     epochs=epochs,
                                     validation_data=(val_x_temp, val_y))
# End: Training with data augmentation -----------------------------------------------------------------------#

Results of the CNN model:

Epoch 14/15
34/34 [==============================] - 4s 108ms/step - loss: 0.1278 - acc: 0.9604 - val_loss: 0.0820 - val_acc: 0.9757
Epoch 15/15
34/34 [==============================] - 4s 110ms/step - loss: 0.1256 - acc: 0.9626 - val_loss: 0.0827 - val_acc: 0.9746

3. Capsule network

Build the capsule network model, whose structure is shown in the figure:

The code below builds the model (PrimaryCap, CapsuleLayer, Length, and Mask are helper layers from an external capsulelayers module):

def CapsNet(input_shape, n_class, routings):
   """
   A Capsule Network on MNIST.
   :param input_shape: data shape, 3d, [width, height, channels]
   :param n_class: number of classes
   :param routings: number of routing iterations
   :return: Two Keras Models, the first one used for training, and the second one for evaluation.
   `eval_model` can also be used for training.
   """
   x = layers.Input(shape=input_shape)

   # Layer 1: Just a conventional Conv2D layer
   conv1 = layers.Conv2D(filters=256, kernel_size=9, strides=1, padding="valid", activation="relu", name="conv1")(x)

   # Layer 2: Conv2D layer with `squash` activation, then reshape to [None, num_capsule, dim_capsule]
   primarycaps = PrimaryCap(conv1, dim_capsule=8, n_channels=32, kernel_size=9, strides=2, padding="valid")

   # Layer 3: Capsule layer. Routing algorithm works here.
   digitcaps = CapsuleLayer(num_capsule=n_class, dim_capsule=16, routings=routings,
   name="digitcaps")(primarycaps)

   # Layer 4: This is an auxiliary layer to replace each capsule with its length. Just to match the true label's shape.
   # If using tensorflow, this will not be necessary. :)
   out_caps = Length(name="capsnet")(digitcaps)

   # Decoder network.
   y = layers.Input(shape=(n_class,))
   masked_by_y = Mask()([digitcaps, y]) # The true label is used to mask the output of capsule layer. For training
   masked = Mask()(digitcaps) # Mask using the capsule with maximal length. For prediction

   # Shared Decoder model in training and prediction
   decoder = models.Sequential(name="decoder")
   decoder.add(layers.Dense(512, activation="relu", input_dim=16*n_class))
   decoder.add(layers.Dense(1024, activation="relu"))
   decoder.add(layers.Dense(np.prod(input_shape), activation="sigmoid"))
   decoder.add(layers.Reshape(target_shape=input_shape, name="out_recon"))

   # Models for training and evaluation (prediction)
   train_model = models.Model([x, y], [out_caps, decoder(masked_by_y)])
   eval_model = models.Model(x, [out_caps, decoder(masked)])

   # manipulate model
   noise = layers.Input(shape=(n_class, 16))
   noised_digitcaps = layers.Add()([digitcaps, noise])
   masked_noised_y = Mask()([noised_digitcaps, y])
   manipulate_model = models.Model([x, y, noise], decoder(masked_noised_y))
   return train_model, eval_model, manipulate_model
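
The helper layers used above are not defined in this post; they come from a companion capsulelayers module (XifengGuo's public CapsNet-Keras implementation is the usual source, an assumption here). For reference, the `squash` nonlinearity those capsule layers apply can be sketched as:

from keras import backend as K

def squash(vectors, axis=-1):
    # v = (||s||^2 / (1 + ||s||^2)) * (s / ||s||): short vectors shrink toward
    # zero, long vectors saturate just below unit length
    s_squared_norm = K.sum(K.square(vectors), axis, keepdims=True)
    scale = s_squared_norm / (1 + s_squared_norm) / K.sqrt(s_squared_norm + K.epsilon())
    return scale * vectors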


def margin_loss(y_true, y_pred):
   """
   Margin loss for Eq.(4). When y_true[i, :] contains more than one `1`, this loss should still work, though this has not been tested.
   :param y_true: [None, n_classes]
   :param y_pred: [None, num_capsule]
   :return: a scalar loss value.
   """
   L = y_true * K.square(K.maximum(0., 0.9 - y_pred)) + \
       0.5 * (1 - y_true) * K.square(K.maximum(0., y_pred - 0.1))

   return K.mean(K.sum(L, 1))
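
For reference, margin_loss implements the margin loss from the CapsNet paper (its Eq. 4):

L_k = T_k * max(0, 0.9 - ||v_k||)^2 + 0.5 * (1 - T_k) * max(0, ||v_k|| - 0.1)^2

where T_k = 1 if a digit of class k is present and ||v_k|| is the length of the k-th digit capsule, interpreted as the predicted probability of that class.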
model, eval_model, manipulate_model = CapsNet(input_shape=train_x_temp.shape[1:],
 n_class=len(np.unique(np.argmax(train_y, 1))),
 routings=3)
# compile the model
model.compile(optimizer=optimizers.Adam(lr=0.001),
 loss=[margin_loss, "mse"],
 loss_weights=[1., 0.392],
 metrics={"capsnet": "accuracy"})

model.summary()

Print the model summary:
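
The training call itself is not shown in the post. A hedged reconstruction (capsnet_generator is a hypothetical helper): the training model takes [x, y] as inputs and predicts [y, reconstructed_x], so the train_generator from the CNN section is wrapped to yield that structure:

# Hypothetical reconstruction of the omitted training call.
def capsnet_generator(x, y, batch_size, shift_fraction=0.1):
    gen = train_generator(x, y, batch_size, shift_fraction)
    while 1:
        x_batch, y_batch = next(gen)
        yield ([x_batch, y_batch], [y_batch, x_batch])

trained_model3 = model.fit_generator(generator=capsnet_generator(train_x_temp, train_y, 1000, 0.1),
                                     steps_per_epoch=int(train_y.shape[0] / 1000),
                                     epochs=epochs,
                                     validation_data=([val_x_temp, val_y], [val_y, val_x_temp]))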

Results of the capsule model:

Epoch 14/15
34/34 [==============================] - 108s 3s/step - loss: 0.0445 - capsnet_loss: 0.0218 - decoder_loss: 0.0579 - capsnet_acc: 0.9846 - val_loss: 0.0364 - val_capsnet_loss: 0.0159 - val_decoder_loss: 0.0522 - val_capsnet_acc: 0.9887
Epoch 15/15
34/34 [==============================] - 107s 3s/step - loss: 0.0423 - capsnet_loss: 0.0201 - decoder_loss: 0.0567 - capsnet_acc: 0.9859 - val_loss: 0.0362 - val_capsnet_loss: 0.0162 - val_decoder_loss: 0.0510 - val_capsnet_acc: 0.9880

To make comparison easy, plot the validation accuracies of the three experiments:

plt.figure(figsize=(10, 8))
plt.plot(trained_model.history["val_acc"], "r", trained_model2.history["val_acc"], "b", trained_model3.history["val_capsnet_acc"], "g")
plt.legend(("MLP", "CNN", "CapsNet"),
 loc="lower right", fontsize="large")
plt.title("Validation Accuracies")
plt.show()


The results show that the capsule network is more accurate than both the CNN and the MLP.

Summary

This article gave a brief, non-technical overview of capsule networks, analyzed two of their important properties, and then benchmarked a multilayer perceptron, a convolutional neural network, and a capsule network on the MNIST handwritten-digit dataset.

About the author

Faizan Shaikh, data science practitioner and deep learning beginner.

Original article: "Essentials of Deep Learning: Getting to know CapsuleNets (with Python codes)" by Faizan Shaikh. See the original post for full details.
