Building a seq2seq-based Addition Model (with Gluon)

The reason I started with Gluon in the first place is that building a seq2seq-style network in Keras is very difficult, or at least ambiguous. In fact, I was building a simple English-Korean machine translator in Keras, hit a wall, and gave up. The cause, as the diagram below shows, is that the network flow differs between training and prediction, and in that case the prediction code becomes hard to write in Keras. It is not so much that it is hard as that you are very likely to end up with ambiguous code, so that even two days after writing it I could hardly tell what I had done. This is probably why people doing NLP in particular seem to prefer PyTorch. I, however, built it on Gluon, which came out this October, because no Gluon version of this example exists yet.

This example is already very well written up among the Keras examples. Briefly, the encoder's output vector at the last time step is repeated as many times as the decoder length to form a context vector that is fed in as the decoder input. The drawback is that implementing anything other than that scheme gets complicated. The Keras author therefore posted the original seq2seq formulation, which is reported to perform well in machine translation, on his own blog. But if you look at the inference-loop code there, the limits of Keras show plainly; I too started building an English-Korean translator on top of that code and eventually gave up as the debugging and inference code grew complex and difficult.
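For reference, here is a minimal sketch of that repeated-context scheme (illustrative sizes and standard Keras layers; a sketch of the Keras example's approach, not the code this post builds):

from keras.models import Sequential
from keras.layers import LSTM, RepeatVector, TimeDistributed, Dense

in_seq_len, out_seq_len, vocab_size = 9, 6, 14

model = Sequential()
model.add(LSTM(128, input_shape=(in_seq_len, vocab_size)))  # encoder: keep only the last output
model.add(RepeatVector(out_seq_len))                        # tile it into a context sequence
model.add(LSTM(128, return_sequences=True))                 # decoder
model.add(TimeDistributed(Dense(vocab_size, activation='softmax')))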

To be honest, even working in Gluon I went through a great deal of trial and error before this simple problem would predict properly (three nights until 3 a.m. after work...). As the code below shows, when the final fully connected layer takes a 3-D input, you need to know how to write it so the softmax you want is computed over the last dimension. (I actually knew the TensorFlow and Keras code for this; in Gluon a single option on the Dense layer does the job, whereas Keras solves it with the TimeDistributed layer. With so many examples around, I had been grabbing code without thinking, which I now regret.) Then there is the question of how to pass the encoder states to the decoder, and of how to find the cause when the initial code's loss climbs to 80 (in the end the cause turned out to be a bug in the data generation). You also reshape and concat here and there along the way, and if you only chase getting the dimensions to match the next layer, you often feed in incorrectly transformed data and training simply fails. It clearly shows how little practice I had thinking about these things. These are issues you barely worry about when using Keras, but anyone working in this area must get used to them; only those who have coded while holding 3-D and 4-D tensors in their head know how hard that is.
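As a minimal sketch of that Dense option (illustrative sizes; this is the behavior the model below relies on): with flatten=False, Gluon's nn.Dense applies the fully connected layer to the last axis only, the Gluon counterpart of wrapping a Dense in Keras' TimeDistributed.

import mxnet as mx
from mxnet.gluon import nn

dense = nn.Dense(14, flatten=False)
dense.initialize()
x = mx.nd.ones((256, 6, 300))    # (batch, out_seq_len, n_hidden)
print(dense(x).shape)            # (256, 6, 14): softmax can now run over the last axis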

Enough complaining for now... let's define the seq2seq problem.

First, the training side. Three kinds of data are used during training: an equation text such as "S1+5E"; the ground-truth answer "S6E" that goes into the decoder as input; and "6E", the Y against which the loss is actually computed. Here "S" and "E" mark the start and the end of the text.
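Concretely, for the sample "1+5" the three padded strings look like this (this follows directly from the data-generation code below, with input_digits=9 and output_digits=6):

question = 'S1+5E    '   # encoder input, padded to 9 characters
answer   = 'S6E   '      # decoder input (teacher forcing), padded to 6
answer_y = '6E    '      # target Y: the answer shifted left by one time step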

The decoder is trained so that, given the text at t-1 as input, it predicts the text at time t, while the encoder hands the sequential information of the input equation over to the decoder through two states (an LSTM is used here).
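A minimal sketch of that state handoff (illustrative sizes, assuming MXNet's LSTMCell API, where unroll() returns the per-step outputs together with the final [h, c] states):

import mxnet as mx
from mxnet.gluon import rnn

encoder = rnn.LSTMCell(hidden_size=300)
decoder = rnn.LSTMCell(hidden_size=300)
encoder.initialize()
decoder.initialize()

x = mx.nd.ones((256, 9, 14))                # (batch, in_seq_len, vocab)
_, (h, c) = encoder.unroll(inputs=x, length=9, merge_outputs=True)
y_t = mx.nd.ones((256, 14))                 # decoder input at one time step
out, (h, c) = decoder(y_t, [h, c])          # the encoder's summary rides along in [h, c]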

The dense layer that aggregates the final predictions is omitted from the figure; please refer to the code for that part.

At prediction time only the text entering the encoder is given, and the decoder's output at time t-1 is fed back in as its input at time t; this is where the network structure differs from training time. It is exactly this part that makes a Keras implementation 200% harder.

Now let's walk through the code, with explanations in the comments.

In [1]:
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.utils import shuffle
import mxnet as mx
from mxnet import gluon
from mxnet import nd as F
from mxnet.gluon import nn, rnn
import mxnet.autograd as autograd

Below is the code that generates the training data. There are plenty of data-generation examples on the internet; this part mainly follows the code here.

In [2]:
def n(digits=3):
    number = ''
    for i in range(np.random.randint(1, digits + 1)):
        number += np.random.choice(list('0123456789'))
    return int(number)


def padding(chars, maxlen):
    return chars + ' ' * (maxlen - len(chars))


N = 50000
N_train = int(N * 0.9)
N_validation = N - N_train

digits = 3  # maximum number of digits
input_digits = digits * 2 + 3  # e.g. 'S123+456E' -> 9 characters
output_digits = digits + 3  # 500+500 = 1000 and above takes 4 digits

added = set()
questions = []
answers = []
answers_y = []

while len(questions) < N:
    a, b = n(), n()  # generate two random numbers

    pair = tuple(sorted((a, b)))
    if pair in added:
        continue

    question = 'S{}+{}E'.format(a, b)
    question = padding(question, input_digits)
    answer = 'S' + str(a + b) + 'E'
    answer = padding(answer, output_digits)
    answer_y = str(a + b) + 'E'
    answer_y = padding(answer_y, output_digits)
    

    added.add(pair)
    questions.append(question)
    answers.append(answer)
    answers_y.append(answer_y)

chars = '0123456789+SE '
char_indices = dict((c, i) for i, c in enumerate(chars))
indices_char = dict((i, c) for i, c in enumerate(chars))

X = np.zeros((len(questions), input_digits, len(chars)), dtype=np.int32)
Y = np.zeros((len(questions), digits + 3, len(chars)), dtype=np.int32)
Z = np.zeros((len(questions), digits + 3, len(chars)), dtype=np.int32)

for i in range(N):
    for t, char in enumerate(questions[i]):
        X[i, t, char_indices[char]] = 1
    for t, char in enumerate(answers[i]):
        Y[i, t, char_indices[char]] = 1
    for t, char in enumerate(answers_y[i]):
        Z[i, t, char_indices[char]] = 1
    
X_train, X_validation, Y_train, Y_validation, Z_train, Z_validation = \
    train_test_split(X, Y, Z, train_size=N_train)
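A quick sanity check on the shapes can save a lot of debugging later (an illustrative aside; the values follow from the constants above):

print(X_train.shape, Y_train.shape, Z_train.shape)
# (45000, 9, 14) (45000, 6, 14) (45000, 6, 14) -- one 1 per time step, space padding included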

A function that generates example test data.

In [4]:
def gen_n_test(N):
    q = []
    y = []
    for i in range(N):
        a, b = n(), n() 
        question = '{}+{}'.format(a, b)
        answer_y = str(a + b)
        q.append(question)
        y.append(answer_y)
    return(q,y)

To mark hits and misses with color...

In [6]:
class colors:
    ok = '\033[92m'
    fail = '\033[91m'
    close = '\033[0m'
In [52]:
class calculator(gluon.Block):
    
    def __init__(self, n_hidden, in_seq_len, out_seq_len, vocab_size, **kwargs):
        super(calculator, self).__init__(**kwargs)
        # input sequence length
        self.in_seq_len = in_seq_len
        # output sequence length
        self.out_seq_len = out_seq_len
        # number of LSTM hidden units
        self.n_hidden = n_hidden
        # number of unique characters (vocabulary size)
        self.vocab_size = vocab_size
        
        with self.name_scope():
            self.encoder = rnn.LSTMCell(hidden_size=n_hidden)
            self.decoder = rnn.LSTMCell(hidden_size=n_hidden)
            self.batchnorm = nn.BatchNorm(axis=2)
            # with flatten=False, the fully connected layer is applied to the last axis
            self.dense = nn.Dense(self.vocab_size, flatten=False)
            
    def forward(self, inputs, outputs):
        """
        training code
        """
        # encoder LSTM
        enout, (next_h, next_c) = self.encoder.unroll(inputs=inputs, length=self.in_seq_len, merge_outputs=True)
        
        # decoder LSTM
        for i in range(self.out_seq_len):
            # unroll the LSTM cell for out_seq_len steps, accumulating the outputs
            deout, (next_h, next_c) = self.decoder(outputs[:, i, :], [next_h, next_c])
            if i == 0:
                deouts = deout
            else:
                deouts = F.concat(deouts, deout, dim=1)
        # reshape 2-D -> 3-D
        deouts = F.reshape(deouts, (-1, self.out_seq_len, self.n_hidden))
        deouts = self.batchnorm(deouts)
        deouts_fc = self.dense(deouts)
        return(deouts_fc)
    
    def calculation(self, input_str, char_indices, indices_char, input_digits=9, lchars=14, ctx=mx.gpu(0)):
        """
        inference code
        """
        # add the S/E markers around the input
        input_str = 'S' + input_str + 'E'
        # one-hot encode the string
        X = F.zeros((1, input_digits, lchars), ctx=ctx)
        for t, char in enumerate(input_str):
            X[0, t, char_indices[char]] = 1
        # one-hot encode 'S' as the decoder's initial input
        Y_init = F.zeros((1, lchars), ctx=ctx)
        Y_init[0, char_indices['S']] = 1
        # run the encoder
        enout, (next_h, next_c) = self.encoder.unroll(inputs=X, length=self.in_seq_len, merge_outputs=True)
        deout = Y_init
        # iterate over the output sequence length
        for i in range(self.out_seq_len):
            deout, (next_h, next_c) = self.decoder(deout, [next_h, next_c])
            # expand/restore a dimension so batchnorm (axis=2) can be applied
            deout = F.expand_dims(deout, axis=1)
            deout = self.batchnorm(deout)
            deout = deout[:, 0, :]
            # scores for the next character in the sequence
            deout_sm = self.dense(deout)
            deout = F.one_hot(F.argmax(F.softmax(deout_sm, axis=1), axis=1), depth=self.vocab_size)
            if i == 0:
                ret_seq = indices_char[F.argmax(deout_sm, axis=1).asnumpy()[0].astype('int')]
            else:
                ret_seq += indices_char[F.argmax(deout_sm, axis=1).asnumpy()[0].astype('int')]
            if ret_seq[-1] == ' ' or ret_seq[-1] == 'E':
                break
        # strip the 'E' marker and space padding, then return
        return(ret_seq.strip('E').strip())

Creating the data iterators.

In [53]:
tr_set = gluon.data.ArrayDataset(X_train, Y_train, Z_train)
tr_data_iterator = gluon.data.DataLoader(tr_set, batch_size=256, shuffle=True)

te_set = gluon.data.ArrayDataset(X_validation, Y_validation, Z_validation)
te_data_iterator = gluon.data.DataLoader(te_set, batch_size=256, shuffle=True)
In [61]:
ctx = mx.gpu()

# create the model instance and define the trainer and loss
model = calculator(300, 9, 6, 14)
model.collect_params().initialize(mx.init.Xavier(), ctx=ctx)

trainer = gluon.Trainer(model.collect_params(), 'rmsprop') 
loss = gluon.loss.SoftmaxCrossEntropyLoss(axis=2, sparse_label=False)
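As a quick shape check (an illustrative aside, reusing the ctx and loss defined above): with axis=2 and sparse_label=False, the loss expects class scores and one-hot labels of the same (batch, out_seq_len, vocab) shape and returns one scalar per sample.

pred = mx.nd.random.uniform(shape=(256, 6, 14), ctx=ctx)
label = mx.nd.one_hot(mx.nd.zeros((256, 6), ctx=ctx), depth=14)
print(loss(pred, label).shape)   # (256,)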
In [62]:
print(model)
calculator(
  (encoder): LSTMCell(None -> 1200)
  (decoder): LSTMCell(None -> 1200)
  (batchnorm): BatchNorm(axis=2, eps=1e-05, momentum=0.9, fix_gamma=False, in_channels=None)
  (dense): Dense(None -> 14, linear)
)
In [63]:
def calculate_loss(model, data_iter, loss_obj, ctx=ctx):
    test_loss = []
    for i, (x_data, y_data, z_data) in enumerate(data_iter):
        x_data = x_data.as_in_context(ctx).astype('float32')
        y_data = y_data.as_in_context(ctx).astype('float32')
        z_data = z_data.as_in_context(ctx).astype('float32')
        with autograd.predict_mode():
            z_output = model(x_data, y_data)
            loss_te = loss_obj(z_output, z_data)
        curr_loss = mx.nd.mean(loss_te).asscalar()
        test_loss.append(curr_loss)
    return(np.mean(test_loss))
In [64]:
epochs = 201

### training code 
tot_test_loss = []
tot_train_loss = []
for e in range(epochs):
    train_loss = []
    for i, (x_data, y_data, z_data) in enumerate(tr_data_iterator):
        x_data = x_data.as_in_context(ctx).astype('float32')
        y_data = y_data.as_in_context(ctx).astype('float32')
        z_data = z_data.as_in_context(ctx).astype('float32')
        with autograd.record():
            z_output = model(x_data, y_data)
            loss_ = loss(z_output, z_data)
        loss_.backward()
        trainer.step(x_data.shape[0])
        curr_loss = mx.nd.mean(loss_).asscalar()
        train_loss.append(curr_loss)
    if e % 10 == 0:
        # every 10 epochs, print predictions on random test examples
        q, y = gen_n_test(10)
        for i in range(10):
            with autograd.predict_mode():
                p = model.calculation(q[i], char_indices, indices_char).strip()
                iscorr = 1 if p == y[i] else 0
                if iscorr == 1:
                    print(colors.ok + '☑' + colors.close, end=' ')
                else:
                    print(colors.fail + '☒' + colors.close, end=' ')
                print("{} = {}({}) 1/0 {}".format(q[i], p, y[i], str(iscorr) ))
    # calculate test loss
    test_loss = calculate_loss(model, te_data_iterator, loss_obj = loss, ctx=ctx) 

    print("Epoch %s. Train Loss: %s, Test Loss : %s" % (e, np.mean(train_loss), test_loss))    
    tot_test_loss.append(test_loss)
    tot_train_loss.append(np.mean(train_loss))
 93+276 = 1000(369) 1/0 0
 67+7 = 1000(74) 1/0 0
 98+765 = 1000(863) 1/0 0
 33+4 = 432(37) 1/0 0
 2+9 = 100(11) 1/0 0
 5+378 = 1000(383) 1/0 0
 598+2 = 1000(600) 1/0 0
 176+389 = 1326(565) 1/0 0
 5+24 = 550(29) 1/0 0
 2+62 = 228(64) 1/0 0
Epoch 0. Train Loss: 1.18849, Test Loss : 1.12554
Epoch 1. Train Loss: 1.12178, Test Loss : 1.10778
Epoch 2. Train Loss: 1.09253, Test Loss : 1.07927
Epoch 3. Train Loss: 1.0283, Test Loss : 0.991627
Epoch 4. Train Loss: 0.921772, Test Loss : 0.893707
Epoch 5. Train Loss: 0.831767, Test Loss : 0.873785
Epoch 6. Train Loss: 0.758187, Test Loss : 0.747362
Epoch 7. Train Loss: 0.687181, Test Loss : 0.694492
Epoch 8. Train Loss: 0.606952, Test Loss : 0.554216
Epoch 9. Train Loss: 0.497511, Test Loss : 0.481963
 831+966 = 1797(1797) 1/0 1
 682+49 = 731(731) 1/0 1
 98+445 = 545(543) 1/0 0
 9+310 = 319(319) 1/0 1
 31+9 = 119(40) 1/0 0
 17+452 = 460(469) 1/0 0
 1+757 = 758(758) 1/0 1
 1+806 = 806(807) 1/0 0
 242+504 = 753(746) 1/0 0
 6+6 = 122(12) 1/0 0
Epoch 10. Train Loss: 0.37133, Test Loss : 0.342722
Epoch 11. Train Loss: 0.275218, Test Loss : 0.230928
Epoch 12. Train Loss: 0.206047, Test Loss : 0.163056
Epoch 13. Train Loss: 0.155631, Test Loss : 0.125244
Epoch 14. Train Loss: 0.120561, Test Loss : 0.188475
Epoch 15. Train Loss: 0.0960923, Test Loss : 0.0812157
Epoch 16. Train Loss: 0.081216, Test Loss : 0.0795511
Epoch 17. Train Loss: 0.0671884, Test Loss : 0.138703
Epoch 18. Train Loss: 0.0583087, Test Loss : 0.0596575
Epoch 19. Train Loss: 0.0525172, Test Loss : 0.0361313
 0+4 = 5(4) 1/0 0
 225+68 = 294(293) 1/0 0
 3+408 = 411(411) 1/0 1
 76+1 = 78(77) 1/0 0
 4+959 = 963(963) 1/0 1
 31+136 = 167(167) 1/0 1
 5+586 = 591(591) 1/0 1
 58+58 = 115(116) 1/0 0
 35+69 = 104(104) 1/0 1
 90+0 = 99(90) 1/0 0
Epoch 20. Train Loss: 0.0461855, Test Loss : 0.0317076
Epoch 21. Train Loss: 0.0405695, Test Loss : 0.0274729
Epoch 22. Train Loss: 0.0351415, Test Loss : 0.0297918
Epoch 23. Train Loss: 0.0349448, Test Loss : 0.0228843
Epoch 24. Train Loss: 0.0290138, Test Loss : 0.0269108
Epoch 25. Train Loss: 0.030374, Test Loss : 0.0295408
Epoch 26. Train Loss: 0.0242152, Test Loss : 0.0199652
Epoch 27. Train Loss: 0.0238406, Test Loss : 0.0275519
Epoch 28. Train Loss: 0.0234274, Test Loss : 0.0450086
Epoch 29. Train Loss: 0.0208283, Test Loss : 0.0372322
 84+599 = 683(683) 1/0 1
 732+210 = 942(942) 1/0 1
 3+4 = 7(7) 1/0 1
 7+252 = 259(259) 1/0 1
 1+58 = 69(59) 1/0 0
 440+44 = 484(484) 1/0 1
 44+9 = 53(53) 1/0 1
 106+4 = 100(110) 1/0 0
 122+8 = 220(130) 1/0 0
 68+9 = 76(77) 1/0 0
Epoch 30. Train Loss: 0.0212195, Test Loss : 0.0145791
Epoch 31. Train Loss: 0.0169189, Test Loss : 0.0142156
Epoch 32. Train Loss: 0.0157236, Test Loss : 0.119613
Epoch 33. Train Loss: 0.0146479, Test Loss : 0.0452408
Epoch 34. Train Loss: 0.014292, Test Loss : 0.0181429
Epoch 35. Train Loss: 0.0164574, Test Loss : 0.0203357
Epoch 36. Train Loss: 0.0141047, Test Loss : 0.0113768
Epoch 37. Train Loss: 0.0127254, Test Loss : 0.0210824
Epoch 38. Train Loss: 0.0113013, Test Loss : 0.012517
Epoch 39. Train Loss: 0.0109072, Test Loss : 0.0163044
 314+41 = 355(355) 1/0 1
 4+70 = 74(74) 1/0 1
 406+36 = 442(442) 1/0 1
 712+44 = 756(756) 1/0 1
 43+2 = 45(45) 1/0 1
 5+340 = 345(345) 1/0 1
 456+4 = 450(460) 1/0 0
 339+1 = 340(340) 1/0 1
 1+3 = 4(4) 1/0 1
 891+628 = 1519(1519) 1/0 1
Epoch 40. Train Loss: 0.0107261, Test Loss : 0.0111639
Epoch 41. Train Loss: 0.0108476, Test Loss : 0.00825096
Epoch 42. Train Loss: 0.00988811, Test Loss : 0.0102361
Epoch 43. Train Loss: 0.00900932, Test Loss : 0.0676999
Epoch 44. Train Loss: 0.00982492, Test Loss : 0.0156328
Epoch 45. Train Loss: 0.0102352, Test Loss : 0.0114041
Epoch 46. Train Loss: 0.00736601, Test Loss : 0.0172779
Epoch 47. Train Loss: 0.00944774, Test Loss : 0.012209
Epoch 48. Train Loss: 0.00857665, Test Loss : 0.0202206
Epoch 49. Train Loss: 0.00679102, Test Loss : 0.0154347
 843+215 = 1058(1058) 1/0 1
 4+9 = 12(13) 1/0 0
 457+89 = 546(546) 1/0 1
 12+234 = 246(246) 1/0 1
 2+9 = 11(11) 1/0 1
 86+651 = 737(737) 1/0 1
 500+1 = 501(501) 1/0 1
 1+105 = 106(106) 1/0 1
 90+927 = 1017(1017) 1/0 1
 370+7 = 377(377) 1/0 1
Epoch 50. Train Loss: 0.00842717, Test Loss : 0.00807731
Epoch 51. Train Loss: 0.00688481, Test Loss : 0.00955924
Epoch 52. Train Loss: 0.0059042, Test Loss : 0.0110426
Epoch 53. Train Loss: 0.00722511, Test Loss : 0.0132007
Epoch 54. Train Loss: 0.00804315, Test Loss : 0.00719513
Epoch 55. Train Loss: 0.0077892, Test Loss : 0.00777933
Epoch 56. Train Loss: 0.00528711, Test Loss : 0.00848834
Epoch 57. Train Loss: 0.00471256, Test Loss : 0.007944
Epoch 58. Train Loss: 0.0057872, Test Loss : 0.00857201
Epoch 59. Train Loss: 0.00637925, Test Loss : 0.00862721
 97+0 = 97(97) 1/0 1
 9+65 = 74(74) 1/0 1
 55+30 = 85(85) 1/0 1
 209+9 = 218(218) 1/0 1
 644+617 = 1261(1261) 1/0 1
 59+57 = 116(116) 1/0 1
 53+13 = 66(66) 1/0 1
 4+976 = 980(980) 1/0 1
 9+48 = 57(57) 1/0 1
 558+802 = 1360(1360) 1/0 1
Epoch 60. Train Loss: 0.00502064, Test Loss : 0.00580972
Epoch 61. Train Loss: 0.00607394, Test Loss : 0.012593
Epoch 62. Train Loss: 0.00454351, Test Loss : 0.00767156
Epoch 63. Train Loss: 0.00435373, Test Loss : 0.014334
Epoch 64. Train Loss: 0.00417846, Test Loss : 0.0102815
Epoch 65. Train Loss: 0.00415637, Test Loss : 0.0133149
Epoch 66. Train Loss: 0.00555201, Test Loss : 0.0107528
Epoch 67. Train Loss: 0.00496103, Test Loss : 0.0110011
Epoch 68. Train Loss: 0.00416868, Test Loss : 0.0106568
Epoch 69. Train Loss: 0.00340409, Test Loss : 0.00849371
 910+287 = 1297(1197) 1/0 0
 8+93 = 101(101) 1/0 1
 1+4 = 5(5) 1/0 1
 407+0 = 407(407) 1/0 1
 86+62 = 148(148) 1/0 1
 8+5 = 13(13) 1/0 1
 5+7 = 12(12) 1/0 1
 12+90 = 102(102) 1/0 1
 51+8 = 69(59) 1/0 0
 486+2 = 488(488) 1/0 1
Epoch 70. Train Loss: 0.00397048, Test Loss : 0.00886579
Epoch 71. Train Loss: 0.00355671, Test Loss : 0.0147429
Epoch 72. Train Loss: 0.00346305, Test Loss : 0.00793579
Epoch 73. Train Loss: 0.00381371, Test Loss : 0.00584106
Epoch 74. Train Loss: 0.00278036, Test Loss : 0.0234801
Epoch 75. Train Loss: 0.00336997, Test Loss : 0.0181428
Epoch 76. Train Loss: 0.00283961, Test Loss : 0.00660451
Epoch 77. Train Loss: 0.00572103, Test Loss : 0.005219
Epoch 78. Train Loss: 0.00281588, Test Loss : 0.0140664
Epoch 79. Train Loss: 0.00340437, Test Loss : 0.0066866
 0+1 = 2(1) 1/0 0
 354+4 = 358(358) 1/0 1
 348+3 = 351(351) 1/0 1
 8+69 = 77(77) 1/0 1
 12+53 = 65(65) 1/0 1
 4+375 = 379(379) 1/0 1
 8+655 = 653(663) 1/0 0
 4+70 = 74(74) 1/0 1
 9+71 = 80(80) 1/0 1
 8+5 = 13(13) 1/0 1
Epoch 80. Train Loss: 0.00306732, Test Loss : 0.0088079
Epoch 81. Train Loss: 0.00261282, Test Loss : 0.0104031
Epoch 82. Train Loss: 0.00278683, Test Loss : 0.00651883
Epoch 83. Train Loss: 0.00248736, Test Loss : 0.0101634
Epoch 84. Train Loss: 0.00333953, Test Loss : 0.00823778
Epoch 85. Train Loss: 0.00282169, Test Loss : 0.0104558
Epoch 86. Train Loss: 0.00264165, Test Loss : 0.00954762
Epoch 87. Train Loss: 0.0028707, Test Loss : 0.0245796
Epoch 88. Train Loss: 0.00283279, Test Loss : 0.0141948
Epoch 89. Train Loss: 0.00198803, Test Loss : 0.00606726
 105+0 = 105(105) 1/0 1
 9+1 = 10(10) 1/0 1
 2+8 = 10(10) 1/0 1
 8+49 = 57(57) 1/0 1
 3+97 = 100(100) 1/0 1
 13+892 = 905(905) 1/0 1
 653+491 = 1144(1144) 1/0 1
 6+975 = 981(981) 1/0 1
 20+52 = 72(72) 1/0 1
 1+58 = 69(59) 1/0 0
Epoch 90. Train Loss: 0.00283533, Test Loss : 0.010756
Epoch 91. Train Loss: 0.00268884, Test Loss : 0.00581835
Epoch 92. Train Loss: 0.00323414, Test Loss : 0.0075214
Epoch 93. Train Loss: 0.00276694, Test Loss : 0.00572802
Epoch 94. Train Loss: 0.00156762, Test Loss : 0.00773455
Epoch 95. Train Loss: 0.00151749, Test Loss : 0.00421858
Epoch 96. Train Loss: 0.00132186, Test Loss : 0.00577639
Epoch 97. Train Loss: 0.00245539, Test Loss : 0.00644012
Epoch 98. Train Loss: 0.00250708, Test Loss : 0.00470248
Epoch 99. Train Loss: 0.00177719, Test Loss : 0.00682936
 16+0 = 16(16) 1/0 1
 30+332 = 362(362) 1/0 1
 0+1 = 1(1) 1/0 1
 13+3 = 16(16) 1/0 1
 1+62 = 63(63) 1/0 1
 874+4 = 878(878) 1/0 1
 923+2 = 925(925) 1/0 1
 188+6 = 194(194) 1/0 1
 7+743 = 750(750) 1/0 1
 568+735 = 1303(1303) 1/0 1
Epoch 100. Train Loss: 0.00195897, Test Loss : 0.0095622
Epoch 101. Train Loss: 0.00125458, Test Loss : 0.0054107
Epoch 102. Train Loss: 0.00154039, Test Loss : 0.00911017
Epoch 103. Train Loss: 0.00187413, Test Loss : 0.0180655
Epoch 104. Train Loss: 0.00219021, Test Loss : 0.0061727
Epoch 105. Train Loss: 0.00209904, Test Loss : 0.00908591
Epoch 106. Train Loss: 0.00274257, Test Loss : 0.00850028
Epoch 107. Train Loss: 0.0016415, Test Loss : 0.00635539
Epoch 108. Train Loss: 0.00280382, Test Loss : 0.0102037
Epoch 109. Train Loss: 0.00144518, Test Loss : 0.00437199
 1+334 = 335(335) 1/0 1
 91+9 = 190(100) 1/0 0
 98+4 = 103(102) 1/0 0
 903+6 = 909(909) 1/0 1
 92+55 = 147(147) 1/0 1
 3+114 = 117(117) 1/0 1
 6+8 = 14(14) 1/0 1
 77+7 = 84(84) 1/0 1
 6+7 = 13(13) 1/0 1
 590+7 = 587(597) 1/0 0
Epoch 110. Train Loss: 0.00118595, Test Loss : 0.00797355
Epoch 111. Train Loss: 0.00320637, Test Loss : 0.00650378
Epoch 112. Train Loss: 0.00130261, Test Loss : 0.00348112
Epoch 113. Train Loss: 0.00146112, Test Loss : 0.00779839
Epoch 114. Train Loss: 0.00168527, Test Loss : 0.00756565
Epoch 115. Train Loss: 0.000930022, Test Loss : 0.00544935
Epoch 116. Train Loss: 0.00145466, Test Loss : 0.0160591
Epoch 117. Train Loss: 0.00112997, Test Loss : 0.00521217
Epoch 118. Train Loss: 0.00127665, Test Loss : 0.00597252
Epoch 119. Train Loss: 0.00143328, Test Loss : 0.00937187
 92+81 = 173(173) 1/0 1
 6+40 = 46(46) 1/0 1
 73+31 = 104(104) 1/0 1
 948+2 = 950(950) 1/0 1
 343+3 = 346(346) 1/0 1
 305+98 = 403(403) 1/0 1
 5+505 = 510(510) 1/0 1
 7+8 = 15(15) 1/0 1
 31+406 = 437(437) 1/0 1
 38+3 = 51(41) 1/0 0
Epoch 120. Train Loss: 0.00141822, Test Loss : 0.0113712
Epoch 121. Train Loss: 0.00146098, Test Loss : 0.0196588
Epoch 122. Train Loss: 0.0016774, Test Loss : 0.00355691
Epoch 123. Train Loss: 0.000572061, Test Loss : 0.0038612
Epoch 124. Train Loss: 0.00025224, Test Loss : 0.0482143
Epoch 125. Train Loss: 0.000908123, Test Loss : 0.00792559
Epoch 126. Train Loss: 0.00132589, Test Loss : 0.00652228
Epoch 127. Train Loss: 0.00176812, Test Loss : 0.00711878
Epoch 128. Train Loss: 0.00153973, Test Loss : 0.00764472
Epoch 129. Train Loss: 0.0014569, Test Loss : 0.00672641
 46+590 = 636(636) 1/0 1
 379+3 = 382(382) 1/0 1
 4+1 = 6(5) 1/0 0
 8+632 = 640(640) 1/0 1
 333+91 = 424(424) 1/0 1
 838+851 = 1689(1689) 1/0 1
 6+7 = 13(13) 1/0 1
 82+5 = 86(87) 1/0 0
 954+51 = 1005(1005) 1/0 1
 7+9 = 16(16) 1/0 1
Epoch 130. Train Loss: 0.00228727, Test Loss : 0.00801615
Epoch 131. Train Loss: 0.00211349, Test Loss : 0.00721177
Epoch 132. Train Loss: 0.00226279, Test Loss : 0.00972691
Epoch 133. Train Loss: 0.00144402, Test Loss : 0.0070334
Epoch 134. Train Loss: 0.00134113, Test Loss : 0.0151508
Epoch 135. Train Loss: 0.00130168, Test Loss : 0.00492381
Epoch 136. Train Loss: 0.00134316, Test Loss : 0.0052638
Epoch 137. Train Loss: 0.00194314, Test Loss : 0.0225744
Epoch 138. Train Loss: 0.000960593, Test Loss : 0.00596013
Epoch 139. Train Loss: 0.000926426, Test Loss : 0.00496691
 24+705 = 729(729) 1/0 1
 88+7 = 95(95) 1/0 1
 0+2 = 23(2) 1/0 0
 583+89 = 672(672) 1/0 1
 35+368 = 403(403) 1/0 1
 386+89 = 475(475) 1/0 1
 4+4 = 8(8) 1/0 1
 82+4 = 86(86) 1/0 1
 559+166 = 725(725) 1/0 1
 682+824 = 1506(1506) 1/0 1
Epoch 140. Train Loss: 0.00123914, Test Loss : 0.00622027
Epoch 141. Train Loss: 0.00114216, Test Loss : 0.00766453
Epoch 142. Train Loss: 0.00150233, Test Loss : 0.0054836
Epoch 143. Train Loss: 0.00154832, Test Loss : 0.00927247
Epoch 144. Train Loss: 0.00128405, Test Loss : 0.00900654
Epoch 145. Train Loss: 0.00121568, Test Loss : 0.011445
Epoch 146. Train Loss: 0.00107785, Test Loss : 0.00929671
Epoch 147. Train Loss: 0.00102916, Test Loss : 0.00781922
Epoch 148. Train Loss: 0.000316407, Test Loss : 0.00399398
Epoch 149. Train Loss: 0.000612223, Test Loss : 0.00497149
 4+0 = 5(4) 1/0 0
 2+871 = 873(873) 1/0 1
 37+93 = 130(130) 1/0 1
 89+817 = 906(906) 1/0 1
 727+984 = 1711(1711) 1/0 1
 57+0 = 57(57) 1/0 1
 243+8 = 251(251) 1/0 1
 1+818 = 819(819) 1/0 1
 26+7 = 33(33) 1/0 1
 824+50 = 874(874) 1/0 1
Epoch 150. Train Loss: 0.000186055, Test Loss : 0.00501143
Epoch 151. Train Loss: 4.04054e-05, Test Loss : 0.00401301
Epoch 152. Train Loss: 9.7102e-06, Test Loss : 0.00419563
Epoch 153. Train Loss: 7.57328e-06, Test Loss : 0.00374868
Epoch 154. Train Loss: 6.58183e-06, Test Loss : 0.00372357
Epoch 155. Train Loss: 5.98684e-06, Test Loss : 0.00416506
Epoch 156. Train Loss: 5.53783e-06, Test Loss : 0.00376316
Epoch 157. Train Loss: 5.23978e-06, Test Loss : 0.00417315
Epoch 158. Train Loss: 4.92797e-06, Test Loss : 0.0037615
Epoch 159. Train Loss: 4.74416e-06, Test Loss : 0.00407676
 83+137 = 220(220) 1/0 1
 813+51 = 864(864) 1/0 1
 1+2 = 33(3) 1/0 0
 471+6 = 477(477) 1/0 1
 12+6 = 18(18) 1/0 1
 8+2 = 10(10) 1/0 1
 64+1 = 65(65) 1/0 1
 731+700 = 1431(1431) 1/0 1
 344+69 = 413(413) 1/0 1
 3+7 = 10(10) 1/0 1
Epoch 160. Train Loss: 4.45369e-06, Test Loss : 0.00373083
Epoch 161. Train Loss: 4.31551e-06, Test Loss : 0.00375279
Epoch 162. Train Loss: 4.16575e-06, Test Loss : 0.00369303
Epoch 163. Train Loss: 4.1166e-06, Test Loss : 0.00398206
Epoch 164. Train Loss: 3.81897e-06, Test Loss : 0.00369495
Epoch 165. Train Loss: 3.82295e-06, Test Loss : 0.0036701
Epoch 166. Train Loss: 3.68564e-06, Test Loss : 0.00370591
Epoch 167. Train Loss: 3.57566e-06, Test Loss : 0.00379358
Epoch 168. Train Loss: 3.43527e-06, Test Loss : 0.0037078
Epoch 169. Train Loss: 3.40323e-06, Test Loss : 0.00382377
 1+89 = 90(90) 1/0 1
 5+28 = 33(33) 1/0 1
 4+1 = 6(5) 1/0 0
 613+89 = 702(702) 1/0 1
 801+874 = 1675(1675) 1/0 1
 13+597 = 610(610) 1/0 1
 315+4 = 319(319) 1/0 1
 21+31 = 52(52) 1/0 1
 603+0 = 603(603) 1/0 1
 9+8 = 17(17) 1/0 1
Epoch 170. Train Loss: 3.37041e-06, Test Loss : 0.00369082
Epoch 171. Train Loss: 3.15295e-06, Test Loss : 0.00370142
Epoch 172. Train Loss: 3.28802e-06, Test Loss : 0.00372233
Epoch 173. Train Loss: 3.15507e-06, Test Loss : 0.00370121
Epoch 174. Train Loss: 3.02016e-06, Test Loss : 0.0039215
Epoch 175. Train Loss: 2.98917e-06, Test Loss : 0.00372771
Epoch 176. Train Loss: 2.91798e-06, Test Loss : 0.0036647
Epoch 177. Train Loss: 2.87783e-06, Test Loss : 0.00387829
Epoch 178. Train Loss: 2.82765e-06, Test Loss : 0.0036976
Epoch 179. Train Loss: 2.80179e-06, Test Loss : 0.00367445
 49+80 = 129(129) 1/0 1
 17+87 = 104(104) 1/0 1
 3+5 = 8(8) 1/0 1
 16+176 = 192(192) 1/0 1
 8+30 = 38(38) 1/0 1
 426+9 = 435(435) 1/0 1
 22+157 = 179(179) 1/0 1
 24+1 = 25(25) 1/0 1
 2+1 = 3(3) 1/0 1
 47+6 = 53(53) 1/0 1
Epoch 180. Train Loss: 2.76078e-06, Test Loss : 0.0037565
Epoch 181. Train Loss: 2.66419e-06, Test Loss : 0.00397936
Epoch 182. Train Loss: 2.6687e-06, Test Loss : 0.00375932
Epoch 183. Train Loss: 2.52133e-06, Test Loss : 0.00363644
Epoch 184. Train Loss: 2.67349e-06, Test Loss : 0.00365268
Epoch 185. Train Loss: 2.5557e-06, Test Loss : 0.00386944
Epoch 186. Train Loss: 2.49811e-06, Test Loss : 0.00370046
Epoch 187. Train Loss: 2.49869e-06, Test Loss : 0.00361573
Epoch 188. Train Loss: 2.47827e-06, Test Loss : 0.00367216
Epoch 189. Train Loss: 2.40782e-06, Test Loss : 0.00364053
 0+120 = 120(120) 1/0 1
 45+489 = 534(534) 1/0 1
 8+762 = 760(770) 1/0 0
 602+607 = 1209(1209) 1/0 1
 2+3 = 5(5) 1/0 1
 5+3 = 8(8) 1/0 1
 998+519 = 1517(1517) 1/0 1
 403+794 = 1197(1197) 1/0 1
 13+7 = 20(20) 1/0 1
 5+56 = 61(61) 1/0 1
Epoch 190. Train Loss: 2.4565e-06, Test Loss : 0.00369676
Epoch 191. Train Loss: 2.39905e-06, Test Loss : 0.00382188
Epoch 192. Train Loss: 2.30808e-06, Test Loss : 0.00378665
Epoch 193. Train Loss: 2.27133e-06, Test Loss : 0.00365563
Epoch 194. Train Loss: 2.22566e-06, Test Loss : 0.00364971
Epoch 195. Train Loss: 2.20956e-06, Test Loss : 0.00361473
Epoch 196. Train Loss: 2.24031e-06, Test Loss : 0.00367714
Epoch 197. Train Loss: 2.20251e-06, Test Loss : 0.00365003
Epoch 198. Train Loss: 2.07808e-06, Test Loss : 0.00390731
Epoch 199. Train Loss: 2.13289e-06, Test Loss : 0.00402877
 8+6 = 14(14) 1/0 1
 3+31 = 34(34) 1/0 1
 1+282 = 283(283) 1/0 1
 4+7 = 11(11) 1/0 1
 453+628 = 1081(1081) 1/0 1
 6+56 = 62(62) 1/0 1
 16+70 = 86(86) 1/0 1
 596+903 = 1499(1499) 1/0 1
 87+956 = 1043(1043) 1/0 1
 0+4 = 4(4) 1/0 1
Epoch 200. Train Loss: 2.07594e-06, Test Loss : 0.00362931

Every 10 epochs the model was asked to predict the answers to random examples; as training approaches epoch 200 you can see it getting most of the random test set right, which confirms that learning is progressing properly.
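If you want a single number rather than spot checks, a hedged sketch like the following (reusing gen_n_test and the trained model from above) reports exact-match accuracy on a fresh batch of random problems:

q, y = gen_n_test(1000)
with autograd.predict_mode():
    hits = sum(model.calculation(qi, char_indices, indices_char).strip() == yi
               for qi, yi in zip(q, y))
print('exact-match accuracy: {:.1%}'.format(hits / float(len(q))))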

Who would ever use this model to do addition? This code is not really meant for practical use; the point is to go on and build an actual machine translation model with this seq2seq code. Now that the toy model has been built successfully, the next post will probably be an initial machine translation model, and there I expect to be able to implement techniques such as Attention as well.

CC BY-NC 4.0 Building a seq2seq-based Addition Model (with Gluon) by from __future__ import dream is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.