Getting Started with Auto-Keras
Background
Auto-Keras is an open-source alternative to Google's AutoML. It automatically searches for a well-performing network architecture, bringing automation to machine learning and deep learning.
The shared goal of Auto-Keras and AutoML is to lower the barrier to entry for machine learning and deep learning by using automated neural architecture search (NAS) algorithms. They let non-experts train their own models with minimal deep learning domain knowledge or data preparation. A programmer with only basic machine learning experience can apply these algorithms and achieve near state-of-the-art performance with very little effort.
How It Works
Both Google's AutoML and Auto-Keras employ an algorithm called neural architecture search (NAS). Given your input dataset, a NAS algorithm automatically searches for the best architecture and corresponding hyperparameters. In essence, NAS replaces the deep learning engineer/practitioner with a set of algorithms that tune the model automatically.
In the context of computer vision and image recognition, a neural architecture search algorithm will:
1. Accept an input training dataset;
2. Optimize and find architectural building blocks called "cells"; these cells are learned automatically and may end up resembling Inception, residual, or activation micro-architectures;
3. Keep training and searching the "NAS search space" for ever better cells.
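The search loop above can be sketched as a toy random search over a tiny "cell" space. Everything below — the search space, the scoring function, the names — is made up for illustration; real NAS algorithms, including the one inside Auto-Keras, are far more sophisticated than random sampling:

```python
import random

# Hypothetical, tiny search space of "cell" choices (illustrative only).
SEARCH_SPACE = {
    "cell_type": ["vanilla", "resnet", "xception"],
    "num_layers": [1, 2, 3],
    "dropout": [0.0, 0.25, 0.5],
}

def sample_architecture(rng):
    """Draw one candidate architecture from the search space."""
    return {name: rng.choice(choices) for name, choices in SEARCH_SPACE.items()}

def evaluate(arch):
    """Stand-in for 'train the candidate and return validation accuracy'.
    A real NAS run would train a model here; we just score deterministically."""
    score = 0.5
    score += {"vanilla": 0.0, "resnet": 0.1, "xception": 0.15}[arch["cell_type"]]
    score += 0.05 * arch["num_layers"]
    score -= 0.1 * arch["dropout"]
    return score

def random_search(max_trials=10, seed=0):
    """Keep the best-scoring candidate seen over max_trials samples."""
    rng = random.Random(seed)
    best_arch, best_score = None, float("-inf")
    for _ in range(max_trials):
        arch = sample_architecture(rng)
        score = evaluate(arch)
        if score > best_score:
            best_arch, best_score = arch, score
    return best_arch, best_score

best, score = random_search(max_trials=20)
print(best, round(score, 3))
```

The key idea carried over to real NAS: candidates are sampled from a structured space of building blocks, each is evaluated, and the search keeps (or, in smarter algorithms, learns from) the best one found so far.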
If the user of an AutoML system is an experienced deep learning practitioner, they may instead decide to:
1. Run NAS on a very small subset of the training data;
2. Find an optimal set of architectural building blocks/cells;
3. Take those cells and manually define a deeper version of the network found during the architecture search;
4. Train that network on the full training set, applying their own expertise and best practices.
This approach is a hybrid between a fully automated machine learning solution and one that requires an expert deep learning practitioner, and it often outperforms models trained end-to-end by NAS alone.
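Step 1 of that hybrid workflow — carving out a small subset for the search phase — can be sketched with plain NumPy. The 10% fraction, seed, and function name below are arbitrary choices for illustration:

```python
import numpy as np

def sample_subset(x, y, fraction=0.1, seed=42):
    """Randomly sample a fraction of (x, y) to run NAS on.
    The full set is kept for the final, manually tuned training run."""
    rng = np.random.default_rng(seed)
    n = int(len(x) * fraction)
    idx = rng.choice(len(x), size=n, replace=False)
    return x[idx], y[idx]

# Example with dummy data shaped like MNIST:
x = np.zeros((60000, 28, 28), dtype=np.uint8)
y = np.zeros(60000, dtype=np.uint8)
x_small, y_small = sample_subset(x, y, fraction=0.1)
print(x_small.shape)  # (6000, 28, 28)
```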
Hands-On
Installation
## AutoKeras only supports a Python 3.6 environment, so create one first
conda create -n python36 python=3.6
## Install/upgrade pip
python -m pip install --upgrade pip -i http://mirrors.aliyun.com/pypi/simple/ --trusted-host mirrors.aliyun.com
## Install keras-tuner and autokeras
pip3 install git+https://github.com/keras-team/[email protected]
pip3 install autokeras -i http://mirrors.aliyun.com/pypi/simple/ --trusted-host mirrors.aliyun.com
Usage
The example below comes from the official Image Classification tutorial.
import numpy as np
import tensorflow as tf
from tensorflow.keras.datasets import mnist
from tensorflow.python.keras.utils.data_utils import Sequence
import autokeras as ak
## Prepare the data
(x_train, y_train), (x_test, y_test) = mnist.load_data()
print(x_train.shape) # (60000, 28, 28)
print(y_train.shape) # (60000,)
print(y_train[:3]) # array([5, 0, 4], dtype=uint8)
## Build the classifier
# Initialize the image classifier.
clf = ak.ImageClassifier(overwrite=True, max_trials=1)
# Feed the image classifier with training data.
clf.fit(x_train, y_train, epochs=10)
# Predict with the best model.
predicted_y = clf.predict(x_test)
print(predicted_y)
## Evaluate the model
# Evaluate the best model with testing data.
print(clf.evaluate(x_test, y_test))
## Validation data: retrain, explicitly reserving part of the training set
clf.fit(
x_train,
y_train,
# Split the training data and use the last 15% as validation data.
validation_split=0.15,
epochs=10,
)
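Once the search finishes, the best pipeline can be exported as a regular tf.keras model and saved for later reuse, following the AutoKeras documentation. The file names below are arbitrary; this assumes `clf` has already been fit as shown above:

```python
# Export the best pipeline found during the search as a tf.keras model.
model = clf.export_model()
model.summary()

# Save it. AutoKeras models may contain custom layers, so prefer the
# TensorFlow SavedModel format and fall back to HDF5 if needed.
try:
    model.save("model_autokeras", save_format="tf")
except Exception:
    model.save("model_autokeras.h5")

# Reload later, registering AutoKeras' custom objects.
from tensorflow.keras.models import load_model
loaded_model = load_model("model_autokeras", custom_objects=ak.CUSTOM_OBJECTS)
```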
Results
Search: Running Trial #1
Hyperparameter |Value |Best Value So Far
image_block_1/b...|vanilla |?
image_block_1/n...|True |?
image_block_1/a...|False |?
image_block_1/c...|3 |?
image_block_1/c...|1 |?
image_block_1/c...|2 |?
image_block_1/c...|True |?
image_block_1/c...|False |?
image_block_1/c...|0.25 |?
image_block_1/c...|32 |?
image_block_1/c...|64 |?
classification_...|flatten |?
classification_...|0.5 |?
optimizer |adam |?
learning_rate |0.001 |?
Epoch 1/10
1500/1500 [==============================] - 20s 13ms/step - loss: 0.1750 - accuracy: 0.9463 - val_loss: 0.0636 - val_accuracy: 0.9814
Epoch 2/10
1500/1500 [==============================] - 20s 13ms/step - loss: 0.0788 - accuracy: 0.9754 - val_loss: 0.0509 - val_accuracy: 0.9850
Epoch 3/10
1500/1500 [==============================] - 20s 13ms/step - loss: 0.0611 - accuracy: 0.9809 - val_loss: 0.0538 - val_accuracy: 0.9849
Epoch 4/10
1500/1500 [==============================] - 20s 13ms/step - loss: 0.0548 - accuracy: 0.9824 - val_loss: 0.0489 - val_accuracy: 0.9863
Epoch 5/10
1500/1500 [==============================] - 20s 13ms/step - loss: 0.0482 - accuracy: 0.9847 - val_loss: 0.0500 - val_accuracy: 0.9862
Epoch 6/10
1500/1500 [==============================] - 20s 13ms/step - loss: 0.0405 - accuracy: 0.9868 - val_loss: 0.0438 - val_accuracy: 0.9882
Epoch 7/10
1500/1500 [==============================] - 20s 13ms/step - loss: 0.0395 - accuracy: 0.9872 - val_loss: 0.0405 - val_accuracy: 0.9881
Epoch 8/10
1500/1500 [==============================] - 20s 13ms/step - loss: 0.0346 - accuracy: 0.9888 - val_loss: 0.0411 - val_accuracy: 0.9887
Epoch 9/10
1500/1500 [==============================] - 20s 13ms/step - loss: 0.0318 - accuracy: 0.9896 - val_loss: 0.0419 - val_accuracy: 0.9886
Epoch 10/10
1500/1500 [==============================] - 20s 13ms/step - loss: 0.0298 - accuracy: 0.9903 - val_loss: 0.0425 - val_accuracy: 0.9902
Trial 1 Complete [00h 03m 21s]
val_loss: 0.040533654391765594
Best val_loss So Far: 0.040533654391765594
Total elapsed time: 00h 03m 21s