
A Study of Extending the OpenCV dnn Module (1) -- Style Transfer



1. OpenCV's sample model files

 
We use Torch models here (OpenCV is compatible with models from many frameworks and acts as a glue layer).
Download links:
fast_neural_style_eccv16_starry_night.t7
fast_neural_style_instance_norm_feathers.t7
http://cs.stanford.edu/people/jcjohns/fast-neural-style/models/instance_norm/feathers.t7

2. Sample code
 
The code flow is quite simple: convert the image to a blob, run forward, post-process the output, and display it. (This can serve as a classic introduction to using the OpenCV dnn module, and it is very helpful for understanding how the pipeline is configured and what the parameters mean.)
 
The C++ code is as follows:
 
// This script is used to run style transfer models from
// https://github.com/jcjohnson/fast-neural-style using OpenCV
 
#include <opencv2/dnn.hpp>
#include <opencv2/imgproc.hpp>
#include <opencv2/highgui.hpp>
#include <iostream>
 
using namespace cv;
using namespace cv::dnn;
using namespace std;
 
 
int main(int argc, char **argv)
{
    string modelBin = "../../data/testdata/dnn/fast_neural_style_instance_norm_feathers.t7";
    string imageFile = "../../data/image/chicago.jpg";
 
    float scale = 1.0;
    cv::Scalar mean { 103.939, 116.779, 123.68 };
    bool swapRB = false;
    bool crop = false;
    bool useOpenCL = false;
 
    Mat img = imread(imageFile);
    if (img.empty()) {
        cout << "Can‘t read image from file: " << imageFile << endl;
        return 2;
    }
 
    // Load model
    Net net = dnn::readNetFromTorch(modelBin);
    if (useOpenCL)
        net.setPreferableTarget(DNN_TARGET_OPENCL);
 
    // Create a 4D blob from a frame.
    Mat inputBlob = blobFromImage(img, scale, img.size(), mean, swapRB, crop);
 
    // forward the network
    net.setInput(inputBlob);
    Mat output = net.forward();
 
    // process output
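    // The output blob has shape 1 x 3 x H x W (CV_32F). The network operates in
    // the same mean-subtracted space as the input, so the per-channel mean is
    // added back before converting the blob to a displayable image.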
    Mat(output.size[2], output.size[3], CV_32F, output.ptr<float>(0, 0)) += 103.939;
    Mat(output.size[2], output.size[3], CV_32F, output.ptr<float>(0, 1)) += 116.779;
    Mat(output.size[2], output.size[3], CV_32F, output.ptr<float>(0, 2)) += 123.68;
 
    std::vector<cv::Mat> ress;
    imagesFromBlob(output, ress);
 
    // show res
    Mat res;
    ress[0].convertTo(res, CV_8UC3);
    imshow("reslut", res);
 
    imshow("origin", img);
 
    waitKey();
    return 0;
}
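As a small extension of the flow above, it is often handy to know how long the forward pass takes. The following is a minimal sketch of my own (not part of the original sample): it assumes the same placeholder model and image paths as above and uses only standard OpenCV APIs (cv::TickMeter and cv::dnn::Net::getPerfProfile).

#include <opencv2/dnn.hpp>
#include <opencv2/imgcodecs.hpp>
#include <opencv2/core.hpp>
#include <iostream>
#include <vector>

using namespace cv;
using namespace cv::dnn;
using namespace std;

int main()
{
    // Placeholder paths -- adjust to the layout used in the sample above.
    Net net = readNetFromTorch("fast_neural_style_instance_norm_feathers.t7");
    Mat img = imread("chicago.jpg");
    if (img.empty()) return 1;

    Mat blob = blobFromImage(img, 1.0, img.size(),
                             Scalar(103.939, 116.779, 123.68), false, false);
    net.setInput(blob);

    TickMeter tm;
    tm.start();
    Mat out = net.forward();          // one full stylization pass
    tm.stop();
    cout << "forward(): " << tm.getTimeMilli() << " ms" << endl;

    // Per-layer profile; getPerfProfile() reports time in ticks.
    vector<double> layerTimes;
    double msPerTick = 1000.0 / getTickFrequency();
    cout << "profiled total: " << net.getPerfProfile(layerTimes) * msPerTick
         << " ms" << endl;
    return 0;
}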
 
3. Demos
Results with fast_neural_style_instance_norm_feathers.t7:
[demo images]
Results with fast_neural_style_eccv16_starry_night.t7:
[demo images]
In my opinion the effect on simple line drawings is not very good.
[demo image]
By re-applying the model to the original image, I realized that it most likely operates on local image patches.
[demo image]
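This is only a guess; one rough way to probe it is to stylize a cropped region on its own and compare it with the same region cut out of the stylized full image. The sketch below is my own addition (the crop rectangle and file paths are hypothetical placeholders) and simply reuses the pipeline from section 2.

#include <opencv2/dnn.hpp>
#include <opencv2/imgcodecs.hpp>
#include <opencv2/highgui.hpp>
#include <vector>

using namespace cv;
using namespace cv::dnn;

// Run the style network on one image and return a displayable 8-bit result.
static Mat stylize(Net& net, const Mat& img)
{
    Mat blob = blobFromImage(img, 1.0, img.size(),
                             Scalar(103.939, 116.779, 123.68), false, false);
    net.setInput(blob);
    Mat out = net.forward();
    // Output is 1 x 3 x H x W, CV_32F; add the per-channel mean back.
    Mat(out.size[2], out.size[3], CV_32F, out.ptr<float>(0, 0)) += 103.939;
    Mat(out.size[2], out.size[3], CV_32F, out.ptr<float>(0, 1)) += 116.779;
    Mat(out.size[2], out.size[3], CV_32F, out.ptr<float>(0, 2)) += 123.68;
    std::vector<Mat> imgs;
    imagesFromBlob(out, imgs);
    Mat res;
    imgs[0].convertTo(res, CV_8UC3);
    return res;
}

int main()
{
    // Hypothetical paths -- same placeholders as in the sample above.
    Net net = readNetFromTorch("fast_neural_style_instance_norm_feathers.t7");
    Mat img = imread("chicago.jpg");
    if (img.empty()) return 1;

    Rect roi(0, 0, img.cols / 2, img.rows / 2);      // hypothetical crop
    Mat cropStylized = stylize(net, img(roi).clone());
    Mat fullStylized = stylize(net, img);

    // If the stylization were purely local, the two windows would look alike.
    imshow("crop stylized alone", cropStylized);
    imshow("crop of stylized full image", fullStylized(roi));
    waitKey();
    return 0;
}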

So how are these models trained? The fast-neural-style project also provides plenty of guidance:

Training new models

To train new style transfer models, first use the script scripts/make_style_dataset.py to create an HDF5 file from folders of images. You will then use the script train.lua to actually train models.

Step 1: Prepare a dataset

You first need to install the header files for Python 2.7 and HDF5. On Ubuntu you should be able to do the following:

sudo apt-get -y install python2.7-dev
sudo apt-get install libhdf5-dev

You can then install Python dependencies into a virtual environment:

virtualenv .env                  # Create the virtual environment
source .env/bin/activate         # Activate the virtual environment
pip install -r requirements.txt  # Install Python dependencies
# Work for a while ...
deactivate                       # Exit the virtual environment

With the virtual environment activated, you can use the script scripts/make_style_dataset.py to create an HDF5 file from a directory of training images and a directory of validation images:

python scripts/make_style_dataset.py \
  --train_dir path/to/training/images \
  --val_dir path/to/validation/images \
  --output_file path/to/output/file.h5

All models in this repository were trained using the images from the COCO dataset.

The preprocessing script has the following flags:

  • --train_dir: Path to a directory of training images.
  • --val_dir: Path to a directory of validation images.
  • --output_file: HDF5 file where output will be written.
  • --height, --width: All images will be resized to this size.
  • --max_images: The maximum number of images to use for training and validation; -1 means use all images in the directories.
  • --num_workers: The number of threads to use.

Step 2: Train a model

After creating an HDF5 dataset file, you can use the script train.lua to train feedforward style transfer models. First you need to download a Torch version of the VGG-16 model by running the script

bash models/download_vgg16.sh

This will download the file vgg16.t7 (528 MB) to the models directory.

You will also need to install deepmind/torch-hdf5, which gives HDF5 bindings for Torch:

luarocks install https://raw.githubusercontent.com/deepmind/torch-hdf5/master/hdf5-0-0.rockspec

You can then train a model with the script train.lua. For basic usage the command will look something like this:

th train.lua \
  -h5_file path/to/dataset.h5 \
  -style_image path/to/style/image.jpg \
  -style_image_size 384 \
  -content_weights 1.0 \
  -style_weights 5.0 \
  -checkpoint_name checkpoint \
  -gpu 0

The full set of options for this script is described here.


Original article: https://www.cnblogs.com/jsxyhelu/p/10804243.html
