Deploy Ceph and start using it: a simple librados client


This part of the tutorial describes how to set up a simple Ceph client using librados (for C++).

The only information the client requires for cephx authentication is (a sketch showing how to pass these values to librados directly follows this list):

  • Endpoint of the monitor node
  • Keyring containing the pre-shared secret (we will use the admin keyring)
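
These two values can also be handed to librados programmatically instead of through a configuration file. The snippet below is a minimal sketch of that approach; the monitor address and keyring path are placeholders matching the example configuration used later in this tutorial.

#include <rados/librados.hpp>

int main()
{
  librados::Rados cluster;

  // Connect as client.admin to the cluster named "ceph"
  cluster.init2("client.admin", "ceph", 0);

  // Hand the monitor endpoint and keyring path to librados directly,
  // instead of reading them from ceph.conf
  cluster.conf_set("mon host", "192.168.252.10:6789");
  cluster.conf_set("keyring", "./ceph.client.admin.keyring");

  cluster.connect();
  cluster.shutdown();
  return 0;
}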

Install librados APIs

On Ubuntu, the library is available in the standard repositories:

$ sudo apt-get install librados-dev

Create a client configuration file

This is the file from which librados will read the client configuration.

The content of the file is structured according to this template:

[global]
mon host = <IP address of one of the monitors>
keyring = <path/to/client.admin.keyring>

for example:

[global]
mon host = 192.168.252.10:6789
keyring = ./ceph.client.admin.keyring

The public endpoint of the monitor node can be retrieved with

$ ceph mon stat

The keyring file can be copied unmodified from the admin node. The same information, together with the client's capabilities, can also be retrieved with the following command:

$ ceph auth get client.admin
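
For reference, the keyring is a small INI-style text file. It looks roughly like this (the key below is a placeholder, and the caps lines may or may not be present depending on how the file was generated):

[client.admin]
    key = <base64-encoded secret>
    caps mon = "allow *"
    caps osd = "allow *"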

Connect to the cluster

The simple client below performs the following operations:

  • Read the configuration file (ceph.conf) from the local directory
  • Get a handle to the cluster and an IO context on the “data” pool
  • Create a new object
  • Set an xattr
  • Read the object and xattr back
  • Print the list of pools
  • Print the list of objects in the “data” pool
  • Cleanup
#include <rados/librados.hpp>
#include <iostream>
#include <list>
#include <string>

int main(int argc, const char **argv)
{
  int ret = 0;

  /*
   * Error checking is omitted to keep the listing short.
   * After each Ceph operation:
   *   if (ret < 0) -> error
   *   else         -> success
   */

  // Get a cluster handle and connect to the cluster
  std::string cluster_name("ceph");
  std::string user_name("client.admin");
  librados::Rados cluster;
  cluster.init2(user_name.c_str(), cluster_name.c_str(), 0);
  cluster.conf_read_file("ceph.conf");
  cluster.connect();

  // IO context on the "data" pool
  librados::IoCtx io_ctx;
  std::string pool_name("data");
  cluster.ioctx_create(pool_name.c_str(), io_ctx);

  // Write an object synchronously
  librados::bufferlist bl;
  std::string objectId("hw");
  std::string objectContent("Hello World!");
  bl.append(objectContent);
  io_ctx.write(objectId, bl, objectContent.size(), 0);

  // Add an xattr to the object
  librados::bufferlist lang_bl;
  lang_bl.append("en_US");
  io_ctx.setxattr(objectId, "lang", lang_bl);

  // Read the object back asynchronously
  librados::bufferlist read_buf;
  int read_len = 4194304;
  // Create the I/O completion
  librados::AioCompletion *read_completion =
      librados::Rados::aio_create_completion();
  // Send the read request
  io_ctx.aio_read(objectId, read_completion, &read_buf, read_len, 0);

  // Wait for the request to complete, and print the content
  read_completion->wait_for_complete();
  read_completion->get_return_value();
  std::cout << "Object name: " << objectId << "\n"
            << "Content: " << read_buf.c_str() << std::endl;
  read_completion->release();

  // Read the xattr back
  librados::bufferlist lang_res;
  io_ctx.getxattr(objectId, "lang", lang_res);
  std::cout << "Object xattr: " << lang_res.c_str() << std::endl;

  // Print the list of pools
  std::list<std::string> pools;
  cluster.pool_list(pools);
  std::cout << "List of pools from this cluster handle" << std::endl;
  for (auto pool_id : pools) {
    std::cout << "\t" << pool_id << std::endl;
  }

  // Print the list of objects in the pool
  librados::ObjectIterator oit = io_ctx.objects_begin();
  librados::ObjectIterator oet = io_ctx.objects_end();
  std::cout << "List of objects from this pool" << std::endl;
  for (; oit != oet; oit++) {
    std::cout << "\t" << oit->first << std::endl;
  }

  // Remove the xattr
  io_ctx.rmxattr(objectId, "lang");

  // Remove the object
  io_ctx.remove(objectId);

  // Cleanup
  io_ctx.close();
  cluster.shutdown();

  return 0;
}
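
The comment at the top of the listing glosses over error handling: every librados call returns 0 on success and a negative errno value on failure. Below is a minimal sketch of the connection phase with checks added (same calls as in the listing; the messages are illustrative):

#include <rados/librados.hpp>
#include <cstring>
#include <iostream>

int main()
{
  librados::Rados cluster;

  // Initialize the cluster handle as client.admin on the cluster named "ceph"
  int ret = cluster.init2("client.admin", "ceph", 0);
  if (ret < 0) {
    std::cerr << "Couldn't initialize the cluster handle: "
              << std::strerror(-ret) << std::endl;
    return 1;
  }

  // Read the monitor address and keyring location from ceph.conf
  ret = cluster.conf_read_file("ceph.conf");
  if (ret < 0) {
    std::cerr << "Couldn't read ceph.conf: " << std::strerror(-ret) << std::endl;
    return 1;
  }

  // Connect to the cluster
  ret = cluster.connect();
  if (ret < 0) {
    std::cerr << "Couldn't connect to the cluster: "
              << std::strerror(-ret) << std::endl;
    return 1;
  }

  std::cout << "Connected to the cluster." << std::endl;
  cluster.shutdown();
  return 0;
}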


This example can be compiled and executed with

$ g++ -std=c++11 client.cpp -lrados -o cephclient
$ ./cephclient
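
If everything is set up correctly, the output should look roughly like this (the pool list depends on your cluster; data, metadata and rbd are the defaults on a Firefly-era deployment):

Object name: hw
Content: Hello World!
Object xattr: en_US
List of pools from this cluster handle
        data
        metadata
        rbd
List of objects from this pool
        hw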

Operate with cluster data from the command line

To quickly verify that an object was written, or to remove it, use the following commands (e.g., from the monitor node); a few additional rados commands for cross-checking are sketched after the list.

  • List objects in pool data

    $ rados -p data ls
  • Check the location of an object in pool data

    $ ceph osd map data <object name>
  • Remove object from pool data

    $ rados rm <object name> --pool=data
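
The rados CLI can also write and read objects directly, which is handy for cross-checking what the librados client did. For example (the object and file names here are arbitrary):

$ echo "Hello World!" > hello.txt
$ rados -p data put hw ./hello.txt
$ rados -p data get hw ./hello-copy.txt
$ rados -p data stat hw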

Original article: http://my.oschina.net/renguijiayi/blog/296911
