In this article, we add RadosGateway, the S3-compatible object gateway, to the Ceph cluster built last time.
As before, everything is driven centrally from the Ceph admin node.
1. Building RadosGateway
First, run the following commands on the Ceph admin node, in the directory that holds the cluster configuration file generated earlier with ceph-deploy new.
cat <<EOF >> ceph.conf
[client]
rgw frontends = "civetweb port=80"
EOF
After running the commands above to edit the configuration file, distribute it to each node.
ceph-deploy --overwrite-conf config push admin-node data-node1 ... data-nodeN
[ceph@BS-PUB-CEPHADM ~]$ ceph-deploy --overwrite config push BS-PUB-CEPHADM BS-PUB-CEPHNODE-01 BS-PUB-CEPHNODE-02 BS-PUB-CEPHNODE-03
[ceph_deploy.conf][DEBUG ] found configuration file at: /home/ceph/.cephdeploy.conf
[ceph_deploy.cli][INFO ] Invoked (1.5.31): /bin/ceph-deploy --overwrite config push BS-PUB-CEPHADM BS-PUB-CEPHNODE-01 BS-PUB-CEPHNODE-02 BS-PUB-CEPHNODE-03
[ceph_deploy.cli][INFO ] ceph-deploy options:
[ceph_deploy.cli][INFO ] username : None
[ceph_deploy.cli][INFO ] verbose : False
[ceph_deploy.cli][INFO ] overwrite_conf : True
[ceph_deploy.cli][INFO ] subcommand : push
[ceph_deploy.cli][INFO ] quiet : False
[ceph_deploy.cli][INFO ] cd_conf :
[ceph_deploy.cli][INFO ] cluster : ceph
[ceph_deploy.cli][INFO ] client : ['BS-PUB-CEPHADM', 'BS-PUB-CEPHNODE-01', 'BS-PUB-CEPHNODE-02', 'BS-PUB-CEPHNODE-03']
[ceph_deploy.cli][INFO ] func :
[ceph_deploy.cli][INFO ] ceph_conf : None
[ceph_deploy.cli][INFO ] default_release : False
[ceph_deploy.config][DEBUG ] Pushing config to BS-PUB-CEPHADM
[BS-PUB-CEPHADM][DEBUG ] connection detected need for sudo
[BS-PUB-CEPHADM][DEBUG ] connected to host: BS-PUB-CEPHADM
[BS-PUB-CEPHADM][DEBUG ] detect platform information from remote host
[BS-PUB-CEPHADM][DEBUG ] detect machine type
[BS-PUB-CEPHADM][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph_deploy.config][DEBUG ] Pushing config to BS-PUB-CEPHNODE-01
[BS-PUB-CEPHNODE-01][DEBUG ] connection detected need for sudo
[BS-PUB-CEPHNODE-01][DEBUG ] connected to host: BS-PUB-CEPHNODE-01
[BS-PUB-CEPHNODE-01][DEBUG ] detect platform information from remote host
[BS-PUB-CEPHNODE-01][DEBUG ] detect machine type
[BS-PUB-CEPHNODE-01][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph_deploy.config][DEBUG ] Pushing config to BS-PUB-CEPHNODE-02
[BS-PUB-CEPHNODE-02][DEBUG ] connection detected need for sudo
[BS-PUB-CEPHNODE-02][DEBUG ] connected to host: BS-PUB-CEPHNODE-02
[BS-PUB-CEPHNODE-02][DEBUG ] detect platform information from remote host
[BS-PUB-CEPHNODE-02][DEBUG ] detect machine type
[BS-PUB-CEPHNODE-02][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph_deploy.config][DEBUG ] Pushing config to BS-PUB-CEPHNODE-03
[BS-PUB-CEPHNODE-03][DEBUG ] connection detected need for sudo
[BS-PUB-CEPHNODE-03][DEBUG ] connected to host: BS-PUB-CEPHNODE-03
[BS-PUB-CEPHNODE-03][DEBUG ] detect platform information from remote host
[BS-PUB-CEPHNODE-03][DEBUG ] detect machine type
[BS-PUB-CEPHNODE-03][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
Once the configuration file has been distributed, create the RadosGateway instances on each data node.
ceph-deploy --overwrite-conf rgw create data-node1 ... data-nodeN
[ceph@BS-PUB-CEPHADM ~]$ ceph-deploy --overwrite rgw create BS-PUB-CEPHNODE-01 BS-PUB-CEPHNODE-02 BS-PUB-CEPHNODE-03
[ceph_deploy.conf][DEBUG ] found configuration file at: /home/ceph/.cephdeploy.conf
[ceph_deploy.cli][INFO ] Invoked (1.5.31): /bin/ceph-deploy --overwrite rgw create BS-PUB-CEPHNODE-01 BS-PUB-CEPHNODE-02 BS-PUB-CEPHNODE-03
[ceph_deploy.cli][INFO ] ceph-deploy options:
[ceph_deploy.cli][INFO ] username : None
[ceph_deploy.cli][INFO ] verbose : False
[ceph_deploy.cli][INFO ] rgw : [('BS-PUB-CEPHNODE-01', 'rgw.BS-PUB-CEPHNODE-01'), ('BS-PUB-CEPHNODE-02', 'rgw.BS-PUB-CEPHNODE-02'), ('BS-PUB-CEPHNODE-03', 'rgw.BS-PUB-CEPHNODE-03')]
[ceph_deploy.cli][INFO ] overwrite_conf : True
[ceph_deploy.cli][INFO ] subcommand : create
[ceph_deploy.cli][INFO ] quiet : False
[ceph_deploy.cli][INFO ] cd_conf :
[ceph_deploy.cli][INFO ] cluster : ceph
[ceph_deploy.cli][INFO ] func :
[ceph_deploy.cli][INFO ] ceph_conf : None
[ceph_deploy.cli][INFO ] default_release : False
[ceph_deploy.rgw][DEBUG ] Deploying rgw, cluster ceph hosts BS-PUB-CEPHNODE-01:rgw.BS-PUB-CEPHNODE-01 BS-PUB-CEPHNODE-02:rgw.BS-PUB-CEPHNODE-02 BS-PUB-CEPHNODE-03:rgw.BS-PUB-CEPHNODE-03
[BS-PUB-CEPHNODE-01][DEBUG ] connection detected need for sudo
[BS-PUB-CEPHNODE-01][DEBUG ] connected to host: BS-PUB-CEPHNODE-01
[BS-PUB-CEPHNODE-01][DEBUG ] detect platform information from remote host
[BS-PUB-CEPHNODE-01][DEBUG ] detect machine type
[ceph_deploy.rgw][INFO ] Distro info: CentOS Linux 7.2.1511 Core
[ceph_deploy.rgw][DEBUG ] remote host will use systemd
[ceph_deploy.rgw][DEBUG ] deploying rgw bootstrap to BS-PUB-CEPHNODE-01
[BS-PUB-CEPHNODE-01][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[BS-PUB-CEPHNODE-01][DEBUG ] create path recursively if it doesn't exist
[BS-PUB-CEPHNODE-01][INFO ] Running command: sudo ceph --cluster ceph --name client.bootstrap-rgw --keyring /var/lib/ceph/bootstrap-rgw/ceph.keyring auth get-or-create client.rgw.BS-PUB-CEPHNODE-01 osd allow rwx mon allow rw -o /var/lib/ceph/radosgw/ceph-rgw.BS-PUB-CEPHNODE-01/keyring
[BS-PUB-CEPHNODE-01][INFO ] Running command: sudo systemctl enable ceph-radosgw@rgw.BS-PUB-CEPHNODE-01
[BS-PUB-CEPHNODE-01][INFO ] Running command: sudo systemctl start ceph-radosgw@rgw.BS-PUB-CEPHNODE-01
[BS-PUB-CEPHNODE-01][INFO ] Running command: sudo systemctl enable ceph.target
[ceph_deploy.rgw][INFO ] The Ceph Object Gateway (RGW) is now running on host BS-PUB-CEPHNODE-01 and default port 7480
[BS-PUB-CEPHNODE-02][DEBUG ] connection detected need for sudo
[BS-PUB-CEPHNODE-02][DEBUG ] connected to host: BS-PUB-CEPHNODE-02
[BS-PUB-CEPHNODE-02][DEBUG ] detect platform information from remote host
[BS-PUB-CEPHNODE-02][DEBUG ] detect machine type
[ceph_deploy.rgw][INFO ] Distro info: CentOS Linux 7.2.1511 Core
[ceph_deploy.rgw][DEBUG ] remote host will use systemd
[ceph_deploy.rgw][DEBUG ] deploying rgw bootstrap to BS-PUB-CEPHNODE-02
[BS-PUB-CEPHNODE-02][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[BS-PUB-CEPHNODE-02][DEBUG ] create path recursively if it doesn't exist
[BS-PUB-CEPHNODE-02][INFO ] Running command: sudo ceph --cluster ceph --name client.bootstrap-rgw --keyring /var/lib/ceph/bootstrap-rgw/ceph.keyring auth get-or-create client.rgw.BS-PUB-CEPHNODE-02 osd allow rwx mon allow rw -o /var/lib/ceph/radosgw/ceph-rgw.BS-PUB-CEPHNODE-02/keyring
[BS-PUB-CEPHNODE-02][INFO ] Running command: sudo systemctl enable ceph-radosgw@rgw.BS-PUB-CEPHNODE-02
[BS-PUB-CEPHNODE-02][INFO ] Running command: sudo systemctl start ceph-radosgw@rgw.BS-PUB-CEPHNODE-02
[BS-PUB-CEPHNODE-02][INFO ] Running command: sudo systemctl enable ceph.target
[ceph_deploy.rgw][INFO ] The Ceph Object Gateway (RGW) is now running on host BS-PUB-CEPHNODE-02 and default port 7480
[BS-PUB-CEPHNODE-03][DEBUG ] connection detected need for sudo
[BS-PUB-CEPHNODE-03][DEBUG ] connected to host: BS-PUB-CEPHNODE-03
[BS-PUB-CEPHNODE-03][DEBUG ] detect platform information from remote host
[BS-PUB-CEPHNODE-03][DEBUG ] detect machine type
[ceph_deploy.rgw][INFO ] Distro info: CentOS Linux 7.2.1511 Core
[ceph_deploy.rgw][DEBUG ] remote host will use systemd
[ceph_deploy.rgw][DEBUG ] deploying rgw bootstrap to BS-PUB-CEPHNODE-03
[BS-PUB-CEPHNODE-03][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[BS-PUB-CEPHNODE-03][DEBUG ] create path recursively if it doesn't exist
[BS-PUB-CEPHNODE-03][INFO ] Running command: sudo ceph --cluster ceph --name client.bootstrap-rgw --keyring /var/lib/ceph/bootstrap-rgw/ceph.keyring auth get-or-create client.rgw.BS-PUB-CEPHNODE-03 osd allow rwx mon allow rw -o /var/lib/ceph/radosgw/ceph-rgw.BS-PUB-CEPHNODE-03/keyring
[BS-PUB-CEPHNODE-03][INFO ] Running command: sudo systemctl enable ceph-radosgw@rgw.BS-PUB-CEPHNODE-03
[BS-PUB-CEPHNODE-03][WARNIN] Created symlink from /etc/systemd/system/ceph.target.wants/ceph-radosgw@rgw.BS-PUB-CEPHNODE-03.service to /usr/lib/systemd/system/ceph-radosgw@.service.
[BS-PUB-CEPHNODE-03][INFO ] Running command: sudo systemctl start ceph-radosgw@rgw.BS-PUB-CEPHNODE-03
[BS-PUB-CEPHNODE-03][INFO ] Running command: sudo systemctl enable ceph.target
[ceph_deploy.rgw][INFO ] The Ceph Object Gateway (RGW) is now running on host BS-PUB-CEPHNODE-03 and default port 7480
RadosGateway is now built and running.
Since this configuration listens on port 80, confirm that the port is open with the telnet command.
telnet data-node 80
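If telnet is not available, the same reachability check can be written in a few lines of Python using only the standard socket module. This is just a sketch; the helper name is mine, and the commented-out host is simply this article's example node:

```python
import socket

def port_is_open(host, port, timeout=5.0):
    """Attempt a TCP connection to host:port and report whether it succeeded."""
    try:
        sock = socket.create_connection((host, port), timeout=timeout)
    except (socket.error, OSError):
        return False
    sock.close()
    return True

# Example: print(port_is_open('BS-PUB-CEPHNODE-01', 80))
```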
Note that the radosgw binary may be unable to bind to the port when it runs as the ceph user, since port 80 is privileged.
In that case, run the following commands on the affected data node to edit the .service file, grant the capability, and restart the service.
sed -i -e 's/\--setuser ceph //g' -e 's/\--setgroup ceph//g' /usr/lib/systemd/system/ceph-radosgw\@.service
sed -i '/ExecStart/aUser=ceph\nGroup=ceph' /usr/lib/systemd/system/ceph-radosgw\@.service
setcap CAP_NET_BIND_SERVICE+ep /usr/bin/radosgw
reboot
[root@BS-PUB-CEPHNODE-02 ~]# cat /usr/lib/systemd/system/ceph-radosgw\@.service
[Unit]
Description=Ceph rados gateway
After=network-online.target local-fs.target
Wants=network-online.target local-fs.target
PartOf=ceph.target
[Service]
LimitNOFILE=1048576
LimitNPROC=1048576
EnvironmentFile=-/etc/sysconfig/ceph
Environment=CLUSTER=ceph
ExecStart=/usr/bin/radosgw -f --cluster ${CLUSTER} --name client.%i
User=ceph
Group=ceph
[Install]
WantedBy=ceph.target
2. Creating an account for connecting to RadosGateway
Now that RadosGateway is built, create an account for accessing Ceph through it.
Run the following command from the Ceph admin node to create the account and obtain its access key and secret key.
radosgw-admin user create --uid=username --display-name="display name"
[ceph@BS-PUB-CEPHADM ~]$ radosgw-admin user create --uid=ceph-test --display-name="ceph-test"
{
    "user_id": "ceph-test",
    "display_name": "ceph-test",
    "email": "",
    "suspended": 0,
    "max_buckets": 1000,
    "auid": 0,
    "subusers": [],
    "keys": [
        {
            "user": "ceph-test",
            "access_key": "XXSHDYXXLWNASXX6XXA7",
            "secret_key": "kuBaFtXXtYNoXXICXXGCJlXXof6tXXhMXXoQXXzA"
        }
    ],
    "swift_keys": [],
    "caps": [],
    "op_mask": "read, write, delete",
    "default_placement": "",
    "placement_tags": [],
    "bucket_quota": {
        "enabled": false,
        "max_size_kb": -1,
        "max_objects": -1
    },
    "user_quota": {
        "enabled": false,
        "max_size_kb": -1,
        "max_objects": -1
    },
    "temp_url_keys": []
}
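Since radosgw-admin prints JSON, the key pair can also be pulled out programmatically instead of by eye. A minimal sketch (the helper name is mine; feeding it the output of `radosgw-admin user create --uid=... --format json` is the intended use):

```python
import json

def extract_s3_keys(user_json):
    """Return the first (access_key, secret_key) pair from the JSON
    that radosgw-admin prints for a user."""
    info = json.loads(user_json)
    first_key = info['keys'][0]
    return first_key['access_key'], first_key['secret_key']
```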
With this access key and secret key, you can now connect to RadosGateway.
※ Be sure to record these keys; if you do lose them, they should be re-displayable with radosgw-admin user info --uid=username.
3. Verifying the connection
Now, let's verify that we can actually connect.
The check uses Python and boto, so if boto is not installed, install it with the following commands (on CentOS, with EPEL already enabled):
yum install python-pip
pip install boto
Next, create a script like the one below, create a test bucket, upload an object (hello.txt whose contents are "Hello World!"), and confirm that the bucket and object are accessible via RadosGW.
●radosgw-test.py
```python
import boto
import boto.s3.connection

access_key = 'access key'
secret_key = 'secret key'

conn = boto.connect_s3(
    aws_access_key_id = access_key,
    aws_secret_access_key = secret_key,
    host = 'RadosGateway address',
    is_secure=False,               # uncomment if you are not using ssl
    calling_format = boto.s3.connection.OrdinaryCallingFormat(),
)

# List existing buckets
for bucket in conn.get_all_buckets():
    print "{name}\t{created}".format(
        name = bucket.name,
        created = bucket.creation_date,
    )

# Create a bucket, then list again
bucket = conn.create_bucket('my-new-bucket')
for bucket in conn.get_all_buckets():
    print "{name}\t{created}".format(
        name = bucket.name,
        created = bucket.creation_date,
    )

# Upload an object and list the bucket's contents
key = bucket.new_key('hello.txt')
key.set_contents_from_string('Hello World!')
for key in bucket.list():
    print "{name}\t{size}\t{modified}".format(
        name = key.name,
        size = key.size,
        modified = key.last_modified,
    )
```

```shell
[ceph@BS-PUB-CEPHADM ~]$ python test.py
my-new-bucket 2016-03-07T23:06:55.000Z
hello.txt 12 2016-03-07T23:07:38.000Z
```

Now let's try downloading this hello.txt.

```shell
[ceph@BS-PUB-CEPHADM ~]$ python
Python 2.7.5 (default, Nov 20 2015, 02:00:19)
[GCC 4.8.5 20150623 (Red Hat 4.8.5-4)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import boto
>>> import boto.s3.connection
>>> access_key = 'XXSHDYXXLWNASXX6XXA7'
>>> secret_key = 'kuBaFtXXtYNoXXICXXGCJlXXof6tXXhMXXoQXXzA'
>>>
>>> conn = boto.connect_s3(
...     aws_access_key_id = access_key,
...     aws_secret_access_key = secret_key,
...     host = 'BS-PUB-CEPHNODE-01',
...     is_secure=False,               # uncomment if you are not using ssl
...     calling_format = boto.s3.connection.OrdinaryCallingFormat(),
...     )
>>>
>>> bucket = conn.lookup('my-new-bucket')
>>> key = bucket.get_key('hello.txt')
>>> key.get_contents_to_filename('/tmp/hello.txt')
>>>
[ceph@BS-PUB-CEPHADM ~]$ ls -la /tmp/hello.txt
-rw-rw-r--. 1 ceph ceph 12 3月  8 08:07 /tmp/hello.txt
[ceph@BS-PUB-CEPHADM ~]$ cat /tmp/hello.txt
Hello World!
[ceph@BS-PUB-CEPHADM ~]$
```

We have confirmed that a bucket can be created and an object uploaded and downloaded, and that its contents are exactly as expected.
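For reference, boto authenticates the requests above with AWS signature version 2: it signs a canonical request string with the secret key using HMAC-SHA1 and base64-encodes the result into the Authorization header. Below is a simplified stdlib-only sketch of that signing step (it omits the x-amz- header canonicalization, and boto does all of this internally, so you never need it in practice):

```python
import base64
import hashlib
import hmac

def sign_v2(secret_key, method, content_md5, content_type, date, resource):
    """Compute a (simplified) AWS signature-version-2 value for an
    S3-style request: base64(HMAC-SHA1(secret, string-to-sign))."""
    string_to_sign = '\n'.join([method, content_md5, content_type, date, resource])
    digest = hmac.new(secret_key.encode('utf-8'),
                      string_to_sign.encode('utf-8'),
                      hashlib.sha1).digest()
    return base64.b64encode(digest).decode('ascii')

# e.g. sign_v2(secret_key, 'GET', '', '',
#              'Tue, 08 Mar 2016 08:07:00 GMT', '/my-new-bucket/hello.txt')
```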