Controller VM installation
The controller VM hosts the following services:
- Database server: MariaDB
- Message bus server: RabbitMQ
- Nova orchestrator
- API services for keystone, glance, cinder, neutron
- Cache server: memcached
- Horizon dashboard with Apache
A minimum of 4 GB of RAM is recommended.
MariaDB database service
Installing the packages:
yum install mariadb mariadb-server python2-PyMySQL
Configure the service in the file /etc/my.cnf.d/openstack.cnf:
[mysqld]
bind-address = 192.168.2.61
default-storage-engine = innodb
innodb_file_per_table
max_connections = 4096
collation-server = utf8_general_ci
character-set-server = utf8
Enable and start the service:
systemctl enable mariadb
systemctl start mariadb
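As an optional sanity check (assuming the bind-address configured above), verify that MariaDB is running and listening on the management address:
systemctl status mariadb
ss -lntp | grep 3306    # the mysqld socket should be bound to 192.168.2.61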
RabbitMQ message bus service
Installing the package:
yum install rabbitmq-server
Enable and start the service:
systemctl enable rabbitmq-server
systemctl start rabbitmq-server
NB: If the service does not start, check that the hostname and IP address are configured correctly.
Create a user for OpenStack and grant its permissions:
rabbitmqctl add_user openstack RABBIT_PASS
rabbitmqctl set_permissions openstack ".*" ".*" ".*"
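To confirm that the account exists with the expected permissions (optional check):
rabbitmqctl list_users          # the openstack user should be listed
rabbitmqctl list_permissions    # configure/write/read should all be ".*"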
memcached cache service
Install, enable and start the memcached service:
yum install memcached python-memcached
systemctl enable memcached
systemctl start memcached
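NB: depending on the packaging defaults, memcached may only listen on localhost; since the services below reference controller.internal:11211, the listen address may need to be adjusted, for example (a sketch, the exact OPTIONS value depends on the distribution defaults):
# /etc/sysconfig/memcached
OPTIONS="-l 127.0.0.1,::1,controller.internal"
systemctl restart memcached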
Keystone identity service
This is the most important OpenStack component, as it handles all authentication, both for communication between the services and with users through the APIs.
Creating the SQL database for Keystone
Connect to the database server:
CREATE DATABASE keystone;
GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' IDENTIFIED BY 'KEYSTONE_DBPASS';
GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' IDENTIFIED BY 'KEYSTONE_DBPASS';
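As an optional check (KEYSTONE_DBPASS being the placeholder used above), the new account should be able to connect:
mysql -u keystone -pKEYSTONE_DBPASS -h localhost -e "SHOW DATABASES;"    # the keystone database should appear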
Installing Keystone
Install the packages for Keystone, Apache and the wsgi module:
yum install openstack-keystone httpd mod_wsgi
Configuring Keystone
The configuration file is /etc/keystone/keystone.conf:
[DEFAULT]
[assignment]
[auth]
[cache]
[catalog]
[cors]
[cors.subdomain]
[credential]
[database]
connection = mysql+pymysql://keystone:KEYSTONE_DBPASS@bdd/keystone
[domain_config]
[endpoint_filter]
[endpoint_policy]
[eventlet_server]
[federation]
[fernet_tokens]
[identity]
[identity_mapping]
[kvs]
[ldap]
[matchmaker_redis]
[memcache]
[oauth1]
[os_inherit]
[oslo_messaging_amqp]
[oslo_messaging_notifications]
[oslo_messaging_rabbit]
[oslo_messaging_zmq]
[oslo_middleware]
[oslo_policy]
[paste_deploy]
[policy]
[profiler]
[resource]
[revoke]
[role]
[saml]
[security_compliance]
[shadow_users]
[signing]
[token]
provider = fernet
[tokenless_auth]
[trust]
To initialize the database:
su -s /bin/sh -c "keystone-manage db_sync" keystone
To initialize Fernet token management:
keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone
keystone-manage credential_setup --keystone-user keystone --keystone-group keystone
Configure the Keystone API endpoints:
keystone-manage bootstrap --bootstrap-password ADMIN_PASS \
  --bootstrap-admin-url http://controller.admin:35357/v3/ \
  --bootstrap-internal-url http://controller.internal:35357/v3/ \
  --bootstrap-public-url http://controller.public:5000/v3/ \
  --bootstrap-region-id RegionOne
Configuring Apache
Apache acts as the frontend for the Keystone service. The ServerName directive must be set in /etc/httpd/conf/httpd.conf.
Then link the configuration file provided for the wsgi part:
ln -s /usr/share/keystone/wsgi-keystone.conf /etc/httpd/conf.d/
Enable and start the httpd service:
systemctl enable httpd
systemctl start httpd
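To verify that Keystone answers behind Apache (optional; the hostnames follow the naming used in this guide):
curl http://controller.admin:35357/v3    # should return a JSON description of the v3 API version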
Configuring the OpenStack identities
To be allowed to administer the OpenStack identities through Keystone, a few environment variables must be set in order to authenticate:
export OS_USERNAME=admin
export OS_PASSWORD=ADMIN_PASS
export OS_PROJECT_NAME=admin
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_DOMAIN_NAME=Default
export OS_AUTH_URL=http://controller.admin:35357/v3
export OS_IDENTITY_API_VERSION=3
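With these variables set (and the python-openstackclient package installed), a quick way to validate the credentials and the endpoint is to request a token:
openstack token issue    # returns a token ID if authentication against Keystone works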
Create a project for the OpenStack services:
openstack project create --domain default --description "Service Project" service
Create a “demo” project and its associated user:
openstack project create --domain default --description "Demo Project" demo
openstack user create --domain default --password-prompt demo
openstack role create user
openstack role add --project demo --user demo user
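To review what has been created so far (optional check):
openstack project list    # admin, service and demo
openstack user list
openstack role list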
Operations scripts
To make administration logins easier, here is the content of the admin-openrc file:
export OS_PROJECT_DOMAIN_NAME=default
export OS_USER_DOMAIN_NAME=default
export OS_PROJECT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=ADMIN_PASS
export OS_AUTH_URL=http://controller.admin:35357/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2
The same for the “demo” project, with the demo-openrc file:
export OS_PROJECT_DOMAIN_NAME=default
export OS_USER_DOMAIN_NAME=default
export OS_PROJECT_NAME=demo
export OS_USERNAME=demo
export OS_PASSWORD=demo
export OS_AUTH_URL=http://controller.admin:5000/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2
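Typical usage of these files, as a sketch:
source admin-openrc
openstack token issue    # a token scoped to the admin project confirms the file is correct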
Glance image service
Creating the SQL database for Glance
Connect to the SQL database:
CREATE DATABASE glance;
GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' IDENTIFIED BY 'GLANCE_DBPASS';
GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' IDENTIFIED BY 'GLANCE_DBPASS';
Creating the glance user and service in OpenStack
Before anything else:
source admin-openrc
Then:
openstack user create --domain default --password-prompt glance
openstack role add --project service --user glance admin
openstack service create --name glance --description "OpenStack Image" image
Then create the endpoints for the glance API:
openstack endpoint create --region RegionOne image public http://glance.public:9292
openstack endpoint create --region RegionOne image admin http://glance.admin:9292
openstack endpoint create --region RegionOne image internal http://glance.internal:9292
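To verify the registration of the image service and its endpoints (optional check):
openstack service list
openstack endpoint list --service glance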
Nova compute service
This is the service that manages the VMs.
Creating the SQL databases for Nova
Connect to the SQL database:
CREATE DATABASE nova_api;
CREATE DATABASE nova;
GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' IDENTIFIED BY 'NOVA_DBPASS';
GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' IDENTIFIED BY 'NOVA_DBPASS';
GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' IDENTIFIED BY 'NOVA_DBPASS';
GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' IDENTIFIED BY 'NOVA_DBPASS';
Creating the Nova user and service in OpenStack
On the administration server:
openstack user create --domain default --password-prompt nova
openstack role add --project service --user nova admin
openstack service create --name nova --description "OpenStack Compute" compute
Create the endpoints for Nova:
openstack endpoint create --region RegionOne compute public http://controller.public:8774/v2.1/%\(tenant_id\)s
openstack endpoint create --region RegionOne compute internal http://controller.internal:8774/v2.1/%\(tenant_id\)s
openstack endpoint create --region RegionOne compute admin http://controller.admin:8774/v2.1/%\(tenant_id\)s
Installing Nova
To install the packages:
yum install openstack-nova-api openstack-nova-conductor openstack-nova-console openstack-nova-novncproxy openstack-nova-scheduler
Configuring Nova
The configuration is done in /etc/nova/nova.conf:
[DEFAULT]
auth_strategy=keystone
my_ip=192.168.2.61
use_neutron=true
enabled_apis=osapi_compute,metadata
firewall_driver = nova.virt.firewall.NoopFirewallDriver
rpc_backend=rabbit
[api_database]
connection = mysql+pymysql://nova:NOVA_DBPASS@bdd/nova_api
[barbican]
[cache]
[cells]
[cinder]
[cloudpipe]
[conductor]
[cors]
[cors.subdomain]
[crypto]
[database]
connection = mysql+pymysql://nova:NOVA_DBPASS@bdd/nova
[ephemeral_storage_encryption]
[glance]
api_servers = http://glance.internal:9292
[guestfs]
[hyperv]
[image_file_url]
[ironic]
[key_manager]
[keystone_authtoken]
auth_uri = http://controller.internal:5000
auth_url = http://controller.internal:35357
memcached_servers = controller.internal:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = nova
password = nova
[libvirt]
[matchmaker_redis]
[metrics]
[mks]
[neutron]
url = http://controller.internal:9696
auth_url = http://controller.internal:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = neutron
service_metadata_proxy = True
metadata_proxy_shared_secret = METADATA_SECRET
[osapi_v21]
[oslo_concurrency]
lock_path = /var/lib/nova/tmp
[oslo_messaging_amqp]
[oslo_messaging_notifications]
[oslo_messaging_rabbit]
rabbit_host = bdd
rabbit_userid = openstack
rabbit_password = RABBIT_PASS
[oslo_messaging_zmq]
[oslo_middleware]
[oslo_policy]
[placement]
[placement_database]
[rdp]
[remote_debug]
[serial_console]
[spice]
[ssl]
[trusted_computing]
[upgrade_levels]
[vmware]
[vnc]
vncserver_listen = $my_ip
vncserver_proxyclient_address = $my_ip
[workarounds]
[wsgi]
[xenserver]
[xvp]
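Rather than editing the file by hand, the same values can be set non-interactively with crudini, for instance (a sketch showing only a few of the keys above; the crudini package comes from the RDO/EPEL repositories):
yum install crudini
crudini --set /etc/nova/nova.conf DEFAULT my_ip 192.168.2.61
crudini --set /etc/nova/nova.conf database connection mysql+pymysql://nova:NOVA_DBPASS@bdd/nova
crudini --set /etc/nova/nova.conf keystone_authtoken username nova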
To initialize the Nova databases:
su -s /bin/sh -c "nova-manage api_db sync" nova
su -s /bin/sh -c "nova-manage db sync" nova
Enable and start the services:
systemctl enable openstack-nova-api openstack-nova-consoleauth openstack-nova-scheduler openstack-nova-conductor openstack-nova-novncproxy
systemctl start openstack-nova-api openstack-nova-consoleauth openstack-nova-scheduler openstack-nova-conductor openstack-nova-novncproxy
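To check that the Nova control services registered correctly (optional; nova-compute entries only appear once the compute nodes are installed):
openstack compute service list    # nova-conductor, nova-scheduler and nova-consoleauth should be listed as up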
Neutron networking service
This is the most complex part.
Creating the SQL database for Neutron
Connect to the SQL server:
CREATE DATABASE neutron;
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' IDENTIFIED BY 'NEUTRON_DBPASS';
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' IDENTIFIED BY 'NEUTRON_DBPASS';
Creating the Neutron user and service in OpenStack
On the administration server:
source admin-openrc
Create the neutron user and the associated service:
openstack user create --domain default --password-prompt neutron
openstack role add --project service --user neutron admin
openstack service create --name neutron --description "OpenStack Networking" network
Create the endpoints for the Neutron API:
openstack endpoint create --region RegionOne network public http://controller.public:9696
openstack endpoint create --region RegionOne network internal http://controller.internal:9696
openstack endpoint create --region RegionOne network admin http://controller.admin:9696
Installing the Neutron API
Install the following packages:
yum install openstack-neutron openstack-neutron-ml2
Configuring the Neutron service
Configuration file /etc/neutron/neutron.conf:
[DEFAULT]
auth_strategy = keystone
core_plugin = ml2
service_plugins = router
allow_overlapping_ips = True
notify_nova_on_port_status_changes = true
notify_nova_on_port_data_changes = true
rpc_backend = rabbit
[agent]
[cors]
[cors.subdomain]
[database]
connection = mysql+pymysql://neutron:NEUTRON_DBPASS@bdd/neutron
[keystone_authtoken]
auth_uri = http://controller.internal:5000
auth_url = http://controller.internal:35357
memcached_servers = controller.internal:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = neutron
[matchmaker_redis]
[nova]
auth_url = http://controller.internal:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = nova
password = nova
[oslo_concurrency]
lock_path = /var/lib/neutron/tmp
[oslo_messaging_amqp]
[oslo_messaging_notifications]
[oslo_messaging_rabbit]
rabbit_host = bdd
rabbit_userid = openstack
rabbit_password = RABBIT_PASS
[oslo_messaging_zmq]
[oslo_policy]
[qos]
[quotas]
[ssl]
Configure the ML2 plugin in /etc/neutron/plugins/ml2/ml2_conf.ini:
[DEFAULT]
[ml2]
type_drivers = flat,vlan,gre,vxlan,geneve
tenant_network_types = vlan,gre,vxlan,geneve
mechanism_drivers = openvswitch,l2population
extension_drivers = port_security
[ml2_type_flat]
flat_networks = external
[ml2_type_geneve]
vni_ranges = 5000:7000
[ml2_type_gre]
tunnel_id_ranges = 100:999
[ml2_type_vlan]
network_vlan_ranges = external,vlan:3000:3999
[ml2_type_vxlan]
vni_ranges = 1000:2000
[securitygroup]
firewall_driver = iptables_hybrid
To use the ML2 plugin:
ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
Initialize the Neutron database:
su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron
To enable and start the Neutron services:
systemctl enable neutron-server
systemctl start neutron-server
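To verify that the server answers and that the ML2 extensions are loaded (optional check):
openstack extension list --network
openstack network agent list    # agents only appear once the network and compute nodes are configured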
The rest of the setup takes place on the network node.
The compute nodes must also be configured to use Neutron.
Horizon dashboard
Installing the dashboard
Install the following packages:
yum install openstack-dashboard
Configuring the dashboard
Configuration file /etc/openstack-dashboard/local_settings:
# -*- coding: utf-8 -*-
import os

from django.utils.translation import ugettext_lazy as _

from openstack_dashboard import exceptions
from openstack_dashboard.settings import HORIZON_CONFIG

DEBUG = False
WEBROOT = '/dashboard/'
ALLOWED_HOSTS = ['*', ]

OPENSTACK_API_VERSIONS = {
    "identity": 3,
    "image": 2,
    "volume": 2,
}

OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True
OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = 'default'
LOCAL_PATH = '/tmp'
SECRET_KEY='8052e72c4fa38b789895'

CACHES = {
    'default': {
        'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
        'LOCATION': 'controller.internal:11211',
    },
}

EMAIL_BACKEND = 'django.core.mail.backends.console.EmailBackend'

OPENSTACK_HOST = "controller.public"
OPENSTACK_KEYSTONE_URL = "http://%s:5000/v3" % OPENSTACK_HOST
OPENSTACK_KEYSTONE_DEFAULT_ROLE = "user"

OPENSTACK_KEYSTONE_BACKEND = {
    'name': 'native',
    'can_edit_user': True,
    'can_edit_group': True,
    'can_edit_project': True,
    'can_edit_domain': True,
    'can_edit_role': True,
}

OPENSTACK_HYPERVISOR_FEATURES = {
    'can_set_mount_point': False,
    'can_set_password': False,
    'requires_keypair': False,
    'enable_quotas': True
}

OPENSTACK_CINDER_FEATURES = {
    'enable_backup': False,
}

OPENSTACK_NEUTRON_NETWORK = {
    'enable_router': True,
    'enable_quotas': True,
    'enable_ipv6': True,
    'enable_distributed_router': False,
    'enable_ha_router': False,
    'enable_lb': True,
    'enable_firewall': True,
    'enable_vpn': True,
    'enable_fip_topology_check': True,
    'profile_support': None,
    'supported_vnic_types': ['*'],
}

OPENSTACK_HEAT_STACK = {
    'enable_user_pass': True,
}

IMAGE_CUSTOM_PROPERTY_TITLES = {
    "architecture": _("Architecture"),
    "kernel_id": _("Kernel ID"),
    "ramdisk_id": _("Ramdisk ID"),
    "image_state": _("Euca2ools state"),
    "project_id": _("Project ID"),
    "image_type": _("Image Type"),
}

IMAGE_RESERVED_CUSTOM_PROPERTIES = []

API_RESULT_LIMIT = 1000
API_RESULT_PAGE_SIZE = 20
SWIFT_FILE_TRANSFER_CHUNK_SIZE = 512 * 1024
INSTANCE_LOG_LENGTH = 35
DROPDOWN_MAX_ITEMS = 30

TIME_ZONE = "Europe/Paris"
POLICY_FILES_PATH = '/etc/openstack-dashboard'

LOGGING = {
    'version': 1,
    'disable_existing_loggers': False,
    'formatters': {
        'operation': {'format': '%(asctime)s %(message)s'},
    },
    'handlers': {
        'null': {'level': 'DEBUG', 'class': 'logging.NullHandler'},
        'console': {'level': 'INFO', 'class': 'logging.StreamHandler'},
        'operation': {'level': 'INFO', 'class': 'logging.StreamHandler', 'formatter': 'operation'},
    },
    'loggers': {
        'django.db.backends': {'handlers': ['null'], 'propagate': False},
        'requests': {'handlers': ['null'], 'propagate': False},
        'horizon': {'handlers': ['console'], 'level': 'DEBUG', 'propagate': False},
        'horizon.operation_log': {'handlers': ['operation'], 'level': 'INFO', 'propagate': False},
        'openstack_dashboard': {'handlers': ['console'], 'level': 'DEBUG', 'propagate': False},
        'novaclient': {'handlers': ['console'], 'level': 'DEBUG', 'propagate': False},
        'cinderclient': {'handlers': ['console'], 'level': 'DEBUG', 'propagate': False},
        'keystoneclient': {'handlers': ['console'], 'level': 'DEBUG', 'propagate': False},
        'glanceclient': {'handlers': ['console'], 'level': 'DEBUG', 'propagate': False},
        'neutronclient': {'handlers': ['console'], 'level': 'DEBUG', 'propagate': False},
        'heatclient': {'handlers': ['console'], 'level': 'DEBUG', 'propagate': False},
        'ceilometerclient': {'handlers': ['console'], 'level': 'DEBUG', 'propagate': False},
        'swiftclient': {'handlers': ['console'], 'level': 'DEBUG', 'propagate': False},
        'openstack_auth': {'handlers': ['console'], 'level': 'DEBUG', 'propagate': False},
        'nose.plugins.manager': {'handlers': ['console'], 'level': 'DEBUG', 'propagate': False},
        'django': {'handlers': ['console'], 'level': 'DEBUG', 'propagate': False},
        'iso8601': {'handlers': ['null'], 'propagate': False},
        'scss': {'handlers': ['null'], 'propagate': False},
    },
}

SECURITY_GROUP_RULES = {
    'all_tcp': {'name': _('All TCP'), 'ip_protocol': 'tcp', 'from_port': '1', 'to_port': '65535'},
    'all_udp': {'name': _('All UDP'), 'ip_protocol': 'udp', 'from_port': '1', 'to_port': '65535'},
    'all_icmp': {'name': _('All ICMP'), 'ip_protocol': 'icmp', 'from_port': '-1', 'to_port': '-1'},
    'ssh': {'name': 'SSH', 'ip_protocol': 'tcp', 'from_port': '22', 'to_port': '22'},
    'smtp': {'name': 'SMTP', 'ip_protocol': 'tcp', 'from_port': '25', 'to_port': '25'},
    'dns': {'name': 'DNS', 'ip_protocol': 'tcp', 'from_port': '53', 'to_port': '53'},
    'http': {'name': 'HTTP', 'ip_protocol': 'tcp', 'from_port': '80', 'to_port': '80'},
    'pop3': {'name': 'POP3', 'ip_protocol': 'tcp', 'from_port': '110', 'to_port': '110'},
    'imap': {'name': 'IMAP', 'ip_protocol': 'tcp', 'from_port': '143', 'to_port': '143'},
    'ldap': {'name': 'LDAP', 'ip_protocol': 'tcp', 'from_port': '389', 'to_port': '389'},
    'https': {'name': 'HTTPS', 'ip_protocol': 'tcp', 'from_port': '443', 'to_port': '443'},
    'smtps': {'name': 'SMTPS', 'ip_protocol': 'tcp', 'from_port': '465', 'to_port': '465'},
    'imaps': {'name': 'IMAPS', 'ip_protocol': 'tcp', 'from_port': '993', 'to_port': '993'},
    'pop3s': {'name': 'POP3S', 'ip_protocol': 'tcp', 'from_port': '995', 'to_port': '995'},
    'ms_sql': {'name': 'MS SQL', 'ip_protocol': 'tcp', 'from_port': '1433', 'to_port': '1433'},
    'mysql': {'name': 'MYSQL', 'ip_protocol': 'tcp', 'from_port': '3306', 'to_port': '3306'},
    'rdp': {'name': 'RDP', 'ip_protocol': 'tcp', 'from_port': '3389', 'to_port': '3389'},
}

REST_API_REQUIRED_SETTINGS = ['OPENSTACK_HYPERVISOR_FEATURES',
                              'LAUNCH_INSTANCE_DEFAULTS',
                              'OPENSTACK_IMAGE_FORMATS']

ALLOWED_PRIVATE_SUBNET_CIDR = {'ipv4': [], 'ipv6': []}
Restart Apache:
systemctl restart httpd
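To check that the dashboard is being served (optional; the hostname follows the naming used in this guide):
curl -I http://controller.public/dashboard/    # an HTTP 200 or a redirect to the login page is expected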
Heat orchestration service
Creating the SQL database for Heat
Connect to the SQL server:
CREATE DATABASE heat;
GRANT ALL PRIVILEGES ON heat.* TO 'heat'@'localhost' IDENTIFIED BY 'HEAT_DBPASS';
GRANT ALL PRIVILEGES ON heat.* TO 'heat'@'%' IDENTIFIED BY 'HEAT_DBPASS';
Creating the Heat users and services in OpenStack
On the administration server:
source admin-openrc
Then:
openstack user create --domain default --password-prompt heat
openstack role add --project service --user heat admin
openstack service create --name heat --description "Orchestration" orchestration
openstack service create --name heat-cfn --description "Orchestration" cloudformation
Create the endpoints for the Heat API:
openstack endpoint create --region RegionOne orchestration public http://controller.public:8004/v1/%\(tenant_id\)s
openstack endpoint create --region RegionOne orchestration internal http://controller.internal:8004/v1/%\(tenant_id\)s
openstack endpoint create --region RegionOne orchestration admin http://controller.admin:8004/v1/%\(tenant_id\)s
openstack endpoint create --region RegionOne cloudformation public http://controller.public:8000/v1
openstack endpoint create --region RegionOne cloudformation internal http://controller.internal:8000/v1
openstack endpoint create --region RegionOne cloudformation admin http://controller.admin:8000/v1
Create a domain that will contain the “stacks”:
openstack domain create --description "Stack projects and users" heat
openstack user create --domain heat --password heat_domain_admin heat_domain_admin
openstack role add --domain heat --user-domain heat --user heat_domain_admin admin
openstack role create heat_stack_owner
openstack role create heat_stack_user
To allow the “demo” user to manage stacks:
openstack role add --project demo --user demo heat_stack_owner
Installing Heat
Install the following packages:
yum install openstack-heat-api openstack-heat-api-cfn openstack-heat-engine
Configuring Heat
File /etc/heat/heat.conf:
[DEFAULT]
heat_metadata_server_url = http://controller.internal:8000
heat_waitcondition_server_url = http://controller.internal:8000/v1/waitcondition
stack_domain_admin = heat_domain_admin
stack_domain_admin_password = heat_domain_admin
stack_user_domain_name = heat
rpc_backend = rabbit
[auth_password]
[clients]
[clients_aodh]
[clients_barbican]
[clients_ceilometer]
[clients_cinder]
[clients_designate]
[clients_glance]
[clients_heat]
[clients_keystone]
auth_uri = http://controller.internal:35357
[clients_magnum]
[clients_manila]
[clients_mistral]
[clients_monasca]
[clients_neutron]
[clients_nova]
[clients_sahara]
[clients_senlin]
[clients_swift]
[clients_trove]
[clients_zaqar]
[cors]
[cors.subdomain]
[database]
connection = mysql+pymysql://heat:HEAT_DBPASS@bdd/heat
[ec2authtoken]
auth_uri = http://controller.internal:5000
[eventlet_opts]
[heat_api]
[heat_api_cfn]
[heat_api_cloudwatch]
[keystone_authtoken]
auth_uri = http://controller.internal:5000
auth_url = http://controller.internal:35357
memcached_servers = controller.internal:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = heat
password = heat
[matchmaker_redis]
[oslo_messaging_amqp]
[oslo_messaging_notifications]
[oslo_messaging_rabbit]
rabbit_host = bdd
rabbit_userid = openstack
rabbit_password = RABBIT_PASS
[oslo_messaging_zmq]
[oslo_middleware]
[oslo_policy]
[paste_deploy]
[profiler]
[revision]
[ssl]
[trustee]
auth_type = password
auth_url = http://controller.internal:35357
username = heat
password = heat
user_domain_name = default
[volumes]
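To finish the Heat setup, the database would then typically be populated and the services enabled and started, along the lines of the following sketch (these steps are assumed here and follow the same pattern as the other services):
su -s /bin/sh -c "heat-manage db_sync" heat
systemctl enable openstack-heat-api openstack-heat-api-cfn openstack-heat-engine
systemctl start openstack-heat-api openstack-heat-api-cfn openstack-heat-engine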