Compute node installation

The compute nodes are the hypervisors. They are managed by the Nova component and generally run a Neutron agent that configures the networks dedicated to the VMs.

Installing Nova

To install the packages:

yum install openstack-nova-compute
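Before configuring Nova, it is worth checking that the node actually supports hardware-accelerated virtualization; the standard check from the OpenStack install guide counts the Intel VT-x / AMD-V CPU flags. If the result is 0, libvirt must be told to use plain QEMU emulation instead of KVM (the `virt_type` option in the `[libvirt]` section of nova.conf):

```shell
# Count CPU flags indicating hardware virtualization support (vmx = Intel, svm = AMD).
# A result of 0 means this node can only do QEMU software emulation.
egrep -c '(vmx|svm)' /proc/cpuinfo
```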

Configuring Nova

Configuration file /etc/nova/nova.conf:

[DEFAULT]
auth_strategy=keystone
my_ip=192.168.2.69
use_neutron=true
enabled_apis=osapi_compute,metadata
firewall_driver = nova.virt.firewall.NoopFirewallDriver
rpc_backend=rabbit
[api_database]
[barbican]
[cache]
[cells]
[cinder]
[cloudpipe]
[conductor]
[cors]
[cors.subdomain]
[crypto]
[database]
[ephemeral_storage_encryption]
[glance]
api_servers = http://glance.internal:9292
[guestfs]
[hyperv]
[image_file_url]
[ironic]
[key_manager]
[keystone_authtoken]
auth_uri = http://controller.internal:5000
auth_url = http://controller.internal:35357
memcached_servers = controller.internal:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = nova
password = nova
[libvirt]
[matchmaker_redis]
[metrics]
[mks]
[neutron]
[osapi_v21]
[oslo_concurrency]
lock_path = /var/lib/nova/tmp
[oslo_messaging_amqp]
[oslo_messaging_notifications]
[oslo_messaging_rabbit]
rabbit_host = bdd
rabbit_userid = openstack
rabbit_password = RABBIT_PASS
[oslo_messaging_zmq]
[oslo_middleware]
[oslo_policy]
[placement]
[placement_database]
[rdp]
[remote_debug]
[serial_console]
[spice]
[ssl]
[trusted_computing]
[upgrade_levels]
[vmware]
[vnc]
enabled = True
vncserver_listen = 0.0.0.0
vncserver_proxyclient_address = $my_ip
novncproxy_base_url = http://controller.public:6080/vnc_auto.html
[workarounds]
[wsgi]
[xenserver]
[xvp]

Enable and start the hypervisor's Nova services:

systemctl enable libvirtd openstack-nova-compute
systemctl start libvirtd openstack-nova-compute
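Once the services are running, the new hypervisor should register itself with the controller. A quick sanity check, run from a machine with the admin credentials loaded (the credentials file path below is an assumption; adapt it to your deployment):

```shell
# Load the admin credentials (path is deployment-specific), then list compute
# services: the new node should appear with status "enabled" and state "up".
source ~/admin-openrc
openstack compute service list --service nova-compute
```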

Configuring Neutron

Install the required agent packages:

yum install openstack-neutron-openvswitch ipset

Configure the Open vSwitch agent in /etc/neutron/plugins/ml2/openvswitch_agent.ini:

[DEFAULT]
[agent]
tunnel_types = gre,vxlan
l2_population = True
[ovs]
local_ip = 192.168.200.4
bridge_mappings = vlan:br-vlan
[securitygroup]
firewall_driver = iptables_hybrid
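The bridge_mappings entry above assumes an OVS bridge named br-vlan already exists on the node. If it does not, it can be created and attached to the physical interface that carries the VLAN traffic (the interface name eth1 below is an assumption; use the actual NIC of the node):

```shell
# Create the provider bridge referenced by bridge_mappings (vlan:br-vlan)
ovs-vsctl add-br br-vlan
# Attach the physical interface carrying the VLAN networks (eth1 is an assumption)
ovs-vsctl add-port br-vlan eth1
```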

Configure Neutron itself in /etc/neutron/neutron.conf:

[DEFAULT]
auth_strategy = keystone
rpc_backend = rabbit
[agent]
[cors]
[cors.subdomain]
[database]
[keystone_authtoken]
auth_uri = http://controller.internal:5000
auth_url = http://controller.internal:35357
memcached_servers = controller.internal:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = neutron
[matchmaker_redis]
[nova]
[oslo_concurrency]
lock_path = /var/lib/neutron/tmp
[oslo_messaging_amqp]
[oslo_messaging_notifications]
[oslo_messaging_rabbit]
rabbit_host = bdd
rabbit_userid = openstack
rabbit_password = RABBIT_PASS
[oslo_messaging_zmq]
[oslo_policy]
[qos]
[quotas]
[ssl]

Enable and start the following services:

systemctl enable neutron-openvswitch-agent openvswitch
systemctl start neutron-openvswitch-agent openvswitch
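As with Nova, the agent can be verified from a machine with the admin credentials loaded; the Open vSwitch agent for this host should be listed as alive:

```shell
# The new openvswitch agent should appear with "Alive" = :-) (or True)
openstack network agent list --agent-type open-vswitch
```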

Configuring libvirt

To allow live migrations, add the --listen option via the /etc/sysconfig/libvirtd file. You must also enable plain TCP without TLS (to be revisited later), and accept unauthenticated TCP connections. This is done in /etc/libvirt/libvirtd.conf (auth_tcp = "none").
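Concretely, the two files would contain the following (these are the standard libvirtd option names; disabling TLS and authentication like this should only be done on a trusted management network):

```
# /etc/sysconfig/libvirtd
LIBVIRTD_ARGS="--listen"

# /etc/libvirt/libvirtd.conf
listen_tls = 0
listen_tcp = 1
auth_tcp = "none"
```

Restart the daemon afterwards with: systemctl restart libvirtd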