The Cisco Meeting Server (CMS) core server installation includes all roles typically deployed on the internal, corporate network. These services are not typically reachable directly from an external (public) network. Later sections of this lab will show you how to extend access to those servers to users outside the corporate firewalls. In deployments where you only want to provide basic audio/video conferencing, either scheduled or ad-hoc, this may be all you will need to deploy.
In this lab you will deploy three CMS core servers. These are virtualized CMS servers that do not have the same scalability as the CMS1000 or CMS2000 platforms. In fact, the resources allocated to these virtual machines are extremely limited, so at times video quality may not be optimal in this lab environment; however, all product features are present to give you the opportunity to experience the product first-hand.
The core roles you will deploy on these servers are the Database and Call Bridge roles. Each of these roles will be clustered for high availability. You will also configure the Web Admin for administrative and API access. Optionally, Web Bridge services can be added so clients can join meetings via WebRTC-capable browsers. That is covered as part of the Edge access, since the Web Bridge is most often deployed for external access.
The three CMS servers have already been installed for you, the admin password has been set (which is required after the first login), and the network interface has been configured with an IP address and gateway.
For your reference, the following command was used to configure the IP address on the first CMS server:
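Assuming the standard MMP syntax of ipv4 <interface> add <address>/<prefix length> <gateway>, that command would have looked something like this:

```
ipv4 a add 10.0.108.51/24 10.0.108.1
```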
This command assigns the IP address (10.0.108.51), mask (24-bit or 255.255.255.0), and default gateway (10.0.108.1) to the first interface, called "a". CMS supports up to four network interfaces (named a, b, c, and d). This capability exists so that, for example, one interface can be configured for management while another is used for audio/video conferencing traffic. The interfaces should never be connected to the same network/VLAN. In practice, this is most often used to separate internal and external-facing interfaces in edge deployments, which we will discuss when examining the single edge and Cisco Expressway products later in the lab.
Let's take a look at the CMS servers in your pod. There are three, named cms1a, cms1b, and cms1c. Since cms1b and cms1c have most of the initial pieces preconfigured, let's start by accessing the MMP (the command-line interface) of cms1a via SSH:
| Cisco Meeting Server | Password |
|---|---|
| cms1a.pod8.cms.lab | |
TIP: To speed up access, simply click the cms1a.pod8.cms.lab link above, then right-click inside the SSH session window that appears. That will paste the password to the terminal session.
To view the interface configuration for interface a, issue the ipv4 a command.

Next you must configure CMS to be able to perform DNS lookups. This is important for many functions, such as locating servers and services, as well as for validating certificates. The CMS itself has a static DNS table which can be populated manually with all of the records it needs, including SRV records, but it is recommended to instead point CMS to a reliable, external DNS server.
In this lab we will make a distinction between an internal and an external DNS server. The internal DNS server is the one used by devices inside your network to resolve names to IP addresses. The external server represents a public DNS server, such as one provided by an ISP, OpenDNS, or Google. Because you have the same domain internally and externally, you will use what is known as split DNS: a DNS query on the internal network may resolve a particular record (especially SRV records used to locate services) to an internal server, whereas the public DNS server resolves the same record to a device that proxies or otherwise acts as the external entry point for that service.
The IP address of the internal DNS server for our internal clients and servers is 10.0.224.140. There is also a backup server, 10.0.224.141. You will configure the CMS1a server to send all DNS requests to these servers. To avoid redundant tasks, the other two servers, CMS1b and CMS1c, are already pre-configured with the exact same DNS servers.
To add the DNS server configuration, log into CMS1a (cms1a.pod8.cms.lab) and issue the following commands:
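```
dns add forwardzone . 10.0.224.140
dns add forwardzone . 10.0.224.141
dns flush
```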
As mentioned, CMS supports configuring its own internal DNS entries with the required A and SRV records. While this can eliminate a dependency on external DNS servers, for distributed deployments such as this one it would require you to configure the same information on each CMS, increasing the chances for mistakes and making it difficult to manage if you need to add or change an IP address. You are best off leveraging an external DNS server, but be aware that CMS is then dependent on the availability of your DNS infrastructure, so you should ensure you have highly available DNS servers.
You can verify DNS operation with the dns command as shown below.
To validate that DNS is working properly, issue the following command to perform a lookup for the A record cmslab-ad.cms.lab.
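As a minimal sketch, assuming the MMP dns lookup syntax of dns lookup <record type> <name>:

```
dns lookup A cmslab-ad.cms.lab
```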
Every DNS lookup performed by CMS is cached locally. Because these were the first DNS servers added, there is nothing more to do; however, it is worth pointing out that for any future changes in DNS, either to the local CMS DNS entries or on the external DNS server, you will want to flush the DNS cache with the dns flush command.
Many of the services you will deploy will not function reliably without DNS. As a reference, the following table documents the additional CMS-related DNS records that have already been created on the internal DNS server for this lab. Each of the deployment guides explains other requirements for features that are outside the scope of this lab.

Keep in mind that there are two domains that will be explained later in more detail: pod8.cms.lab is the domain that all Unified CM endpoints will use for their URIs. This would typically match the domain users use for their email addresses. For example, the Jabber client on your laptop has a URI of pod8user4@pod8.cms.lab. The other domain is what you will configure for users on CMS. The CMS domain will be conf.pod8.cms.lab, so for that same Jabber user to log into CMS, they would log in using pod8user4@conf.pod8.cms.lab.
Here is a reference of other internal DNS records that are configured. Feel free to query them from your CMS.
| Type | Record | Description |
|---|---|---|
| A | cms1a.pod8.cms.lab, cms1b.pod8.cms.lab, cms1c.pod8.cms.lab | Resolve to the IP address of each CMS server. |
| A | join.pod8.cms.lab | There are three of these exact same A records, each pointing to a different CMS. Think of this as our cluster hostname. This entry is used by clients to reach a node in the CMS cluster. For example, they could put join.pod8.cms.lab into their browser and get connected to one of the nodes in the cluster. |
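If you would like to see this for yourself, and again assuming the MMP dns lookup syntax, you could query the join record from the CMS:

```
dns lookup A join.pod8.cms.lab
```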
Since CMS generates real-time traffic that is sensitive to delay and packet loss, configuring Quality of Service (QoS) is recommended in most cases. For this, CMS supports Differentiated Services Code Point (DSCP) tagging of the packets it generates. While prioritization based on DSCP depends on whether and how traffic is handled by your infrastructure's network components, for this lab we will configure our CMS with a typical DSCP marking scheme based on QoS best practices.
CMS1b and CMS1c have already been configured, so we will focus on CMS1a. We would like to configure the system with DSCP tagging for IPv4 traffic such that all video is marked with AF41 (DSCP 0x22), all voice is tagged with EF (DSCP 0x2E), and signaling such as SIP uses AF31 (DSCP 0x1A). Configure the following commands on CMS1a:
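```
dscp 4 multimedia 0x22
dscp 4 multimedia-streaming 0x22
dscp 4 voice 0x2E
dscp 4 signaling 0x1A
dscp 4 low-latency 0x1A
```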
You will note that the system requires a reboot for this to take effect. We will wait until the end of this chapter to reboot, since other tasks also require one. For now, let us just confirm the configuration with the dscp command; we should see the values configured above.
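As a quick sketch, and assuming that running dscp with no arguments prints the current settings:

```
dscp
```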
The Network Time Protocol (NTP) is important not just for ensuring accurate timestamps for meeting times and logs, but also for certificate validation. Configure the following two NTP servers on CMS1a. The other two servers, CMS1b and CMS1c, have the NTP servers already configured.
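```
ntp server add 10.0.108.1
ntp server add 64.102.244.57
```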
You can use the ntp server list command to view the NTP configuration as follows:
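```
ntp server list
```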
To check that NTP is functioning properly, use the ntp status command. For a short time after configuring the NTP servers, you may see a connection refused message or even a timeout, because the NTP service is restarting. Eventually, the correct status should appear as described below.
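```
ntp status
```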
The * next to the NTP server in the ntp status output indicates that the clock is synchronized.
Check the clock on the servers using the date command to ensure that the time is correct.
By default, the timezone on CMS is set to UTC time. To match other components, you will set the time zone to America/New_York. The timezone list command shows all known time zones.
Set the timezone on your CMS1a server to the America/New_York timezone with the timezone America/New_York command.
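```
timezone America/New_York
```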
To reboot the server, simply issue the reboot command:
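```
reboot
```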
The server will take approximately 2-3 minutes to reboot. Once it has rebooted, log in again and check the clock using the date command.
| Cisco Meeting Server | Password |
|---|---|
| cms1a.pod8.cms.lab | |
You can see that, since the New York time zone is UTC-5 or UTC-4 (depending on whether daylight saving time is in effect), the Local time now appears 4 or 5 hours earlier than the System time.