DHCP Static Binding on Cisco IOS


Cisco IOS devices can be configured as DHCP servers, and it's also possible to configure a static binding for certain hosts. This might sound easy, but there's a catch to it. In this lesson, I'll show you how to configure this for a Cisco router, a Windows 7 host, and a Linux host. This is the topology I'll be using:

DHCP Binding Demo Topology

The router called "DHCP" will be the DHCP server; R1 and the two computers will be DHCP clients. Everything is connected to a switch, and we'll use the 192.168.1.0/24 subnet. The idea is to create a DHCP pool and use static bindings for the two computers and R1:

  • R1: 192.168.1.100
  • Windows 7: 192.168.1.110
  • Linux: 192.168.1.120

First, we will create a new DHCP pool for the 192.168.1.0 /24 subnet:
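
The pool configuration itself isn't reproduced here, but a minimal sketch looks like this (the pool name and the default-router address are placeholders I picked for illustration):

    DHCP(config)# ip dhcp pool LAN_POOL
    DHCP(dhcp-config)# network 192.168.1.0 255.255.255.0
    ! the default-router address below is a placeholder
    DHCP(dhcp-config)# default-router 192.168.1.254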

Whenever a DHCP client sends a DHCP discover, it includes its client identifier or MAC address. We can see this if we enable a debug on the DHCP server:
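
The debug output itself isn't shown here; the standard IOS debugs for this are:

    DHCP# debug ip dhcp server packet
    DHCP# debug ip dhcp server events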

Cisco Router DHCP Client

Now we’ll configure R1 to request an IP address:
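
A sketch of the client configuration on R1, assuming its LAN interface is FastEthernet 0/0 (as mentioned later in this lesson):

    R1(config)# interface FastEthernet 0/0
    R1(config-if)# ip address dhcp
    R1(config-if)# no shutdown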

In a few seconds you will see the following message on the DHCP server:

When a Cisco router sends a DHCP discover message, it includes a client identifier to uniquely identify the device. We can use this value to configure a static binding. Here's what it looks like:
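
The original configuration isn't reproduced here, but based on the pool name and address described below, a sketch would be:

    DHCP(config)# ip dhcp pool R1-STATIC
    DHCP(dhcp-config)# host 192.168.1.100 255.255.255.0
    ! placeholder value; copy the exact client identifier from the debug output
    DHCP(dhcp-config)# client-identifier 0063.6973.636f.2d30.3031.2e32.3233.2e34.3435.2d46.6130.2f30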

We create a new pool called “R1-STATIC” with the IP address we want to use for R1 and its client identifier. We’ll renew the IP address on R1 to see what happens:

Use the renew dhcp command or do a ‘shut’ and ‘no shut’ on the interface of R1 and you’ll see this on the DHCP server:

As you can see above, the DHCP server uses the client identifier for the static binding and assigns IP address 192.168.1.100 to R1. If you don't like these long numbers, you can also configure R1 to use the MAC address as the client identifier instead:
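
The command in question is the interface-level client-id option, for example:

    R1(config)# interface FastEthernet 0/0
    R1(config-if)# ip dhcp client client-id FastEthernet 0/0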

This tells the router to use the MAC address of its FastEthernet 0/0 interface as the client identifier. You’ll see this change on the DHCP server:

Of course, now we have to change the binding on the DHCP server to match the MAC address:
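
One way to do this, sketched here with a placeholder MAC address, is to swap the client-identifier for a hardware-address statement; depending on how the client presents its identifier, you may instead need client-identifier with the MAC (sometimes prefixed with the 01 media type):

    DHCP(config)# ip dhcp pool R1-STATIC
    DHCP(dhcp-config)# no client-identifier
    ! placeholder MAC address; use the one shown in the debug output
    DHCP(dhcp-config)# hardware-address 0c12.3456.7890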

Do another release on R1:
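
On the router, this is done from the exec prompt, for example:

    R1# release dhcp FastEthernet 0/0
    R1# renew dhcp FastEthernet 0/0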

And you’ll see that R1 gets its correct IP address from the DHCP server and is being identified with its MAC address:

So that’s how the Cisco router requests an IP address. Let’s look at the Windows 7 host now to see if there’s a difference.

Windows 7 DHCP Client

This is what you’ll find on the DHCP server:

Windows 7 uses its MAC address as the client identifier. We can verify this by looking at ipconfig:
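
The command to use is ipconfig /all; the adapter's "Physical Address" field shows the MAC that appears as the client identifier:

    C:\> ipconfig /all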

That’s easy enough. We’ll create another static binding on the DHCP server so that our Windows 7 computer receives IP address 192.168.1.110:
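
A sketch of such a binding, with a placeholder pool name and MAC address (check the debug output to see whether the server records the plain MAC or a client identifier with a leading 01 media type, and use client-identifier in the latter case):

    DHCP(config)# ip dhcp pool WIN7-STATIC
    DHCP(dhcp-config)# host 192.168.1.110 255.255.255.0
    ! placeholder MAC address
    DHCP(dhcp-config)# hardware-address 0011.2233.4455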

Let’s verify our work:
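
On Windows, releasing and renewing the lease looks like this:

    C:\> ipconfig /release
    C:\> ipconfig /renew
    C:\> ipconfig /all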

This is what the debug on the DHCP server will tell us:

There you go, Windows 7 has received the correct IP address. Last but not least is our Linux computer, which acts a little differently.

Linux DHCP Client

Linux (Ubuntu, in my example) acts a little differently as a DHCP client. Let me show you:

The DHCP server shows this:

We see the MAC address of the Linux host, so we'll create a static binding that matches it:

We’ll release the IP address on our Linux host:
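
Assuming the host uses dhclient and its interface is eth0, the release and renew would look something like:

    # assuming interface eth0
    $ sudo dhclient -r eth0
    $ sudo dhclient eth0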

Now take a good look at the debug:

That's not good: even though we configured the client identifier, it's not working. Let's double-check the MAC address:


Forum Replies

Thanks for the info Rene. As always, complete posts!

How do I reserve a single IP for a PC on the router?

That is exactly what this lesson is about…

I have some challenges with a CCNA R&S lab about DHCP/DHCP relay. The lab number is 10.1.3.3, assuming that you have access to the new CCNA R&S official course. The lab has two clients, two intermediary routers, and another router connected to the intermediary routers via serial links. The intermediary routers are R1 and R3, and the central router is R2. A DNS server and the ISP are connected to R2 via Gigabit Ethernet interfaces. The lab tells me to make R2 a DHCP server for the two PCs connected to the intermediary routers R1 and R3 so they receive an IP… The challenge

Hi Catalin,

Your message wasn't deleted, it just wasn't approved yet; I do this manually because of spam. I think this example should help you:

https://networklessons.com/cisco/ccie-routing-switching/cisco-ios-dhcp-relay-agent

If not, let me know.

Static DHCP

From dd-wrt wiki.

Contents

  • Introduction
  • Configuration
  • DHCP Options
  • Example
  • How to add static leases into DHCP by command
  • Troubleshooting

Introduction

Static DHCP (aka DHCP reservation) is a useful feature which makes the DHCP server on your router always assign the same IP address to a specific computer on your LAN . To be more specific, the DHCP server assigns this static IP to a unique MAC address assigned to each NIC on your LAN. Your computer boots and requests its IP from the router's DHCP server. The DHCP server recognizes the MAC address of your device's NIC and assigns the static IP address to it. (Note also that, currently, each reserved IP address must also be unique. Therefore, e.g., one cannot reserve the same IP address for both the wired and wireless interfaces of a device, even though the device may be configured such that only one interface is active at any given time.)

Static DHCP will be needed if you want an interface to always have the same IP address. Sometimes required for certain programs, this feature is useful if other people on your LAN know your IP and access your PC using this IP. Static DHCP should be used in conjunction with Port Forwarding. If you forward an external WAN TCP/UDP port to a port on a server running inside your LAN, you have to give that server a static IP, and this can be achieved easily through Static DHCP.

Configuration

  • Log into the DD-WRT Web GUI
  • Go to the Services tab
  • DHCP Daemon should be enabled
  • If there is no blank slot under "Static Leases", click Add
  • Enter the MAC address of the client interface, the hostname of the machine, and the desired IP address for this machine. Note that you cannot reserve the same IP address for two different MAC addresses (e.g. both the wired and wireless interfaces of a device).
  • Scroll to the bottom of the page and save your changes.

Note: You must either Save or Apply the page each time you've added and filled out a new static lease. Clicking the Add button refreshes the page without saving what you entered. If you try to add multiple blank leases and fill them all out at once then you will encounter a bug that the GUI thinks they are duplicate entries.


Note: A blank lease duration means it will be an infinite lease (never expires). Setting a lease duration will allow you to change the static lease information later on and have the host automatically get the new information without having to manually release/renew the lease on the host.

DHCP Options

The DHCP system assigns IP addresses to your local devices. While the main configuration is on the setup page you can program some nifty special functions here.

DHCP Daemon : Disabling here will disable DHCP on this router irrespective of the settings on the Setup screen.

Used domain : You can select here which domain the DHCP clients should get as their local domain. This can be the WAN domain set on the Setup screen or the LAN domain which can be set here.

LAN Domain : You can define here your local LAN domain which is used as local domain for DNSmasq and DHCP service if chosen above.

Static Leases : Assign certain hosts specific addresses here. This is also a way to add hosts to the router's local DNS service (DNSmasq).

Note: It is recommended but not necessary to set your static leases outside of your automatic DHCP address range. This range is 192.168.1.100-192.168.1.149 by default and can be configured under Setup -> Basic Setup : Network Address Server Settings (DHCP) .

Example

To assign the IP address 192.168.1.12 and the hostname "mypc" to a PC with a network card having the MAC address 00:AE:0D:FF:BE:56 you should press Add then enter 00:AE:0D:FF:BE:56 into the MAC field, mypc into the HOST field and 192.168.1.12 into the IP field.

Remember: If you press the 'Add' button before saving the entries you just made, they will be cleared. This is normal behavior.

How to add static leases into DHCP by command

If you have more than two, just keep adding them to the static_leases variable with a space between each.

Don't forget the double quotes: if the value contains any spaces (more than one lease), you must enclose it in quotes.

NOTE: Starting with build 13832 the format has changed to the following:

Note the '=' sign at the end of each lease. If you want to set a static lease time, put a number after the last '=' to set the time in minutes. In the second case above, a blank after the '=' sign means it is an indefinite lease; 1440 = 24 hours.
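
A sketch of what such a command might look like in the post-13832 format described above, reusing the MAC/host/IP from the Example section plus a second, made-up lease with a 1440-minute lease time:

    # the second lease below is a made-up example
    nvram set static_leases="00:AE:0D:FF:BE:56=mypc=192.168.1.12= 00:AE:0D:FF:BE:57=nas=192.168.1.13=1440"
    nvram commit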

If you have success setting a couple of entries, but have difficulty setting lots of entries, then you may be encountering a length limitation for your given method and firmware combination. A workaround for this is to ssh or telnet to the router and enter it there. If you still run into a length problem then you could use vi (or other method) to enter your configuration into a file and then run the file. Here is what I saved into a file and ran with success (I needed single quotes instead of double quotes in my case):

I then ran the file (which is named myiplist.sh) like so:

Also remember tools such as "nvram show | grep static" as well as "nvram commit".

Troubleshooting

If you cannot ping a hostname, append a period to the end. I.e. instead of "ping server" try "ping server."

If the static reservations are visible, but your machines continue to get a normal DHCP IP try going to the Setup page. Hit Save and then Apply settings. The DHCP daemon should restart and you may lose connection briefly. Try renewing your DHCP lease and you should be getting the correct IP at this point.


How to Configure Static IP Address on Ubuntu 20.04

Published on Sep 15, 2020


This article explains how to set up a static IP address on Ubuntu 20.04.

Typically, in most network configurations, the IP address is assigned dynamically by the router DHCP server. Setting a static IP address may be required in different situations, such as configuring port forwarding or running a media server .

Configuring Static IP address using DHCP

The easiest and recommended way to assign a static IP address to a device on your LAN is to configure Static DHCP on your router. Static DHCP, or DHCP reservation, is a feature found on most routers which makes the DHCP server automatically assign the same IP address to a specific network device each time the device requests an address from the DHCP server. This works by assigning a static IP to the device's unique MAC address.

The steps for configuring a DHCP reservation vary from router to router. Consult the vendor’s documentation for more information.

Ubuntu 17.10 and later uses Netplan as the default network management tool. The previous Ubuntu versions were using ifconfig and its configuration file /etc/network/interfaces to configure the network.

Netplan configuration files are written in YAML syntax with a .yaml file extension. To configure a network interface with Netplan, you need to create a YAML description for the interface, and Netplan will generate the required configuration files for the chosen renderer tool.

Netplan supports two renderers, NetworkManager and Systemd-networkd. NetworkManager is mostly used on Desktop machines, while the Systemd-networkd is used on servers without a GUI.

Configuring Static IP address on Ubuntu Server

On Ubuntu 20.04, the system identifies network interfaces using ‘predictable network interface names’.

The first step toward setting up a static IP address is identifying the name of the ethernet interface you want to configure. To do so, use the ip link command, as shown below:

The command prints a list of all the available network interfaces. In this example, the name of the interface is ens3 :

Netplan configuration files are stored in the /etc/netplan directory. You'll probably find one or more YAML files in this directory. The name of the file may differ from setup to setup. Usually, the file is named either 01-netcfg.yaml, 50-cloud-init.yaml, or NN_interfaceName.yaml, but in your system it may be different.

If your Ubuntu cloud instance is provisioned with cloud-init, you’ll need to disable it. To do so create the following file:
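
The file in question is a small cloud-init override; a common approach (assuming the standard cloud-init paths) is:

    # /etc/cloud/cloud.cfg.d/99-disable-network-config.cfg
    network: {config: disabled}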

To assign a static IP address on the network interface, open the YAML configuration file with your text editor :

Before changing the configuration, let's briefly explain the code.

Each Netplan YAML file starts with the network key that has at least two required elements. The first required element is the version of the network configuration format, and the second one is the device type. The device type can be ethernets, bonds, bridges, or vlans.

The configuration above also has a line that shows the renderer type. Out of the box, if you installed Ubuntu in server mode, the renderer is configured to use networkd as the back end.

Under the device's type (ethernets), you can specify one or more network interfaces. In this example, we have only one interface, ens3, which is configured to obtain its IP addressing from a DHCP server (dhcp4: yes).

To assign a static IP address to the ens3 interface, edit the file as follows (a sample configuration is sketched after the list):

  • Set DHCP to dhcp4: no .
  • Specify the static IP address. Under addresses: you can add one or more IPv4 or IPv6 IP addresses that will be assigned to the network interface.
  • Specify the gateway.
  • Under nameservers , set the IP addresses of the nameservers.
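
A sample configuration under those assumptions (the addresses, gateway, and nameservers below are placeholders; the interface name and renderer follow the example above):

    network:
      version: 2
      renderer: networkd
      ethernets:
        ens3:
          dhcp4: no
          addresses:
            # placeholder addresses; adjust to your network
            - 192.168.121.221/24
          gateway4: 192.168.121.1
          nameservers:
            addresses: [8.8.8.8, 1.1.1.1]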

When editing YAML files, make sure you follow the YAML code indentation standards. If the syntax is not correct, the changes will not be applied.

Once done, save the file and apply the changes by running the following command:
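
That command is:

    $ sudo netplan apply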

Verify the changes by typing:
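
For example:

    $ ip addr show dev ens3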

That’s it! You have assigned a static IP to your Ubuntu server.

Configuring Static IP address on Ubuntu Desktop

Setting up a static IP address on Ubuntu Desktop computers requires no technical knowledge.

In the Activities screen, search for “settings” and click on the icon. This will open the GNOME settings window. Depending on the interface you want to modify, click either on the Network or Wi-Fi tab. To open the interface settings, click on the cog icon next to the interface name.

In the "IPv4 Method" tab, select "Manual" and enter your static IP address, netmask, and gateway. Once done, click on the "Apply" button.


To verify the changes, open your terminal either by using the Ctrl+Alt+T keyboard shortcut or by clicking on the terminal icon and run:

The output will show the interface IP address:

Conclusion

We’ve shown you how to configure a static IP address on Ubuntu 20.04.

If you have any questions, please leave a comment below.


Static and dynamic IP address configurations: DHCP deployment

By Damon Garn


In my Static and dynamic IP address configurations for DHCP article, I discussed the pros and cons of static versus dynamic IP address allocation. Typically, sysadmins will manually configure servers and network devices (routers, switches, firewalls, etc.) with static IP address configurations. These addresses don’t change (unless the administrator changes them), which is important for making services easy to find on the network.

With dynamic IP configurations, client devices lease an IP configuration from a Dynamic Host Configuration Protocol (DHCP) server. This server is configured with a pool of available IPs and other settings. Clients contact the server and temporarily borrow an IP address configuration.

In this article, I demonstrate how to configure DHCP on a Linux server.


Manage the DHCP service

First, install the DHCP service on your selected Linux box. This box should have a static IP address. DHCP is a very lightweight service, so feel free to co-locate other services such as name resolution on the same device.

Note : By using the -y option, yum will automatically install any dependencies necessary.
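
The install command itself isn't shown above; on a RHEL-family system it would be something like this (the package is named dhcp on RHEL 7 and dhcp-server on RHEL 8):

    $ sudo yum install -y dhcp-server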

Configure a DHCP scope

Next, edit the DHCP configuration file to set the scope. However, before this step, you should make certain you understand the addressing scheme in your network segment. In my courses, I recommend establishing the entire range of addresses, then identifying the static IPs within the range. Next, determine the remaining IPs that are available for DHCP clients to lease. The following information details this process.

How many static IP addresses?

Figure out how many servers, routers, switches, printers, and other network devices will require static IP addresses. Add some additional addresses to this group to account for network growth (it seems like we’re always deploying more print devices).

What are the static and dynamic IP address ranges?

Set the range of static IPs in a distinct group. I like to use the front of the available address range. For example, in a simple Class C network of 192.168.2.0/24, I might set aside 192.168.2.1 through 192.168.2.50 for static IPs. If that’s true, you may assume I have about 30 devices that merit static IP addresses, and I have left about twenty addresses to grow into. Therefore, the available address space for DHCP is 192.168.2.51 through 192.168.2.254 (remember, 192.168.2.255 is the subnet broadcast address).

This screenshot from the part one article is a reminder:

[Screenshot from the part one article: a spreadsheet tracking IP addresses, MAC addresses, hostnames, etc.]

Note : Some administrators include the static IPs in the scope and then manually mark them as excluded or unavailable to the DHCP service for leasing. I’m not a fan of this approach. I prefer that the DHCP not even be aware of the addresses that are statically assigned.

What is the router’s IP address?

Document the router’s IP address because this will be the default gateway value. Administrators tend to choose either the first or the last address in the static range. In my case, I’d configure the router’s IP address as 192.168.2.1/24, so the default gateway value in DHCP is 192.168.2.1.

Where are the name servers?

Name resolution is a critical network service. You should configure clients for at least two DNS name servers for fault tolerance. When set manually, this configuration is in the /etc/resolv.conf file.

Note that the DNS name servers don’t have to be on the same subnet as the DNS clients.

Lease duration

In the next section, I’ll go over the lease generation process whereby clients receive their IP address configurations. For now, suffice it to say that the IP address configuration is temporary. Two values are configured on the DHCP server to govern this lease time:

default-lease-time - How long the lease is valid before renewal attempts begin.

max-lease-time - The point at which the IP address configuration is no longer valid and the client is no longer considered a lease-holder.

Configure the DHCP server

Now that you understand the IP address assignments in the subnet, you can configure the DHCP scope. The scope is the range of available IP addresses, as well as options such as default gateway. There is good documentation here .

Create the DHCP scope

Begin by editing the dhcpd.conf configuration file (you'll need root privileges to do so). I prefer Vim:
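
On a typical RHEL-family install the file lives under /etc/dhcp, so the command would be something like:

    $ sudo vim /etc/dhcp/dhcpd.conf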

Next, add the values you identified in the previous section. Here is a subnet declaration (scope):
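
The declaration itself isn't reproduced above; a sketch using the example addressing from this article (the DNS server addresses and lease times are placeholders):

    # placeholder DNS addresses and lease times; adjust to your environment
    subnet 192.168.2.0 netmask 255.255.255.0 {
        range 192.168.2.51 192.168.2.254;
        option routers 192.168.2.1;
        option domain-name-servers 192.168.2.10, 192.168.2.11;
        default-lease-time 86400;
        max-lease-time 172800;
    }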

Remember that spelling counts and typos can cause you a lot of trouble. Check your entries carefully. A mistake in this file can prevent many workstations from having valid network identities.

Reserved IP addresses

It is possible to reserve an IP address for a specific host. This is not the same thing as a statically-assigned IP address. Static IP addresses are configured manually, directly on the client. Reserved IP addresses are leased from the DHCP server, but the given client will always receive the same IP address. The DHCP service identifies the client by MAC address, as seen below.
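
In ISC dhcpd this is a host declaration along these lines (the hostname, MAC, and address are placeholders, with the fixed address taken from the static portion of the example range):

    # placeholder hostname, MAC, and address
    host printer01 {
        hardware ethernet 52:54:00:11:22:33;
        fixed-address 192.168.2.25;
    }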

Start the DHCP service

Start and enable the DHCP service. RHEL 7 and 8 rely on systemd to manage services, so you’ll type the following commands:
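
For the ISC DHCP server package, the unit is normally dhcpd, so the commands would be:

    $ sudo systemctl enable --now dhcpd
    $ sudo systemctl status dhcpd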

See this article I wrote for a summary on successfully deploying services.

Don’t forget to open the DHCP port in the firewall:
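
With firewalld, that would be:

    $ sudo firewall-cmd --permanent --add-service=dhcp
    $ sudo firewall-cmd --reload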

Explore the DORA process

Now that the DHCP server is configured, here is the lease generation process. This is a four-step process, and I like to point out that it is entirely initiated and managed by the client, not the server. DHCP is a very passive network service.

The process is:

  • Discover
  • Offer
  • Request
  • Acknowledge

Which spells the acronym DORA.

  • The client broadcasts a DHCPDiscover message on the subnet, which the DHCP server hears.
  • The DHCP server broadcasts a DHCPOffer on the subnet, which the client hears.
  • The client broadcasts a DHCPRequest message, formally requesting the use of the IP address configuration.
  • The DHCP server broadcasts a DHCPAck message that confirms the lease.

The lease must be renewed periodically, based on the DHCP Lease Time setting. This is particularly important in today’s networks that often contain many transient devices such as laptops, tablets, and phones. The lease renewal process is steps three and four. Many client devices, especially desktops, will maintain their IP address settings for a very long time, renewing the configuration over and over.

Updating the IP address configuration

You may need to obtain a new IP address configuration with updated settings. This can be an important part of network troubleshooting.

Manually generate a new lease with nmcli

You can manually force the lease generation process by using the nmcli command. You must know the connection name and then down and up the card.
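
A sketch, assuming the connection is named ens3 (use nmcli connection show to find the real name):

    # 'ens3' below is a placeholder connection name
    $ nmcli connection show
    $ sudo nmcli connection down ens3
    $ sudo nmcli connection up ens3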

Manually force lease generation with dhclient

You can also use the dhclient command to generate a new DHCP lease manually. Here are the commands:

  • dhclient -r to release the current lease
  • dhclient (no option) to lease a new one
  • dhclient -r eth0 to release the lease on a specific NIC

Note : use -v for verbose output

Remember, if the client’s IP address is 169.254.x.x, it could not lease an IP address from the DHCP server.

Other DHCP considerations

There are many ways to customize DHCP to suit your needs. This article only covers the most common options. Two settings to keep in mind are lease times and dealing with routers.

Managing lease times

There is a good trick to be aware of. Use short lease durations on networks with many portable devices or virtual machines that come and go quickly from the network. These short leases allow IP addresses to be recycled regularly. Use longer durations on unchanging networks (such as a subnet containing mostly desktop computers). In theory, longer durations reduce network traffic by requiring fewer renewals, but on today's networks, that traffic is inconsequential.

Routers and DHCP

There is one other aspect of DHCP design to consider. The DORA process covered above occurs entirely by broadcast. Routers, as a general rule, are configured to stop broadcasts. That’s just part of what they do. There are three approaches you can take to managing this problem:

  • Place a DHCP server on each subnet (no routers between the DHCP server and its clients).
  • Place a DHCP relay agent on each subnet that sends DHCP lease generation traffic via unicast to the DHCP server on a different subnet.
  • Use RFC 1542-compliant routers, which can be configured to recognize and pass DHCP broadcast traffic.


DHCP is a simple service but an absolutely critical one. Understanding the lease generation process helps with network troubleshooting. Proper planning and tracking are essential to ensuring you don’t permit duplicate IP address problems to enter your network environment.


Damon Garn owns Cogspinner Coaction, LLC, a technical writing, editing, and IT project company based in Colorado Springs, CO. Damon authored many CompTIA Official Instructor and Student Guides (Linux+, Cloud+, Cloud Essentials+, Server+) and developed a broad library of interactive, scored labs. He regularly contributes to Enable Sysadmin, SearchNetworking, and CompTIA article repositories. Damon has 20 years of experience as a technical trainer covering Linux, Windows Server, and security content. He is a former sysadmin for US Figure Skating. He lives in Colorado Springs with his family and is a writer, musician, and amateur genealogist. More about me

How-To Geek

How to Set Static IP Addresses on Your Router


Routers both modern and antiquated allow users to set static IP addresses for devices on the network, but what's the practical use of static IP addresses for a home user? Read on as we explore when you should, and shouldn't, assign a static IP.

Dear How-To Geek, After reading over your five things to do with a new router article , I was poking around in the control panel of my router. One of the things I found among all the settings is a table to set static IP addresses. I'm pretty sure that section is self explanatory in as much as I get that it allows you to give a computer a permanent IP address, but I don't really understand why? I've never used that section before and everything on my home network seems to work fine. Should I be using it? It's obviously there for some reason, even if I'm not sure what that reason is! Sincerely, IP Curious

To help you understand the application of static IP addresses, let's start with the setup you (and most readers for that matter) have. The vast majority of modern computer networks, including the little network in your home controlled by your router, use DHCP (Dynamic Host Configuration Protocol). DHCP is a protocol that automatically assigns a new device an IP address from the pool of available IP addresses without any interaction from the user or a system administrator. Let's use an example to illustrate just how wonderful DHCP is and how easy it makes all of our lives.


Imagine that a friend visits with their iPad. They want to get on your network and update some apps on the iPad. Without DHCP, you would need to hop on a computer, log into your router's admin panel, and manually assign an available address to your friend's device, say 10.0.0.99. That address would be permanently assigned to your friend's iPad unless you went in later and manually released the address.

With DHCP, however, life is so much easier. Your friend visits, they want to jump on your network, so you give them the Wi-Fi password to log in and you're done. As soon as the iPad connects to the router, the router's DHCP server checks the list of available IP addresses and assigns an address with an expiration date built in. Your friend's iPad is given an address and connected to the network, and when your friend leaves and is no longer using the network, that address returns to the pool of available addresses, ready to be assigned to another device.

All that happens behind the scenes and, assuming there isn't a critical error in the router's software, you'll never even need to pay attention to the DHCP process as it will be completely invisible to you. For most applications, like adding mobile devices to your network, general computer use, video game consoles, etc., this is a more than satisfactory arrangement and we should all be happy to have DHCP and not be burdened with the hassle of manually managing our IP assignment tables.

static dhcp assignment

Although DHCP is really great and makes our lives easier, there are situations where using a manually assigned static IP address is quite handy. Let's look at a few situations where you would want to assign a static IP address in order to illustrate the benefits of doing so.

You need reliable name resolution on your network for computers that need to be consistently and accurately found. Although networking protocols have advanced over the years, and the majority of the time using a more abstract protocol like SMB (Server Message Block) to visit computers and shared folders on your network using the familiar //officecomputer/shared_music/ style address works just fine, for some applications it falls apart. For example, when setting up media syncing on XBMC it's necessary to use the IP address of your media source instead of the SMB name.

Any time you rely on a computer or a piece of software to accurately and immediately locate another computer on your network (as is the case with our XBMC example - the client devices need to find the media server hosting the material) with the least chance of error, assigning a static IP address is the way to go. Direct IP-based resolution remains the most stable and error free method of communicating on a network.

You want to impose a human-friendly numbering scheme onto your network devices. For network assignments like giving an address to your friend's iPad or your laptop, you probably don't care where in the available address block the IP comes from because you don't really need to know (or care). If you have devices on your network that you regularly access using command line tools or other IP-oriented applications, it can be really useful to assign permanent addresses to those devices in a scheme that is friendly to human memory.

For example, if left to its own devices our router would assign any available address to our three Raspberry Pi XBMC units. Because we frequently tinker with those units and access them by their IP addresses, it made sense to permanently assign addresses to them that would be logical and easy to remember:

static dhcp assignment

The .90 unit is in the basement, the .91 unit is on the first floor, and the .92 unit is on the second floor.

You have an application that expressly relies on IP addresses. Some applications will only allow you to supply an IP address to refer to other computers on the network. In such cases, it would be extremely annoying to have to change the IP address in the application every time the IP address of the remote computer changed in the DHCP table. Assigning a permanent address to the remote computer saves you the hassle of frequently updating your applications. This is why it's quite useful to assign any computer that functions as a server of any sort a permanent address.

Before you just start assigning static IP addresses left and right, let's go over some basic network hygiene tips that will save you from a headache down the road.

First, check what the IP pool available on your router is. Your router will have a total pool and a pool specifically reserved for DHCP assignments. The total pool available to home routers is typically 10.0.0.0 through 10.255.255.255 or 192.168.0.0 through 192.168.255.255 . Then, within those ranges a smaller pool is reserved for the DHCP server, typically around 252 addresses in a range like 10.0.0.2 through 10.0.0.254. Once you know the general pool, you should use the following rules to assign static IP addresses:

  • Never assign an address that ends in .0 or .255 as these addresses are typically reserved for network protocols. This is the reason the example IP address pool above ends at .254.
  • Never assign an address at the very start of the IP pool, e.g. 10.0.0.1, as the start address is always reserved for the router. Even if you've changed the IP address of your router for security purposes, we'd still suggest against assigning a computer to that address.
  • Never assign an address outside of the total available pool of private IP addresses. This means if your router's pool is 10.0.0.0 through 10.255.255.255 every IP you assign (keeping in mind the prior two rules) should fall within that range. Given that there are nearly 17 million addresses in that pool, we're sure you can find one you like.

Some people prefer to only use addresses outside of the DHCP range (e.g. they leave the 10.0.0.2 through 10.0.0.254 block completely untouched) but we don't feel strongly enough about that to consider it an outright rule. Given the improbability of a home user needing 252 device addresses simultaneously, it's perfectly fine to assign a device to one of those addresses if you'd prefer to keep everything in, say, the 10.0.0.x block.



Assigning a fixed IP address to a machine in a DHCP network

I want to assign a fixed private IP address to a server so that local computers can always access it.

Currently, the DHCP address of the server is something like 192.168.1.66 .

Should I simply assign the server this same IP as fixed and configure the router so that it will exclude this IP from the ones available for DHCP? Or are there some ranges of IP that are traditionally reserved for static addresses?

My beginner's question doesn't relate to commands but to general principles and good practices.

Practical case (Edit 1 of 2)

Thank you for the many good answers, especially the very detailed one from Liam.

I could access the router's configuration.

When booting any computer, it obtains its IPv4 address in DHCP.

The IP and the MAC addresses that I can see with the ipconfig /all command in Windows match those in the list of connected devices that the router displays, so I can confirm who is who.

The list of connected devices is something like

Things that I don't understand:

  • Although all IP addresses are obtained via DHCP, they are displayed by the router as if they were static addresses.
  • The router's setting "Enable DHCP on LAN" is set to "Off", but the IP addresses are still obtained via DHCP.
  • The IP addresses attributed to the computers are outside of the very narrow DHCP range of 192.168.1.33 to 192.168.1.35.

On any Windows computer connected via DHCP, ipconfig /all shows something like:

I'm missing something, but what?

Practical case (Edit 2 of 2)

Solution found.

For details, see my answer to Michal's comment at the bottom of this message.

I must admit that the way the router displays things keeps some parts a mystery. The router seems to be using DHCP by default, but remembers the devices that were connected to it (probably using their MAC addresses). That could be the reason why it lists the IPs as static although they're dynamic. There was also a Cisco router at 192.168.1.4 which appeared for some business communications service, but I had no credentials to access it.


  • There's no standard governing DHCP reservation ranges, but it would be kinda nice. –  LawrenceC Commented Apr 5, 2018 at 2:43
  • Some routers allow you to define an IP for a chosen mac-address. Use that and DHCP will keep that address for your server. You could also set a DHCP range to e.g. 192.168.0.128 - 192.168.0.254 in a 192.168.0.1/255.255.255.0 network and set all static addresses on the "static" servers from within 192.168.0.2 - 192.168.0.127 range. –  Michal B. Commented Apr 5, 2018 at 7:29
  • @Michal B.: I agree and did it meanwhile.: 1. Obtain the server's mac address. 2. Observe which IPs the router assigns to computers (eg. 192.168.0.50 to 192.168.1.70 ) 3. Start the server in DHCP. In the router panel, name it, basing on its mac address so that the router will remember it. 4. In the server switch IP from DHCP mode to manual and assign an IP that is beyond the ones that the router would assign to other devices (eg. 192.168.1.100 ). You can use nmtui and then edit the config file where you can replace PREFIX=32 by NETMASK=255.255.255.0 . 6. Restart the network service. –  OuzoPower Commented Apr 6, 2018 at 9:58

7 Answers 7

Determine the IP address that is assigned to your server and then go onto the DHCP and set a DHCP reservation for that server.


  • 1 Reservations are essentially self-documenting. ++ –  mfinni Commented Apr 4, 2018 at 21:30
  • 5 @mfinni ++ only works for programmers. -- for your comment :P –  Canadian Luke Commented Apr 4, 2018 at 23:59
  • ..and yes he should also use a fixed IP, and label it. Document it. Maybe even reserve a range for this. In an enterprise using internal VPN it is common for these IP's to be hard coded in HOSTS files and SSH config files so it is a big deal when they suddenly change. –  mckenzm Commented Apr 5, 2018 at 1:30

DHCP services differ across many possible implementations, and there are no ranges of IP that are traditionally reserved for static addresses; it depends what is configured in your environment. I'll assume we're looking at a typical home / SOHO setup since you mention your router is providing the DHCP service.

Should I simply assign the server this same IP as fixed and configure the router so that it will exclude this IP from the ones available for DHCP?

I would say that is not best practice. Many consumer routers will not have the ability to exclude a single address from within the DHCP range of addresses for lease (known as a 'pool'). In addition, because DHCP is not aware that you have "fixed" the IP address at the server you run the risk of a conflict. You would normally either:

  • set a reservation in DHCP configuration so that the server device is always allocated the same address by the DHCP service, or
  • set the server device with a static address that is outside the pool of addresses allocated by the DHCP service.

To expand on these options:

Reservation in DHCP

If your router allows reservations, then the first, DHCP reservation option effectively achieves what you have planned. Note the significant difference: address assignment is still managed by the DHCP service, not "fixed" on the server. The server still requests a DHCP address, it just gets the same one every time.

Static IP address

If you prefer to set a static address, you should check your router's (default) configuration to determine the block of addresses used for DHCP leases. You will normally be able to see the configuration as a first address and last address, or first address and a maximum number of clients. Once you know this, you can pick a static address for your server.

An example would be: the router is set to allow a maximum of 128 DHCP clients with a first DHCP IP address of 192.168.1.32. Therefore a device could be assigned any address from 192.168.1.32 up to and including 192.168.1.159. Your router will use a static address outside this range (generally the first or last address .1 or .254) and you can now pick any other available address for your server.

It depends on the configuration of your DHCP service. Check the settings available to you for DHCP then either reserve an address in DHCP or pick a static address that is not used by DHCP - don't cross the streams.


  • 1 Double++ on this. –  ivanivan Commented Apr 5, 2018 at 3:26
  • 1 Thank you Liam for your very detailed and useful answer. After accessing the router's configuration, other issues arised that I added in the original message. –  OuzoPower Commented Apr 5, 2018 at 9:45
  • @OuzoPower I'm new to responding here so don't have enough rep to comment on the question. Your update shows your router is not providing the DHCP service. The setting is off on the router, and your Windows ipconfig output shows the DHCP service is provided from a device at 192.168.1.5 . Do you have Pi-Hole or another similar device providing DHCP? That's where you'll find your DHCP configuration. NB: This also explains why the router shows the addresses as static and why DHCP assigned addresses are outside the range configured on the router. –  Liam Commented Apr 6, 2018 at 9:52
  • @Liam: No Pi-Hole or similar thing as far as I know. Solution found: As I could not set DHCP ranges in the router but could register the mac address of the server in the router and then attribute to the server a fixed IP address that is far beyond the range that the router is naturally assigning to existing devices. Thanks to the registration of the server's mac address, the router keeps it in memory and shows the server as missing when thus is off. For details, see my answer to Michal B. in the original post. This solution seems working like a charm. –  OuzoPower Commented Apr 6, 2018 at 10:11
  • @OuzoPower That approach may work in the short term but how do you know that the address you have picked is outside the DHCP range? Many DHCP systems pick addresses at random from the available pool. At some point you will need to know what your DHCP configuration actually is, rather than estimating by observation (!) otherwise you will experience some conflict. Your question asked about best practice. Here, best practice would be to know what system is handling DHCP for your LAN. I would start by visiting 192.168.1.5 or https://192.168.1.5/ for clues. –  Liam Commented Apr 6, 2018 at 10:48

It's not a bad habit to divide your subnet into a DHCP pool range and static ranges, but of course you can do what JohnA wrote and use a reservation for your server. The first approach is IMHO clearer, because you are not cluttering your DHCP server with extra settings (it could be confusing for other admins who are not aware that the server is static). If using a DHCP pool plus a static pool, just don't forget to add your static server to DNS (create an A/AAAA record for it).


  • I would like to add that the downside of DHCP reservations for servers is that if your DHCP environment is not sufficiently fault tolerant, a DHCP server outage could cause all manner of problems. Monitor the DHCP server closely and set leases that are long enough to be able to respond to problems even after a long weekend. –  JohnA Commented Apr 5, 2018 at 2:06

I prefer to set my network devices, servers, printers, etc. that require a static IP address out of range of the DHCP pool. For example, xx.xx.xx.0 to xx.xx.xx.99 would be set aside for fixed IP assignments and xx.xx.xx.100 to xx.xx.xx.250 would be set as the DHCP pool.


  • I like this approach as well. This way I can still access the servers even if the DHCP server takes the morning off or decides to start handing out invalid leases! –  ErikF Commented Apr 5, 2018 at 1:24
  • Using isc-dhcp-server this is required (this is what my pi does, along with DNS caching, a fake domain for my LAN, and some traffic shaping for some wireless stuff). Unfortunately, I've seen browser based router config pages (both factory and replacement) that either require a reserved address to be in the dynamic pool... or out of it. –  ivanivan Commented Apr 5, 2018 at 3:30

In addition to the other answers I want to concentrate on the fact that your router configuration does not seem to fit the IP address configuration on your server.

Please have a look at the output of ipconfig /all:

IPv4 Address ........ 192.168.1.xx (preferred)

Default Gateway ........ 192.168.1.1 (= IP of the router)

DHCP server ............ 192.168.1.5

The clients in the network don't get their IP address from the router, but from a different DHCP server in the network (192.168.1.5 instead of 192.168.1.1). You have to find this server and check its configuration instead of the router's DHCP server config, which is seemingly only used for wireless.


My router ( OpenWRT ) allows for static DHCP leases.

Static leases are used to assign fixed IP addresses and symbolic hostnames to DHCP clients.

So, you supply the MAC address of the server and its desired IP address as a "static lease", and DHCP will always allocate the same IP. The client machine (the server in this case) requires no configuration changes and still picks up its IP address (the configured address) from DHCP.


Note that you can't assign a fixed IP address in 192.168 so that clients can "always access it" unless you also give each client a fixed IP address and subnet. If the clients use DHCP, they get whatever subnet the DHCP server gives them, and if they use automatic addressing, they won't be in a 192.168 subnet.

Once you realise that the system can't be easily perfected, you can see that your best options depend on what you are trying to do. Upnp is a common way of making devices visible. DNS is a common way of making devices visible. WINS is a common way of making devices visible. DHCP is a common way of making devices visible.

All of my printers have reservations: my printers aren't critical infrastructure, I want to be able to manage them, many of the clients use UPNP or mDNS for discovery anyway.

My gateway and DNS servers have fixed IP address in a reserved range: My DHCP server provides gateway and DNS addresses, and my DHCP server does not have the capacity to do dynamic discovery or DNS lookup.

None of my streaming devices have fixed or reserved IP values at all: if the network is so broken that DHCP and DNS aren't working, there is no way that the clients will be able to connect to fixed IP addresses anyway.


  • This literally makes no sense. Are you asserting that you can’t mix static and dynamic in a /16? –  Gaius Commented Apr 5, 2018 at 12:59
  • I have asserted that if you use static, you haven't guaranteed that clients can "always access it". Not at all; I've just asserted that I've mixed static and dynamic in my setup. –  user165568 Commented Apr 6, 2018 at 9:46
  • @Gaius I have asserted that if you use static, you haven't guaranteed that clients can "always access it". I'm sorry that doesn't make sense to you: it's one of the primary reasons the world moved away from static. I've also asserted that I've mixed static and dynamic in my setup: see: "none of my streaming devices have fixed or reserved" and "DNS servers have fixed IP": the DNS servers are indeed in the same subnet as the clients. –  user165568 Commented Apr 6, 2018 at 9:52
  • Sorry, but I must admit not understanding most of your answer. As far as I know, DNS means domain name servers, which are useful when you want to name servers, like when assigning domain names to web sites. As I don't need domain names, DNS appears useless to me. Accessing the server is not an issue without DNS. See my answer to Michal B. in the original post for the solution that I found. –  OuzoPower Commented Apr 6, 2018 at 10:18

DHCP vs Static IP: Which One Is Better?

Nowadays, most networking devices such as routers and network switches use the IP protocol as the standard to communicate over the network. In the IP protocol, each device on a network has a unique identifier called an IP address. The easiest method of achieving this was configuring a fixed, or static, IP address. Since there are limitations to static IP, some administrators use dynamic IP instead. DHCP (Dynamic Host Configuration Protocol) is a protocol for assigning dynamic IP addresses to devices that are connected to the network. So, DHCP vs static IP: what's the difference?

What Is a Static IP Address?

A static IP address is an address that is permanently assigned to your network devices by your ISP, and does not change even if your device reboots. Static IP addresses typically have two versions: IPv4 and IPv6. A static IP address is usually assigned to a server hosting websites and provides email, VPN and FTP services. In static IP addressing, each device on the network has its own address with no overlap and you'll have to configure the static IP addresses manually. When new devices are connected to a network, you would have to select the "manual" configuration option and input the IP address, the subnet mask, the default gateway and the DNS server.

A typical example of using a static IP address is a web server. On a Windows computer, go to START -> RUN -> type "cmd" -> OK. Then type "ping www.google.com" in the Command window; the result will look like the example below. The four-byte number 74.125.127.147 is the current IP for www.google.com. If it were a static IP, you would be able to reach Google at any time by using this IP address in the web browser.


What Is DHCP?

In contrast with the static IP address is the dynamic IP address. The static vs dynamic IP topic is hotly debated among many IT technicians. A dynamic IP address is an address that keeps changing. To create dynamic IP addresses, the network must have a DHCP server configured and operating. The DHCP server assigns a vacant IP address to each device connected to the network. DHCP is a way of dynamically and automatically assigning IP addresses to network devices on a physical network. It provides an automated way to distribute and update IP addresses and other configuration information over a network. To learn how DHCP works, read this article: DHCP and DNS: What Are They, What's Their Difference?

Proper IP addressing is essential for establishing communications among devices on a network. Then DHCP vs static IP, which one is better? This part will discuss it.

Static IP addresses allow network devices to retain the same IP address all the time. A network administrator must keep track of each statically assigned device to avoid using that IP address again. Since a static IP address requires manual configuration, it can create network issues if you use it without a good understanding of TCP/IP.

DHCP, by contrast, is a protocol for automating the task of assigning IP addresses. DHCP is advantageous for network administrators because it removes the repetitive task of assigning multiple IP addresses to each device on the network. It might only take a minute per device, but when you are configuring hundreds of network devices, it really gets annoying. Wireless access points also utilize DHCP so that administrators do not need to configure their devices by themselves. For wireless access points, PoE network switches, which support dynamic binding by users' definition, are commonly used to allocate IP addresses for each connected device. Besides, what makes DHCP appealing is that it is cheaper than static IP addressing, with less maintenance required. You can easily find the advantages and disadvantages of each in the following table.

DHCP
  • Advantages: DHCP does not need any manual configuration to connect to local devices or gain access to the Web.
  • Disadvantages: Since DHCP is a "hands-off" technology, there is a danger that someone may implant an unauthorized DHCP server, making it possible to invade the network for illegal purposes or to gain random access to the network without explicit permission.

Static IP
  • Advantages: The address does not change over time unless it is changed manually, which is good for web servers and email servers.
  • Disadvantages: It's more expensive than a dynamic IP address because ISPs often charge an additional fee for static IP addresses. It also requires additional security and manual configuration, which adds complexity when large numbers of devices are connected.

After comparing DHCP vs static IP, it is clear that DHCP is the more popular option for most users, as it is easier and cheaper to deploy. Having a static IP and guessing which IP address is available is bothersome and time-consuming, especially for those who are not familiar with the process. However, static IP is still in demand and useful if you host a website from home, have a file server on your network, use networked printers, or use a remote access program. Because a static IP address never changes, other devices always know exactly how to contact a device that uses one.


Setup dhcp or static ip address from command line in linux.

March 26, 2015 · Command Line Interface (CLI), How to, Linux, Linux Administration, Networking · 25 Comments

This guide shows you how to set up DHCP or a static IP address from the command line in Linux. It saved me when I was in trouble; hopefully you will find it useful as well. In case you’ve only got wireless, you can use this guide to connect to a WiFi network from the command line in Linux.

Note that my network interface is eth0 for this whole guide. Change eth0 to match your network interface.

Static assignment of IP addresses is typically used to eliminate the network traffic associated with DHCP/DNS and to lock an element in the address space to provide a consistent IP target.

Step 1 : STOP and START Networking service

Some people would argue restart would work, but I prefer STOP-START to do a complete rehash. Also if it’s not working already, why bother?
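On a Debian-based system of this vintage, that would typically look something like the following (a sketch; run as root or prefix each command with sudo):

service networking stop
service networking start
# on pure sysvinit setups, /etc/init.d/networking stop followed by /etc/init.d/networking start does the same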

Step 2 : STOP and START Network-Manager

If you have some other network manager (i.e. wicd, then start stop that one).
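For the stock network-manager service, a stop followed by a start would typically look like this (a sketch; substitute the service name if you run wicd or another manager):

service network-manager stop
service network-manager start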

Just for the kicks, following is what restart would do:
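Assuming the same service name as above, that would be:

service network-manager restart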

Step 3 : Bring up network Interface

Now that we’ve restarted both the networking and network-manager services, we can bring our interface eth0 up. For some, it will already be up but useless at this point. But we are going to fix that in the next few steps.
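Assuming the interface is eth0 (substitute your own interface name), bringing it up is one command:

ifconfig eth0 up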

The next command shows the status of the interface. As you can see when you run it, it doesn’t have any IP address assigned to it yet.
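For example, still assuming eth0:

ifconfig eth0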

Step 4 : Setting up IP address – DHCP or Static?

Now we have two options. We can set up a DHCP or static IP address from the command line in Linux. If you decide to use a DHCP address, ensure your router is capable of serving DHCP. If you think DHCP was the problem all along, then go for static.

Again, if you’re using a static IP address, you might want to investigate what range is supported in the network you are connecting to (e.g., some networks use the 10.0.0.0/8 range, some use 172.16.0.0/12, and so on). For some readers this might be a trial-and-error method, but it always works.

Step 4.1 – Setup DHCP from command Line in Linux

Assuming that you’ve already completed Steps 1, 2, and 3, you can just use these simple commands:
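A minimal sketch, assuming eth0 and a Debian-style /etc/network/interfaces file:

# add a DHCP stanza for eth0, then bring the interface up and request a lease
echo -e "auto eth0\niface eth0 inet dhcp" >> /etc/network/interfaces
ifup eth0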

The first command updates /etc/network/interfaces file with eth0 interface to use DHCP.

The next command brings up the interface.

With DHCP, you get an IP address, subnet mask, broadcast address, gateway IP, and DNS server addresses. Skip ahead to the ping test in Step 4.4 to check your internet connection.

Step 4.2 – Setup static IP, subnet mask, broadcast address in Linux

Use the following command to set up the IP, subnet mask, and broadcast address in Linux. Note that the addresses shown are examples; you will be able to find the right values from another device connected to the network, or directly from the router or gateway's status page (e.g., some networks use the 10.0.0.0/8 range, some use 172.16.0.0/12, and so on).
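For example (eth0, 192.168.1.50, the /24 netmask, and the broadcast address are placeholders; substitute values that match your network):

ifconfig eth0 192.168.1.50 netmask 255.255.255.0 broadcast 192.168.1.255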

Next command shows the IP address and details that we’ve set manually.
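For example, still using eth0:

ifconfig eth0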

Because we are doing everything manually, we also need to setup the Gateway address for the interface. Use the following command to add default Gateway route to eth0 .
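For example, assuming 192.168.1.1 is your router:

route add default gw 192.168.1.1 eth0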

We can confirm it using the following command:
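For example, assuming the classic net-tools are installed:

route -n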

Step 4.3 – Alternative way of setting Static IP in a DHCP network

If you’re connected to a network where DHCP is enabled but you want to assign a static IP to your interface, you can use the following commands to set the static IP, netmask, and gateway.
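A possible combination, again with placeholder addresses (pick an address outside the router's DHCP pool to avoid conflicts):

ifconfig eth0 192.168.1.50 netmask 255.255.255.0
route add default gw 192.168.1.1 eth0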

At this point if your network interface is not up already, you can bring it up.

Step 4.4 –  Fix missing default Gateway

Looks good to me so far. We’re almost there.

Try to ping http://google.com/ (cause if www.google.com is down, Internet is broken!):
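For example, sending three probes:

ping -c 3 google.com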

Step 5 : Setting up nameserver / DNS

For most users, Step 4.4 would be the last step. But in case you get a DNS error and want to assign DNS servers manually, use the following command:
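For example, appending Google's public DNS servers (run as root; substitute your preferred resolvers):

echo -e "nameserver 8.8.8.8\nnameserver 8.8.4.4" >> /etc/resolv.conf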

This will add Google Public DNS servers to your resolv.conf file. Now you should be able to ping or browse to any website.

Losing internet connection these days is just painful because we are so dependent on Internet to find usable information. It gets frustrating when you suddenly lose your GUI and/or your Network Manager and all you got is either an Ethernet port or Wireless card to connect to the internet. But then again you need to memorize all these steps.

I’ve tried to make this guide as generic as I can, but if you have a suggestion or if I’ve made a mistake, feel free to comment. Thanks for reading. Please share & RT.


25 comments


Just wanted to say, your guides are amazing and should be included into kali’s desktop help manual. Thanks for your awesome work!


Hi Matt, That’s very kind, thank you. I’m happy that my little contributions are helping others. Cheers, -BMO


I’ve gone through the steps listed in Step 4.2 and when I check my settings are correct, until I reboot. After I reboot all my settings have reverted back to the original settings. Any ideas?


The only problem with this is that nowadays Linux machines aren’t always shipped with the tools you use. They are now shipped with the systemd virus, so the whole init.d approach doesn’t work anymore, and ifconfig isn’t shipped on a large number of distros.

Hi, The intention was to show what to do when things are broken badly. In my case, I’ve lost Network Manager and all of Gnome Desktop. I agree this is very old school but I’m sure it’s better than reinstalling. Not sure what distro you’re talking about. I use Debian based Kali (and Debian Wheezy), CentOS(5,6,7) and Ubuntu for work, personal and testing. ifconfig is present is every one of them. ifconfig also exists in all variants of server distro, even in all Big-IP F5’s or CheckPoint Firewalls. Hope that explains my inspiration for this article. Cheers, -BMO


Hi , I want to say Thank you for your Guide, it’s very useful. and want to add another method for Step 5 : Setting up nameserver / DNS: add nameserver directly to resolv.conf file

nano /etc/resolv.conf

Dynamic resolv.conf(5) file for glibc resolver(3) generated by resolvconf(8)

DO NOT EDIT THIS FILE BY HAND — YOUR CHANGES WILL BE OVERWRITTEN

nameserver 8.8.8.8
nameserver 8.8.4.4
search Home


nano or vi is not requiered, use “printf” instead “echo”… e.g:

printf “nameserver 8.8.8.8\nnameserver 8.8.4.4\n” >> /etc/resolv.conf

double-check with:

grep nameserver /etc/resolv.conf


Hey I’m new to VM my eth0 inet addr is 10.0.02.15 but every video I watch their inet addr always starts with 192. I was wondering what I can do to change my inet addr to start with 192. Is this guide a solution

Hi Billy Bob, Is 10.x address coming from VBox internal or from your router? You possibly selected Bridged network. Try juggling between Bridged and NAT. Also look up VBox IP addressing in Google. Cheers -BMO


Please help, I’ve done all these steps and still I don’t have internet connection with bridged adapter. When I set NAT I have internet connection but with bridged adapter i don’t. I checked with ifconfig eth0 command and I have ip, netmask and broadcast ip. What could be the problem?


Excellent guide. I haven’t been using any debian based linux distros in a while and forgot where the entries go manually. I was actually kind of surprised how long it took to find your page in google, there is a lot of pages that don’t actually answer the question, but yours was spot on.


I did all the commands but my IP address doesn’t show up, and now my internet server on Linux iceweasel is down. It’s telling me that “Server not found” I really need help.


Hi Blackmoreops Thanks for the tutorial. I do have a question tho, in kalisana, I have followed your advice step by step to configure a static ip on my kali VM. But when I check with ifconfig, I still get the ip assigned by my modem? I run the kali vm on fedora 22 host… Is there a way around this? Regards Adexx


Hey Blackmoreops, Thanks for the great article. Being a total NOOB, I’m wondering if these are the last steps in getting my correct lab setting to enumerate De-Ice 1.100 with nmap. My current setting on Kali 2016 machine are: add:192.168.1.5 , mask: 255.255.255.0, default gw 192.168.1.1. Both machine set to NAT in Virtualbox 5. I’ve tried numerous scans ie., ping, list proctocol verify, and stealth and I’m unable to find any open ports. Help!!!!!!!

Best Regards. C


i tried on my kali linux but i lost my internet connection


hello everyone i have got problem on my kali linux with internet. Kali is connected to my wifi but iceweasel can’t open any site. Can you help me solve this problem please ?


check mtu and DNS


Followed through all the steps, and it worked. Then I restarted the router, and everything is back to the earlier configuration?


thanks for tutorial.bu how change the ip that blocked by google :D


Hello sorry but wasnt able to configure my network. I installed kali into my hdd and im using it as my main OS on this pc(idk if thats recommended or not) . I am curently connected to the internet with an ethernet cable and somehow in th top-right corner it says that is curently connected but when i try to open ice weasel i get a message that tells me “server not found” can someone please tell me how to fix this issue and also i followed your tutorial until the end but i had trouble in the end because i get this message bash: /etc/resolv.conf: no such file or directory . If you can help me i would be so gratefull. Sorry for butchering the english language and its grammar


Sir, How can we change or spoof dns server in kali Linux.


I can’t get my static IP address to ping google.

This is what I am trying to do:

ping google.com using a server created with static IP address using Linux Redhat VM Ware,

please help!


For setting up DHCP using the Command : ifconfig eth0 inet dhcp Also works



DHCP Reservation vs Static IP address

So after browsing some websites, some people are telling me that static IP address is the best. But others say the DHCP Reservation is just as good if not the same.

So what is better? Or are they pretty much the same?

Well to help clarify some more. I reserved my PS3 and Wii U IP address in my router. Is that all right?


  • 1 What websites have you been reading, and what points did they make? It's a ridiculous argument really. If you can assign via DHCP, do it. If you can't, you're stuck assigning an manual address on the device. –  Brad Commented Sep 13, 2014 at 6:03
  • One of the factors to take into account when evaluating articles written on this subject is that many learned to use static IPs because early consumer routers didn't have a mechanism for DHCP reservations. Of the 3 answers available now @Wes Sayeed says it the best... but I hate his first sentence. I absolutely agree with the 3rd paragraph tho... –  Tyson Commented Sep 13, 2014 at 13:06

5 Answers

Using DHCP reservations offers you a sort of poor-man's IP address management solution. You can see and change IP addresses from a single console, and it makes it easy to see what addresses are available without having to resort to an Excel spreadsheet (or worse, a ping-and-pray system).

That being said, many applications require a static IP. If the server is configured to use DHCP, the application has no way of knowing that a reservation exists and may refuse to install. Also some applications tie their license to an IP address and therefore must be static as well.

Personally I prefer to use reservations when I can, and statics when I have to. But when I do use a static, I make a reservation for that address anyway so that A) it can be within the scope with the rest of the servers, and B) still provides the visual accounting of the address.

NOTE: If you're referring to network devices like IP cameras and printers, reservations are definitely the way to go because you can add a comment in the reservation as to what the device is and where it's located. Depending on the device, this may be your only means of documenting that information within the system.


  • 1 Setting a reservation for a computer that has a static IP is also a good way of preventing IP address clashes. –  Michael Frank Commented Sep 13, 2014 at 5:48
  • 1 I am very curious to know what software you run that requires a fixed IP address and is unable to know what that address is if that IP is assigned via DHCP. I've also never seen any application permanently tying a license to a single IP address. –  Brad Commented Sep 13, 2014 at 6:04
  • 2 This is a great answer... the first line stinks tho, You start out by implying that it's the poor man's solution, but go on to sell the merits. –  Tyson Commented Sep 13, 2014 at 13:08
  • 3 To call the the Poor-man's solution, you should explain what the more elegant rich-man's solution would be. (Sorry for the double comment, the edit button for the one above is already gone.) –  Tyson Commented Sep 13, 2014 at 13:16
  • 1 I'm not knocking it at all. I just meant "poor-man's solution" as in it's built-in and therefore free. There are IPAM solutions out there that cost money -- some lots of money -- and offer all kinds of features beyond your basic DHCP functions. –  Wes Sayeed Commented Sep 13, 2014 at 18:56

As a printer tech, DHCP reservations are preferable to static IP assignments. You can manage them centrally as well as ensure that the device always has the current DNS and other network info.

However, DHCP reservations require you to have access to the router/DHCP server, which as an outside vendor isn't always possible. If you can't do DHCP reservations, use a static IP (being sure to manually enter subnet, DNS, etc.) but try to make it outside the DHCP scope if possible.


I have never run into a situation where I NEEDED to use a static IP, but there are cases where it is more practical to use one, such as office laser jet printers (when you do, always block the IP address from DHCP).

In my opinion laptops, phones, and any "mobile" devices should be reserved not static. It requires no set up on the device and the server will reserve that address for that device.

When it comes to printers, and in certain cases workstations (if you need to know the address... for remote desktop etc.), always go static but remember to block the address from DHCP.

Remember though if you need to re-configure your subnet mask for any reason any and all static devices must be changed. Always think about future needs.


I have several devices at home, that need fixed IP addresses and many others where I desire fixed IP addresses. Over the years I have discovered that the choice between static IP addresses or DHCP reservations depends on the nature of the application and convenience (how many and how often do you have to set them).

For devices whose configuration does not change often (NAS, desktops, VDI machines, print server, routers, switches) and where it takes little to no effort to change IP addresses, I prefer static addresses. For everything else (IP Cams, printers, thin clients, IoT devices), I use DHCP reservations. Setting a static IP on a computer is extremely easy; once set, I don't have to visit them for years. On the other hand, I may reset printers, IP cams, Raspberry Pi devices, UPS etc. several times. It is much easier to make DHCP reservations on the DHCP server for these devices, and expect to find the reset device at the same IP every time.

Regardless of how I set the IP, I always have a reservation on the DHCP server (for consistency sake) and I track them on a spreadsheet.


A manual IP allocation is always more worthy. As an administrator, it is very important to keep a track on users' activities and DHCP gives a new IP after every 8 days by default. In such cases you cannot maintain any record for your IP addresses. Also if you want to permit different internet access authority to different departments, manual IP allocation is the best and most reliable option.

Static techniques take time but its always better to go for a static IP address if you have a big network.


  • 5 But DHCP reservation also ensure the same device will always get the same IP address.. so besides the above answers (where a static IP is a MUST and no option otherwise), you can still manage your devices IP address using DHCP reservation (and all done via a central console, without configuring every single client devices). Or am I missing something? –  Darius Commented Sep 13, 2014 at 13:09
  • 4 I agree with @Darius... and sorry Stephen, but your answer shows that you don't understand the concept of DHCP reservations. –  Tyson Commented Sep 13, 2014 at 13:24
  • Well thanks Tyson! yes I think I went on another track. Hope to serve better next time. –  Stephen Commented Sep 17, 2014 at 7:45



Dynamic Host Configuration Protocol (DHCP) is a network protocol used to automate the process of assigning IP addresses and other network configuration parameters to devices (such as computers, smartphones, and printers) on a network.

What is DHCP?

DHCP stands for Dynamic Host Configuration Protocol. It is a critical service that the users of an enterprise network rely on to communicate. DHCP helps enterprises smoothly manage the allocation of IP addresses to end-user client devices such as desktops, laptops, cellphones, etc. It is an application layer protocol used to provide IP addresses and related configuration parameters (such as the subnet mask, default gateway, and DNS servers) automatically.

DHCP is based on a client-server model and on a four-message exchange of discover, offer, request, and acknowledgment (DORA).

Why Use DHCP?

DHCP manages the entire process automatically and centrally. It helps maintain a unique IP address for each host using the server. DHCP servers maintain information on TCP/IP configuration and provide address configuration to DHCP-enabled clients in the form of a lease offer.

Components of DHCP

The main components of DHCP include:

  • DHCP Server: DHCP Server is a server that holds IP Addresses and other information related to configuration.
  • DHCP Client: It is a device that receives configuration information from the server. It can be a mobile, laptop, computer, or any other electronic device that requires a connection.
  • DHCP Relay: DHCP relays basically work as a communication channel between DHCP Client and Server. 
  • IP Address Pool: It is the pool or container of IP Addresses possessed by the DHCP Server. It has a range of addresses that can be allocated to devices.
  • Subnets: Subnets are smaller portions of the IP network partitioned to keep networks under control. 
  • Lease: The length of time for which the configuration information received from the server is valid. When the lease expires, the client must renew it.
  • DNS Servers: DHCP servers can also provide DNS (Domain Name System) server information to DHCP clients, allowing them to resolve domain names to IP addresses.
  • Default Gateway: DHCP servers can also provide information about the default gateway, which is the device that packets are sent to when the destination is outside the local network.
  • Options: DHCP servers can provide additional configuration options to clients, such as the subnet mask, domain name, and time server information.
  • Renewal: DHCP clients can request to renew their lease before it expires to ensure that they continue to have a valid IP address and configuration information.
  • Failover: DHCP servers can be configured for failover, where two servers work together to provide redundancy and ensure that clients can always obtain an IP address and configuration information, even if one server goes down.
  • Dynamic Updates: DHCP servers can also be configured to dynamically update DNS records with the IP address of DHCP clients, allowing for easier management of network resources.
  • Audit Logging: DHCP servers can keep audit logs of all DHCP transactions, providing administrators with visibility into which devices are using which IP addresses and when leases are being assigned or renewed.
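To make these components concrete, here is a rough, hypothetical illustration using dnsmasq on Linux (not part of the list above; the interface name, addresses, lease time, and MAC address are all placeholders):

# run as root: serve a pool with a 12-hour lease, hand out gateway and DNS options,
# and give one known MAC address a fixed (reserved) IP
dnsmasq --no-daemon --interface=eth0 \
  --dhcp-range=192.168.1.100,192.168.1.200,12h \
  --dhcp-option=option:router,192.168.1.1 \
  --dhcp-option=option:dns-server,8.8.8.8 \
  --dhcp-host=00:11:22:33:44:55,192.168.1.150

Here the dhcp-range line is the IP address pool and lease, the two dhcp-option lines supply the default gateway and DNS server, and the dhcp-host line is a static binding (reservation) of the kind discussed throughout this page.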
DHCP Packet Format (fields in order):

  • Operation code (1 byte) | Hardware type (1 byte) | Hardware length (1 byte) | Hop count (1 byte)
  • Transaction ID (4 bytes)
  • Number of seconds (2 bytes) | Flags (2 bytes)
  • Client IP address (4 bytes)
  • Your IP address (4 bytes)
  • Server IP address (4 bytes)
  • Gateway IP address (4 bytes)
  • Client hardware address (16 bytes)
  • Server name (64 bytes)
  • Boot file name (128 bytes)
  • Options (variable length)

  • Hardware length: This is an 8-bit field defining the length of the physical address in bytes; e.g., for Ethernet the value is 6.
  • Hop count: This is an 8-bit field defining the maximum number of hops the packet can travel.
  • Transaction ID: This is a 4-byte field carrying an integer. The transaction identifier is set by the client and is used to match a reply with the request. The server returns the same value in its reply.
  • Number of seconds: This is a 16-bit field that indicates the number of seconds elapsed since the time the client started to boot.
  • Flags: This is a 16-bit field in which only the leftmost bit is used; the rest of the bits should be set to 0s. A leftmost bit of 1 specifies a forced broadcast reply from the server. If the reply were to be unicast to the client, the destination IP address of the IP packet would be the address assigned to the client.
  • Client IP address: This is a 4-byte field that contains the client IP address. If the client does not have this information, this field has a value of 0.
  • Your IP address: This is a 4-byte field that contains the client IP address. It is filled in by the server at the request of the client.
  • Server IP address: This is a 4-byte field containing the server IP address. It is filled in by the server in a reply message.
  • Gateway IP address: This is a 4-byte field containing the IP address of a router. It is filled in by the server in a reply message.
  • Client hardware address: This is the physical address of the client. Although the server can retrieve this address from the frame sent by the client, it is more efficient if the address is supplied explicitly by the client in the request message.
  • Server name: This is a 64-byte field that is optionally filled by the server in a reply packet. It contains a null-terminated string consisting of the domain name of the server. If the server does not want to fill this field with data, it must fill it with all 0s.
  • Boot filename: This is a 128-byte field that can be optionally filled by the server in a reply packet. It contains a null-terminated string consisting of the full pathname of the boot file. The client can use this path to retrieve other booting information. If the server does not want to fill this field with data, it must fill it with all 0s.
  • Options: This field (64 bytes in BOOTP, variable length in DHCP) has a dual purpose. It can carry either additional information or some vendor-specific information. The field is used only in a reply message. The server uses a number, called a magic cookie, in the format of an IP address with the value 99.130.83.99. When the client finishes reading the message, it looks for this magic cookie. If present, the next 60 bytes are options.

Working of DHCP

DHCP works at the application layer of the TCP/IP protocol suite. The main task of DHCP is to dynamically assign IP addresses to clients and to allocate TCP/IP configuration information to them. For more, you can refer to the article Working of DHCP.

The DHCP port number for the server is 67 and for the client is 68. It is a client-server protocol that uses UDP services. An IP address is assigned from a pool of addresses. In DHCP, the client and the server exchange mainly 4 DHCP messages in order to make a connection, also called the DORA process, but there are 8 DHCP message types in total.
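If you want to watch this exchange from a Linux client, a packet capture filtered on the two DHCP ports will show the DORA messages as they happen (a sketch; tcpdump and the interface name eth0 are assumptions, not part of this article):

# run as root; eth0 is a placeholder interface name
tcpdump -ni eth0 -vvv 'udp and (port 67 or port 68)'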


The 8 DHCP Messages

1. DHCP discover message: This is the first message generated in the communication process between the server and the client. This message is generated by the client host in order to discover whether any DHCP server is present in the network. It is broadcast to all devices present in the network to find the DHCP server. This message is 342 or 576 bytes long.

DHCP discover message


As shown in the figure, the source MAC address (client PC) is 08002B2EAF2A, the destination MAC address (server) is FFFFFFFFFFFF, the source IP address is 0.0.0.0 (because the PC has no IP address yet), and the destination IP address is 255.255.255.255 (the IP address used for broadcasting). Because the discover message is broadcast to find the DHCP server or servers in the network, the broadcast IP address and MAC address are used.

2. DHCP offer message: The server responds to the host with this message, specifying an unleased IP address and other TCP/IP configuration information. This message is broadcast by the server. The size of the message is 342 bytes. If there is more than one DHCP server present in the network, the client host will accept the first DHCP OFFER message it receives. Also, a server ID is specified in the packet in order to identify the server.

DHCP offer message


Now, for the offer message, the source IP address is 172.16.32.12 (the server’s IP address in the example), the destination IP address is 255.255.255.255 (the broadcast IP address), the source MAC address is 00AA00123456, and the destination MAC address is FFFFFFFFFFFF. Because the offer message is broadcast by the DHCP server, the destination IP address is the broadcast IP address and the destination MAC address is FFFFFFFFFFFF, while the source IP address and MAC address are the server’s.

The server has provided the offered IP address 192.16.32.51 and a lease time of 72 hours (after this time, the host's entry will be erased from the server automatically). The client identifier is the PC MAC address (08002B2EAF2A) in all the messages.

3. DHCP request message: When a client receives an offer message, it responds by broadcasting a DHCP request message. The client also produces a gratuitous ARP to find out whether any other host in the network is already using the offered IP address. If there is no reply from another host, then no host with the same TCP/IP configuration exists in the network, and the message is broadcast to the server showing acceptance of the IP address. A client ID is also added to this message.

DHCP request message


Now, the request message is broadcast by the client PC, so the source IP address is 0.0.0.0 (as the client has no IP yet) and the destination IP address is 255.255.255.255 (the broadcast IP address); the source MAC address is 08002B2EAF2A (the PC's MAC address) and the destination MAC address is FFFFFFFFFFFF.

Note – This message is broadcast after the ARP request is broadcast by the PC to find out whether any other host is using the offered IP. If there is no reply, the client host broadcasts the DHCP request message to the server, showing acceptance of the IP address and the other TCP/IP configuration.

4. DHCP acknowledgment message: In response to the request message received, the server makes an entry with the specified client ID and binds the offered IP address with its lease time. Now the client has the IP address provided by the server.

DHCP acknowledgment message


Now the server will make an entry of the client host with the offered IP address and lease time. This IP address will not be provided by the server to any other host. The destination MAC address is FFFFFFFFFFFF and the destination IP address is 255.255.255.255 and the source IP address is 172.16.32.12 and the source MAC address is 00AA00123456 (server MAC address).  

5. DHCP negative acknowledgment message: Whenever a DHCP server receives a request for an IP address that is invalid according to the scopes that are configured, it sends a DHCP NAK message to the client. For example, when the server has no unused IP addresses left or the pool is empty, this message is sent by the server to the client.

6. DHCP decline: If the DHCP client determines that the offered configuration parameters are different or invalid, it sends a DHCP decline message to the server. When any host replies to the client's gratuitous ARP, the client sends a DHCP decline message to the server showing that the offered IP address is already in use.

7. DHCP release: A DHCP client sends a DHCP release packet to the server to release the IP address and cancel any remaining lease time. 

8. DHCP inform: If a client has obtained an IP address manually, it uses a DHCP inform message to obtain other local configuration parameters, such as the domain name. In reply to the DHCP inform message, the DHCP server generates a DHCP ACK message with a local configuration suitable for the client, without allocating a new IP address. This DHCP ACK message is unicast to the client.

Note – All the messages can also be unicast by the DHCP relay agent if the server is in a different network.

Advantages of DHCP

  • Centralized management of IP addresses.
  • Centralized and automated TCP/IP configuration .
  • Ease of adding new clients to a network.
  • Reuse of IP addresses reduces the total number of IP addresses that are required.
  • The efficient handling of IP address changes for clients that must be updated frequently, such as those for portable devices that move to different locations on a wireless network.
  • Simple reconfiguration of the IP address space on the DHCP server without needing to reconfigure each client.
  • The DHCP protocol gives the network administrator a method to configure the network from a centralized area. 
  • With the help of DHCP, easy handling of new users and the reuse of IP addresses can be achieved.

Disadvantages of DHCP

  • IP conflict can occur.
  • The problem with DHCP is that clients accept any server. Accordingly, when another server is in the vicinity, the client may connect with this server, and this server may possibly send invalid data to the client.
  • The client is not able to access the network in absence of a DHCP Server.
  • The machine's name is not changed when a new IP address is assigned.

Frequently Asked Questions on DHCP (FAQs)

What are common issues with DHCP?

If the DHCP server is not properly set, it can cause difficulties such as IP address conflicts, incorrect subnet masks , incorrect default gateways , or insufficient IP address pools.

Which port is used in DHCP?

DHCP uses UDP port 67 on the server and UDP port 68 on the client.

Which layer protocol is DHCP?

DHCP is an application layer protocol.

Why is DHCP preferred?

It is a more efficient method for managing IP addresses than static address allocation. DHCP uses UDP as its transport layer protocol.


Cisco Secure Firewall Threat Defense Virtual Getting Started Guide, Version 7.6


Chapter: Managing the Secure Firewall Threat Defense Virtual with the Secure Firewall Management Center

This chapter covers the following topics:

  • About Secure Firewall Threat Defense Virtual with the Secure Firewall Management Center
  • Log In to the Secure Firewall Management Center
  • Register the Device with the Secure Firewall Management Center
  • Configure Interfaces
  • Configure the DHCP Server
  • Add the Default Route
  • Configure NAT
  • Configure Access Control
  • Deploy the Configuration
  • Access the Secure Firewall Threat Defense CLI

This chapter describes how to deploy a standalone threat defense virtual device managed with the management center .

This chapter reflects the latest software version's features. If you are on an older version of the software, refer to the procedures in the management center configuration guide for your version.

The Secure Firewall Threat Defense Virtual is the virtualized component of the Cisco NGFW solution. The threat defense virtual provides next-generation firewall services, including stateful firewalling, routing, VPN, Next-Generation Intrusion Prevention System (NGIPS), Application Visibility and Control (AVC), URL filtering, and malware defense.

You can manage the threat defense virtual using the management center , a full-featured, multidevice manager on a separate server. The threat defense virtual registers and communicates with the management center on the Management interface that you allocated to the threat defense virtual machine.


For troubleshooting purposes, you can access the threat defense CLI using SSH on the Management interface, or you can connect to the threat defense from the management center CLI.

This guide describes how to deploy a standalone threat defense virtual device managed with the management center . For detailed configuration information on the management center , see the Management Center Administration Guide and Management Center Device Configuration Guide .

For information about installing the management center , see the Cisco Firepower Management Center 1600, 2600, and 4600 Hardware Installation Guide or Management Center Virtual Getting Started Guide .

Use the management center to configure and monitor the threat defense .

Before you begin

For information on supported browsers, refer to the release notes for the version you are using (see https://www.cisco.com/go/firepower-notes ).

Using a supported browser, enter the following URL.

https://<management_center_IP>, where <management_center_IP> identifies the IP address or host name of the management center.

Enter your username and password.

Click Log In.

Make sure the threat defense virtual machine has deployed successfully, is powered on, and has gone through its first boot procedures.

Network settings are initially configured via the day0/bootstrap script. However, all of these settings can be changed later at the CLI using configure network commands.

Choose Devices > Device Management.

From the Add drop-down list, choose Add Device, and enter the following parameters.

Host—Enter the IP address of the device you want to add.

Display Name—Enter the name for the device as you want it to display in the management center.

Registration Key—Enter the same registration key that you specified in the threat defense virtual bootstrap configuration.

Domain—Assign the device to a leaf domain if you have a multidomain environment.

Group—Assign it to a device group if you are using groups.

Access Control Policy—Choose an initial policy. Unless you already have a customized policy you know you need to use, choose Create new policy, and choose Block all traffic. You can change this later to allow traffic; see Configure Access Control.

Smart Licensing—Assign the Smart Licenses you need for the features you want to deploy: Malware (if you intend to use malware defense inspection), Threat (if you intend to use intrusion prevention), and URL (if you intend to implement category-based URL filtering).

Unique NAT ID—Specify the NAT ID you specified in the threat defense virtual bootstrap configuration.

Transfer Packets—Allow the device to transfer packets to the management center. When events like IPS or Snort are triggered with this option enabled, the device sends event metadata information and packet data to the management center for inspection. If you disable it, only event information will be sent to the management center, but packet data is not sent.

Click Register, and confirm a successful registration.

If the device fails to register, check the following items:

Ping—Access the threat defense CLI, and ping the management center IP address using the following command:

ping system

NTP—Make sure the NTP server matches the one set on the System > Configuration > Time Synchronization page.

Registration key, NAT ID, and management center IP address—Make sure you are using the same registration key, and if used, NAT ID, on both devices. You can set the registration key and NAT ID on the threat defense virtual using the configure manager add DONTRESOLVE <registration_key> <NAT_ID> command. This command also lets you change the management center IP address.

Configure a Basic Security Policy

This section describes how to configure a basic security policy with the following settings:

Inside and outside interfaces—Assign a static IP address to the inside interface, and use DHCP for the outside interface.

DHCP server—Use a DHCP server on the inside interface for clients.

Default route—Add a default route through the outside interface.

NAT—Use interface PAT on the outside interface.

Access control—Allow traffic from inside to outside.

Enable the threat defense virtual interfaces, assign them to security zones, and set the IP addresses. Typically, you must configure a minimum of two interfaces to have a system that passes meaningful traffic. Normally, you would have an outside interface that faces the upstream router or the internet, and one or more inside interfaces for your organization’s networks. Some of these interfaces might be “demilitarized zones” (DMZs), where you place publicly accessible assets such as your web server.

A typical edge-routing situation is to obtain the outside interface address through DHCP from your ISP, while you define static addresses on the inside interfaces.

The following example configures a routed mode inside interface with a static address and a routed mode outside interface using DHCP.

Choose Devices > Device Management, and click the Edit icon for the device.

Click Interfaces.

Click the Edit icon for the interface that you want to use for the inside network.

tab appears.

up to 48 characters in length.

.

check box.

set to None.

drop-down list, choose an existing inside security zone or add a new one by clicking New.

Each interface must be assigned to a security zone and/or interface group. An interface can belong to only one security zone, but can also belong to multiple interface groups. You apply your security policy based on zones or groups. For example, you can assign the inside interface to the inside zone and the outside interface to the outside zone. Then you can configure your access control policy to enable traffic to go from inside to outside, but not from outside to inside. Most policies only support security zones; you can use zones or interface groups in NAT policies, prefilter policies, and QoS policies.

tab.

—Choose Use Static IP from the drop-down list, and enter an IP address and subnet mask in slash notation.

For example, enter 192.168.1.1/24

.

Click the Edit icon for the interface that you want to use for the outside network.

tab appears.

up to 48 characters in length.

.

check box.

set to None.

drop-down list, choose an existing outside security zone or add a new one by clicking New.

.

tab.

—Choose Use DHCP, and configure the following optional parameters:

—Obtains the default route from the DHCP server.

—Assigns an administrative distance to the learned route, between 1 and 255. The default administrative distance for the learned routes is 1.

.

Click Save.

Enable the DHCP server if you want clients to use DHCP to obtain IP addresses from the threat defense virtual .

Choose Devices > Device Management, and click the Edit icon for the device.

Choose DHCP > DHCP Server.

On the Server page, click Add, and configure the following options:

—Choose the interface from the drop-down list.

—Set the range of IP addresses from lowest to highest that are used by the DHCP server. The range of IP addresses must be on the same subnet as the selected interface and cannot include the IP address of the interface itself.

—Enable the DHCP server on the selected interface.

Click OK.

Click Save.

The default route normally points to the upstream router reachable from the outside interface. If you use DHCP for the outside interface, your device might have already received a default route. If you need to manually add the route, complete this procedure.

Choose Devices > Device Management, and click the Edit icon for the device.

Choose Routing > Static Route, click Add Route, and set the following:

—Click the IPv4 radio button depending on the type of static route that you are adding.

—Choose the egress interface; typically the outside interface.

—Choose any-ipv4 for an IPv4 default route.

—Enter or choose the gateway router that is the next hop for this route. You can provide an IP address or a Networks/Hosts object.

—Enter the number of hops to the destination network. Valid values range from 1 to 255; the default value is 1.

Click OK.

Click Save.

A typical NAT rule converts internal addresses to a port on the outside interface IP address. This type of NAT rule is called interface Port Address Translation (PAT) .

Choose Devices > NAT, and click New Policy > Threat Defense NAT.

Name the policy, select the device(s) that you want to use the policy, and click Save.

The policy is created, but you still have to add rules to it.

Click Add Rule.

The Add NAT Rule dialog box appears.

Configure the basic rule options:

—Choose Auto NAT Rule.

—Choose Dynamic.

On the Interface Objects page, add the outside zone from the Available Interface Objects area to the Destination Interface Objects area.

On the Translation page, configure the following options:

—Click the Add icon to add a network object for all IPv4 traffic (0.0.0.0/0).

 
Note that you cannot simply use the system-defined any-ipv4 object, because Auto NAT rules add NAT as part of the object definition, and you cannot edit system-defined objects.

Translated Source—Choose Destination Interface IP.

Step 7

Click Save to add the rule.

The rule is saved to the Rules table.


Step 8

Click Save on the NAT page to save your changes.

If you created a basic Block all traffic access control policy when you registered the threat defense virtual with the management center , then you need to add rules to the policy to allow traffic through the device. The following procedure adds a rule to allow traffic from the inside zone to the outside zone. If you have other zones, be sure to add rules allowing traffic to the appropriate networks.

See the Firewall Management Center Configuration Guide configuration guide to configure more advanced security settings and rules.

Choose Policy > Access Policy > Access Policy, and click the Edit icon for the access control policy assigned to the threat defense.

Click Add Rule, and set the following parameters:

  • Name this rule, for example, inside_to_outside.
  • Select the inside zone from Available Zones, and click Add to Source.
  • Select the outside zone from Available Zones, and click Add to Destination.

Leave the other settings as is.

Click Add.

The rule is added to the Rules table.

Click Save.

Deploy the configuration changes to the threat defense virtual; none of your changes are active on the device until you deploy them.

Click Deploy in the upper right.

Select the device in the Deploy Policies dialog box, then click Deploy.

Ensure that the deployment succeeds. Click the icon to the right of the Deploy button in the menu bar to see status for deployments.

You can use the threat defense virtual CLI to change management interface parameters and for troubleshooting purposes. You can access the CLI using SSH to the Management interface, or by connecting from the VMware console.

(Option 1) SSH directly to the threat defense virtual management interface IP address.

Log in with the admin account and the password you set during initial deployment.

(Option 2) Open the VMware console and log in with the admin account and the password you set during initial deployment.
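As a rough illustration of Option 1, the connection is just a standard SSH session to the management interface address you configured at deployment; the IP address below is only a placeholder:

# SSH to the threat defense virtual management interface (address is a placeholder);
# log in as admin with the password set during initial deployment.
ssh admin@192.0.2.45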


How to give your Xbox Series X|S or Xbox One a Static IP address

Sometimes you may need to give your Xbox a Static IP address to solve issues with Strict NAT types.

Xbox Series S

  • Explaining IP addresses as they pertain to Xbox
  • How to set up a static IP address on Xbox
  • More guides on improving Xbox connectivity

Need a static IP address on your Xbox console? Then look no further. 

There's a range of reasons why you might want to set up a static IP address on your Xbox One, Xbox Series X|S, or even Xbox 360 console. In households with multiple Xbox consoles, relying on DHCP to assign IP addresses automatically can sometimes cause IP conflicts, especially if one device has a static IP but yours doesn't. Some online games also behave poorly in multi-console households without static IP addresses. Additionally, some routers may require a static IP address in order to set up port forwarding for your Xbox.

Related: Best routers for gaming

In this simple Xbox help guide, we'll run through the general steps for setting up a static IP address for your Xbox. This guide works for Xbox One consoles as well as the modern Xbox Series S and Xbox Series X. The steps are fairly similar for the older Xbox 360, just with a different settings menu style. 

The steps may differ depending on your router and home network environment, but the general direction will be the same overall. 

Understanding IP addresses and their relationship to Xbox

A Wi-Fi router.

An IP address (Internet Protocol address) is a number that devices use to communicate across the internet and within a home network. Your external IP address is what your ISP uses to deliver the web to your home, while within your home, each device has its own internal IP address. This includes your Xbox, your smart TV, your phone, your iPad, and whatever else you have connected. 

Your router hands out internal IP addresses using DHCP (Dynamic Host Configuration Protocol), which automates assigning IP addresses within your home network. By default, your router will typically have DHCP enabled and will automatically assign IP addresses to the various devices in your house as you connect them. 

IP addresses assigned this way sometimes expire or get overwritten. In some cases, automatic IP address assignments can interfere with internal routing for certain games, too. I had this issue with Monster Hunter Rise on Xbox for a while, where my family and I couldn't connect and play together reliably within the same network until I gave everyone a static IP address and set up full Xbox port forwarding for each device. 

For the vast majority of situations, using automatic IP addresses on your Xbox will be fine. However, if for whatever reason you do feel like setting up a static IP, either to enable port forwarding or overcome some other issue, here's how you go about it.

  • First, navigate to the settings menu on your Xbox by hitting the guide button, moving right through the menus, and then hitting settings. 
  • Find the network settings by going to the general tab at the top, then network settings. 
  • Navigate to advanced settings . Here, we'll need to make note of a bunch of information. 
  • Note down your IP address, Subnet Mask, Gateway, and DNS numbers (in the format number.number.number.number). You don't need to note the IPv6 numbers. You can also note your Wired and Wireless MAC addresses, but they might not be needed. 

Xbox network settings menu

  • Now, you'll need to access your router settings . This is best done via a web browser on a Windows PC connected to the same network as your Xbox. 
  • Using the gateway IP address you recorded earlier, type it into your web browser (like Microsoft Edge or Chrome) to access your router. It should look something like this: http://192.168.1.1, although the last two numbers may vary. Some routers even have a special URL you can visit to configure their settings; your router instructions should have more details. 
  • Sign in to your router. The credentials are often listed in the manual, or occasionally on the underside of the device itself, particularly if it was supplied by your ISP. 
  • Once on your router settings page, you will need to assign your Xbox to utilize the same IP address you recorded down earlier. 
  • The method for making sure a connected device retains the same IP address varies by manufacturer. A typical place to find it is the list of connected Wi-Fi devices. My router lets me view all of the devices on the network and select "always assign the same IP address." Other routers are less straightforward, especially older ones and basic ones provided by your ISP. Look for connected device settings under Wi-Fi devices, security, or similarly named menus, or consult your router manual. Just make sure you're assigning the correct IP address to the correct device: it should match the IP address you noted earlier in the Xbox's advanced network settings. 

Example of static IP setting

  • Once you have assigned your Xbox to a specific IP address on your router, it's time to go back to your Xbox network settings menu to make sure everything lines up.
  • Select IP settings, then select Manual.
  • Enter the IP address, subnet mask, and gateway from the information recorded above. You can use RB and LB on your controller to move the cursor, and then press the hamburger-looking menu button to move to the next section.
  • Next, go to DNS settings, then select Manual.
  • Enter the DNS addresses from the information you recorded earlier.
  • Now, your Xbox should automatically test the connection . If it says "You're connected," then congrats, you now have a static IP address!

Other things you can do to improve your Xbox network connection

Setting up a static IP address won't necessarily boost your Xbox connectivity by itself, but it's an important step towards setting up things like port forwarding and other features. 

Port forwarding is a method for making sure your Xbox can communicate effectively with the Xbox network's systems remotely. Microsoft requires certain ports to be open in order for the Xbox network (formerly known as Xbox Live) to function properly. Without this, you might see messages like "Strict NAT" on your network connections screen, which can block and impede connections in games like Battlefield, Call of Duty, or Overwatch. One good guide to follow is how to get an open NAT on Xbox with port forwarding, if you don't already have one. 

RELATED: How to set up a static IP address on Windows 11 PC

Additionally, you can enable UPnP on your router so that an open NAT happens automatically. If you don't want to bother setting up static IP addresses or dabbling with port forwarding, which can be overly complicated on some routers, you can just enable UPnP. In my experience, UPnP doesn't always improve Xbox connectivity, but it's always worth a shot. Read about how to enable UPnP for Xbox here. 

If you have any questions about Xbox connectivity, be sure to drop them in the comments below. 



Installing OpenShift Container Platform with the Assisted Installer


Chapter 1. About the Assisted Installer

The Assisted Installer for Red Hat OpenShift Container Platform is a user-friendly installation solution offered on the Red Hat Hybrid Cloud Console . The Assisted Installer supports various deployment platforms with a focus on bare metal, Nutanix, vSphere, and Oracle Cloud Infrastructure.

You can install OpenShift Container Platform on premises in a connected environment, with an optional HTTP/S proxy, for the following platforms:

  • Highly available OpenShift Container Platform or single-node OpenShift cluster
  • OpenShift Container Platform on bare metal or vSphere with full platform integration, or other virtualization platforms without integration
  • Optionally, OpenShift Virtualization and Red Hat OpenShift Data Foundation

1.1. Features

The Assisted Installer provides installation functionality as a service. This software-as-a-service (SaaS) approach has the following features:

  • You can install your cluster by using the Hybrid Cloud Console instead of creating installation configuration files manually.
  • You do not need a bootstrap node because the bootstrapping process runs on a node within the cluster.
  • You do not need in-depth knowledge of OpenShift Container Platform to deploy a cluster. The Assisted Installer provides reasonable default configurations.
  • You do not need to run the OpenShift Container Platform installer locally.
  • You have access to the latest Assisted Installer for the latest tested z-stream releases.
  • The Assisted Installer supports IPv4 networking with SDN and OVN, IPv6 and dual stack networking with OVN only, NMState-based static IP addressing, and an HTTP/S proxy.
  • OVN is the default Container Network Interface (CNI) for OpenShift Container Platform 4.12 and later.
  • SDN is supported up to OpenShift Container Platform 4.14 and deprecated in OpenShift Container Platform 4.15.

Before installing, the Assisted Installer checks the following configurations:

  • Network connectivity
  • Network bandwidth
  • Connectivity to the registry
  • Upstream DNS resolution of the domain name
  • Time synchronization between cluster nodes
  • Cluster node hardware
  • Installation configuration parameters
  • You can automate the installation process by using the Assisted Installer REST API.

1.2. Customizing your installation

You can customize your installation by selecting one or more options.

These options are installed as Operators, which are used to package, deploy, and manage services and applications on the control plane. See the Operators documentation for details.

You can deploy these Operators after the installation if you require advanced configuration options.

You can deploy OpenShift Virtualization to perform the following tasks:

  • Create and manage Linux and Windows virtual machines (VMs).
  • Run pod and VM workloads alongside each other in a cluster.
  • Connect to VMs through a variety of consoles and CLI tools.
  • Import and clone existing VMs.
  • Manage network interface controllers and storage disks attached to VMs.
  • Live migrate VMs between nodes.

See the OpenShift Virtualization documentation for details.

You can deploy the multicluster engine for Kubernetes to perform the following tasks in a large, multi-cluster environment:

  • Provision and manage additional Kubernetes clusters from your initial cluster.
  • Use hosted control planes to reduce management costs and optimize cluster deployment by decoupling the control and data planes. See Introduction to hosted control planes for details.

Use GitOps Zero Touch Provisioning to manage remote edge sites at scale. See Edge computing for details.

You can deploy the multicluster engine with Red Hat OpenShift Data Foundation on all OpenShift Container Platform clusters.

Multicluster engine and storage configurations

Deploying multicluster engine without OpenShift Data Foundation results in the following scenarios:

  • Multi-node cluster: No storage is configured. You must configure storage after the installation process.
  • Single-node OpenShift: LVM Storage is installed.

1.3. API support policy

Assisted Installer APIs are supported for a minimum of three months from the announcement of deprecation.

Chapter 2. Prerequisites

The Assisted Installer validates the following prerequisites to ensure successful installation.

If you use a firewall, you must configure it so that Assisted Installer can access the resources it requires to function.

2.1. Supported CPU architectures

The Assisted Installer is supported on the following CPU architectures:

  • x86_64
  • arm64
  • ppc64le
  • s390x

2.2. Resource requirements

This section describes the resource requirements for different clusters and installation options.

The multicluster engine for Kubernetes requires additional resources.

If you deploy the multicluster engine with storage, such as OpenShift Data Foundation or LVM Storage, you must also allocate additional resources to each node.

2.2.1. Multi-node cluster resource requirements

The resource requirements of a multi-node cluster depend on the installation options.

Control plane nodes:

  • 4 CPU cores
  • 100 GB storage

The disks must be reasonably fast, with an etcd wal_fsync_duration_seconds p99 duration that is less than 10 ms. For more information, see the Red Hat Knowledgebase solution How to Use 'fio' to Check Etcd Disk Performance in OCP .
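If you want to sanity-check a disk yourself, a commonly used fio invocation for this kind of etcd latency test is sketched below; the target directory is a placeholder and the exact parameters in the Knowledgebase article may differ:

# Measure fdatasync latency on the disk that will back etcd (directory is a placeholder).
fio --rw=write --ioengine=sync --fdatasync=1 \
    --directory=/var/lib/etcd-test --size=22m --bs=2300 --name=etcd-perf
# Check the fsync/fdatasync percentiles in the output; the 99th percentile should be below 10 ms.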

Compute nodes:

  • 2 CPU cores
  • Additional 4 CPU cores
  • Additional 16 GB RAM

If you deploy multicluster engine without OpenShift Data Foundation, no storage is configured. You configure the storage after the installation.

  • Additional 75 GB storage

2.2.2. Single-node OpenShift resource requirements

The resource requirements for single-node OpenShift depend on the installation options.

  • 8 CPU cores
  • Additional 8 CPU cores
  • Additional 32 GB RAM

If you deploy multicluster engine without OpenShift Data Foundation, LVM Storage is enabled.

  • Additional 95 GB storage

2.3. Networking requirements

The network must meet the following requirements:

  • A DHCP server unless using static IP addressing.

A base domain name. You must ensure that the following requirements are met:

  • There is no wildcard, such as *.<cluster_name>.<base_domain>, or the installation will not proceed.
  • A DNS A/AAAA record for api.<cluster_name>.<base_domain> .
  • A DNS A/AAAA record with a wildcard for *.apps.<cluster_name>.<base_domain> .
  • Port 6443 is open for the API URL if you intend to allow users outside the firewall to access the cluster via the oc CLI tool.
  • Port 443 is open for the console if you intend to allow users outside the firewall to access the console.
  • A DNS A/AAAA record for each node in the cluster when using User Managed Networking, or the installation will not proceed. When using Cluster Managed Networking, the installation can proceed without per-node A/AAAA records, but you need them after the installation completes in order to connect to the cluster.
  • A DNS PTR record for each node in the cluster if you want to boot with the preset hostname when using static IP addressing. Otherwise, when using static IP addressing, the Assisted Installer automatically renames each node to its network interface MAC address.
  • DNS A/AAAA record settings at top-level domain registrars can take significant time to update. Ensure the A/AAAA record DNS settings are working before installation to prevent installation delays.
  • For DNS record examples, see Example DNS configuration in this chapter.

The OpenShift Container Platform cluster’s network must also meet the following requirements:

  • Connectivity between all cluster nodes
  • Connectivity for each node to the internet
  • Access to an NTP server for time synchronization between the cluster nodes

2.4. Example DNS configuration

This section provides A and PTR record configuration examples that meet the DNS requirements for deploying OpenShift Container Platform using the Assisted Installer. The examples are not meant to provide advice for choosing one DNS solution over another.

In the examples, the cluster name is ocp4 and the base domain is example.com .

2.4.1. Example DNS A record configuration

The following example is a BIND zone file that shows sample A records for name resolution in a cluster installed using the Assisted Installer.

Example DNS zone database

In the example, the same load balancer is used for the Kubernetes API and application ingress traffic. In production scenarios, you can deploy the API and application ingress load balancers separately so that you can scale the load balancer infrastructure for each in isolation.
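The example zone file itself is not reproduced here. The following is a minimal sketch of what such a BIND zone database might contain, using the ocp4 cluster name and example.com base domain from this section; the name server, serial, node names, and all IP addresses are placeholders rather than values from the original example.

# Hypothetical forward zone for ocp4.example.com; all names and addresses are placeholders.
cat > /var/named/ocp4.example.com.zone <<'EOF'
$TTL 600
@   IN SOA  ns1.example.com. hostmaster.example.com. ( 2024010101 3600 600 604800 600 )
    IN NS   ns1.example.com.
; Kubernetes API and application ingress (a single load balancer in this sketch)
api.ocp4.example.com.     IN A 192.168.1.5
*.apps.ocp4.example.com.  IN A 192.168.1.5
; Cluster nodes
master0.ocp4.example.com. IN A 192.168.1.10
master1.ocp4.example.com. IN A 192.168.1.11
master2.ocp4.example.com. IN A 192.168.1.12
worker0.ocp4.example.com. IN A 192.168.1.20
worker1.ocp4.example.com. IN A 192.168.1.21
EOF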

2.4.2. Example DNS PTR record configuration

The following example is a BIND zone file that shows sample PTR records for reverse name resolution in a cluster installed using the Assisted Installer.

Example DNS zone database for reverse records

A PTR record is not required for the OpenShift Container Platform application wildcard.
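The reverse zone file is likewise not reproduced here. A minimal sketch for a 192.168.1.0/24 subnet, matching the placeholder addresses used in the forward zone sketch above:

# Hypothetical reverse zone for 192.168.1.0/24; names and addresses are placeholders.
cat > /var/named/1.168.192.in-addr.arpa.zone <<'EOF'
$TTL 600
@   IN SOA  ns1.example.com. hostmaster.example.com. ( 2024010101 3600 600 604800 600 )
    IN NS   ns1.example.com.
5.1.168.192.in-addr.arpa.   IN PTR api.ocp4.example.com.
10.1.168.192.in-addr.arpa.  IN PTR master0.ocp4.example.com.
11.1.168.192.in-addr.arpa.  IN PTR master1.ocp4.example.com.
12.1.168.192.in-addr.arpa.  IN PTR master2.ocp4.example.com.
20.1.168.192.in-addr.arpa.  IN PTR worker0.ocp4.example.com.
21.1.168.192.in-addr.arpa.  IN PTR worker1.ocp4.example.com.
EOF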

2.5. Preflight validations

The Assisted Installer ensures that the cluster meets the prerequisites before installation; this eliminates complex postinstallation troubleshooting and saves significant time and effort. Before installing software on the nodes, the Assisted Installer conducts the following validations:

  • Ensures network connectivity
  • Ensures sufficient network bandwidth
  • Ensures connectivity to the registry
  • Ensures that any upstream DNS can resolve the required domain name
  • Ensures time synchronization between cluster nodes
  • Verifies that the cluster nodes meet the minimum hardware requirements
  • Validates the installation configuration parameters

If the Assisted Installer does not successfully validate the foregoing requirements, installation will not proceed.

Chapter 3. Installing with the Assisted Installer web console

After you ensure the cluster nodes and network requirements are met, you can begin installing the cluster.

3.1. Preinstallation considerations

Before installing OpenShift Container Platform with the Assisted Installer, you must consider the following configuration choices:

  • Which base domain to use
  • Which OpenShift Container Platform product version to install
  • Whether to install a full cluster or single-node OpenShift
  • Whether to use a DHCP server or a static network configuration
  • Whether to use IPv4 or dual-stack networking
  • Whether to install OpenShift Virtualization
  • Whether to install Red Hat OpenShift Data Foundation
  • Whether to install multicluster engine for Kubernetes
  • Whether to integrate with the platform when installing on vSphere or Nutanix
  • Whether to install a mixed-cluster architecture

3.2. Setting the cluster details

To create a cluster with the Assisted Installer web user interface, use the following procedure.

  • Log in to the Red Hat Hybrid Cloud Console .
  • In the Red Hat OpenShift tile, click Scale your applications .
  • In the menu, click Clusters .
  • Click Create cluster .
  • Click the Datacenter tab.
  • Under Assisted Installer , click Create cluster .
  • Enter a name for the cluster in the Cluster name field.

Enter a base domain for the cluster in the Base domain field. All subdomains for the cluster will use this base domain.

The base domain must be a valid DNS name. You must not have a wild card domain set up for the base domain.

Select the version of OpenShift Container Platform to install.

  • For IBM Power and IBM zSystems platforms, only OpenShift Container Platform 4.13 and later is supported.
  • For a mixed-architecture cluster installation, select OpenShift Container Platform 4.12 or later, and use the -multi option. For instructions on installing a mixed-architecture cluster, see Additional resources .

Optional: Select Install single node OpenShift (SNO) if you want to install OpenShift Container Platform on a single node.

Currently, SNO is not supported on IBM zSystems and IBM Power platforms.

  • Optional: The Assisted Installer already has the pull secret associated to your account. If you want to use a different pull secret, select Edit pull secret .

Optional: If you are installing OpenShift Container Platform on a third-party platform, select the platform from the Integrate with external partner platforms list. Valid values are Nutanix, vSphere, or Oracle Cloud Infrastructure. The Assisted Installer defaults to having no platform integration.

For details on each of the external partner integrations, see Additional Resources .

Assisted Installer supports Oracle Cloud Infrastructure (OCI) integration from OpenShift Container Platform 4.14 and later. For OpenShift Container Platform 4.14, the OCI integration is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features - Scope of Support .

Optional: The Assisted Installer defaults to the x86_64 CPU architecture. If you are installing OpenShift Container Platform on a different architecture, select that architecture instead. Valid values are arm64, ppc64le, and s390x. Keep in mind that some features are not available with the arm64, ppc64le, and s390x CPU architectures.

For a mixed-architecture cluster installation, use the default x86_64 architecture. For instructions on installing a mixed-architecture cluster, see Additional resources .

Optional: Select Include custom manifests if you have at least one custom manifest to include in the installation. A custom manifest contains additional configurations not currently supported in the Assisted Installer. Selecting the checkbox adds the Custom manifests page to the wizard, where you upload the manifests.

  • If you are installing OpenShift Container Platform on the Oracle Cloud Infrastructure (OCI) third-party platform, it is mandatory to add the custom manifests provided by Oracle.
  • If you have already added custom manifests, unchecking the Include custom manifests box automatically deletes them all. You will be asked to confirm the deletion.

Optional: The Assisted Installer defaults to DHCP networking. If you are using a static IP configuration, bridges, or bonds for the cluster nodes instead of DHCP reservations, select Static IP, bridges, and bonds.

A static IP configuration is not supported for OpenShift Container Platform installations on Oracle Cloud Infrastructure.

  • Optional: If you want to enable encryption of the installation disks, under Enable encryption of installation disks you can select Control plane node, worker for single-node OpenShift. For multi-node clusters, you can select Control plane nodes to encrypt the control plane node installation disks and select Workers to encrypt worker node installation disks.

You cannot change the base domain, the SNO checkbox, the CPU architecture, the host’s network configuration, or the disk-encryption after installation begins.

Additional resources

  • Optional: Installing on Nutanix
  • Optional: Installing on vSphere
  • Optional: Installing on Oracle Cloud Infrastructure (OCI)

3.3. Optional: Configuring static networks

The Assisted Installer supports IPv4 networking with SDN (up to OpenShift Container Platform 4.14) and OVN, and supports IPv6 and dual-stack networking with OVN only. The Assisted Installer supports configuring the network with static network interfaces with IP address/MAC address mapping. The Assisted Installer also supports configuring host network interfaces with the NMState library, a declarative network manager API for hosts. You can use NMState to deploy hosts with static IP addressing, bonds, VLANs, and other advanced networking features. First, you must set network-wide configurations. Then, you must create a host-specific configuration for each host.

For installations on IBM Z with z/VM, ensure that the z/VM nodes and vSwitches are properly configured for static networks and NMState. Also, the z/VM nodes must have a fixed MAC address assigned as the pool MAC addresses might cause issues with NMState.

  • Select the internet protocol version. Valid options are IPv4 and Dual stack .
  • If the cluster hosts are on a shared VLAN, enter the VLAN ID.

Enter the network-wide IP addresses. If you selected Dual stack networking, you must enter both IPv4 and IPv6 addresses.

  • Enter the cluster network’s IP address range in CIDR notation.
  • Enter the default gateway IP address.
  • Enter the DNS server IP address.

Enter the host-specific configuration.

  • If you are only setting a static IP address that uses a single network interface, use the form view to enter the IP address and the MAC address for each host.
  • If you use multiple interfaces, bonding, or other advanced networking features, use the YAML view and enter the desired network state for each host that uses NMState syntax. Then, add the MAC address and interface name for each host interface used in your network configuration.
  • NMState version 2.1.4
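To illustrate the YAML view mentioned above, here is a minimal NMState sketch for one host with a single statically addressed interface. The interface name, addresses, and gateway are placeholders; in practice you would paste the YAML body into the host's YAML view, and the file written here is only for convenience.

# Hypothetical NMState definition for one host; eth0 and all addresses are placeholders.
cat > host1.nmstate.yaml <<'EOF'
interfaces:
  - name: eth0
    type: ethernet
    state: up
    ipv4:
      enabled: true
      dhcp: false
      address:
        - ip: 192.168.1.30
          prefix-length: 24
dns-resolver:
  config:
    server:
      - 192.168.1.1
routes:
  config:
    - destination: 0.0.0.0/0
      next-hop-address: 192.168.1.1
      next-hop-interface: eth0
EOF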

3.4. Optional: Installing Operators

This step is optional.

See the product documentation for prerequisites and configuration options:

  • OpenShift Virtualization
  • Multicluster Engine for Kubernetes
  • Red Hat OpenShift Data Foundation
  • Logical Volume Manager Storage

If you require advanced options, install the Operators after you have installed the cluster.

Select one or more from the following options:

  • Install OpenShift Virtualization
  • Install multicluster engine
  • Install Logical Volume Manager Storage
  • Install OpenShift Data Foundation

You can deploy the multicluster engine with OpenShift Data Foundation on all OpenShift Container Platform clusters. Deploying the multicluster engine without OpenShift Data Foundation results in the following storage configurations:

  • Multi-node cluster: No storage is configured. You must configure storage after the installation.
  • Single-node OpenShift: LVM Storage is installed.

Click Next.

3.5. Adding hosts to the cluster

You must add one or more hosts to the cluster. Adding a host to the cluster involves generating a discovery ISO. The discovery ISO runs Red Hat Enterprise Linux CoreOS (RHCOS) in-memory with an agent.

Perform the following procedure for each host on the cluster.

Click the Add hosts button and select the provisioning type.

  • Select Minimal image file: Provision with virtual media to download a smaller image that will fetch the data needed to boot. The nodes must have virtual media capability. This is the recommended method for x86_64 and arm64 architectures.
  • Select Full image file: Provision with physical media to download the larger full image. This is the recommended method for the ppc64le architecture and for the s390x architecture when installing with RHEL KVM.

Select iPXE: Provision from your network server to boot the hosts using iPXE. This is the recommended method for IBM Z with z/VM nodes. ISO boot is the recommended method for RHEL KVM installations.

  • If you install on RHEL KVM, in some circumstances, the VMs on the KVM host are not rebooted on first boot and need to be restarted manually.
  • If you install OpenShift Container Platform on Oracle Cloud Infrastructure, select Minimal image file: Provision with virtual media only.

Optional: Activate the Run workloads on control plane nodes switch to schedule workloads to run on control plane nodes, in addition to the default worker nodes.

This option is available for clusters of five or more nodes. For clusters of under five nodes, the system runs workloads on the control plane nodes only, by default. For more details, see Configuring schedulable control plane nodes in Additional Resources .

  • Optional: If the cluster hosts are behind a firewall that requires the use of a proxy, select Configure cluster-wide proxy settings . Enter the username, password, IP address and port for the HTTP and HTTPS URLs of the proxy server.

Optional: Add an SSH public key so that you can connect to the cluster nodes as the core user. Having a login to the cluster nodes can provide you with debugging information during the installation.

Do not skip this procedure in production environments, where disaster recovery and debugging is required.

  • If you do not have an existing SSH key pair on your local machine, follow the steps in Generating a key pair for cluster node SSH access .
  • In the SSH public key field, click Browse to upload the id_rsa.pub file containing the SSH public key. Alternatively, drag and drop the file into the field from the file manager. To see the file in the file manager, select Show hidden files in the menu.
  • Optional: If the cluster hosts are in a network with a re-encrypting man-in-the-middle (MITM) proxy, or if the cluster needs to trust certificates for other purposes such as container image registries, select Configure cluster-wide trusted certificates . Add additional certificates in X.509 format.
  • Configure the discovery image if needed.
  • Optional: If you are installing on a platform and want to integrate with the platform, select Integrate with your virtualization platform . You must boot all hosts and ensure they appear in the host inventory. All the hosts must be on the same platform.
  • Click Generate Discovery ISO or Generate Script File .
  • Download the discovery ISO or iPXE script.
  • Boot the host(s) with the discovery image or iPXE script.
  • Configuring the discovery image for additional details.
  • Booting hosts with the discovery image for additional details.
  • Red Hat Enterprise Linux 9 - Configuring and managing virtualization for additional details.
  • How to configure a VIOS Media Repository/Virtual Media Library for additional details.
  • Adding hosts on Nutanix with the web console
  • Adding hosts on vSphere
  • Configuring schedulable control plane nodes

3.6. Configuring hosts

After booting the hosts with the discovery ISO, the hosts will appear in the table at the bottom of the page. You can optionally configure the hostname and role for each host. You can also delete a host if necessary.

From the Options (⋮) menu for a host, select Change hostname . If necessary, enter a new name for the host and click Change . You must ensure that each host has a valid and unique hostname.

Alternatively, from the Actions list, select Change hostname to rename multiple selected hosts. In the Change Hostname dialog, type the new name and include {{n}} to make each hostname unique. Then click Change .

You can see the new names appearing in the Preview pane as you type. The name will be identical for all selected hosts, with the exception of a single-digit increment per host.

From the Options (⋮) menu, you can select Delete host to delete a host. Click Delete to confirm the deletion.

Alternatively, from the Actions list, select Delete to delete multiple selected hosts at the same time. Then click Delete hosts .

In a regular deployment, a cluster can have three or more hosts, and three of these must be control plane hosts. If you delete a host that is also a control plane, or if you are left with only two hosts, you will get a message saying that the system is not ready. To restore a host, you will need to reboot it from the discovery ISO.

  • From the Options (⋮) menu for the host, optionally select View host events . The events in the list are presented chronologically.

For multi-host clusters, in the Role column next to the host name, you can click on the menu to change the role of the host.

If you do not select a role, the Assisted Installer will assign the role automatically. The minimum hardware requirements for control plane nodes exceed those of worker nodes. If you assign a role to a host, ensure that you assign the control plane role to hosts that meet the minimum hardware requirements.

  • Click the Status link to view hardware, network and operator validations for the host.
  • Click the arrow to the left of a host name to expand the host details.

Once all cluster hosts appear with a status of Ready , proceed to the next step.

3.7. Configuring storage disks

Each of the hosts retrieved during host discovery can have multiple storage disks. The storage disks are listed for the host on the Storage page of the Assisted Installer wizard.

You can optionally modify the default configurations for each disk.

Changing the installation disk

The Assisted Installer randomly assigns an installation disk by default. If there are multiple storage disks for a host, you can select a different disk to be the installation disk. This automatically unassigns the previous disk.

  • Navigate to the Storage page of the wizard.
  • Expand a host to display the associated storage disks.
  • Select Installation disk from the Role list.
  • When all storage disks return to Ready status, proceed to the next step.

Disabling disk formatting

The Assisted Installer marks all bootable disks for formatting during the installation process by default, regardless of whether or not they have been defined as the installation disk. Formatting causes data loss.

You can choose to disable the formatting of a specific disk. This should be performed with caution, as bootable disks may interfere with the installation process, mainly in terms of boot order.

You cannot disable formatting for the installation disk.

  • Clear Format for a disk.
  • Configuring hosts

3.8. Configuring networking

Before installing OpenShift Container Platform, you must configure the cluster network.

In the Networking page, select one of the following if it is not already selected for you:

Cluster-Managed Networking: Selecting cluster-managed networking means that the Assisted Installer will configure a standard network topology, including keepalived and Virtual Router Redundancy Protocol (VRRP) for managing the API and Ingress VIP addresses.

  • Currently, Cluster-Managed Networking is not supported on IBM zSystems and IBM Power in OpenShift Container Platform version 4.13.
  • Oracle Cloud Infrastructure (OCI) is available for OpenShift Container Platform 4.14 with a user-managed networking configuration only.
  • User-Managed Networking : Selecting user-managed networking allows you to deploy OpenShift Container Platform with a non-standard network topology. For example, if you want to deploy with an external load balancer instead of keepalived and VRRP, or if you intend to deploy the cluster nodes across many distinct L2 network segments.

For cluster-managed networking, configure the following settings:

  • Define the Machine network . You can use the default network or select a subnet.
  • Define an API virtual IP. An API virtual IP provides an endpoint for all users to interact with and configure the platform.
  • Define an Ingress virtual IP . An Ingress virtual IP provides an endpoint for application traffic flowing from outside the cluster.

For user-managed networking, configure the following settings:

Select your Networking stack type :

  • IPv4 : Select this type when your hosts are only using IPv4.
  • Dual-stack : You can select dual-stack when your hosts are using IPv4 together with IPv6.
  • Optional: You can select Allocate IPs via DHCP server to automatically allocate the API IP and Ingress IP using the DHCP server.

Optional: Select Use advanced networking to configure the following advanced networking properties:

  • Cluster network CIDR : Define an IP address block from which Pod IP addresses are allocated.
  • Cluster network host prefix : Define a subnet prefix length to assign to each node.
  • Service network CIDR : Define an IP address to use for service IP addresses.
  • Network type : Select either Software-Defined Networking (SDN) for standard networking or Open Virtual Networking (OVN) for IPv6, dual-stack networking, and telco features. In OpenShift Container Platform 4.12 and later releases, OVN is the default Container Network Interface (CNI). In OpenShift Container Platform 4.15 and later releases, Software-Defined Networking (SDN) is not supported.
  • Network configuration

3.9. Adding custom manifests

A custom manifest is a JSON or YAML file that contains advanced configurations not currently supported in the Assisted Installer user interface. You can create a custom manifest or use one provided by a third party.

You can upload a custom manifest from your file system to either the openshift folder or the manifests folder. There is no limit to the number of custom manifest files permitted.

Only one file can be uploaded at a time. However, each uploaded YAML file can contain multiple custom manifests. Uploading a multi-document YAML manifest is faster than adding the YAML files individually.

For a file containing a single custom manifest, accepted file extensions include .yaml , .yml , or .json .

Single custom manifest example

For a file containing multiple custom manifests, accepted file types include .yaml or .yml .

Multiple custom manifest example

  • When you install OpenShift Container Platform on the Oracle Cloud Infrastructure (OCI) external platform, you must add the custom manifests provided by Oracle. For additional external partner integrations such as vSphere or Nutanix, this step is optional.
  • For more information about custom manifests, see Additional Resources .

Uploading a custom manifest in the Assisted Installer user interface

When uploading a custom manifest, enter the manifest filename and select a destination folder.

Prerequisites

  • You have at least one custom manifest file saved in your file system.
  • On the Cluster details page of the wizard, select the Include custom manifests checkbox.
  • On the Custom manifest page, in the folder field, select the Assisted Installer folder where you want to save the custom manifest file. Options include openshift or manifest .
  • In the Filename field, enter a name for the manifest file, including the extension. For example, manifest1.json or multiple1.yaml .
  • Under Content , click the Upload icon or Browse button to upload a file. Alternatively, drag the file into the Content field from your file system.
  • To upload another manifest, click Add another manifest and repeat the process. This saves the previously uploaded manifest.
  • Click Next to save all manifests and proceed to the Review and create page. The uploaded custom manifests are listed under Custom manifests .

Modifying a custom manifest in the Assisted Installer user interface

You can change the folder and file name of an uploaded custom manifest. You can also copy the content of an existing manifest, or download it to the folder defined in the Chrome download settings.

It is not possible to modify the content of an uploaded manifest. However, you can overwrite the file.

  • You have uploaded at least one custom manifest file.
  • To change the folder, select a different folder for the manifest from the Folder list.
  • To modify the file name, type the new name for the manifest in the File name field.
  • To overwrite a manifest, save the new manifest in the same folder with the same file name.
  • To save a manifest as a file in your file system, click the Download icon.
  • To copy the manifest, click the Copy to clipboard icon.
  • To apply the changes, click either Add another manifest or Next .

Removing custom manifests in the Assisted Installer user interface

You can remove uploaded custom manifests before installation in one of two ways:

  • Removing one or more manifests individually.
  • Removing all manifests at once.

Once you have removed a manifest you cannot undo the action. The workaround is to upload the manifest again.

Removing a single manifest

You can delete one manifest at a time. This option does not allow you to delete the last remaining manifest.

  • You have uploaded at least two custom manifest files.
  • Navigate to the Custom manifests page.
  • Hover over the manifest name to display the Delete (minus) icon.
  • Click the icon and then click Delete in the dialog box.

Removing all manifests

You can remove all custom manifests at once. This also hides the Custom manifest page.

  • Navigate to the Cluster details page of the wizard.
  • Clear the Include custom manifests checkbox.
  • In the Remove custom manifests dialog box, click Remove .
  • Manifest configuration files
  • Multi-document YAML files

3.10. Preinstallation validations

The Assisted Installer ensures that the cluster meets the prerequisites before installation; this eliminates complex postinstallation troubleshooting and saves significant time and effort. Before installing the cluster, ensure the cluster and each host pass preinstallation validation.

  • Preinstallation validation

3.11. Installing the cluster

After you have completed the configuration and all the nodes are Ready , you can begin installation. The installation process takes a considerable amount of time, and you can monitor the installation from the Assisted Installer web console. Nodes will reboot during the installation, and they will initialize after installation.

  • Click Begin installation.
  • Click the link in the Status column of the Host Inventory list to see the installation status of a particular host.

3.12. Completing the installation

After the cluster is installed and initialized, the Assisted Installer indicates that the installation is finished. The Assisted Installer provides the console URL, the kubeadmin username and password, and the kubeconfig file. Additionally, the Assisted Installer provides cluster details including the OpenShift Container Platform version, base domain, CPU architecture, API and Ingress IP addresses, and the cluster and service network IP addresses.

  • You have installed the oc CLI tool.
  • Make a copy of the kubeadmin username and password.

Download the kubeconfig file and copy it to the auth directory under your working directory:

The kubeconfig file is available for download for 24 hours after completing the installation.

Add the kubeconfig file to your environment:

Log in with the oc CLI tool:

Replace <password> with the password of the kubeadmin user.
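The original command snippets are not reproduced here. A minimal sketch of this sequence, assuming a working directory of ~/ocp4, a download location of ~/Downloads, and an API URL of api.ocp4.example.com (all placeholders):

# Copy the downloaded kubeconfig into an auth directory under the working directory (paths are placeholders).
mkdir -p ~/ocp4/auth
cp ~/Downloads/kubeconfig ~/ocp4/auth/kubeconfig

# Point the CLI at the kubeconfig file:
export KUBECONFIG=~/ocp4/auth/kubeconfig

# Log in as kubeadmin; replace <password> with the kubeadmin password:
oc login -u kubeadmin -p <password> https://api.ocp4.example.com:6443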

  • Click the web console URL or click Launch OpenShift Console to open the console.
  • Enter the kubeadmin username and password. Follow the instructions in the OpenShift Container Platform console to configure an identity provider and configure alert receivers.
  • Add a bookmark of the OpenShift Container Platform console.
  • Complete any postinstallation platform integration steps.
  • Nutanix postinstallation configuration
  • vSphere postinstallation configuration

Chapter 4. Installing with the Assisted Installer API

After you ensure the cluster nodes and network requirements are met, you can begin installing the cluster using the Assisted Installer API. To use the API, you must perform the following procedures:

  • Set up the API authentication.
  • Configure the pull secret.
  • Register a new cluster definition.
  • Create an infrastructure environment for the cluster.

Once you perform these steps, you can modify the cluster definition, create discovery ISOs, add hosts to the cluster, and install the cluster. This document does not cover every endpoint of the Assisted Installer API , but you can review all of the endpoints in the API viewer or the swagger.yaml file.

4.1. Generating the offline token

Download the offline token from the Assisted Installer web console. You will use the offline token to set the API token.

  • Install jq .
  • Log in to the OpenShift Cluster Manager as a user with cluster creation privileges.
  • In the menu, click Downloads .
  • In the Tokens section under OpenShift Cluster Manager API Token , click View API Token .

Click Load Token .

Disable pop-up blockers.

  • In the Your API token section, copy the offline token.

In your terminal, set the offline token to the OFFLINE_TOKEN variable:

To make the offline token permanent, add it to your profile.

(Optional) Confirm the OFFLINE_TOKEN variable definition.
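The original snippets are not shown here; a minimal sketch of these steps, with the token value and profile path as placeholders:

# The offline token value is whatever you copied from the console; the string below is a placeholder.
export OFFLINE_TOKEN='<paste-your-offline-token-here>'

# Optionally make it permanent by adding it to your shell profile (path depends on your shell):
echo "export OFFLINE_TOKEN='<paste-your-offline-token-here>'" >> ~/.bashrc

# Confirm the variable definition:
echo ${OFFLINE_TOKEN}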

4.2. Authenticating with the REST API

API calls require authentication with the API token. Assuming you use API_TOKEN as a variable name, add -H "Authorization: Bearer ${API_TOKEN}" to API calls to authenticate with the REST API.

The API token expires after 15 minutes.

  • You have generated the OFFLINE_TOKEN variable.

On the command line terminal, set the API_TOKEN variable using the OFFLINE_TOKEN to validate the user.

Confirm the API_TOKEN variable definition:

Create a script in your path for one of the token generating methods. For example:

Then, save the file.

Change the file mode to make it executable:

Refresh the API token:

Verify that you can access the API by running the following command:
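For example, a hedged sketch that queries the component-versions endpoint of the hosted service, which only needs a valid token:

    curl -s "https://api.openshift.com/api/assisted-install/v2/component-versions" \
      -H "Authorization: Bearer ${API_TOKEN}" | jq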

Example output

4.3. Configuring the pull secret

Many of the Assisted Installer API calls require the pull secret. Download the pull secret to a file so that you can reference it in API calls. The pull secret is a JSON object that will be included as a value within the request’s JSON object. The pull secret JSON must be formatted to escape the quotes. For example:

  • In the menu, click OpenShift .
  • In the submenu, click Downloads .
  • In the Tokens section under Pull secret , click Download .

To use the pull secret from a shell variable, execute the following command:

To slurp the pull secret file using jq , reference it in the pull_secret variable, piping the value to tojson to ensure that it is properly formatted as escaped JSON. For example:
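A minimal sketch, assuming the pull secret was downloaded to ~/Downloads/pull-secret.txt:

    jq --null-input --slurpfile pull_secret ~/Downloads/pull-secret.txt \
      '{"pull_secret": ($pull_secret[0] | tojson)}'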

Confirm the PULL_SECRET variable definition:

4.4. Optional: Generating the SSH public key

During the installation of OpenShift Container Platform, you can optionally provide an SSH public key to the installation program. This is useful for initiating an SSH connection to a remote node when troubleshooting an installation error.

If you do not have an existing SSH key pair on your local machine to use for the authentication, create one now.

  • Generate the OFFLINE_TOKEN and API_TOKEN variables.

From the root user in your terminal, get the SSH public key:

Set the SSH public key to the CLUSTER_SSHKEY variable:

Confirm the CLUSTER_SSHKEY variable definition:

4.5. Registering a new cluster

To register a new cluster definition with the API, use the /v2/clusters endpoint. Registering a new cluster requires the following settings:

  • openshift-version
  • pull_secret
  • cpu_architecture

See the cluster-create-params model in the API viewer for details on the fields you can set when registering a new cluster. When setting the olm_operators field, see Additional Resources for details on installing Operators.

After you create the cluster definition, you can modify the cluster definition and provide values for additional settings.

  • For certain installation platforms and OpenShift Container Platform versions, you can also create a mixed-architecture cluster by combining two different architectures on the same cluster. For details, see Additional Resources .
  • If you are installing OpenShift Container Platform on a third-party platform, see Additional Resources for the relevant instructions.
  • For clusters of five to ten nodes, you can choose to schedule workloads to run on control plane nodes in addition to the worker nodes when registering a cluster. For details, see Configuring schedulable control plane nodes in Additional resources.
  • You have generated a valid API_TOKEN . Tokens expire every 15 minutes.
  • You have downloaded the pull secret.
  • Optional: You have assigned the pull secret to the $PULL_SECRET variable.

Register a new cluster.

Optional: You can register a new cluster by slurping the pull secret file in the request:
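For example, the following sketch registers a cluster against the hosted API; the cluster name, OpenShift version, base domain, and pull secret path are placeholders and assumptions, so substitute your own values:

    curl -s -X POST "https://api.openshift.com/api/assisted-install/v2/clusters" \
      -H "Authorization: Bearer ${API_TOKEN}" \
      -H "Content-Type: application/json" \
      -d "$(jq --null-input --slurpfile pull_secret ~/Downloads/pull-secret.txt '{
            "name": "testcluster",
            "openshift_version": "4.15",
            "cpu_architecture": "x86_64",
            "base_dns_domain": "example.com",
            "pull_secret": ($pull_secret[0] | tojson)
          }')" | jq '.id'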

Optional: You can register a new cluster by writing the configuration to a JSON file and then referencing it in the request:

Assign the returned cluster_id to the CLUSTER_ID variable and export it:

If you close your terminal session, you need to export the CLUSTER_ID variable again in a new terminal session.

Check the status of the new cluster:
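For example, a sketch that reads the cluster resource and prints its status field:

    curl -s "https://api.openshift.com/api/assisted-install/v2/clusters/${CLUSTER_ID}" \
      -H "Authorization: Bearer ${API_TOKEN}" | jq '.status'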

Once you register a new cluster definition, create the infrastructure environment for the cluster.

You cannot see the cluster configuration settings in the Assisted Installer user interface until you create the infrastructure environment.

  • Modifying a cluster
  • Installing a mixed-architecture cluster
  • Optional: Installing on Oracle Cloud Infrastructure

4.5.1. Optional: Installing Operators

You can install the following Operators when you register a new cluster:

OpenShift Virtualization Operator

Currently, OpenShift Virtualization is not supported on IBM zSystems and IBM Power.

  • Multicluster engine Operator
  • OpenShift Data Foundation Operator
  • LVM Storage Operator

Run the following command:

  • OpenShift Virtualization documentation
  • Red Hat OpenShift Cluster Manager documentation
  • Red Hat OpenShift Data Foundation documentation
  • Logical Volume Manager Storage documentation

4.6. Modifying a cluster

To modify a cluster definition with the API, use the /v2/clusters/{cluster_id} endpoint. Modifying a cluster resource is a common operation for adding settings such as changing the network type or enabling user-managed networking. See the v2-cluster-update-params model in the API viewer for details on the fields you can set when modifying a cluster definition.

You can add or remove Operators from a cluster resource that has already been registered.

To create partitions on nodes, see Configuring storage on nodes in the OpenShift Container Platform documentation.

  • You have created a new cluster resource.

Modify the cluster. For example, change the SSH key:
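A minimal sketch, assuming the public key is available in the CLUSTER_SSHKEY variable set earlier; jq is used here only to build a safely quoted request body:

    curl -s -X PATCH "https://api.openshift.com/api/assisted-install/v2/clusters/${CLUSTER_ID}" \
      -H "Authorization: Bearer ${API_TOKEN}" \
      -H "Content-Type: application/json" \
      -d "$(jq --null-input --arg ssh_key "${CLUSTER_SSHKEY}" '{"ssh_public_key": $ssh_key}')"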

4.6.1. Modifying Operators

You can add or remove Operators from a cluster resource that has already been registered as part of a previous installation. This is only possible before you start the OpenShift Container Platform installation.

You set the required Operator definition by using the PATCH method for the /v2/clusters/{cluster_id} endpoint.

  • You have refreshed the API token.
  • You have exported the CLUSTER_ID as an environment variable.

Run the following command to modify the Operators:
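The following sketch adds OpenShift Virtualization to the cluster definition; the operator name cnv is an assumption based on the identifiers the Assisted Installer commonly uses, so confirm the name in the API viewer before relying on it:

    curl -s -X PATCH "https://api.openshift.com/api/assisted-install/v2/clusters/${CLUSTER_ID}" \
      -H "Authorization: Bearer ${API_TOKEN}" \
      -H "Content-Type: application/json" \
      -d '{"olm_operators": [{"name": "cnv"}]}' | jq '.monitored_operators'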

Sample output

The output is the description of the new cluster state. The monitored_operators property in the output contains Operators of two types:

  • "operator_type": "builtin" : Operators of this type are an integral part of OpenShift Container Platform.
  • "operator_type": "olm" : Operators of this type are added manually by a user or automatically, as a dependency. In this example, the LVM Storage Operator is added automatically as a dependency of OpenShift Virtualization.

4.7. Registering a new infrastructure environment

Once you register a new cluster definition with the Assisted Installer API, create an infrastructure environment using the v2/infra-envs endpoint. Registering a new infrastructure environment requires the following settings:

See the infra-env-create-params model in the API viewer for details on the fields you can set when registering a new infrastructure environment. You can modify an infrastructure environment after you create it. As a best practice, consider including the cluster_id when creating a new infrastructure environment. The cluster_id will associate the infrastructure environment with a cluster definition. When creating the new infrastructure environment, the Assisted Installer will also generate a discovery ISO.

  • Optional: You have registered a new cluster definition and exported the cluster_id .

Register a new infrastructure environment. Provide a name, preferably something including the cluster name. This example provides the cluster ID to associate the infrastructure environment with the cluster resource. The following example specifies the image_type . You can specify either full-iso or minimal-iso . The default value is minimal-iso .

Optional: You can register a new infrastructure environment by slurping the pull secret file in the request:
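For example, a sketch that associates the infrastructure environment with the cluster and requests a full ISO; the environment name and pull secret path are assumptions:

    curl -s -X POST "https://api.openshift.com/api/assisted-install/v2/infra-envs" \
      -H "Authorization: Bearer ${API_TOKEN}" \
      -H "Content-Type: application/json" \
      -d "$(jq --null-input --slurpfile pull_secret ~/Downloads/pull-secret.txt \
            --arg cluster_id "${CLUSTER_ID}" '{
            "name": "testcluster-infra-env",
            "image_type": "full-iso",
            "cluster_id": $cluster_id,
            "pull_secret": ($pull_secret[0] | tojson)
          }')" | jq '.id'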

Optional: You can register a new infrastructure environment by writing the configuration to a JSON file and then referencing it in the request:

Assign the returned id to the INFRA_ENV_ID variable and export it:

Once you create an infrastructure environment and associate it to a cluster definition via the cluster_id , you can see the cluster settings in the Assisted Installer web user interface. If you close your terminal session, you need to re-export the id in a new terminal session.

4.8. Modifying an infrastructure environment

You can modify an infrastructure environment using the /v2/infra-envs/{infra_env_id} endpoint. Modifying an infrastructure environment is a common operation for adding settings such as networking, SSH keys, or ignition configuration overrides.

See the infra-env-update-params model in the API viewer for details on the fields you can set when modifying an infrastructure environment. When modifying the new infrastructure environment, the Assisted Installer will also re-generate the discovery ISO.

  • You have created a new infrastructure environment.

Modify the infrastructure environment:

4.8.1. Optional: Adding kernel arguments

Providing kernel arguments to the Red Hat Enterprise Linux CoreOS (RHCOS) kernel via the Assisted Installer means passing specific parameters or options to the kernel at boot time, particularly when you cannot customize the kernel parameters of the discovery ISO. Kernel parameters can control various aspects of the kernel’s behavior and the operating system’s configuration, affecting hardware interaction, system performance, and functionality. Kernel arguments are used to customize or inform the node’s RHCOS kernel about the hardware configuration, debugging preferences, system services, and other low-level settings.

The RHCOS installer kargs modify command supports the append , delete , and replace options.

You can modify an infrastructure environment using the /v2/infra-envs/{infra_env_id} endpoint. When modifying the new infrastructure environment, the Assisted Installer will also re-generate the discovery ISO.

Modify the kernel arguments:
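A hedged sketch; the kernel_arguments structure follows the infra-env-update-params model, and the argument value shown is only an illustrative example:

    curl -s -X PATCH "https://api.openshift.com/api/assisted-install/v2/infra-envs/${INFRA_ENV_ID}" \
      -H "Authorization: Bearer ${API_TOKEN}" \
      -H "Content-Type: application/json" \
      -d '{"kernel_arguments": [{"operation": "append", "value": "rd.net.timeout.carrier=60"}]}'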

4.9. Adding hosts

After configuring the cluster resource and infrastructure environment, download the discovery ISO image. You can choose from two images:

  • Full ISO image: Use the full ISO image when booting must be self-contained. The image includes everything needed to boot and start the Assisted Installer agent. The ISO image is about 1GB in size. This is the recommended method for the s390x architecture when installing with RHEL KVM.
  • Minimal ISO image: Use the minimal ISO image when bandwidth over the virtual media connection is limited. This is the default setting. The image includes only what is required to boot a host with networking. The majority of the content is downloaded upon boot. The ISO image is about 100MB in size.

Currently, ISO images are not supported for installations on IBM Z ( s390x ) with z/VM. For details, see Booting hosts using iPXE .

You can boot hosts with the discovery image using three methods. For details, see Booting hosts with the discovery image .

  • You have created a cluster.
  • You have created an infrastructure environment.
  • You have completed the configuration.
  • If the cluster hosts are behind a firewall that requires the use of a proxy, you have configured the username, password, IP address and port for the HTTP and HTTPS URLs of the proxy server.
  • You have selected an image type or will use the default minimal-iso .
  • Configure the discovery image if needed. For details, see Configuring the discovery image .

Get the download URL:
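For example, a sketch using the image-url download endpoint of the infrastructure environment:

    curl -s "https://api.openshift.com/api/assisted-install/v2/infra-envs/${INFRA_ENV_ID}/downloads/image-url" \
      -H "Authorization: Bearer ${API_TOKEN}" | jq -r '.url'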

Download the discovery image:

Replace <url> with the download URL from the previous step.

  • Boot the host(s) with the discovery image.
  • Assign a role to host(s).
  • Configuring the discovery image
  • Booting hosts with the discovery image
  • Adding hosts on Nutanix with the API
  • Assigning roles to hosts
  • Booting hosts using iPXE

4.10. Modifying hosts

After adding hosts, modify the hosts as needed. The most common modifications are to the host_name and the host_role parameters.

You can modify a host by using the /v2/infra-envs/{infra_env_id}/hosts/{host_id} endpoint. See the host-update-params model in the API viewer for details on the fields you can set when modifying a host.

A host might be one of two roles:

  • master : A host with the master role will operate as a control plane host.
  • worker : A host with the worker role will operate as a worker host.

By default, the Assisted Installer sets a host to auto-assign , which means the installation program determines whether the host has a master or worker role automatically. Use the following procedure to set the host’s role:

  • You have added hosts to the cluster.

Get the host IDs:
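A sketch that lists the hosts registered against the infrastructure environment and prints their IDs:

    curl -s "https://api.openshift.com/api/assisted-install/v2/infra-envs/${INFRA_ENV_ID}/hosts" \
      -H "Authorization: Bearer ${API_TOKEN}" | jq -r '.[].id'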

Modify the host:
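For example, a sketch that renames a host and assigns it the master role; <host_id> and the hostname are placeholders:

    curl -s -X PATCH "https://api.openshift.com/api/assisted-install/v2/infra-envs/${INFRA_ENV_ID}/hosts/<host_id>" \
      -H "Authorization: Bearer ${API_TOKEN}" \
      -H "Content-Type: application/json" \
      -d '{"host_name": "master-0.example.com", "host_role": "master"}'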

4.10.1. Modifying storage disk configuration

Each host retrieved during host discovery can have multiple storage disks. You can optionally modify the default configurations for each disk.

  • Configure the cluster and discover the hosts. For details, see Additional resources .

Viewing the storage disks

You can view the hosts in your cluster, and the disks on each host. This enables you to perform actions on a specific disk.

Get the host IDs for the cluster:

This is the ID of a single host. Multiple host IDs are separated by commas.

Get the disks for a specific host:
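A hedged sketch that reads the disks from the host inventory; it assumes the inventory field is returned as a JSON-encoded string, as in current Assisted Installer API responses:

    curl -s "https://api.openshift.com/api/assisted-install/v2/infra-envs/${INFRA_ENV_ID}/hosts/<host_id>" \
      -H "Authorization: Bearer ${API_TOKEN}" | jq '.inventory | fromjson | .disks'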

This is the output for one disk. It contains the disk_id and installation_eligibility properties for the disk.

You can select any disk whose installation_eligibility property is eligible: true to be the installation disk.

  • Get the host and storage disk IDs. For details, see Viewing the storage disks .

Optional: Identify the current installation disk:

Assign a new installation disk:

4.11. Adding custom manifests

A custom manifest is a JSON or YAML file that contains advanced configurations not currently supported in the Assisted Installer user interface. You can create a custom manifest or use one provided by a third party. To create a custom manifest with the API, use the /v2/clusters/$CLUSTER_ID/manifests endpoint.

You can upload a base64-encoded custom manifest to either the openshift folder or the manifests folder with the Assisted Installer API. There is no limit to the number of custom manifests permitted.

Only one base64-encoded JSON manifest can be uploaded at a time. However, each uploaded base64-encoded YAML file can contain multiple custom manifests. Uploading a multi-document YAML manifest is faster than adding the YAML files individually.

  • You have registered a new cluster definition and exported the cluster_id to the $CLUSTER_ID BASH variable.
  • Create a custom manifest file.
  • Save the custom manifest file using the appropriate extension for the file format.

Add the custom manifest to the cluster by executing the following command:
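For example, the following sketch uploads a manifest to the openshift folder; the folder, file_name, and content fields follow the manifests endpoint parameters:

    curl -s -X POST "https://api.openshift.com/api/assisted-install/v2/clusters/${CLUSTER_ID}/manifests" \
      -H "Authorization: Bearer ${API_TOKEN}" \
      -H "Content-Type: application/json" \
      -d '{"folder": "openshift", "file_name": "manifest.json", "content": "'"$(base64 -w 0 manifest.json)"'"}'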

Replace manifest.json with the name of your manifest file. The second instance of manifest.json is the path to the file. Ensure the path is correct.

The base64 -w 0 command base64-encodes the manifest as a string and omits carriage returns. Encoding with carriage returns will generate an exception.

Verify that the Assisted Installer added the manifest:
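For example, a sketch that lists the uploaded manifests and filters for the file:

    curl -s "https://api.openshift.com/api/assisted-install/v2/clusters/${CLUSTER_ID}/manifests" \
      -H "Authorization: Bearer ${API_TOKEN}" | jq '.[] | select(.file_name == "manifest.json")'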

Replace manifest.json with the name of your manifest file.

4.12. Preinstallation validations

  • Preinstallation validations

4.13. Installing the cluster

Once the cluster hosts pass validation, you can install the cluster.

  • You have created a cluster and infrastructure environment.
  • You have added hosts to the infrastructure environment.
  • The hosts have passed validation.

Install the cluster:
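For example, a sketch using the install action on the cluster resource:

    curl -s -X POST "https://api.openshift.com/api/assisted-install/v2/clusters/${CLUSTER_ID}/actions/install" \
      -H "Authorization: Bearer ${API_TOKEN}" | jq '.status'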

Chapter 5. Optional: Enabling disk encryption

You can enable encryption of installation disks using either the TPM v2 or Tang encryption modes.

In some situations, when you enable TPM disk encryption in the firmware for a bare-metal host and then boot it from an ISO that you generate with the Assisted Installer, the cluster deployment can get stuck. This can happen if there are left-over TPM encryption keys from a previous installation on the host. For more information, see BZ#2011634 . If you experience this problem, contact Red Hat support.

5.1. Enabling TPM v2 encryption

  • Check to see if TPM v2 encryption is enabled in the BIOS on each host. Most Dell systems require this. Check the manual for your computer. The Assisted Installer will also validate that TPM is enabled in the firmware. See the disk-encryption model in the Assisted Installer API for additional details.

Verify that a TPM v2 encryption chip is installed on each node and enabled in the firmware.

  • Optional: Using the web console, in the Cluster details step of the user interface wizard, choose to enable TPM v2 encryption on either the control plane nodes, workers, or both.

Optional: Using the API, follow the "Modifying hosts" procedure. Set the disk_encryption.enable_on setting to all , masters , or workers . Set the disk_encryption.mode setting to tpmv2 .

Enable TPM v2 encryption:
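A sketch using the API; this example enables TPM v2 encryption on all nodes:

    curl -s -X PATCH "https://api.openshift.com/api/assisted-install/v2/clusters/${CLUSTER_ID}" \
      -H "Authorization: Bearer ${API_TOKEN}" \
      -H "Content-Type: application/json" \
      -d '{"disk_encryption": {"enable_on": "all", "mode": "tpmv2"}}'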

Valid settings for enable_on are all , masters , workers , or none .

5.2. Enabling Tang encryption

  • You have access to a Red Hat Enterprise Linux (RHEL) 8 machine that can be used to generate a thumbprint of the Tang exchange key.
  • Set up a Tang server or access an existing one. See Network-bound disk encryption for instructions. You can set multiple Tang servers, but the Assisted Installer must be able to connect to all of them during installation.

On the Tang server, retrieve the thumbprint for the Tang server using tang-show-keys :
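For example:

    tang-show-keys <port>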

Optional: Replace <port> with the port number. The default port number is 80 .

Example thumbprint

Optional: Retrieve the thumbprint for the Tang server using jose .

Ensure jose is installed on the Tang server:

On the Tang server, retrieve the thumbprint using jose :

Replace <public_key> with the public exchange key for the Tang server.

  • Optional: In the Cluster details step of the user interface wizard, choose to enable Tang encryption on either the control plane nodes, workers, or both. You will be required to enter URLs and thumbprints for the Tang servers.

Optional: Using the API, follow the "Modifying hosts" procedure.

Set the disk_encryption.enable_on setting to all , masters , or workers . Set the disk_encryption.mode setting to tang . Set disk_encryption.tang_servers to provide the URL and thumbprint details about one or more Tang servers:
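A hedged sketch; the Tang URL and thumbprint are placeholders, and the tang_servers value is passed as a JSON-encoded string with escaped quotes:

    curl -s -X PATCH "https://api.openshift.com/api/assisted-install/v2/clusters/${CLUSTER_ID}" \
      -H "Authorization: Bearer ${API_TOKEN}" \
      -H "Content-Type: application/json" \
      -d '{
        "disk_encryption": {
          "enable_on": "all",
          "mode": "tang",
          "tang_servers": "[{\"url\":\"http://tang.example.com:7500\",\"thumbprint\":\"<thumbprint>\"}]"
        }
      }'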

Valid settings for enable_on are all , masters , workers , or none . Within the tang_servers value, escape the quotes within the object(s).

5.3. Additional resources

  • Modifying hosts

Chapter 6. Optional: Configuring schedulable control plane nodes

In a high availability deployment, three or more nodes comprise the control plane. The control plane nodes are used for managing OpenShift Container Platform and for running the OpenShift containers. The remaining nodes are workers, used to run the customer containers and workloads. There can be anywhere from one to thousands of worker nodes.

For a single-node OpenShift cluster or for a cluster that comprises up to four nodes, the system automatically schedules the workloads to run on the control plane nodes.

For clusters of five to ten nodes, you can choose to schedule workloads to run on the control plane nodes in addition to the worker nodes. This option is recommended for enhancing efficiency and preventing underutilized resources. You can select this option either during the installation setup, or as part of the post-installation steps.

For larger clusters of more than ten nodes, this option is not recommended.

This section explains how to schedule workloads to run on control plane nodes using the Assisted Installer web console and API, as part of the installation setup.

For instructions on how to configure schedulable control plane nodes following an installation, see Configuring control plane nodes as schedulable in the OpenShift Container Platform documentation.

When you configure control plane nodes from the default unschedulable to schedulable, additional subscriptions are required. This is because control plane nodes then become worker nodes.

6.1. Configuring schedulable control planes using the web console

  • You have set the cluster details.
  • You are installing OpenShift Container Platform 4.14 or later.
  • Log in to the Red Hat Hybrid Cloud Console and follow the instructions for installing OpenShift Container Platform using the Assisted Installer web console. For details, see Installing with the Assisted Installer web console in Additional Resources .
  • When you reach the Host discovery page, click Add hosts .
  • Optionally change the Provisioning type and additional settings as required. All options are compatible with schedulable control planes.
  • Click Generate Discovery ISO to download the ISO.

Set Run workloads on control plane nodes to on.

For clusters of four nodes or fewer, this switch is activated automatically and cannot be changed.

6.2. Configuring schedulable control planes using the API

Use the schedulable_masters attribute to enable workloads to run on control plane nodes.

  • You have created a $PULL_SECRET variable.
  • Follow the instructions for installing Assisted Installer using the Assisted Installer API. For details, see Installing with the Assisted Installer API in Additional Resources .

When you reach the step for registering a new cluster, set the schedulable_masters attribute as follows:
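For illustration, the following sketch shows the attribute applied to an already registered cluster with a PATCH request; you can equally include it in the registration request body:

    curl -s -X PATCH "https://api.openshift.com/api/assisted-install/v2/clusters/${CLUSTER_ID}" \
      -H "Authorization: Bearer ${API_TOKEN}" \
      -H "Content-Type: application/json" \
      -d '{"schedulable_masters": true}'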

6.3. Additional resources

  • Installing with the Assisted Installer web console
  • Installing with the Assisted Installer API

Chapter 7. Configuring the discovery image

The Assisted Installer uses an initial image to run an agent that performs hardware and network validations before attempting to install OpenShift Container Platform. You can use Ignition to customize the discovery image.

Modifications to the discovery image will not persist in the system.

7.1. Creating an Ignition configuration file

Ignition is a low-level system configuration utility, which is part of the temporary initial root filesystem, the initramfs . When Ignition runs on the first boot, it finds configuration data in the Ignition configuration file and applies it to the host before switch_root is called to pivot to the host’s root filesystem.

Ignition uses a JSON configuration specification file to represent the set of changes that occur on the first boot.

Ignition versions newer than 3.2 are not supported, and will raise an error.

Create an Ignition file and specify the configuration specification version:

Add configuration data to the Ignition file. For example, add a password to the core user.

Generate a password hash:

Add the generated password hash to the core user:

Save the Ignition file and export it to the IGNITION_FILE variable:
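Putting the steps together, a minimal sketch might look like the following; the password hash placeholder is whatever the previous step generated, and the file name is an assumption:

    jq --null-input --arg hash "<generated_password_hash>" \
      '{"ignition": {"version": "3.1.0"}, "passwd": {"users": [{"name": "core", "passwordHash": $hash}]}}' \
      > ignition.conf
    export IGNITION_FILE=ignition.conf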

7.2. Modifying the discovery image with Ignition

Once you create an Ignition configuration file, you can modify the discovery image by patching the infrastructure environment using the Assisted Installer API.

  • If you used the web console to create the cluster, you have set up the API authentication.
  • You have an infrastructure environment and you have exported the infrastructure environment id to the INFRA_ENV_ID variable.
  • You have a valid Ignition file and have exported the file name as $IGNITION_FILE .

Create an ignition_config_override JSON object and redirect it to a file:
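A sketch, assuming the override is the full Ignition document stored in the file referenced by $IGNITION_FILE:

    jq --null-input --arg ignition "$(cat ${IGNITION_FILE})" \
      '{"ignition_config_override": $ignition}' > discovery_ignition.json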

Patch the infrastructure environment:
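For example, a sketch that sends the override file created in the previous step:

    curl -s -X PATCH "https://api.openshift.com/api/assisted-install/v2/infra-envs/${INFRA_ENV_ID}" \
      -H "Authorization: Bearer ${API_TOKEN}" \
      -H "Content-Type: application/json" \
      -d @discovery_ignition.json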

The ignition_config_override object references the Ignition file.

  • Download the updated discovery image.

Chapter 8. Booting hosts with the discovery image

The Assisted Installer uses an initial image to run an agent that performs hardware and network validations before attempting to install OpenShift Container Platform. You can boot hosts with the discovery image using three methods:

  • USB drive
  • Redfish virtual media
  • iPXE

8.1. Creating an ISO image on a USB drive

You can install the Assisted Installer agent using a USB drive that contains the discovery ISO image. Starting the host with the USB drive prepares the host for the software installation.

  • On the administration host, insert a USB drive into a USB port.

Copy the ISO image to the USB drive, for example:
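One common approach is dd; the input path and output device below are placeholders, so verify the device name before running it:

    dd if=./discovery.iso of=/dev/sdb bs=4M status=progress conv=fsync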

Here, the output device is the location of the connected USB drive, for example, /dev/sdb .

After the ISO is copied to the USB drive, you can use the USB drive to install the Assisted Installer agent on the cluster host.

8.2. Booting with a USB drive

To register nodes with the Assisted Installer using a bootable USB drive, use the following procedure.

  • Insert the RHCOS discovery ISO USB drive into the target host.
  • Configure the boot drive order in the server firmware settings to boot from the attached discovery ISO, and then reboot the server.

Wait for the host to boot up.

  • For web console installations, on the administration host, return to the browser. Wait for the host to appear in the list of discovered hosts.

For API installations, refresh the token, check the enabled host count, and gather the host IDs:
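After refreshing the token as described earlier, a sketch such as the following lists the host count and host IDs for the infrastructure environment:

    curl -s "https://api.openshift.com/api/assisted-install/v2/infra-envs/${INFRA_ENV_ID}/hosts" \
      -H "Authorization: Bearer ${API_TOKEN}" | jq -r 'length, .[].id'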

8.3. Booting from an HTTP-hosted ISO image using the Redfish API

You can provision hosts in your network using ISOs that you install using the Redfish Baseboard Management Controller (BMC) API.

  • Download the installation Red Hat Enterprise Linux CoreOS (RHCOS) ISO.
  • Copy the ISO file to an HTTP server accessible in your network.

Boot the host from the hosted ISO file, for example:

Call the redfish API to set the hosted ISO as the VirtualMedia boot media by running the following command:
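A hedged sketch; the BMC address, credentials, and manager path vary by vendor, so treat the URL and payload below as assumptions to adapt to your hardware:

    curl -k -u "<bmc_username>:<bmc_password>" \
      -H "Content-Type: application/json" \
      -X POST "https://<bmc_address>/redfish/v1/Managers/1/VirtualMedia/CD/Actions/VirtualMedia.InsertMedia" \
      -d '{"Image": "http://<http_server>/discovery.iso", "TransferProtocolType": "HTTP"}'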

Set the host to boot from the VirtualMedia device by running the following command:

Reboot the host:

Optional: If the host is powered off, you can boot it using the {"ResetType": "On"} switch. Run the following command:

8.4. Booting hosts using iPXE

The Assisted Installer provides an iPXE script including all the artifacts needed to boot the discovery image for an infrastructure environment. Due to the limitations of the current HTTPS implementation in iPXE, the recommendation is to download and expose the required artifacts on an HTTP server. Although iPXE supports the HTTPS protocol, the supported algorithms are old and not recommended.

The full list of supported ciphers is in https://ipxe.org/crypto .

  • You have created an infrastructure environment by using the API or you have created a cluster by using the web console.
  • You have your infrastructure environment ID exported in your shell as $INFRA_ENV_ID .
  • You have credentials to use when accessing the API and have exported a token as $API_TOKEN in your shell.

If you configure iPXE by using the web console, the $INFRA_ENV_ID and $API_TOKEN variables are preset.

  • You have an HTTP server to host the images.

IBM Power only supports PXE, which also requires the following:

  • You have installed grub2 at /var/lib/tftpboot
  • You have installed DHCP and TFTP for PXE

Download the iPXE script directly from the web console, or get the iPXE script from the Assisted Installer:
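For example, a sketch using the downloads/files endpoint of the infrastructure environment:

    curl -s "https://api.openshift.com/api/assisted-install/v2/infra-envs/${INFRA_ENV_ID}/downloads/files?file_name=ipxe-script" \
      -H "Authorization: Bearer ${API_TOKEN}" -o ipxe-script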

Download the required artifacts by extracting URLs from the ipxe-script .

Download the initial RAM disk:

Download the linux kernel:

Download the root filesystem:

Change the URLs to the different artifacts in the ipxe-script to match your local HTTP server. For example:

Optional: When installing with RHEL KVM on IBM zSystems, you must boot the host by specifying additional kernel arguments.

If you install with iPXE on RHEL KVM, in some circumstances, the VMs on the VM host are not rebooted on first boot and need to be started manually.

Optional: When installing on IBM Power, you must download the initramfs, kernel, and root filesystem as follows:

  • Copy initrd.img and kernel.img to the PXE directory /var/lib/tftpboot/rhcos
  • Copy rootfs.img to the HTTPD directory /var/www/html/install

Add the following entry to /var/lib/tftpboot/boot/grub2/grub.cfg :

Chapter 9. Assigning roles to hosts

You can assign roles to your discovered hosts. These roles define the function of the host within the cluster. The roles can be one of the standard Kubernetes types: control plane (master) or worker .

The host must meet the minimum requirements for the role you selected. You can find the hardware requirements by referring to the Prerequisites section of this document or using the preflight requirement API.

If you do not select a role, the system selects one for you. You can change the role at any time before installation starts.

9.1. Selecting a role by using the web console

You can select a role after the host finishes its discovery.

  • Go to the Host Discovery tab and scroll down to the Host Inventory table.
  • Select the Auto-assign drop-down for the required host.
  • Select Control plane node to assign this host a control plane role.
  • Select Worker to assign this host a worker role.
  • Check the validation status.

9.2. Selecting a role by using the API

You can select a role for the host using the /v2/infra-envs/{infra_env_id}/hosts/{host_id} endpoint. A host may have one of two roles:

  • master : A host with the master role will operate as a control plane host.
  • worker : A host with the worker role will operate as a worker host.

By default, the Assisted Installer sets a host to auto-assign , which means the installer determines whether the host has a master or worker role automatically. Use this procedure to set the host’s role.

Modify the host_role setting:

Replace <host_id> with the ID of the host.

9.3. Auto-assigning roles

The Assisted Installer selects a role automatically for hosts if you do not assign a role yourself. The role selection mechanism factors in the host’s memory, CPU, and disk space. It aims to assign a control plane role to the three weakest hosts that meet the minimum requirements for control plane nodes. All other hosts default to worker nodes. The goal is to provide enough resources to run the control plane and reserve the more capacity-intensive hosts for running the actual workloads.

You can override the auto-assign decision at any time before installation.

The validations make sure that the auto selection is a valid one.

9.4. Additional resources

Chapter 10. Preinstallation validations

10.1. Definition of preinstallation validations

The Assisted Installer aims to make cluster installation as simple, efficient, and error-free as possible. The Assisted Installer performs validation checks on the configuration and the gathered telemetry before starting an installation.

The Assisted Installer will use the information provided prior to installation, such as control plane topology, network configuration and hostnames. It will also use real time telemetry from the hosts you are attempting to install.

When a host boots the discovery ISO, an agent will start on the host. The agent will send information about the state of the host to the Assisted Installer.

The Assisted Installer uses all of this information to compute real time preinstallation validations. All validations are either blocking or non-blocking to the installation.

10.2. Blocking and non-blocking validations

A blocking validation will prevent progress of the installation, meaning that you will need to resolve the issue and pass the blocking validation before you can proceed.

A non-blocking validation is a warning and will tell you of things that might cause you a problem.

10.3. Validation types

The Assisted Installer performs two types of validation:

Host validations ensure that the configuration of a given host is valid for installation.

Cluster validations ensure that the configuration of the whole cluster is valid for installation.

10.4. Host validations

10.4.1. Getting host validations by using the REST API

If you use the web console, many of these validations will not show up by name. To get a list of validations consistent with the labels, use the following procedure.

  • You have installed the jq utility.
  • You have created an Infrastructure Environment by using the API or have created a cluster by using the web console.
  • You have hosts booted with the discovery ISO
  • You have your Cluster ID exported in your shell as CLUSTER_ID .
  • You have credentials to use when accessing the API and have exported a token as API_TOKEN in your shell.

Get all validations for all hosts:
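A hedged sketch; it assumes the infrastructure environment ID is exported as INFRA_ENV_ID and that the validations_info field is returned as a JSON-encoded string on each host object:

    curl -s "https://api.openshift.com/api/assisted-install/v2/infra-envs/${INFRA_ENV_ID}/hosts" \
      -H "Authorization: Bearer ${API_TOKEN}" | jq '.[].validations_info | fromjson'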

Get non-passing validations for all hosts:

10.4.2. Host validations in detail

Parameter | Validation type | Description

non-blocking

Checks that the host has recently communicated with the Assisted Installer.

non-blocking

Checks that the Assisted Installer received the inventory from the host.

non-blocking

Checks that the number of CPU cores meets the minimum requirements.

non-blocking

Checks that the amount of memory meets the minimum requirements.

non-blocking

Checks that at least one available disk meets the eligibility criteria.

blocking

Checks that the number of cores meets the minimum requirements for the host role.

blocking

Checks that the amount of memory meets the minimum requirements for the host role.

blocking

For day 2 hosts, checks that the host can download ignition configuration from the day 1 cluster.

blocking

The majority group is the largest full-mesh connectivity group on the cluster, where all members can communicate with all other members. This validation checks that hosts in a multi-node, day 1 cluster are in the majority group.

blocking

Checks that the platform is valid for the network settings.

non-blocking

Checks if an NTP server has been successfully used to synchronize time on the host.

non-blocking

Checks if container images have been successfully pulled from the image registry.

blocking

Checks that disk speed metrics from an earlier installation meet requirements, if they exist.

blocking

Checks that the average network latency between hosts in the cluster meets the requirements.

blocking

Checks that the network packet loss between hosts in the cluster meets the requirements.

blocking

Checks that the host has a default route configured.

blocking

For a multi node cluster with user managed networking. Checks that the host is able to resolve the API domain name for the cluster.

blocking

For a multi node cluster with user managed networking. Checks that the host is able to resolve the internal API domain name for the cluster.

blocking

For a multi node cluster with user managed networking. Checks that the host is able to resolve the internal apps domain name for the cluster.

non-blocking

Checks that the host is compatible with the cluster platform

blocking

Checks that the wildcard DNS *.<cluster_name>.<base_domain> is not configured, because this causes known problems for OpenShift

non-blocking

Checks that the type of host and disk encryption configured meet the requirements.

blocking

Checks that this host does not have any overlapping subnets.

blocking

Checks that the hostname is unique in the cluster.

blocking

Checks the validity of the hostname, meaning that it matches the general form of hostnames and is not forbidden.

blocking

Checks that the host IP is in the address range of the machine CIDR.

blocking

Validates that the cluster meets the requirements of the Local Storage Operator.

blocking

Validates that the cluster meets the requirements of the Openshift Data Foundation Operator.

blocking

Validates that the cluster meets the requirements of Container Native Virtualization.

blocking

Validates that the cluster meets the requirements of the Logical Volume Manager Operator.

non-blocking

Verifies that each valid disk has its UUID setting enabled. In vSphere, this results in each disk having a UUID.

blocking

Checks that the discovery agent version is compatible with the agent docker image version.

blocking

Checks that the installation disk is not skipping disk formatting.

blocking

Checks that all disks marked to skip formatting are in the inventory. A disk ID can change on reboot, and this validation prevents issues caused by that.

blocking

Checks the connection of the installation media to the host.

non-blocking

Checks that the machine network definition exists for the cluster.

blocking

Checks that the platform is compatible with the network settings. Some platforms are only permitted when installing Single Node Openshift or when using User Managed Networking.

10.5. Cluster validations

10.5.1. Getting cluster validations by using the REST API

If you use the web console, many of these validations will not show up by name. To obtain a list of validations consistent with the labels, use the following procedure.

Get all cluster validations:

Get non-passing cluster validations:

10.5.2. Cluster validations in detail

Parameter | Validation type | Description

non-blocking

Checks that the machine network definition exists for the cluster.

non-blocking

Checks that the cluster network definition exists for the cluster.

non-blocking

Checks that the service network definition exists for the cluster.

blocking

Checks that the defined networks do not overlap.

blocking

Checks that the defined networks share the same address families (valid address families are IPv4, IPv6)

blocking

Checks the cluster network prefix to ensure that it is valid and allows enough address space for all hosts.

blocking

For a non user managed networking cluster. Checks that the API and Ingress VIPs are members of the machine CIDR if they exist.

non-blocking

For a non user managed networking cluster. Checks that the API VIPs exist.

blocking

For a non user managed networking cluster. Checks if the API VIPs belong to the machine CIDR and are not in use.

blocking

For a non user managed networking cluster. Checks that the Ingress VIPs exist.

non-blocking

For a non user managed networking cluster. Checks if the Ingress VIPs belong to the machine CIDR and are not in use.

blocking

Checks that all hosts in the cluster are in the "ready to install" status.

blocking

This validation only applies to multi-node clusters.

non-blocking

Checks that the base DNS domain exists for the cluster.

non-blocking

Checks that the pull secret exists. Does not check that the pull secret is valid or authorized.

blocking

Checks that the host clocks are no more than 4 minutes out of sync with each other.

blocking

Validates that the cluster meets the requirements of the Local Storage Operator.

blocking

Validates that the cluster meets the requirements of the Openshift Data Foundation Operator.

blocking

Validates that the cluster meets the requirements of Container Native Virtualization.

blocking

Validates that the cluster meets the requirements of the Logical Volume Manager Operator.

blocking

Checks the validity of the network type if it exists.

Chapter 11. Network configuration

This section describes the basics of network configuration using the Assisted Installer.

11.1. Cluster networking

There are various network types and addresses used by OpenShift, listed below.

  • clusterNetwork: The IP address pools from which Pod IP addresses are allocated.
  • serviceNetwork: The IP address pool for services.
  • machineNetwork: The IP address blocks for machines forming the cluster.
  • apiVIP: The VIP to use for API communication. This setting must either be provided or preconfigured in the DNS so that the default name resolves correctly. If you are deploying with dual-stack networking, this must be the IPv4 address.
  • apiVIPs: The VIPs to use for API communication. This setting must either be provided or preconfigured in the DNS so that the default name resolves correctly. If using dual-stack networking, the first address must be the IPv4 address and the second address must be the IPv6 address. You must also set the apiVIP setting.
  • ingressVIP: The VIP to use for ingress traffic. If you are deploying with dual-stack networking, this must be the IPv4 address.
  • ingressVIPs: The VIPs to use for ingress traffic. If you are deploying with dual-stack networking, the first address must be the IPv4 address and the second address must be the IPv6 address. You must also set the ingressVIP setting.

OpenShift Container Platform 4.12 introduces the new apiVIPs and ingressVIPs settings to accept multiple IP addresses for dual-stack networking. When using dual-stack networking, the first IP address must be the IPv4 address and the second IP address must be the IPv6 address. The new settings will replace apiVIP and ingressVIP , but you must set both the new and old settings when modifying the configuration using the API.

Depending on the desired network stack, you can choose different network controllers. Currently, the Assisted Service can deploy OpenShift Container Platform clusters using one of the following configurations:

  • Dual-stack (IPv4 + IPv6)

Supported network controllers depend on the selected stack and are summarized in the table below. For a detailed Container Network Interface (CNI) network provider feature comparison, refer to the OCP Networking documentation .

Stack | SDN | OVN

IPv4 | Yes | Yes
IPv6 | No | Yes
Dual-stack | No | Yes

OVN is the default Container Network Interface (CNI) in OpenShift Container Platform 4.12 and later releases. SDN is supported up to OpenShift Container Platform 4.14, but not for OpenShift Container Platform 4.15 and later releases.

11.1.1. Limitations

11.1.1.1. SDN

  • The SDN controller is not supported with single-node OpenShift.
  • The SDN controller does not support IPv6.
  • The SDN controller is not supported for OpenShift Container Platform 4.15 and later releases. For more information, see Deprecation of the OpenShift SDN network plugin in the OpenShift Container Platform release notes.

11.1.1.2. OVN-Kubernetes

Please see the OVN-Kubernetes limitations section in the OCP documentation .

11.1.2. Cluster network

The cluster network is a network from which every Pod deployed in the cluster gets its IP address. Given that the workload may live across many nodes forming the cluster, it’s important for the network provider to be able to easily find an individual node based on the Pod’s IP address. To do this, clusterNetwork.cidr is further split into subnets of the size defined in clusterNetwork.hostPrefix .

The host prefix specifies a length of the subnet assigned to each individual node in the cluster. An example of how a cluster may assign addresses for the multi-node cluster:

Creating a 3-node cluster using the snippet above may create the following network topology:

  • Pods scheduled in node #1 get IPs from 10.128.0.0/23
  • Pods scheduled in node #2 get IPs from 10.128.2.0/23
  • Pods scheduled in node #3 get IPs from 10.128.4.0/23

Explaining OVN-K8s internals is out of scope for this document, but the pattern described above provides a way to route Pod-to-Pod traffic between different nodes without keeping a big list of mapping between Pods and their corresponding nodes.

11.1.3. Machine network

The machine network is a network used by all the hosts forming the cluster to communicate with each other. This is also the subnet that must include the API and Ingress VIPs.

11.1.4. SNO compared to multi-node cluster

Depending on whether you are deploying a Single Node OpenShift or a multi-node cluster, different values are mandatory. The table below explains this in more detail.

Parameter | SNO | Multi-Node Cluster with DHCP mode | Multi-Node Cluster without DHCP mode

Required | Required | Required
Required | Required | Required
Auto-assign possible (*) | Auto-assign possible (*) | Auto-assign possible (*)
Forbidden | Forbidden | Required
Forbidden | Forbidden | Required in 4.12 and later releases
Forbidden | Forbidden | Required
Forbidden | Forbidden | Required in 4.12 and later releases

(*) Auto assignment of the machine network CIDR happens if there is only a single host network. Otherwise you need to specify it explicitly.

11.1.5. Air-gapped environments

The workflow for deploying a cluster without Internet access has some prerequisites which are out of scope of this document. You may consult the Zero Touch Provisioning the hard way Git repository for some insights.

11.2. VIP DHCP allocation

The VIP DHCP allocation is a feature allowing users to skip the requirement of manually providing virtual IPs for API and Ingress by leveraging the ability of a service to automatically assign those IP addresses from the DHCP server.

If you enable the feature, instead of using api_vips and ingress_vips from the cluster configuration, the service will send a lease allocation request and based on the reply it will use VIPs accordingly. The service will allocate the IP addresses from the Machine Network.

Please note this is not an OpenShift Container Platform feature and it has been implemented in the Assisted Service to make the configuration easier.

VIP DHCP allocation is currently limited to the OpenShift Container Platform SDN network type. SDN is not supported from OpenShift Container Platform version 4.15 and later. Therefore, support for VIP DHCP allocation is also ending from OpenShift Container Platform 4.15 and later.

11.2.1. Example payload to enable autoallocation

11.2.2. Example payload to disable autoallocation

11.3. Additional resources

  • Bare metal IPI documentation provides additional explanation of the syntax for the VIP addresses.

11.4. Understanding differences between user- and cluster-managed networking

User managed networking is a feature in the Assisted Installer that allows customers with non-standard network topologies to deploy OpenShift Container Platform clusters. Examples include:

  • Customers with an external load balancer who do not want to use keepalived and VRRP for handling VIP addresses.
  • Deployments with cluster nodes distributed across many distinct L2 network segments.

11.4.1. Validations

There are various network validations happening in the Assisted Installer before it allows the installation to start. When you enable User Managed Networking, the following validations change:

  • L3 connectivity check (ICMP) is performed instead of L2 check (ARP)

11.5. Static network configuration

You may use static network configurations when generating or updating the discovery ISO.

11.5.1. Prerequisites

  • You are familiar with NMState .

11.5.2. NMState configuration

The NMState file in YAML format specifies the desired network configuration for the host. It has the logical names of the interfaces that will be replaced with the actual name of the interface at discovery time.

11.5.2.1. Example of NMState configuration

11.5.3. MAC interface mapping

MAC interface map is an attribute that maps logical interfaces defined in the NMState configuration with the actual interfaces present on the host.

The mapping should always use physical interfaces present on the host. For example, when the NMState configuration defines a bond or VLAN, the mapping should only contain an entry for parent interfaces.

11.5.3.1. Example of MAC interface mapping

11.5.4. Additional NMState configuration examples

The examples below are only meant to show a partial configuration. They are not meant to be used as-is, and you should always adjust to the environment where they will be used. If used incorrectly, they may leave your machines with no network connectivity.

11.5.4.1. Tagged VLAN

11.5.4.2. Network bond

11.6. Applying a static network configuration with the API

You can apply a static network configuration using the Assisted Installer API.

  • You have created an infrastructure environment using the API or have created a cluster using the web console.
  • You have YAML files with a static network configuration available as server-a.yaml and server-b.yaml .

Create a temporary file /tmp/request-body.txt with the API request:
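A sketch of one way to build the request body with jq; the MAC addresses and logical interface name are placeholders, and the field names follow the infra-env-update-params model:

    jq --null-input \
      --arg nmstate_a "$(cat server-a.yaml)" \
      --arg nmstate_b "$(cat server-b.yaml)" \
      '{
        "static_network_config": [
          { "network_yaml": $nmstate_a,
            "mac_interface_map": [ { "mac_address": "<mac_a>", "logical_nic_name": "eth0" } ] },
          { "network_yaml": $nmstate_b,
            "mac_interface_map": [ { "mac_address": "<mac_b>", "logical_nic_name": "eth0" } ] }
        ]
      }' > /tmp/request-body.txt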

Send the request to the Assisted Service API endpoint:
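For example, a sketch that patches the infrastructure environment with the file created above:

    curl -s -X PATCH "https://api.openshift.com/api/assisted-install/v2/infra-envs/${INFRA_ENV_ID}" \
      -H "Authorization: Bearer ${API_TOKEN}" \
      -H "Content-Type: application/json" \
      -d @/tmp/request-body.txt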

11.7. Additional resources

  • Applying a static network configuration with the web console

11.8. Converting to dual-stack networking

Dual-stack IPv4/IPv6 configuration allows deployment of a cluster with pods residing in both IPv4 and IPv6 subnets.

11.8.1. Prerequisites

  • You are familiar with OVN-K8s documentation

11.8.2. Example payload for Single Node OpenShift

11.8.3. Example payload for an OpenShift Container Platform cluster consisting of many nodes

11.8.4. Limitations

The api_vips IP address and ingress_vips IP address settings must be of the primary IP address family when using dual-stack networking, which must be IPv4 addresses. Currently, Red Hat does not support dual-stack VIPs or dual-stack networking with IPv6 as the primary IP address family. Red Hat supports dual-stack networking with IPv4 as the primary IP address family and IPv6 as the secondary IP address family. Therefore, you must place the IPv4 entries before the IPv6 entries when entering the IP address values.

11.9. Additional resources

  • Understanding OpenShift networking
  • OpenShift SDN - CNI network provider
  • OVN-Kubernetes - CNI network provider
  • Dual-stack Service configuration scenarios
  • Installing on bare metal OCP .
  • Cluster Network Operator configuration .

Chapter 12. Expanding the cluster

You can expand a cluster installed with the Assisted Installer by adding hosts using the user interface or the API.

  • API connectivity failure when adding nodes to a cluster
  • Configuring multi-architecture compute machines on an OpenShift cluster

12.1. Checking for multi-architecture support

You must check that your cluster can support multiple architectures before you add a node with a different architecture.

  • Log in to the cluster using the CLI.

Check that your cluster uses the architecture payload by running the following command:
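For example, a sketch that inspects the release payload metadata while logged in to the cluster:

    oc adm release info -o jsonpath='{.metadata.metadata}'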

Verification

If you see the following output, your cluster supports multiple architectures:

12.2. Installing a multi-architecture cluster

A cluster with an x86_64 control plane can support worker nodes that have two different CPU architectures. Mixed-architecture clusters combine the strengths of each architecture and support a variety of workloads.

For example, you can add arm64 , IBM Power , or IBM zSystems worker nodes to an existing OpenShift Container Platform cluster with an x86_64 control plane.

The main steps of the installation are as follows:

  • Create and register a multi-architecture cluster.
  • Create an x86_64 infrastructure environment, download the ISO discovery image for x86_64 , and add the control plane. The control plane must have the x86_64 architecture.
  • Create an arm64 , IBM Power , or IBM zSystems infrastructure environment, download the ISO discovery images for arm64 , IBM Power or IBM zSystems , and add the worker nodes.

Supported platforms

The table below lists the platforms that support a mixed-architecture cluster for each OpenShift Container Platform version. Use the appropriate platforms for the version you are installing.

OpenShift Container Platform version | Supported platforms | Day 1 control plane architecture | Day 2 node architecture

4.12.0

4.13.0

4.14.0

Technology Preview (TP) features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope .

  • Start the procedure for installing OpenShift Container Platform using the API. For details, see Installing with the Assisted Installer API in the Additional Resources section.

When you reach the "Registering a new cluster" step of the installation, register the cluster as a multi-architecture cluster:

When you reach the "Registering a new infrastructure environment" step of the installation, set cpu_architecture to x86_64 :

When you reach the "Adding hosts" step of the installation, set host_role to master :

For more information, see Assigning Roles to Hosts in Additional Resources .

  • Download the discovery image for the x86_64 architecture.
  • Boot the x86_64 architecture hosts using the generated discovery image.
  • Start the installation and wait for the cluster to be fully installed.

Repeat the "Registering a new infrastructure environment" step of the installation. This time, set cpu_architecture to one of the following: ppc64le (for IBM Power), s390x (for IBM Z), or arm64 . For example:

Repeat the "Adding hosts" step of the installation. This time, set host_role to worker :

For more details, see Assigning Roles to Hosts in Additional Resources .

  • Download the discovery image for the arm64 , ppc64le or s390x architecture.
  • Boot the architecture hosts using the generated discovery image.

View the arm64 , ppc64le or s390x worker nodes in the cluster by running the following command:
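For example:

    oc get nodes -o wide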

12.3. Adding hosts with the web console

You can add hosts to clusters that were created using the Assisted Installer .

Adding hosts to Assisted Installer clusters is only supported for clusters running OpenShift Container Platform version 4.11 and up.

  • Log in to OpenShift Cluster Manager and click the cluster that you want to expand.
  • Click Add hosts and download the discovery ISO for the new host, adding an SSH public key and configuring cluster-wide proxy settings as needed.
  • Optional: Modify ignition files as needed.
  • Boot the target host using the discovery ISO, and wait for the host to be discovered in the console.
  • Select the host role. It can be either a worker or a control plane host.
  • Start the installation.

As the installation proceeds, the installation generates pending certificate signing requests (CSRs) for the host. When prompted, approve the pending CSRs to complete the installation.

When the host is successfully installed, it is listed as a host in the cluster web console.

New hosts will be encrypted using the same method as the original cluster.

12.4. Adding hosts with the API

You can add hosts to clusters using the Assisted Installer REST API.

  • Install the OpenShift Cluster Manager CLI ( ocm ).
  • Log in to OpenShift Cluster Manager as a user with cluster creation privileges.
  • Ensure that all the required DNS records exist for the cluster that you want to expand.
  • Authenticate against the Assisted Installer REST API and generate an API token for your session. The generated token is valid for 15 minutes only.

Set the $API_URL variable by running the following command:

Import the cluster by running the following commands:

Set the $CLUSTER_ID variable. Log in to the cluster and run the following command:

Set the $CLUSTER_REQUEST variable that is used to import the cluster:

Import the cluster and set the $CLUSTER_ID variable. Run the following command:

Generate the InfraEnv resource for the cluster and set the $INFRA_ENV_ID variable by running the following commands:

  • Download the pull secret file from Red Hat OpenShift Cluster Manager at console.redhat.com .

Set the $INFRA_ENV_REQUEST variable:

Post the $INFRA_ENV_REQUEST to the /v2/infra-envs API and set the $INFRA_ENV_ID variable:

Get the URL of the discovery ISO for the cluster host by running the following command:

Download the ISO:

  • Boot the new worker host from the downloaded rhcos-live-minimal.iso .

Get the list of hosts in the cluster that are not installed. Keep running the following command until the new host shows up:

Set the $HOST_ID variable for the new host, for example:

Check that the host is ready to install by running the following command:

Ensure that you copy the entire command including the complete jq expression.

When the previous command shows that the host is ready, start the installation using the /v2/infra-envs/{infra_env_id}/hosts/{host_id}/actions/install API by running the following command:
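For example, a sketch of the install action call using the variables set earlier in this procedure:

    curl -s -X POST "${API_URL}/api/assisted-install/v2/infra-envs/${INFRA_ENV_ID}/hosts/${HOST_ID}/actions/install" \
      -H "Authorization: Bearer ${API_TOKEN}"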

As the installation proceeds, the installation generates pending certificate signing requests (CSRs) for the host.

You must approve the CSRs to complete the installation.

Keep running the following API call to monitor the cluster installation:

Optional: Run the following command to see all the events for the cluster:

  • Log in to the cluster and approve the pending CSRs to complete the installation.

Check that the new host was successfully added to the cluster with a status of Ready :

12.5. Installing a primary control plane node on a healthy cluster

This procedure describes how to install a primary control plane node on a healthy OpenShift Container Platform cluster.

If the cluster is unhealthy, additional operations are required before it can be managed. See Additional Resources for more information.

  • You have installed a healthy cluster with a minimum of three nodes.
  • You have assigned role: master to a single node.

Retrieve pending CertificateSigningRequests (CSRs):

Approve pending CSRs:
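For example, one common approach is to approve everything that is still pending:

    oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' \
      | xargs --no-run-if-empty oc adm certificate approve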

Confirm the primary node is in Ready status:

The etcd-operator requires a Machine Custom Resource (CR) referencing the new node when the cluster runs with a functional Machine API.

Link the Machine CR with BareMetalHost and Node :

Create the BareMetalHost CR with a unique .metadata.name value:

Apply the BareMetalHost CR:

Create the Machine CR using the unique .machine.name value:

Apply the Machine CR:

Link BareMetalHost , Machine , and Node using the link-machine-and-node.sh script:

Confirm etcd members:

Confirm the etcd-operator configuration applies to all nodes:

Confirm etcd-operator health:

Confirm node health:

Confirm the ClusterOperators health:

Confirm the ClusterVersion :

Remove the old control plane node:

Delete the BareMetalHost CR:

Confirm the Machine is unhealthy:

Delete the Machine CR:

Confirm removal of the Node CR:

Check etcd-operator logs to confirm status of the etcd cluster:

Remove the physical machine to allow etcd-operator to reconcile the cluster members:

  • Installing a primary control plane node on an unhealthy cluster

12.6. Installing a primary control plane node on an unhealthy cluster

This procedure describes how to install a primary control plane node on an unhealthy OpenShift Container Platform cluster.

  • You have created a control plane.

Confirm initial state of the cluster:

Confirm the etcd-operator detects the cluster as unhealthy:

Confirm the etcdctl members:

Confirm that etcdctl reports an unhealthy member of the cluster:

Remove the unhealthy control plane by deleting the Machine Custom Resource:

The Machine and Node Custom Resources (CRs) will not be deleted if the unhealthy cluster cannot run successfully.

Confirm that etcd-operator has not removed the unhealthy machine:

Remove the unhealthy etcdctl member manually:

Remove the unhealthy cluster by deleting the etcdctl member Custom Resource:

Confirm members of etcdctl by running the following command:

Review and approve Certificate Signing Requests

Review the Certificate Signing Requests (CSRs):

Approve all pending CSRs:

Confirm ready status of the control plane node:

Validate the Machine , Node and BareMetalHost Custom Resources.

The etcd-operator requires Machine CRs to be present if the cluster is running with the functional Machine API. Machine CRs are displayed during the Running phase when present.

Create Machine Custom Resource linked with BareMetalHost and Node .

Make sure there is a Machine CR referencing the newly added node.

Boot-it-yourself will not create BareMetalHost and Machine CRs, so you must create them. Failure to create the BareMetalHost and Machine CRs will generate errors when running etcd-operator .

Add BareMetalHost Custom Resource:

Add Machine Custom Resource:

Link BareMetalHost , Machine , and Node by running the link-machine-and-node.sh script:

Confirm the etcd operator has configured all nodes:

Confirm health of etcdctl :

Confirm the health of the nodes:

Confirm the health of the ClusterOperators :

12.7. Additional resources

  • Installing a primary control plane node on a healthy cluster
  • Authenticating with the REST API

Chapter 13. Optional: Installing on Nutanix

If you install OpenShift Container Platform on Nutanix, the Assisted Installer can integrate the OpenShift Container Platform cluster with the Nutanix platform, which exposes the Machine API to Nutanix and enables autoscaling and dynamically provisioning storage containers with the Nutanix Container Storage Interface (CSI).

To deploy an OpenShift Container Platform cluster and maintain its daily operation, you need access to a Nutanix account with the necessary environment requirements. For details, see Environment requirements .

13.1. Adding hosts on Nutanix with the UI

To add hosts on Nutanix with the user interface (UI), generate the discovery image ISO from the Assisted Installer. Use the minimal discovery image ISO. This is the default setting. The image includes only what is required to boot a host with networking. The majority of the content is downloaded upon boot. The ISO image is about 100MB in size.

After this is complete, you must create an image for the Nutanix platform and create the Nutanix virtual machines.

  • You have created a cluster profile in the Assisted Installer UI.
  • You have a Nutanix cluster environment set up, and made a note of the cluster name and subnet name.
  • In Cluster details , select Nutanix from the Integrate with external partner platforms dropdown list. The Include custom manifest checkbox is optional.
  • In Host discovery, click the Add hosts button.

Optional: Add an SSH public key so that you can connect to the Nutanix VMs as the core user. Having a login to the cluster hosts can provide you with debugging information during the installation.

Select the desired provisioning type.

Minimal image file: Provision with virtual media downloads a smaller image that will fetch the data needed to boot.

In Networking , select Cluster-managed networking . Nutanix does not support User-managed networking .

  • Optional: Configure the discovery image if you want to boot it with an ignition file. See Configuring the discovery image for additional details.
  • Click Generate Discovery ISO .
  • Copy the Discovery ISO URL .
  • In the Nutanix Prism UI, follow the directions to upload the discovery image from the Assisted Installer .

In the Nutanix Prism UI, create the control plane (master) VMs through Prism Central .

  • Enter the Name . For example, control-plane or master .
  • Enter the Number of VMs . This should be 3 for the control plane.
  • Ensure the remaining settings meet the minimum requirements for control plane hosts.

In the Nutanix Prism UI, create the worker VMs through Prism Central .

  • Enter the Name . For example, worker .
  • Enter the Number of VMs . You should create at least 2 worker nodes.
  • Ensure the remaining settings meet the minimum requirements for worker hosts.
  • Return to the Assisted Installer user interface and wait until the Assisted Installer discovers the hosts and each of them have a Ready status.
  • Continue with the installation procedure.

13.2. Adding hosts on Nutanix with the API

To add hosts on Nutanix with the API, generate the discovery image ISO from the Assisted Installer. Use the minimal discovery image ISO. This is the default setting. The image includes only what is required to boot a host with networking. The majority of the content is downloaded upon boot. The ISO image is about 100MB in size.

Once this is complete, you must create an image for the Nutanix platform and create the Nutanix virtual machines.

  • You have set up the Assisted Installer API authentication.
  • You have created an Assisted Installer cluster profile.
  • You have created an Assisted Installer infrastructure environment.
  • You have completed the Assisted Installer cluster configuration.
  • Configure the discovery image if you want it to boot with an ignition file.

Create a Nutanix cluster configuration file to hold the environment variables:
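For example, a sketch using an illustrative file name; later steps append export statements to this file:

$ touch ~/nutanix-cluster-env.sh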

If you have to start a new terminal session, you can reload the environment variables easily. For example:
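Assuming the illustrative file name from the previous step:

$ source ~/nutanix-cluster-env.sh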

Assign the Nutanix cluster’s name to the NTX_CLUSTER_NAME environment variable in the configuration file:
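A sketch that appends the variable to the illustrative configuration file and reloads it:

$ echo "export NTX_CLUSTER_NAME=<cluster_name>" >> ~/nutanix-cluster-env.sh
$ source ~/nutanix-cluster-env.sh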

Replace <cluster_name> with the name of the Nutanix cluster.

Assign the Nutanix cluster’s subnet name to the NTX_SUBNET_NAME environment variable in the configuration file:

Replace <subnet_name> with the name of the Nutanix cluster’s subnet.

Create the Nutanix image configuration file:

Replace <image_url> with the image URL downloaded from the previous step.

Create the Nutanix image:

Replace <user> with the Nutanix user name. Replace '<password>' with the Nutanix password. Replace <domain-or-ip> with the domain name or IP address of the Nutanix platform. Replace <port> with the port for the Nutanix server. The port defaults to 9440 .

Assign the returned UUID to the NTX_IMAGE_UUID environment variable in the configuration file:

Get the Nutanix cluster UUID:

Replace <user> with the Nutanix user name. Replace '<password>' with the Nutanix password. Replace <domain-or-ip> with the domain name or IP address of the Nutanix platform. Replace <port> with the port for the Nutanix server. The port defaults to 9440 . Replace <nutanix_cluster_name> with the name of the Nutanix cluster.

Assign the returned Nutanix cluster UUID to the NTX_CLUSTER_UUID environment variable in the configuration file:

Replace <uuid> with the returned UUID of the Nutanix cluster.

Get the Nutanix cluster’s subnet UUID:

Replace <user> with the Nutanix user name. Replace '<password>' with the Nutanix password. Replace <domain-or-ip> with the domain name or IP address of the Nutanix platform. Replace <port> with the port for the Nutanix server. The port defaults to 9440 . Replace <subnet_name> with the name of the cluster’s subnet.

Assign the returned Nutanix subnet UUID to the NTX_SUBNET_UUID environment variable in the configuration file:

Replace <uuid> with the returned UUID of the cluster subnet.

Ensure the Nutanix environment variables are set:
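For example, reload the illustrative configuration file and list the variables:

$ source ~/nutanix-cluster-env.sh
$ env | grep NTX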

Create a VM configuration file for each Nutanix host. Create three control plane (master) VMs and at least two worker VMs. For example:

Replace <host_name> with the name of the host.

Boot each Nutanix virtual machine:

Replace <user> with the Nutanix user name. Replace '<password>' with the Nutanix password. Replace <domain-or-ip> with the domain name or IP address of the Nutanix platform. Replace <port> with the port for the Nutanix server. The port defaults to 9440 . Replace <vm_config_file_name> with the name of the VM configuration file.

Assign the returned VM UUID to a unique environment variable in the configuration file:

Replace <uuid> with the returned UUID of the VM.

The environment variable must have a unique name for each VM.

Wait until the Assisted Installer has discovered each VM and they have passed validation.

Modify the cluster definition to enable integration with Nutanix:

13.3. Nutanix postinstallation configuration

Follow the steps below to complete and validate the OpenShift Container Platform integration with the Nutanix cloud provider.

  • The Assisted Installer has finished installing the cluster successfully.
  • The cluster is connected to console.redhat.com .
  • You have access to the Red Hat OpenShift Container Platform command line interface.

13.3.1. Updating the Nutanix configuration settings

After installing OpenShift Container Platform on the Nutanix platform using the Assisted Installer, you must update the following Nutanix configuration settings manually:

  • <prismcentral_username> : The Nutanix Prism Central username.
  • <prismcentral_password> : The Nutanix Prism Central password.
  • <prismcentral_address> : The Nutanix Prism Central address.
  • <prismcentral_port> : The Nutanix Prism Central port.
  • <prismelement_username> : The Nutanix Prism Element username.
  • <prismelement_password> : The Nutanix Prism Element password.
  • <prismelement_address> : The Nutanix Prism Element address.
  • <prismelement_port> : The Nutanix Prism Element port.
  • <prismelement_clustername> : The Nutanix Prism Element cluster name.
  • <nutanix_storage_container> : The Nutanix Prism storage container.

In the OpenShift Container Platform command line interface, update the Nutanix cluster configuration settings:

For additional details, see Creating a machine set on Nutanix .

Create the Nutanix secret:

When installing OpenShift Container Platform version 4.13 or later, update the Nutanix cloud provider configuration:

Get the Nutanix cloud provider configuration YAML file:

Create a backup of the configuration file:

Open the configuration YAML file:

Edit the configuration YAML file as follows:

Apply the configuration updates:

13.3.2. Creating the Nutanix CSI Operator group

Create an Operator group for the Nutanix CSI Operator.

For a description of operator groups and related concepts, see Common Operator Framework Terms in Additional Resources .

Open the Nutanix CSI Operator Group YAML file:

Edit the YAML file as follows:

Create the Operator Group:

13.3.3. Installing the Nutanix CSI Operator

The Nutanix Container Storage Interface (CSI) Operator for Kubernetes deploys and manages the Nutanix CSI Driver.

For instructions on performing this step through the OpenShift Container Platform web console, see the Installing the Operator section of the Nutanix CSI Operator document in Additional Resources .

Get the parameter values for the Nutanix CSI Operator YAML file:

Check that the Nutanix CSI Operator exists:
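For example, by searching the package manifests in the openshift-marketplace namespace (the grep pattern is illustrative):

$ oc get packagemanifests -n openshift-marketplace | grep -i nutanix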

Assign the default channel for the Operator to a BASH variable:

Assign the starting cluster service version (CSV) for the Operator to a BASH variable:

Assign the catalog source for the subscription to a BASH variable:

Assign the Nutanix CSI Operator source namespace to a BASH variable:

Create the Nutanix CSI Operator YAML file using the BASH variables:

Create the CSI Nutanix Operator:

Run the following command until the Operator subscription state changes to AtLatestKnown . This indicates that the Operator subscription has been created, and may take some time.

13.3.4. Deploying the Nutanix CSI storage driver

The Nutanix Container Storage Interface (CSI) Driver for Kubernetes provides scalable and persistent storage for stateful applications.

For instructions on performing this step through the OpenShift Container Platform web console, see the Installing the CSI Driver using the Operator section of the Nutanix CSI Operator document in Additional Resources .

Create a NutanixCsiStorage resource to deploy the driver:

Create a Nutanix secret YAML file for the CSI storage driver:

13.3.5. Validating the postinstallation configurations

Run the following steps to validate the configuration.

Verify that you can create a storage class:

Verify that you can create the Nutanix persistent volume claim (PVC):

Create the persistent volume claim (PVC):

Validate that the persistent volume claim (PVC) status is Bound:

  • Creating a machine set on Nutanix .
  • Nutanix CSI Operator
  • Storage Management
  • Common Operator Framework Terms

Chapter 14. Optional: Installing on vSphere

The Assisted Installer integrates the OpenShift Container Platform cluster with the vSphere platform, which exposes the Machine API to vSphere and enables autoscaling.

14.1. Adding hosts on vSphere

You can add hosts to the Assisted Installer cluster using the online vSphere client or the govc vSphere CLI tool. The following procedure demonstrates adding hosts with the govc CLI tool. To use the online vSphere Client, refer to the documentation for vSphere.

To add hosts on vSphere with the vSphere govc CLI, generate the discovery image ISO from the Assisted Installer. The minimal discovery image ISO is the default setting. This image includes only what is required to boot a host with networking. The majority of the content is downloaded upon boot. The ISO image is about 100MB in size.

After this is complete, you must create an image for the vSphere platform and create the vSphere virtual machines.

  • You are using vSphere 7.0.2 or higher.
  • You have the vSphere govc CLI tool installed and configured.
  • You have set disk.enableUUID to TRUE in vSphere.
  • You have created a cluster in the Assisted Installer web console, or
  • You have created an Assisted Installer cluster profile and infrastructure environment with the API.
  • You have exported your infrastructure environment ID in your shell as $INFRA_ENV_ID .
  • In Cluster details , select vSphere from the Integrate with external partner platforms dropdown list. The Include custom manifest checkbox is optional.
  • In Host discovery , click the Add hosts button and select the provisioning type.

Add an SSH public key so that you can connect to the vSphere VMs as the core user. Having a login to the cluster hosts can provide you with debugging information during the installation.

Select the desired discovery image ISO.

In Networking , select Cluster-managed networking or User-managed networking :

  • Optional: If the cluster hosts are in a network with a re-encrypting man-in-the-middle (MITM) proxy or the cluster needs to trust certificates for other purposes such as container image registries, select Configure cluster-wide trusted certificates and add the additional certificates.

Download the discovery ISO:
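For example (the output file name is illustrative):

$ wget -O vsphere-discovery-image.iso '<discovery_url>'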

Replace <discovery_url> with the Discovery ISO URL from the preceding step.

On the command line, power down and destroy any preexisting virtual machines:

Replace <datacenter> with the name of the datacenter. Replace <folder_name> with the name of the VM inventory folder.

Remove preexisting ISO images from the data store, if there are any:

Replace <iso_datastore> with the name of the data store. Replace image with the name of the ISO image.

Upload the Assisted Installer discovery ISO:

Replace <iso_datastore> with the name of the data store.

All nodes in the cluster must boot from the discovery image.

Boot three control plane (master) nodes:

See vm.create for details.

The foregoing example illustrates the minimum required resources for control plane nodes.

Boot at least two worker nodes:

The foregoing example illustrates the minimum required resources for worker nodes.

Ensure the VMs are running:

After 2 minutes, shut down the VMs:

Set the disk.enableUUID setting to TRUE :
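A sketch using govc for a single VM; repeat for every node, substituting the placeholders:

$ govc vm.change -vm "/<datacenter>/vm/<folder_name>/<vm_name>" -e disk.enableUUID=TRUE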

You must set disk.enableUUID to TRUE on all of the nodes to enable autoscaling with vSphere.

Restart the VMs:

  • Select roles if needed.
  • In Networking , uncheck Allocate IPs via DHCP server .
  • Set the API VIP address.
  • Set the Ingress VIP address.

14.2. vSphere postinstallation configuration using the CLI

After installing an OpenShift Container Platform cluster using the Assisted Installer on vSphere with the platform integration feature enabled, you must update the following vSphere configuration settings manually:

  • vCenter username
  • vCenter password
  • vCenter address
  • vCenter cluster

Generate a base64-encoded username and password for vCenter:
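For example, on a Linux host:

$ echo -n "<vcenter_username>" | base64 -w0
$ echo -n "<vcenter_password>" | base64 -w0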

Replace <vcenter_username> with your vCenter username.

Replace <vcenter_password> with your vCenter password.

Backup the vSphere credentials:

Edit the vSphere credentials:

Replace <vcenter_address> with the vCenter address. Replace <vcenter_username_encoded> with the base64-encoded version of your vSphere username. Replace <vcenter_password_encoded> with the base64-encoded version of your vSphere password.

Replace the vSphere credentials:

Redeploy the kube-controller-manager pods:

Backup the vSphere cloud provider configuration:

Edit the cloud provider configuration:

Replace <vcenter_address> with the vCenter address. Replace <datacenter> with the name of the data center. Replace <datastore> with the name of the data store. Replace <folder> with the folder containing the cluster VMs.

Apply the cloud provider configuration:

Taint the nodes with the uninitialized taint:

Follow steps 9 through 12 if you are installing OpenShift Container Platform 4.13 or later.

Identify the nodes to taint:

Run the following command for each node:
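A sketch of the taint command, assuming the standard cloud-provider uninitialized taint key:

$ oc adm taint node <node_name> node.cloudprovider.kubernetes.io/uninitialized=true:NoSchedule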

Replace <node_name> with the name of the node.

Back up the infrastructures configuration:

Edit the infrastructures configuration:

Replace <vcenter_address> with your vCenter address. Replace <datacenter> with the name of your vCenter data center. Replace <datastore> with the name of your vCenter data store. Replace <folder> with the folder containing the cluster VMs. Replace <vcenter_cluster> with the vSphere vCenter cluster where OpenShift Container Platform is installed.

Apply the infrastructures configuration:

14.3. vSphere postinstallation configuration using the web console

  • Default data store
  • Virtual machine folder
  • In the Administrator perspective, navigate to Home → Overview .
  • Under Status , click vSphere connection to open the vSphere connection configuration wizard.
  • In the vCenter field, enter the network address of the vSphere vCenter server. This can be either a domain name or an IP address. It appears in the vSphere web client URL; for example https://[your_vCenter_address]/ui .

In the vCenter cluster field, enter the name of the vSphere vCenter cluster where OpenShift Container Platform is installed.

This step is mandatory if you installed OpenShift Container Platform 4.13 or later.

  • In the Username field, enter your vSphere vCenter username.

In the Password field, enter your vSphere vCenter password.

The system stores the username and password in the vsphere-creds secret in the kube-system namespace of the cluster. An incorrect vCenter username or password makes the cluster nodes unschedulable.

  • In the Datacenter field, enter the name of the vSphere data center that contains the virtual machines used to host the cluster; for example, SDDC-Datacenter .

In the Default data store field, enter the vSphere data store that stores the persistent data volumes; for example, /SDDC-Datacenter/datastore/datastorename .

Updating the vSphere data center or default data store after the configuration has been saved detaches any active vSphere PersistentVolumes .

  • In the Virtual Machine Folder field, enter the data center folder that contains the virtual machine of the cluster; for example, /SDDC-Datacenter/vm/ci-ln-hjg4vg2-c61657-t2gzr . For the OpenShift Container Platform installation to succeed, all virtual machines comprising the cluster must be located in a single data center folder.
  • Click Save Configuration . This updates the cloud-provider-config file in the openshift-config namespace, and starts the configuration process.
  • Reopen the vSphere connection configuration wizard and expand the Monitored operators panel. Check that the status of the operators is either Progressing or Healthy .

The connection configuration process updates operator statuses and control plane nodes. It takes approximately an hour to complete. During the configuration process, the nodes will reboot. Previously bound PersistentVolumeClaims objects might become disconnected.

Follow the steps below to monitor the configuration process.

Check that the configuration process completed successfully:

  • In the OpenShift Container Platform Administrator perspective, navigate to Home → Overview .
  • Under Status click Operators . Wait for all operator statuses to change from Progressing to All succeeded . A Failed status indicates that the configuration failed.
  • Under Status , click Control Plane . Wait for the response rate of all Control Plane components to return to 100%. A Failed control plane component indicates that the configuration failed.

A failure indicates that at least one of the connection settings is incorrect. Change the settings in the vSphere connection configuration wizard and save the configuration again.

Check that you are able to bind PersistentVolumeClaims objects by performing the following steps:

Create a StorageClass object using the following YAML:
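A minimal sketch; the class name vsphere-sc is illustrative and the provisioner assumes the vSphere CSI driver is in use:

$ cat <<EOF | oc apply -f -
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: vsphere-sc    # illustrative name
provisioner: csi.vsphere.vmware.com
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
EOF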

Create a PersistentVolumeClaims object using the following YAML:
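A matching sketch; the claim name test-pvc is illustrative and storageClassName must match the class created above:

$ cat <<EOF | oc apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-pvc    # illustrative name
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: vsphere-sc
EOF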

For instructions, see Dynamic provisioning in the OpenShift Container Platform documentation. To troubleshoot a PersistentVolumeClaims object, navigate to Storage → PersistentVolumeClaims in the Administrator perspective of the OpenShift Container Platform web console.

Chapter 15. Optional: Installing on Oracle Cloud Infrastructure (OCI)

From OpenShift Container Platform 4.14 and later versions, you can use the Assisted Installer to install a cluster on Oracle Cloud Infrastructure by using infrastructure that you provide. Oracle Cloud Infrastructure provides services that can meet your needs for regulatory compliance, performance, and cost-effectiveness. You can access OCI Resource Manager configurations to provision and configure OCI resources.

For OpenShift Container Platform 4.14 and 4.15, the OCI integration is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

This section is a summary of the steps required in the Assisted Installer web console to support the integration with Oracle Cloud Infrastructure. It does not document the steps to be performed in Oracle Cloud Infrastructure, nor does it cover the integration between the two platforms. For a complete and comprehensive procedure, see Using the Assisted Installer to install a cluster on OCI .

15.1. Generating an OCI-compatible discovery ISO image

Generate the discovery ISO image in Assisted Installer by completing the required steps. You must upload the image to the Oracle Cloud Infrastructure before you install OpenShift Container Platform on Oracle Cloud Infrastructure.

  • You created a child compartment and an object storage bucket on Oracle Cloud Infrastructure. See Creating OCI resources and services in the OpenShift Container Platform documentation.
  • You meet the requirements necessary for installing a cluster. For details, see Prerequisites .
  • On the Cluster type page, click the Datacenter tab.
  • In the Assisted Installer section, click Create cluster .

On the Cluster Details page, complete the following fields:

  • In the Cluster name field, specify the name of your cluster, such as ocidemo .
  • In the Base domain field, specify the base domain of the cluster, such as splat-oci.devcluster.openshift.com . Take the base domain from OCI after creating a compartment and a zone.
  • In the OpenShift version field, specify OpenShift 4.15 or a later version.
  • In the CPU architecture field, specify x86_64 or Arm64 .
  • From the Integrate with external partner platforms list, select Oracle Cloud Infrastructure . The Include custom manifests checkbox is automatically selected.
  • On the Operators page, click Next .

On the Host Discovery page, perform the following actions:

  • Click Add host to display a dialog box.
  • For the SSH public key field, upload a public SSH key from your local system. You can generate an SSH key pair with ssh-keygen .
  • Click Generate Discovery ISO to generate the discovery image ISO file.
  • Download the file to your local system. You will then upload the file to the bucket in Oracle Cloud Infrastructure as an Object.

15.2. Assigning node roles and custom manifests

After you provision Oracle Cloud Infrastructure (OCI) resources and upload OpenShift Container Platform custom manifest configuration files to OCI, you must complete the remaining cluster installation steps on the Assisted Installer before you can create an instance on OCI.

  • You created a resource stack on OCI, and the stack includes the custom manifest configuration files and OCI Resource Manager configuration resources. For details, see Downloading manifest files and deployment resources in the OpenShift Container Platform documentation.
  • From the Red Hat Hybrid Cloud Console , go to the Host discovery page.
  • Under the Role column, assign a node role, either Control plane node or Worker , for each targeted hostname. Click Next .
  • Accept the default settings for the Storage and Networking pages.
  • Click Next to go to the Custom manifests page.
  • In the Folder field, select manifests .
  • In the File name field, enter a value such as oci-ccm.yml .
  • In the Content section, click Browse . Select the CCM manifest located in custom_manifest/manifests/oci-ccm.yml .

Click Add another manifest . Repeat the same steps for the following manifests provided by Oracle:

  • CSI driver manifest: custom_manifest/manifests/oci-csi.yml .
  • CCM machine configuration: custom_manifest/openshift/machineconfig-ccm.yml .
  • CSI driver machine configuration: custom_manifest/openshift/machineconfig-csi.yml .
  • Complete the Review and create step to create your OpenShift Container Platform cluster on OCI.
  • Click Install cluster to finalize the cluster installation.

Chapter 16. Troubleshooting

There are cases where the Assisted Installer cannot begin the installation or the cluster fails to install properly. In these events, it is helpful to understand the likely failure modes as well as how to troubleshoot the failure.

16.1. Troubleshooting discovery ISO issues

The Assisted Installer uses an ISO image to run an agent that registers the host to the cluster and performs hardware and network validations before attempting to install OpenShift. You can follow these procedures to troubleshoot problems related to the host discovery.

Once you start the host with the discovery ISO image, the Assisted Installer discovers the host and presents it in the Assisted Service web console. See Configuring the discovery image for additional details.

16.1.1. Verify the discovery agent is running

  • You have created an infrastructure environment by using the API or have created a cluster by using the web console.
  • You booted a host with the Infrastructure Environment discovery ISO and the host failed to register.
  • You have SSH access to the host.
  • You provided an SSH public key in the "Add hosts" dialog before generating the Discovery ISO so that you can SSH into your machine without a password.
  • Verify that your host machine is powered on.
  • If you selected DHCP networking , check that the DHCP server is enabled.
  • If you selected Static IP, bridges and bonds networking, check that your configurations are correct.

Verify that you can access your host machine using SSH, a console such as the BMC, or a virtual machine console:
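For example, as the core user, optionally pointing at the matching private key:

$ ssh -i ~/.ssh/<private_key_file> core@<host_ip>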

You can specify the private key file with the -i parameter if it is not stored in the default directory.

If you cannot connect over SSH, the host either failed during boot or failed to configure the network.

Upon login you should see this message:

Example login (Assisted Installer ISO login message)

Check the agent service logs:
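For example, assuming the discovery agent runs as the agent.service systemd unit (adjust the unit name if it differs on your image):

$ sudo journalctl -u agent.service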

In the following example, the errors indicate there is a network issue:

Example agent service log

If there is an error pulling the agent image, check the proxy settings. Verify that the host is connected to the network. You can use nmcli to get additional information about your network configuration.

16.1.2. Verify the agent can access the assisted-service

  • You verified the discovery agent is running.

Check the agent logs to verify the agent can access the Assisted Service:

The errors in the following example indicate that the agent failed to access the Assisted Service.

Example agent log (the agent fails to access the Assisted Service)

Check the proxy settings you configured for the cluster. If configured, the proxy must allow access to the Assisted Service URL.

16.2. Troubleshooting minimal discovery ISO issues

The minimal ISO image should be used when bandwidth over the virtual media connection is limited. It includes only what is required to boot a host with networking. The majority of the content is downloaded upon boot. The resulting ISO image is about 100MB in size compared to 1GB for the full ISO image.

16.2.1. Troubleshooting minimal ISO boot failure by interrupting the boot process

If your environment requires static network configuration to access the Assisted Installer service, any issues with that configuration might prevent the minimal ISO from booting properly. If the boot screen shows that the host has failed to download the root file system image, the network might not be configured correctly.

You can interrupt the kernel boot early in the bootstrap process, before the root file system image is downloaded. This allows you to access the root console and review the network configurations.

Example rootfs download failure (failed root file system image download)

Add the .spec.kernelArguments stanza to the infraEnv object of the cluster you are deploying:

For details on modifying an infrastructure environment, see Additional Resources .

  • Wait for the related nodes to reboot automatically and for the boot to abort at the initqueue stage, before rootfs is downloaded. You will be redirected to the root console.

Identify and change the incorrect network configurations. Here are some useful diagnostic commands:

View system logs by using journalctl , for example:

View network connection information by using nmcli , as follows:
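For example:

$ nmcli device status
$ nmcli connection show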

Check the configuration files for incorrect network connections, for example:

  • Press control+d to resume the bootstrap process. The server downloads rootfs and completes the process.
  • Reopen the infraEnv object and remove the .spec.kernelArguments stanza.
  • Modifying an infrastructure environment

16.3. Correcting a host’s boot order

Once the installation that runs as part of the Discovery Image completes, the Assisted Installer reboots the host.  The host must boot from its installation disk to continue forming the cluster.  If you have not correctly configured the host’s boot order, it will boot from another disk instead, interrupting the installation.

If the host boots the discovery image again, the Assisted Installer will immediately detect this event and set the host’s status to Installing Pending User Action .  Alternatively, if the Assisted Installer does not detect that the host has booted the correct disk within the allotted time, it will also set this host status.

  • Reboot the host and set its boot order to boot from the installation disk. If you didn’t select an installation disk, the Assisted Installer selected one for you. To view the selected installation disk, click to expand the host’s information in the host inventory, and check which disk has the “Installation disk” role.

16.4. Rectifying partially-successful installations

There are cases where the Assisted Installer declares an installation to be successful even though it encountered errors:

  • If you requested to install OLM operators and one or more failed to install, log into the cluster’s console to remediate the failures.
  • If you requested to install more than two worker nodes and at least one failed to install, but at least two succeeded, add the failed workers to the installed cluster.

16.5. API connectivity failure when adding nodes to a cluster

When you add a node to an existing cluster as part of day 2 operations, the node downloads the ignition configuration file from the day 1 cluster. If the download fails and the node is unable to connect to the cluster, the status of the host in the Host discovery step changes to Insufficient . Clicking this status displays the following error message:

There are a number of possible reasons for the connectivity failure. Here are some recommended actions.

Check the IP address and domain name of the cluster:

  • Click the set the IP or domain used to reach the cluster hyperlink.
  • In the Update cluster hostname window, enter the correct IP address or domain name for the cluster.
  • Check your DNS settings to ensure that the DNS can resolve the domain that you provided.
  • Ensure that port 22624 is open in all firewalls.

Check the agent logs of the host to verify that the agent can access the Assisted Service via SSH:

For more details, see Verify the agent can access the Assisted Service .
