code | docstring | func_name | language | repo | path | url | license |
---|---|---|---|---|---|---|---|
def __init__(
self, cmd: AzCliCommand, client: ContainerServiceClient, raw_parameters: Dict, resource_type: ResourceType
):
"""Internal controller of aks_create.
Break down the all-in-one aks_create function into several relatively independent functions (some of them have
a certain order dependency) that only focus on a specific profile or process a specific piece of logic.
In addition, an overall control function is provided. By calling the aforementioned independent functions one
by one, a complete ManagedCluster object is gradually decorated and finally requests are sent to create a
cluster.
"""
super().__init__(cmd, client)
self.__raw_parameters = raw_parameters
self.resource_type = resource_type
self.init_models()
self.init_context()
self.agentpool_decorator_mode = AgentPoolDecoratorMode.MANAGED_CLUSTER
self.init_agentpool_decorator_context() | Internal controller of aks_create.
Break down the all-in-one aks_create function into several relatively independent functions (some of them have
a certain order dependency) that only focus on a specific profile or process a specific piece of logic.
In addition, an overall control function is provided. By calling the aforementioned independent functions one
by one, a complete ManagedCluster object is gradually decorated and finally requests are sent to create a
cluster. | __init__ | python | Azure/azure-cli | src/azure-cli/azure/cli/command_modules/acs/managed_cluster_decorator.py | https://github.com/Azure/azure-cli/blob/master/src/azure-cli/azure/cli/command_modules/acs/managed_cluster_decorator.py | MIT |
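The docstring above describes the overall control function, but that driver itself is not part of this row. As an illustrative sketch only (the real method name and step order in the decorator may differ), the independent set_up_* steps shown in the following rows would be chained roughly like this to gradually decorate the ManagedCluster before the create request is sent:
def construct_mc_profile_sketch(decorator):
    # hypothetical driver, not the actual implementation
    mc = decorator.init_mc()                     # ManagedCluster with only the required location
    mc = decorator._remove_defaults_in_mc(mc)    # park SDK defaults as an intermediate
    mc = decorator.set_up_agentpool_profile(mc)  # each step focuses on one profile / piece of logic
    mc = decorator.set_up_linux_profile(mc)
    mc = decorator.set_up_network_profile(mc)
    mc = decorator.set_up_addon_profiles(mc)
    mc = decorator._restore_defaults_in_mc(mc)   # put back defaults that nothing overwrote
    return mc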
def init_models(self) -> None:
"""Initialize an AKSManagedClusterModels object to store the models.
:return: None
"""
self.models = AKSManagedClusterModels(self.cmd, self.resource_type) | Initialize an AKSManagedClusterModels object to store the models.
:return: None | init_models | python | Azure/azure-cli | src/azure-cli/azure/cli/command_modules/acs/managed_cluster_decorator.py | https://github.com/Azure/azure-cli/blob/master/src/azure-cli/azure/cli/command_modules/acs/managed_cluster_decorator.py | MIT |
def init_context(self) -> None:
"""Initialize an AKSManagedClusterContext object to store the context in the process of assemble the
ManagedCluster object.
:return: None
"""
self.context = AKSManagedClusterContext(
self.cmd, AKSManagedClusterParamDict(self.__raw_parameters), self.models, DecoratorMode.CREATE
) | Initialize an AKSManagedClusterContext object to store the context in the process of assembling the
ManagedCluster object.
:return: None | init_context | python | Azure/azure-cli | src/azure-cli/azure/cli/command_modules/acs/managed_cluster_decorator.py | https://github.com/Azure/azure-cli/blob/master/src/azure-cli/azure/cli/command_modules/acs/managed_cluster_decorator.py | MIT |
def init_agentpool_decorator_context(self) -> None:
"""Initialize an AKSAgentPoolAddDecorator object to assemble the AgentPool profile.
:return: None
"""
self.agentpool_decorator = AKSAgentPoolAddDecorator(
self.cmd, self.client, self.__raw_parameters, self.resource_type, self.agentpool_decorator_mode
)
self.agentpool_context = self.agentpool_decorator.context
self.context.attach_agentpool_context(self.agentpool_context) | Initialize an AKSAgentPoolAddDecorator object to assemble the AgentPool profile.
:return: None | init_agentpool_decorator_context | python | Azure/azure-cli | src/azure-cli/azure/cli/command_modules/acs/managed_cluster_decorator.py | https://github.com/Azure/azure-cli/blob/master/src/azure-cli/azure/cli/command_modules/acs/managed_cluster_decorator.py | MIT |
def _ensure_mc(self, mc: ManagedCluster) -> None:
"""Internal function to ensure that the incoming `mc` object is valid and the same as the attached
`mc` object in the context.
If the incoming `mc` is not valid or is inconsistent with the `mc` in the context, raise a CLIInternalError.
:return: None
"""
if not isinstance(mc, self.models.ManagedCluster):
raise CLIInternalError(
"Unexpected mc object with type '{}'.".format(type(mc))
)
if self.context.mc != mc:
raise CLIInternalError(
"Inconsistent state detected. The incoming `mc` "
"is not the same as the `mc` in the context."
) | Internal function to ensure that the incoming `mc` object is valid and the same as the attached
`mc` object in the context.
If the incoming `mc` is not valid or is inconsistent with the `mc` in the context, raise a CLIInternalError.
:return: None | _ensure_mc | python | Azure/azure-cli | src/azure-cli/azure/cli/command_modules/acs/managed_cluster_decorator.py | https://github.com/Azure/azure-cli/blob/master/src/azure-cli/azure/cli/command_modules/acs/managed_cluster_decorator.py | MIT |
def _remove_defaults_in_mc(self, mc: ManagedCluster) -> ManagedCluster:
"""Internal function to remove values from properties with default values of the `mc` object.
Removing default values is to prevent getters from mistakenly overwriting user provided values with default
values in the object.
:return: the ManagedCluster object
"""
self._ensure_mc(mc)
defaults_in_mc = {}
for attr_name, attr_value in vars(mc).items():
if not attr_name.startswith("_") and attr_name != "location" and attr_value is not None:
defaults_in_mc[attr_name] = attr_value
setattr(mc, attr_name, None)
self.context.set_intermediate("defaults_in_mc", defaults_in_mc, overwrite_exists=True)
return mc | Internal function to remove values from properties with default values of the `mc` object.
Removing default values is to prevent getters from mistakenly overwriting user provided values with default
values in the object.
:return: the ManagedCluster object | _remove_defaults_in_mc | python | Azure/azure-cli | src/azure-cli/azure/cli/command_modules/acs/managed_cluster_decorator.py | https://github.com/Azure/azure-cli/blob/master/src/azure-cli/azure/cli/command_modules/acs/managed_cluster_decorator.py | MIT |
def _restore_defaults_in_mc(self, mc: ManagedCluster) -> ManagedCluster:
"""Internal function to restore values of properties with default values of the `mc` object.
Restoring default values is to keep the content of the request sent by cli consistent with that before the
refactoring.
:return: the ManagedCluster object
"""
self._ensure_mc(mc)
defaults_in_mc = self.context.get_intermediate("defaults_in_mc", {})
for key, value in defaults_in_mc.items():
if getattr(mc, key, None) is None:
setattr(mc, key, value)
return mc | Internal function to restore values of properties with default values of the `mc` object.
Restoring default values is to keep the content of the request sent by cli consistent with that before the
refactoring.
:return: the ManagedCluster object | _restore_defaults_in_mc | python | Azure/azure-cli | src/azure-cli/azure/cli/command_modules/acs/managed_cluster_decorator.py | https://github.com/Azure/azure-cli/blob/master/src/azure-cli/azure/cli/command_modules/acs/managed_cluster_decorator.py | MIT |
def set_up_workload_identity_profile(self, mc: ManagedCluster) -> ManagedCluster:
"""Set up workload identity for the ManagedCluster object.
:return: the ManagedCluster object
"""
self._ensure_mc(mc)
profile = self.context.get_workload_identity_profile()
if profile:
if mc.security_profile is None:
mc.security_profile = self.models.ManagedClusterSecurityProfile()
mc.security_profile.workload_identity = profile
return mc | Set up workload identity for the ManagedCluster object.
:return: the ManagedCluster object | set_up_workload_identity_profile | python | Azure/azure-cli | src/azure-cli/azure/cli/command_modules/acs/managed_cluster_decorator.py | https://github.com/Azure/azure-cli/blob/master/src/azure-cli/azure/cli/command_modules/acs/managed_cluster_decorator.py | MIT |
def set_up_defender(self, mc: ManagedCluster) -> ManagedCluster:
"""Set up defender for the ManagedCluster object.
:return: the ManagedCluster object
"""
self._ensure_mc(mc)
defender = self.context.get_defender_config()
if defender:
if mc.security_profile is None:
mc.security_profile = self.models.ManagedClusterSecurityProfile()
mc.security_profile.defender = defender
return mc | Set up defender for the ManagedCluster object.
:return: the ManagedCluster object | set_up_defender | python | Azure/azure-cli | src/azure-cli/azure/cli/command_modules/acs/managed_cluster_decorator.py | https://github.com/Azure/azure-cli/blob/master/src/azure-cli/azure/cli/command_modules/acs/managed_cluster_decorator.py | MIT |
def set_up_azure_keyvault_kms(self, mc: ManagedCluster) -> ManagedCluster:
"""Set up security profile azureKeyVaultKms for the ManagedCluster object.
:return: the ManagedCluster object
"""
self._ensure_mc(mc)
if self.context.get_enable_azure_keyvault_kms():
key_id = self.context.get_azure_keyvault_kms_key_id()
if key_id:
if mc.security_profile is None:
mc.security_profile = self.models.ManagedClusterSecurityProfile()
mc.security_profile.azure_key_vault_kms = self.models.AzureKeyVaultKms(
enabled=True,
key_id=key_id,
)
key_vault_network_access = self.context.get_azure_keyvault_kms_key_vault_network_access()
mc.security_profile.azure_key_vault_kms.key_vault_network_access = key_vault_network_access
if key_vault_network_access == CONST_AZURE_KEYVAULT_NETWORK_ACCESS_PRIVATE:
mc.security_profile.azure_key_vault_kms.key_vault_resource_id = (
self.context.get_azure_keyvault_kms_key_vault_resource_id()
)
return mc | Set up security profile azureKeyVaultKms for the ManagedCluster object.
:return: the ManagedCluster object | set_up_azure_keyvault_kms | python | Azure/azure-cli | src/azure-cli/azure/cli/command_modules/acs/managed_cluster_decorator.py | https://github.com/Azure/azure-cli/blob/master/src/azure-cli/azure/cli/command_modules/acs/managed_cluster_decorator.py | MIT |
def set_up_image_cleaner(self, mc: ManagedCluster) -> ManagedCluster:
"""Set up security profile imageCleaner for the ManagedCluster object.
:return: the ManagedCluster object
"""
self._ensure_mc(mc)
interval_hours = self.context.get_image_cleaner_interval_hours()
if self.context.get_enable_image_cleaner():
if mc.security_profile is None:
mc.security_profile = self.models.ManagedClusterSecurityProfile()
if not interval_hours:
# default value for intervalHours - one week
interval_hours = 24 * 7
mc.security_profile.image_cleaner = self.models.ManagedClusterSecurityProfileImageCleaner(
enabled=True,
interval_hours=interval_hours,
)
return mc | Set up security profile imageCleaner for the ManagedCluster object.
:return: the ManagedCluster object | set_up_image_cleaner | python | Azure/azure-cli | src/azure-cli/azure/cli/command_modules/acs/managed_cluster_decorator.py | https://github.com/Azure/azure-cli/blob/master/src/azure-cli/azure/cli/command_modules/acs/managed_cluster_decorator.py | MIT |
def init_mc(self) -> ManagedCluster:
"""Initialize a ManagedCluster object with required parameter location and attach it to internal context.
When location is not assigned, function "get_rg_location" will be called to get the location of the provided
resource group, which internally used ResourceManagementClient to send the request.
:return: the ManagedCluster object
"""
# Initialize a ManagedCluster object with mandatory parameter location.
mc = self.models.ManagedCluster(
location=self.context.get_location(),
)
# attach mc to AKSContext
self.context.attach_mc(mc)
return mc | Initialize a ManagedCluster object with required parameter location and attach it to internal context.
When location is not assigned, function "get_rg_location" will be called to get the location of the provided
resource group, which internally used ResourceManagementClient to send the request.
:return: the ManagedCluster object | init_mc | python | Azure/azure-cli | src/azure-cli/azure/cli/command_modules/acs/managed_cluster_decorator.py | https://github.com/Azure/azure-cli/blob/master/src/azure-cli/azure/cli/command_modules/acs/managed_cluster_decorator.py | MIT |
def set_up_agentpool_profile(self, mc: ManagedCluster) -> ManagedCluster:
"""Set up agent pool profiles for the ManagedCluster object.
:return: the ManagedCluster object
"""
self._ensure_mc(mc)
agentpool_profile = self.agentpool_decorator.construct_agentpool_profile_default()
mc.agent_pool_profiles = [agentpool_profile]
return mc | Set up agent pool profiles for the ManagedCluster object.
:return: the ManagedCluster object | set_up_agentpool_profile | python | Azure/azure-cli | src/azure-cli/azure/cli/command_modules/acs/managed_cluster_decorator.py | https://github.com/Azure/azure-cli/blob/master/src/azure-cli/azure/cli/command_modules/acs/managed_cluster_decorator.py | MIT |
def set_up_mc_properties(self, mc: ManagedCluster) -> ManagedCluster:
"""Set up misc direct properties for the ManagedCluster object.
:return: the ManagedCluster object
"""
self._ensure_mc(mc)
mc.tags = self.context.get_tags()
mc.kubernetes_version = self.context.get_kubernetes_version()
mc.dns_prefix = self.context.get_dns_name_prefix()
mc.disk_encryption_set_id = self.context.get_node_osdisk_diskencryptionset_id()
mc.disable_local_accounts = self.context.get_disable_local_accounts()
mc.enable_rbac = not self.context.get_disable_rbac()
return mc | Set up misc direct properties for the ManagedCluster object.
:return: the ManagedCluster object | set_up_mc_properties | python | Azure/azure-cli | src/azure-cli/azure/cli/command_modules/acs/managed_cluster_decorator.py | https://github.com/Azure/azure-cli/blob/master/src/azure-cli/azure/cli/command_modules/acs/managed_cluster_decorator.py | MIT |
def set_up_linux_profile(self, mc: ManagedCluster) -> ManagedCluster:
"""Set up linux profile for the ManagedCluster object.
Linux profile is just used for SSH access to VMs, so it will be omitted if --no-ssh-key option was specified.
:return: the ManagedCluster object
"""
self._ensure_mc(mc)
ssh_key_value, no_ssh_key = self.context.get_ssh_key_value_and_no_ssh_key()
if not no_ssh_key:
ssh_config = self.models.ContainerServiceSshConfiguration(
public_keys=[
self.models.ContainerServiceSshPublicKey(
key_data=ssh_key_value
)
]
)
linux_profile = self.models.ContainerServiceLinuxProfile(
admin_username=self.context.get_admin_username(), ssh=ssh_config
)
mc.linux_profile = linux_profile
return mc | Set up linux profile for the ManagedCluster object.
Linux profile is just used for SSH access to VMs, so it will be omitted if --no-ssh-key option was specified.
:return: the ManagedCluster object | set_up_linux_profile | python | Azure/azure-cli | src/azure-cli/azure/cli/command_modules/acs/managed_cluster_decorator.py | https://github.com/Azure/azure-cli/blob/master/src/azure-cli/azure/cli/command_modules/acs/managed_cluster_decorator.py | MIT |
def set_up_windows_profile(self, mc: ManagedCluster) -> ManagedCluster:
"""Set up windows profile for the ManagedCluster object.
:return: the ManagedCluster object
"""
self._ensure_mc(mc)
(
windows_admin_username,
windows_admin_password,
) = self.context.get_windows_admin_username_and_password()
if windows_admin_username or windows_admin_password:
# license
windows_license_type = None
if self.context.get_enable_ahub():
windows_license_type = "Windows_Server"
# gmsa
gmsa_profile = None
if self.context.get_enable_windows_gmsa():
gmsa_dns_server, gmsa_root_domain_name = self.context.get_gmsa_dns_server_and_root_domain_name()
gmsa_profile = self.models.WindowsGmsaProfile(
enabled=True,
dns_server=gmsa_dns_server,
root_domain_name=gmsa_root_domain_name,
)
# this would throw an error if windows_admin_username is empty (the user enters an empty
# string after being prompted), since admin_username is a required parameter
windows_profile = self.models.ManagedClusterWindowsProfile(
# [SuppressMessage("Microsoft.Security", "CS002:SecretInNextLine", Justification="variable name")]
admin_username=windows_admin_username,
admin_password=windows_admin_password,
license_type=windows_license_type,
gmsa_profile=gmsa_profile,
)
mc.windows_profile = windows_profile
return mc | Set up windows profile for the ManagedCluster object.
:return: the ManagedCluster object | set_up_windows_profile | python | Azure/azure-cli | src/azure-cli/azure/cli/command_modules/acs/managed_cluster_decorator.py | https://github.com/Azure/azure-cli/blob/master/src/azure-cli/azure/cli/command_modules/acs/managed_cluster_decorator.py | MIT |
def set_up_storage_profile(self, mc: ManagedCluster) -> ManagedCluster:
"""Set up storage profile for the ManagedCluster object.
:return: the ManagedCluster object
"""
self._ensure_mc(mc)
if hasattr(self.models, "ManagedClusterStorageProfile"):
mc.storage_profile = self.context.get_storage_profile()
return mc | Set up storage profile for the ManagedCluster object.
:return: the ManagedCluster object | set_up_storage_profile | python | Azure/azure-cli | src/azure-cli/azure/cli/command_modules/acs/managed_cluster_decorator.py | https://github.com/Azure/azure-cli/blob/master/src/azure-cli/azure/cli/command_modules/acs/managed_cluster_decorator.py | MIT |
def set_up_service_principal_profile(self, mc: ManagedCluster) -> ManagedCluster:
"""Set up service principal profile for the ManagedCluster object.
:return: the ManagedCluster object
"""
self._ensure_mc(mc)
# If the customer explicitly provides a service principal, disable managed identity.
(
service_principal,
client_secret,
) = self.context.get_service_principal_and_client_secret()
enable_managed_identity = self.context.get_enable_managed_identity()
# Skip creating the service principal profile for the cluster if the cluster enables managed identity
# and the customer doesn't explicitly provide a service principal.
if not (
enable_managed_identity and
not service_principal and
not client_secret
):
service_principal_profile = (
self.models.ManagedClusterServicePrincipalProfile(
client_id=service_principal, secret=client_secret
)
)
mc.service_principal_profile = service_principal_profile
return mc | Set up service principal profile for the ManagedCluster object.
:return: the ManagedCluster object | set_up_service_principal_profile | python | Azure/azure-cli | src/azure-cli/azure/cli/command_modules/acs/managed_cluster_decorator.py | https://github.com/Azure/azure-cli/blob/master/src/azure-cli/azure/cli/command_modules/acs/managed_cluster_decorator.py | MIT |
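Worked illustration (values are made up) of the condition in set_up_service_principal_profile above: a service principal profile is attached unless the cluster is managed-identity only, i.e. managed identity is enabled and the caller supplied neither a service principal nor a client secret.
def attaches_sp_profile(enable_managed_identity, service_principal, client_secret):
    # same boolean as the decorator method, extracted here for illustration
    return not (enable_managed_identity and not service_principal and not client_secret)
assert attaches_sp_profile(True, None, None) is False          # pure MSI cluster: profile skipped
assert attaches_sp_profile(True, "app-id", "secret") is True   # SP explicitly provided: profile kept
assert attaches_sp_profile(False, "app-id", "secret") is True  # managed identity disabled: profile kept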
def process_add_role_assignment_for_vnet_subnet(self, mc: ManagedCluster) -> None:
"""Add role assignment for vent subnet.
This function will store an intermediate need_post_creation_vnet_permission_granting.
The function "subnet_role_assignment_exists" will be called to verify if the role assignment already exists for
the subnet, which internally used AuthorizationManagementClient to send the request.
The wrapper function "get_identity_by_msi_client" will be called by "get_user_assigned_identity_client_id" to
get the identity object, which internally use ManagedServiceIdentityClient to send the request.
The function "add_role_assignment" will be called to add role assignment for the subnet, which internally used
AuthorizationManagementClient to send the request.
:return: None
"""
self._ensure_mc(mc)
need_post_creation_vnet_permission_granting = False
vnet_subnet_id = self.context.get_vnet_subnet_id()
skip_subnet_role_assignment = (
self.context.get_skip_subnet_role_assignment()
)
if (
vnet_subnet_id and
not skip_subnet_role_assignment and
not self.context.external_functions.subnet_role_assignment_exists(self.cmd, vnet_subnet_id)
):
# if service_principal_profile is None, then this cluster is an MSI cluster,
# and the service principal does not exist. Two cases:
# 1. For system assigned identity, we just tell user to grant the
# permission after the cluster is created to keep consistent with portal experience.
# 2. For user assigned identity, we can grant needed permission to
# user provided user assigned identity before creating managed cluster.
service_principal_profile = mc.service_principal_profile
assign_identity = self.context.get_assign_identity()
if service_principal_profile is None and not assign_identity:
need_post_creation_vnet_permission_granting = True
else:
scope = vnet_subnet_id
if assign_identity:
identity_object_id = self.context.get_user_assigned_identity_object_id()
if not self.context.external_functions.add_role_assignment(
self.cmd,
"Network Contributor",
identity_object_id,
is_service_principal=False,
scope=scope,
):
logger.warning(
"Could not create a role assignment for subnet. Are you an Owner on this subscription?"
)
else:
identity_client_id = service_principal_profile.client_id
if not self.context.external_functions.add_role_assignment(
self.cmd,
"Network Contributor",
identity_client_id,
scope=scope,
):
logger.warning(
"Could not create a role assignment for subnet. Are you an Owner on this subscription?"
)
# store need_post_creation_vnet_permission_granting as an intermediate
self.context.set_intermediate(
"need_post_creation_vnet_permission_granting",
need_post_creation_vnet_permission_granting,
overwrite_exists=True,
) | Add role assignment for vnet subnet.
This function will store an intermediate need_post_creation_vnet_permission_granting.
The function "subnet_role_assignment_exists" will be called to verify if the role assignment already exists for
the subnet, which internally used AuthorizationManagementClient to send the request.
The wrapper function "get_identity_by_msi_client" will be called by "get_user_assigned_identity_client_id" to
get the identity object, which internally use ManagedServiceIdentityClient to send the request.
The function "add_role_assignment" will be called to add role assignment for the subnet, which internally used
AuthorizationManagementClient to send the request.
:return: None | process_add_role_assignment_for_vnet_subnet | python | Azure/azure-cli | src/azure-cli/azure/cli/command_modules/acs/managed_cluster_decorator.py | https://github.com/Azure/azure-cli/blob/master/src/azure-cli/azure/cli/command_modules/acs/managed_cluster_decorator.py | MIT |
def process_attach_acr(self, mc: ManagedCluster) -> None:
"""Attach acr for the cluster.
The function "ensure_aks_acr" will be called to create an AcrPull role assignment for the acr, which
internally used AuthorizationManagementClient to send the request.
:return: None
"""
self._ensure_mc(mc)
attach_acr = self.context.get_attach_acr()
if attach_acr:
# If enable_managed_identity, attach acr operation will be handled after the cluster is created
if not self.context.get_enable_managed_identity():
service_principal_profile = mc.service_principal_profile
self.context.external_functions.ensure_aks_acr(
self.cmd,
assignee=service_principal_profile.client_id,
acr_name_or_id=attach_acr,
# not actually used
subscription_id=self.context.get_subscription_id(),
) | Attach acr for the cluster.
The function "ensure_aks_acr" will be called to create an AcrPull role assignment for the acr, which
internally used AuthorizationManagementClient to send the request.
:return: None | process_attach_acr | python | Azure/azure-cli | src/azure-cli/azure/cli/command_modules/acs/managed_cluster_decorator.py | https://github.com/Azure/azure-cli/blob/master/src/azure-cli/azure/cli/command_modules/acs/managed_cluster_decorator.py | MIT |
def set_up_network_profile(self, mc: ManagedCluster) -> ManagedCluster:
"""Set up network profile for the ManagedCluster object.
Build load balancer profile, verify outbound type and load balancer sku first, then set up network profile.
:return: the ManagedCluster object
"""
self._ensure_mc(mc)
# build load balancer profile, which is part of the network profile
load_balancer_profile = create_load_balancer_profile(
self.context.get_load_balancer_managed_outbound_ip_count(),
self.context.get_load_balancer_managed_outbound_ipv6_count(),
self.context.get_load_balancer_outbound_ips(),
self.context.get_load_balancer_outbound_ip_prefixes(),
self.context.get_load_balancer_outbound_ports(),
self.context.get_load_balancer_idle_timeout(),
self.context.get_load_balancer_backend_pool_type(),
models=self.models.load_balancer_models,
)
# verify outbound type
# Note: Validation internally depends on load_balancer_sku, which is a temporary value that is
# dynamically completed.
outbound_type = self.context.get_outbound_type()
# verify load balancer sku
load_balancer_sku = safe_lower(self.context.get_load_balancer_sku())
# verify network_plugin, pod_cidr, service_cidr, dns_service_ip, docker_bridge_address, network_policy
network_plugin = self.context.get_network_plugin()
network_plugin_mode = self.context.get_network_plugin_mode()
(
pod_cidr,
service_cidr,
dns_service_ip,
docker_bridge_address,
network_policy,
) = (
self.context.get_pod_cidr_and_service_cidr_and_dns_service_ip_and_docker_bridge_address_and_network_policy()
)
network_profile = None
# set up pod_cidrs, service_cidrs and ip_families
(
pod_cidrs,
service_cidrs,
ip_families
) = (
self.context.get_pod_cidrs_and_service_cidrs_and_ip_families()
)
network_dataplane = self.context.get_network_dataplane()
(acns_enabled, acns_observability, acns_security) = self.context.get_acns_enablement()
if acns_enabled is not None:
acns = self.models.AdvancedNetworking(
enabled=acns_enabled,
)
if acns_observability is not None:
acns.observability = self.models.AdvancedNetworkingObservability(
enabled=acns_observability,
)
if acns_security is not None:
acns.security = self.models.AdvancedNetworkingSecurity(
enabled=acns_security,
)
if any(
[
network_plugin,
network_plugin_mode,
pod_cidr,
pod_cidrs,
service_cidr,
service_cidrs,
ip_families,
dns_service_ip,
docker_bridge_address,
network_policy,
network_dataplane,
]
):
# Attention: RP would return UnexpectedLoadBalancerSkuForCurrentOutboundConfiguration internal server error
# if load_balancer_sku is set to basic and load_balancer_profile is assigned.
# Attention: SDK provides default values for pod_cidr, service_cidr, dns_service_ip, docker_bridge_cidr
# and outbound_type, and they might be overwritten to None.
network_profile = self.models.ContainerServiceNetworkProfile(
network_plugin=network_plugin,
network_plugin_mode=network_plugin_mode,
pod_cidr=pod_cidr,
pod_cidrs=pod_cidrs,
service_cidr=service_cidr,
service_cidrs=service_cidrs,
ip_families=ip_families,
dns_service_ip=dns_service_ip,
docker_bridge_cidr=docker_bridge_address,
network_policy=network_policy,
network_dataplane=network_dataplane,
load_balancer_sku=load_balancer_sku,
load_balancer_profile=load_balancer_profile,
outbound_type=outbound_type,
)
else:
if load_balancer_sku == CONST_LOAD_BALANCER_SKU_STANDARD or load_balancer_profile:
network_profile = self.models.ContainerServiceNetworkProfile(
network_plugin=network_plugin,
load_balancer_sku=load_balancer_sku,
load_balancer_profile=load_balancer_profile,
outbound_type=outbound_type,
)
if load_balancer_sku == CONST_LOAD_BALANCER_SKU_BASIC:
# load balancer sku must be standard when load balancer profile is provided
network_profile = self.models.ContainerServiceNetworkProfile(
network_plugin=network_plugin,
load_balancer_sku=load_balancer_sku,
)
# build nat gateway profile, which is part of the network profile
nat_gateway_profile = create_nat_gateway_profile(
self.context.get_nat_gateway_managed_outbound_ip_count(),
self.context.get_nat_gateway_idle_timeout(),
models=self.models.nat_gateway_models,
)
load_balancer_sku = self.context.get_load_balancer_sku()
if load_balancer_sku != CONST_LOAD_BALANCER_SKU_BASIC:
network_profile.nat_gateway_profile = nat_gateway_profile
if acns_enabled is not None:
network_profile.advanced_networking = acns
mc.network_profile = network_profile
return mc | Set up network profile for the ManagedCluster object.
Build load balancer profile, verify outbound type and load balancer sku first, then set up network profile.
:return: the ManagedCluster object | set_up_network_profile | python | Azure/azure-cli | src/azure-cli/azure/cli/command_modules/acs/managed_cluster_decorator.py | https://github.com/Azure/azure-cli/blob/master/src/azure-cli/azure/cli/command_modules/acs/managed_cluster_decorator.py | MIT |
def build_http_application_routing_addon_profile(self) -> ManagedClusterAddonProfile:
"""Build http application routing addon profile.
:return: a ManagedClusterAddonProfile object
"""
http_application_routing_addon_profile = self.models.ManagedClusterAddonProfile(
enabled=True,
)
return http_application_routing_addon_profile | Build http application routing addon profile.
:return: a ManagedClusterAddonProfile object | build_http_application_routing_addon_profile | python | Azure/azure-cli | src/azure-cli/azure/cli/command_modules/acs/managed_cluster_decorator.py | https://github.com/Azure/azure-cli/blob/master/src/azure-cli/azure/cli/command_modules/acs/managed_cluster_decorator.py | MIT |
def build_kube_dashboard_addon_profile(self) -> ManagedClusterAddonProfile:
"""Build kube dashboard addon profile.
:return: a ManagedClusterAddonProfile object
"""
kube_dashboard_addon_profile = self.models.ManagedClusterAddonProfile(
enabled=True,
)
return kube_dashboard_addon_profile | Build kube dashboard addon profile.
:return: a ManagedClusterAddonProfile object | build_kube_dashboard_addon_profile | python | Azure/azure-cli | src/azure-cli/azure/cli/command_modules/acs/managed_cluster_decorator.py | https://github.com/Azure/azure-cli/blob/master/src/azure-cli/azure/cli/command_modules/acs/managed_cluster_decorator.py | MIT |
def build_monitoring_addon_profile(self) -> ManagedClusterAddonProfile:
"""Build monitoring addon profile.
The function "ensure_container_insights_for_monitoring" will be called to create a deployment which publishes
the Container Insights solution to the Log Analytics workspace.
When workspace_resource_id is not assigned, function "ensure_default_log_analytics_workspace_for_monitoring"
will be called to create a workspace, which internally used ResourceManagementClient to send the request.
:return: a ManagedClusterAddonProfile object
"""
# determine the value of constants
addon_consts = self.context.get_addon_consts()
CONST_MONITORING_LOG_ANALYTICS_WORKSPACE_RESOURCE_ID = addon_consts.get(
"CONST_MONITORING_LOG_ANALYTICS_WORKSPACE_RESOURCE_ID"
)
CONST_MONITORING_USING_AAD_MSI_AUTH = addon_consts.get(
"CONST_MONITORING_USING_AAD_MSI_AUTH"
)
# TODO: can we help the user find a workspace resource ID?
monitoring_addon_profile = self.models.ManagedClusterAddonProfile(
enabled=True,
config={
CONST_MONITORING_LOG_ANALYTICS_WORKSPACE_RESOURCE_ID: self.context.get_workspace_resource_id(),
CONST_MONITORING_USING_AAD_MSI_AUTH: "true"
if self.context.get_enable_msi_auth_for_monitoring()
else "false",
},
)
# post-process, create a deployment
self.context.external_functions.ensure_container_insights_for_monitoring(
self.cmd, monitoring_addon_profile,
self.context.get_subscription_id(),
self.context.get_resource_group_name(),
self.context.get_name(),
self.context.get_location(),
remove_monitoring=False,
aad_route=self.context.get_enable_msi_auth_for_monitoring(),
create_dcr=True,
create_dcra=False,
enable_syslog=self.context.get_enable_syslog(),
data_collection_settings=self.context.get_data_collection_settings(),
is_private_cluster=self.context.get_enable_private_cluster(),
ampls_resource_id=self.context.get_ampls_resource_id(),
enable_high_log_scale_mode=self.context.get_enable_high_log_scale_mode(),
)
# set intermediate
self.context.set_intermediate("monitoring_addon_enabled", True, overwrite_exists=True)
return monitoring_addon_profile | Build monitoring addon profile.
The function "ensure_container_insights_for_monitoring" will be called to create a deployment which publishes
the Container Insights solution to the Log Analytics workspace.
When workspace_resource_id is not assigned, function "ensure_default_log_analytics_workspace_for_monitoring"
will be called to create a workspace, which internally used ResourceManagementClient to send the request.
:return: a ManagedClusterAddonProfile object | build_monitoring_addon_profile | python | Azure/azure-cli | src/azure-cli/azure/cli/command_modules/acs/managed_cluster_decorator.py | https://github.com/Azure/azure-cli/blob/master/src/azure-cli/azure/cli/command_modules/acs/managed_cluster_decorator.py | MIT |
def build_azure_policy_addon_profile(self) -> ManagedClusterAddonProfile:
"""Build azure policy addon profile.
:return: a ManagedClusterAddonProfile object
"""
azure_policy_addon_profile = self.models.ManagedClusterAddonProfile(
enabled=True,
)
return azure_policy_addon_profile | Build azure policy addon profile.
:return: a ManagedClusterAddonProfile object | build_azure_policy_addon_profile | python | Azure/azure-cli | src/azure-cli/azure/cli/command_modules/acs/managed_cluster_decorator.py | https://github.com/Azure/azure-cli/blob/master/src/azure-cli/azure/cli/command_modules/acs/managed_cluster_decorator.py | MIT |
def build_virtual_node_addon_profile(self) -> ManagedClusterAddonProfile:
"""Build virtual node addon profile.
:return: a ManagedClusterAddonProfile object
"""
# determine the value of constants
addon_consts = self.context.get_addon_consts()
CONST_VIRTUAL_NODE_SUBNET_NAME = addon_consts.get(
"CONST_VIRTUAL_NODE_SUBNET_NAME"
)
virtual_node_addon_profile = self.models.ManagedClusterAddonProfile(
enabled=True,
config={CONST_VIRTUAL_NODE_SUBNET_NAME: self.context.get_aci_subnet_name()}
)
# set intermediate
self.context.set_intermediate("virtual_node_addon_enabled", True, overwrite_exists=True)
return virtual_node_addon_profile | Build virtual node addon profile.
:return: a ManagedClusterAddonProfile object | build_virtual_node_addon_profile | python | Azure/azure-cli | src/azure-cli/azure/cli/command_modules/acs/managed_cluster_decorator.py | https://github.com/Azure/azure-cli/blob/master/src/azure-cli/azure/cli/command_modules/acs/managed_cluster_decorator.py | MIT |
def build_ingress_appgw_addon_profile(self) -> ManagedClusterAddonProfile:
"""Build ingress appgw addon profile.
:return: a ManagedClusterAddonProfile object
"""
# determine the value of constants
addon_consts = self.context.get_addon_consts()
CONST_INGRESS_APPGW_APPLICATION_GATEWAY_NAME = addon_consts.get(
"CONST_INGRESS_APPGW_APPLICATION_GATEWAY_NAME"
)
CONST_INGRESS_APPGW_SUBNET_CIDR = addon_consts.get(
"CONST_INGRESS_APPGW_SUBNET_CIDR"
)
CONST_INGRESS_APPGW_APPLICATION_GATEWAY_ID = addon_consts.get(
"CONST_INGRESS_APPGW_APPLICATION_GATEWAY_ID"
)
CONST_INGRESS_APPGW_SUBNET_ID = addon_consts.get(
"CONST_INGRESS_APPGW_SUBNET_ID"
)
CONST_INGRESS_APPGW_WATCH_NAMESPACE = addon_consts.get(
"CONST_INGRESS_APPGW_WATCH_NAMESPACE"
)
ingress_appgw_addon_profile = self.models.ManagedClusterAddonProfile(enabled=True, config={})
appgw_name = self.context.get_appgw_name()
appgw_subnet_cidr = self.context.get_appgw_subnet_cidr()
appgw_id = self.context.get_appgw_id()
appgw_subnet_id = self.context.get_appgw_subnet_id()
appgw_watch_namespace = self.context.get_appgw_watch_namespace()
if appgw_name is not None:
ingress_appgw_addon_profile.config[CONST_INGRESS_APPGW_APPLICATION_GATEWAY_NAME] = appgw_name
if appgw_subnet_cidr is not None:
ingress_appgw_addon_profile.config[CONST_INGRESS_APPGW_SUBNET_CIDR] = appgw_subnet_cidr
if appgw_id is not None:
ingress_appgw_addon_profile.config[CONST_INGRESS_APPGW_APPLICATION_GATEWAY_ID] = appgw_id
if appgw_subnet_id is not None:
ingress_appgw_addon_profile.config[CONST_INGRESS_APPGW_SUBNET_ID] = appgw_subnet_id
if appgw_watch_namespace is not None:
ingress_appgw_addon_profile.config[CONST_INGRESS_APPGW_WATCH_NAMESPACE] = appgw_watch_namespace
# set intermediate
self.context.set_intermediate("ingress_appgw_addon_enabled", True, overwrite_exists=True)
return ingress_appgw_addon_profile | Build ingress appgw addon profile.
:return: a ManagedClusterAddonProfile object | build_ingress_appgw_addon_profile | python | Azure/azure-cli | src/azure-cli/azure/cli/command_modules/acs/managed_cluster_decorator.py | https://github.com/Azure/azure-cli/blob/master/src/azure-cli/azure/cli/command_modules/acs/managed_cluster_decorator.py | MIT |
def build_confcom_addon_profile(self) -> ManagedClusterAddonProfile:
"""Build confcom addon profile.
:return: a ManagedClusterAddonProfile object
"""
# determine the value of constants
addon_consts = self.context.get_addon_consts()
CONST_ACC_SGX_QUOTE_HELPER_ENABLED = addon_consts.get(
"CONST_ACC_SGX_QUOTE_HELPER_ENABLED"
)
confcom_addon_profile = self.models.ManagedClusterAddonProfile(
enabled=True, config={CONST_ACC_SGX_QUOTE_HELPER_ENABLED: "false"})
if self.context.get_enable_sgxquotehelper():
confcom_addon_profile.config[CONST_ACC_SGX_QUOTE_HELPER_ENABLED] = "true"
return confcom_addon_profile | Build confcom addon profile.
:return: a ManagedClusterAddonProfile object | build_confcom_addon_profile | python | Azure/azure-cli | src/azure-cli/azure/cli/command_modules/acs/managed_cluster_decorator.py | https://github.com/Azure/azure-cli/blob/master/src/azure-cli/azure/cli/command_modules/acs/managed_cluster_decorator.py | MIT |
def build_open_service_mesh_addon_profile(self) -> ManagedClusterAddonProfile:
"""Build open service mesh addon profile.
:return: a ManagedClusterAddonProfile object
"""
open_service_mesh_addon_profile = self.models.ManagedClusterAddonProfile(
enabled=True,
config={},
)
return open_service_mesh_addon_profile | Build open service mesh addon profile.
:return: a ManagedClusterAddonProfile object | build_open_service_mesh_addon_profile | python | Azure/azure-cli | src/azure-cli/azure/cli/command_modules/acs/managed_cluster_decorator.py | https://github.com/Azure/azure-cli/blob/master/src/azure-cli/azure/cli/command_modules/acs/managed_cluster_decorator.py | MIT |
def build_azure_keyvault_secrets_provider_addon_profile(self) -> ManagedClusterAddonProfile:
"""Build azure keyvault secrets provider addon profile.
:return: a ManagedClusterAddonProfile object
"""
# determine the value of constants
addon_consts = self.context.get_addon_consts()
CONST_SECRET_ROTATION_ENABLED = addon_consts.get(
"CONST_SECRET_ROTATION_ENABLED"
)
CONST_ROTATION_POLL_INTERVAL = addon_consts.get(
"CONST_ROTATION_POLL_INTERVAL"
)
azure_keyvault_secrets_provider_addon_profile = (
self.models.ManagedClusterAddonProfile(
enabled=True,
config={
CONST_SECRET_ROTATION_ENABLED: "false",
CONST_ROTATION_POLL_INTERVAL: "2m",
},
)
)
if self.context.get_enable_secret_rotation():
azure_keyvault_secrets_provider_addon_profile.config[
CONST_SECRET_ROTATION_ENABLED
] = "true"
if self.context.get_rotation_poll_interval() is not None:
azure_keyvault_secrets_provider_addon_profile.config[
CONST_ROTATION_POLL_INTERVAL
] = self.context.get_rotation_poll_interval()
return azure_keyvault_secrets_provider_addon_profile | Build azure keyvault secrets provider addon profile.
:return: a ManagedClusterAddonProfile object | build_azure_keyvault_secrets_provider_addon_profile | python | Azure/azure-cli | src/azure-cli/azure/cli/command_modules/acs/managed_cluster_decorator.py | https://github.com/Azure/azure-cli/blob/master/src/azure-cli/azure/cli/command_modules/acs/managed_cluster_decorator.py | MIT |
def set_up_addon_profiles(self, mc: ManagedCluster) -> ManagedCluster:
"""Set up addon profiles for the ManagedCluster object.
This function will store following intermediates: monitoring_addon_enabled, virtual_node_addon_enabled and
ingress_appgw_addon_enabled.
:return: the ManagedCluster object
"""
self._ensure_mc(mc)
# determine the value of constants
addon_consts = self.context.get_addon_consts()
CONST_MONITORING_ADDON_NAME = addon_consts.get(
"CONST_MONITORING_ADDON_NAME"
)
CONST_VIRTUAL_NODE_ADDON_NAME = addon_consts.get(
"CONST_VIRTUAL_NODE_ADDON_NAME"
)
CONST_HTTP_APPLICATION_ROUTING_ADDON_NAME = addon_consts.get(
"CONST_HTTP_APPLICATION_ROUTING_ADDON_NAME"
)
CONST_KUBE_DASHBOARD_ADDON_NAME = addon_consts.get(
"CONST_KUBE_DASHBOARD_ADDON_NAME"
)
CONST_AZURE_POLICY_ADDON_NAME = addon_consts.get(
"CONST_AZURE_POLICY_ADDON_NAME"
)
CONST_INGRESS_APPGW_ADDON_NAME = addon_consts.get(
"CONST_INGRESS_APPGW_ADDON_NAME"
)
CONST_CONFCOM_ADDON_NAME = addon_consts.get("CONST_CONFCOM_ADDON_NAME")
CONST_OPEN_SERVICE_MESH_ADDON_NAME = addon_consts.get(
"CONST_OPEN_SERVICE_MESH_ADDON_NAME"
)
CONST_AZURE_KEYVAULT_SECRETS_PROVIDER_ADDON_NAME = addon_consts.get(
"CONST_AZURE_KEYVAULT_SECRETS_PROVIDER_ADDON_NAME"
)
addon_profiles = {}
# error out if any unrecognized or duplicate addon provided
# error out if '--enable-addons=monitoring' isn't set but workspace_resource_id is
# error out if '--enable-addons=virtual-node' is set but aci_subnet_name and vnet_subnet_id are not
addons = self.context.get_enable_addons()
if "http_application_routing" in addons:
addon_profiles[
CONST_HTTP_APPLICATION_ROUTING_ADDON_NAME
] = self.build_http_application_routing_addon_profile()
if "kube-dashboard" in addons:
addon_profiles[
CONST_KUBE_DASHBOARD_ADDON_NAME
] = self.build_kube_dashboard_addon_profile()
if "monitoring" in addons:
addon_profiles[
CONST_MONITORING_ADDON_NAME
] = self.build_monitoring_addon_profile()
if "azure-policy" in addons:
addon_profiles[
CONST_AZURE_POLICY_ADDON_NAME
] = self.build_azure_policy_addon_profile()
if "virtual-node" in addons:
# TODO: how about aciConnectorwindows, what is its addon name?
os_type = self.context.get_virtual_node_addon_os_type()
addon_profiles[
CONST_VIRTUAL_NODE_ADDON_NAME + os_type
] = self.build_virtual_node_addon_profile()
if "ingress-appgw" in addons:
addon_profiles[
CONST_INGRESS_APPGW_ADDON_NAME
] = self.build_ingress_appgw_addon_profile()
if "confcom" in addons:
addon_profiles[
CONST_CONFCOM_ADDON_NAME
] = self.build_confcom_addon_profile()
if "open-service-mesh" in addons:
addon_profiles[
CONST_OPEN_SERVICE_MESH_ADDON_NAME
] = self.build_open_service_mesh_addon_profile()
if "azure-keyvault-secrets-provider" in addons:
addon_profiles[
CONST_AZURE_KEYVAULT_SECRETS_PROVIDER_ADDON_NAME
] = self.build_azure_keyvault_secrets_provider_addon_profile()
mc.addon_profiles = addon_profiles
return mc | Set up addon profiles for the ManagedCluster object.
This function will store following intermediates: monitoring_addon_enabled, virtual_node_addon_enabled and
ingress_appgw_addon_enabled.
:return: the ManagedCluster object | set_up_addon_profiles | python | Azure/azure-cli | src/azure-cli/azure/cli/command_modules/acs/managed_cluster_decorator.py | https://github.com/Azure/azure-cli/blob/master/src/azure-cli/azure/cli/command_modules/acs/managed_cluster_decorator.py | MIT |
def set_up_aad_profile(self, mc: ManagedCluster) -> ManagedCluster:
"""Set up aad profile for the ManagedCluster object.
:return: the ManagedCluster object
"""
self._ensure_mc(mc)
aad_profile = None
enable_aad = self.context.get_enable_aad()
if enable_aad:
aad_profile = self.models.ManagedClusterAADProfile(
managed=True,
enable_azure_rbac=self.context.get_enable_azure_rbac(),
# ids -> i_ds due to track 2 naming issue
admin_group_object_i_ds=self.context.get_aad_admin_group_object_ids(),
tenant_id=self.context.get_aad_tenant_id()
)
mc.aad_profile = aad_profile
return mc | Set up aad profile for the ManagedCluster object.
:return: the ManagedCluster object | set_up_aad_profile | python | Azure/azure-cli | src/azure-cli/azure/cli/command_modules/acs/managed_cluster_decorator.py | https://github.com/Azure/azure-cli/blob/master/src/azure-cli/azure/cli/command_modules/acs/managed_cluster_decorator.py | MIT |
def set_up_oidc_issuer_profile(self, mc: ManagedCluster) -> ManagedCluster:
"""Set up OIDC issuer profile for the ManagedCluster object.
:return: the ManagedCluster object
"""
self._ensure_mc(mc)
oidc_issuer_profile = self.context.get_oidc_issuer_profile()
if oidc_issuer_profile is not None:
mc.oidc_issuer_profile = oidc_issuer_profile
return mc | Set up OIDC issuer profile for the ManagedCluster object.
:return: the ManagedCluster object | set_up_oidc_issuer_profile | python | Azure/azure-cli | src/azure-cli/azure/cli/command_modules/acs/managed_cluster_decorator.py | https://github.com/Azure/azure-cli/blob/master/src/azure-cli/azure/cli/command_modules/acs/managed_cluster_decorator.py | MIT |
def set_up_workload_auto_scaler_profile(self, mc: ManagedCluster) -> ManagedCluster:
"""Set up workload auto-scaler profile for the ManagedCluster object.
:return: the ManagedCluster object
"""
self._ensure_mc(mc)
if self.context.get_enable_keda():
if mc.workload_auto_scaler_profile is None:
mc.workload_auto_scaler_profile = self.models.ManagedClusterWorkloadAutoScalerProfile()
mc.workload_auto_scaler_profile.keda = self.models.ManagedClusterWorkloadAutoScalerProfileKeda(enabled=True)
if self.context.get_enable_vpa():
if mc.workload_auto_scaler_profile is None:
mc.workload_auto_scaler_profile = self.models.ManagedClusterWorkloadAutoScalerProfile()
if mc.workload_auto_scaler_profile.vertical_pod_autoscaler is None:
mc.workload_auto_scaler_profile.vertical_pod_autoscaler = self.models.ManagedClusterWorkloadAutoScalerProfileVerticalPodAutoscaler(enabled=True)
else:
mc.workload_auto_scaler_profile.vertical_pod_autoscaler.enabled = True
return mc | Set up workload auto-scaler profile for the ManagedCluster object.
:return: the ManagedCluster object | set_up_workload_auto_scaler_profile | python | Azure/azure-cli | src/azure-cli/azure/cli/command_modules/acs/managed_cluster_decorator.py | https://github.com/Azure/azure-cli/blob/master/src/azure-cli/azure/cli/command_modules/acs/managed_cluster_decorator.py | MIT |
def set_up_api_server_access_profile(self, mc: ManagedCluster) -> ManagedCluster:
"""Set up api server access profile and fqdn subdomain for the ManagedCluster object.
:return: the ManagedCluster object
"""
self._ensure_mc(mc)
api_server_access_profile = None
api_server_authorized_ip_ranges = self.context.get_api_server_authorized_ip_ranges()
enable_private_cluster = self.context.get_enable_private_cluster()
disable_public_fqdn = self.context.get_disable_public_fqdn()
private_dns_zone = self.context.get_private_dns_zone()
if api_server_authorized_ip_ranges or enable_private_cluster:
api_server_access_profile = self.models.ManagedClusterAPIServerAccessProfile(
authorized_ip_ranges=api_server_authorized_ip_ranges,
enable_private_cluster=True if enable_private_cluster else None,
enable_private_cluster_public_fqdn=False if disable_public_fqdn else None,
private_dns_zone=private_dns_zone
)
mc.api_server_access_profile = api_server_access_profile
fqdn_subdomain = self.context.get_fqdn_subdomain()
mc.fqdn_subdomain = fqdn_subdomain
return mc | Set up api server access profile and fqdn subdomain for the ManagedCluster object.
:return: the ManagedCluster object | set_up_api_server_access_profile | python | Azure/azure-cli | src/azure-cli/azure/cli/command_modules/acs/managed_cluster_decorator.py | https://github.com/Azure/azure-cli/blob/master/src/azure-cli/azure/cli/command_modules/acs/managed_cluster_decorator.py | MIT |
def set_up_identity(self, mc: ManagedCluster) -> ManagedCluster:
"""Set up identity for the ManagedCluster object.
:return: the ManagedCluster object
"""
self._ensure_mc(mc)
identity = None
enable_managed_identity = self.context.get_enable_managed_identity()
assign_identity = self.context.get_assign_identity()
if enable_managed_identity and not assign_identity:
identity = self.models.ManagedClusterIdentity(
type="SystemAssigned"
)
elif enable_managed_identity and assign_identity:
user_assigned_identity = {
assign_identity: self.models.ManagedServiceIdentityUserAssignedIdentitiesValue()
}
identity = self.models.ManagedClusterIdentity(
type="UserAssigned",
user_assigned_identities=user_assigned_identity
)
mc.identity = identity
return mc | Set up identity for the ManagedCluster object.
:return: the ManagedCluster object | set_up_identity | python | Azure/azure-cli | src/azure-cli/azure/cli/command_modules/acs/managed_cluster_decorator.py | https://github.com/Azure/azure-cli/blob/master/src/azure-cli/azure/cli/command_modules/acs/managed_cluster_decorator.py | MIT |
def set_up_identity_profile(self, mc: ManagedCluster) -> ManagedCluster:
"""Set up identity profile for the ManagedCluster object.
The wrapper function "get_identity_by_msi_client" will be called (by "get_user_assigned_identity_object_id") to
get the identity object, which internally use ManagedServiceIdentityClient to send the request.
The function "ensure_cluster_identity_permission_on_kubelet_identity" will be called to create a role
assignment if necessary, which internally used AuthorizationManagementClient to send the request.
:return: the ManagedCluster object
"""
self._ensure_mc(mc)
identity_profile = None
assign_kubelet_identity = self.context.get_assign_kubelet_identity()
if assign_kubelet_identity:
kubelet_identity = self.context.get_identity_by_msi_client(assign_kubelet_identity)
identity_profile = {
'kubeletidentity': self.models.UserAssignedIdentity(
resource_id=assign_kubelet_identity,
client_id=kubelet_identity.client_id, # TODO: may remove, rp would take care of this
object_id=kubelet_identity.principal_id # TODO: may remove, rp would take care of this
)
}
cluster_identity_object_id = self.context.get_user_assigned_identity_object_id()
# ensure the cluster identity has "Managed Identity Operator" role at the scope of kubelet identity
self.context.external_functions.ensure_cluster_identity_permission_on_kubelet_identity(
self.cmd,
cluster_identity_object_id,
assign_kubelet_identity)
mc.identity_profile = identity_profile
return mc | Set up identity profile for the ManagedCluster object.
The wrapper function "get_identity_by_msi_client" will be called (by "get_user_assigned_identity_object_id") to
get the identity object, which internally use ManagedServiceIdentityClient to send the request.
The function "ensure_cluster_identity_permission_on_kubelet_identity" will be called to create a role
assignment if necessary, which internally used AuthorizationManagementClient to send the request.
:return: the ManagedCluster object | set_up_identity_profile | python | Azure/azure-cli | src/azure-cli/azure/cli/command_modules/acs/managed_cluster_decorator.py | https://github.com/Azure/azure-cli/blob/master/src/azure-cli/azure/cli/command_modules/acs/managed_cluster_decorator.py | MIT |
def set_up_http_proxy_config(self, mc: ManagedCluster) -> ManagedCluster:
"""Set up http proxy config for the ManagedCluster object.
:return: the ManagedCluster object
"""
self._ensure_mc(mc)
mc.http_proxy_config = self.context.get_http_proxy_config()
return mc | Set up http proxy config for the ManagedCluster object.
:return: the ManagedCluster object | set_up_http_proxy_config | python | Azure/azure-cli | src/azure-cli/azure/cli/command_modules/acs/managed_cluster_decorator.py | https://github.com/Azure/azure-cli/blob/master/src/azure-cli/azure/cli/command_modules/acs/managed_cluster_decorator.py | MIT |
def set_up_auto_upgrade_profile(self, mc: ManagedCluster) -> ManagedCluster:
"""Set up auto upgrade profile for the ManagedCluster object.
:return: the ManagedCluster object
"""
self._ensure_mc(mc)
auto_upgrade_profile = None
auto_upgrade_channel = self.context.get_auto_upgrade_channel()
if auto_upgrade_channel:
auto_upgrade_profile = self.models.ManagedClusterAutoUpgradeProfile(upgrade_channel=auto_upgrade_channel)
mc.auto_upgrade_profile = auto_upgrade_profile
node_os_upgrade_channel = self.context.get_node_os_upgrade_channel()
if node_os_upgrade_channel:
if mc.auto_upgrade_profile is None:
mc.auto_upgrade_profile = self.models.ManagedClusterAutoUpgradeProfile()
mc.auto_upgrade_profile.node_os_upgrade_channel = node_os_upgrade_channel
return mc | Set up auto upgrade profile for the ManagedCluster object.
:return: the ManagedCluster object | set_up_auto_upgrade_profile | python | Azure/azure-cli | src/azure-cli/azure/cli/command_modules/acs/managed_cluster_decorator.py | https://github.com/Azure/azure-cli/blob/master/src/azure-cli/azure/cli/command_modules/acs/managed_cluster_decorator.py | MIT |
def set_up_azure_service_mesh_profile(self, mc: ManagedCluster) -> ManagedCluster:
"""Set up azure service mesh for the ManagedCluster object.
:return: the ManagedCluster object
"""
self._ensure_mc(mc)
profile = self.context.get_initial_service_mesh_profile()
if profile is not None:
mc.service_mesh_profile = profile
return mc | Set up azure service mesh for the ManagedCluster object.
:return: the ManagedCluster object | set_up_azure_service_mesh_profile | python | Azure/azure-cli | src/azure-cli/azure/cli/command_modules/acs/managed_cluster_decorator.py | https://github.com/Azure/azure-cli/blob/master/src/azure-cli/azure/cli/command_modules/acs/managed_cluster_decorator.py | MIT |
def set_up_auto_scaler_profile(self, mc: ManagedCluster) -> ManagedCluster:
"""Set up autoscaler profile for the ManagedCluster object.
:return: the ManagedCluster object
"""
self._ensure_mc(mc)
cluster_autoscaler_profile = self.context.get_cluster_autoscaler_profile()
mc.auto_scaler_profile = cluster_autoscaler_profile
return mc | Set up autoscaler profile for the ManagedCluster object.
:return: the ManagedCluster object | set_up_auto_scaler_profile | python | Azure/azure-cli | src/azure-cli/azure/cli/command_modules/acs/managed_cluster_decorator.py | https://github.com/Azure/azure-cli/blob/master/src/azure-cli/azure/cli/command_modules/acs/managed_cluster_decorator.py | MIT |
def set_up_azure_container_storage(self, mc: ManagedCluster) -> ManagedCluster: # pylint: disable=too-many-locals
"""Set up azure container storage for the Managed Cluster object
:return: ManagedCluster
"""
self._ensure_mc(mc)
# read the azure container storage values passed
pool_type = self.context.raw_param.get("enable_azure_container_storage")
enable_azure_container_storage = pool_type is not None
ephemeral_disk_volume_type = self.context.raw_param.get("ephemeral_disk_volume_type")
ephemeral_disk_nvme_perf_tier = self.context.raw_param.get("ephemeral_disk_nvme_perf_tier")
if (ephemeral_disk_volume_type is not None or ephemeral_disk_nvme_perf_tier is not None) and \
not enable_azure_container_storage:
params_defined_arr = []
if ephemeral_disk_volume_type is not None:
params_defined_arr.append('--ephemeral-disk-volume-type')
if ephemeral_disk_nvme_perf_tier is not None:
params_defined_arr.append('--ephemeral-disk-nvme-perf-tier')
params_defined = 'and '.join(params_defined_arr)
raise RequiredArgumentMissingError(
f'Cannot set {params_defined} without the parameter --enable-azure-container-storage.'
)
if enable_azure_container_storage:
pool_name = self.context.raw_param.get("storage_pool_name")
pool_option = self.context.raw_param.get("storage_pool_option")
pool_sku = self.context.raw_param.get("storage_pool_sku")
pool_size = self.context.raw_param.get("storage_pool_size")
if not mc.agent_pool_profiles:
raise UnknownError("Encountered an unexpected error while getting the agent pools from the cluster.")
agentpool = mc.agent_pool_profiles[0]
agentpool_details = {}
pool_details = {}
pool_details["vm_size"] = agentpool.vm_size
pool_details["count"] = agentpool.count
pool_details["os_type"] = agentpool.os_type
pool_details["mode"] = agentpool.mode
pool_details["node_taints"] = agentpool.node_taints
pool_details["zoned"] = agentpool.availability_zones is not None
agentpool_details[agentpool.name] = pool_details
# Marking the only agentpool name as the valid nodepool for
# installing Azure Container Storage during `az aks create`
nodepool_list = agentpool.name
from azure.cli.command_modules.acs.azurecontainerstorage._validators import (
validate_enable_azure_container_storage_params
)
from azure.cli.command_modules.acs.azurecontainerstorage._consts import (
CONST_ACSTOR_IO_ENGINE_LABEL_KEY,
CONST_ACSTOR_IO_ENGINE_LABEL_VAL,
CONST_DISK_TYPE_EPHEMERAL_VOLUME_ONLY,
CONST_EPHEMERAL_NVME_PERF_TIER_STANDARD,
)
from azure.cli.command_modules.acs.azurecontainerstorage._helpers import generate_vm_sku_cache_for_region
generate_vm_sku_cache_for_region(self.cmd.cli_ctx, self.context.get_location())
default_ephemeral_disk_volume_type = CONST_DISK_TYPE_EPHEMERAL_VOLUME_ONLY
default_ephemeral_disk_nvme_perf_tier = CONST_EPHEMERAL_NVME_PERF_TIER_STANDARD
validate_enable_azure_container_storage_params(
pool_type,
pool_name,
pool_sku,
pool_option,
pool_size,
nodepool_list,
agentpool_details,
False,
False,
False,
False,
False,
ephemeral_disk_volume_type,
ephemeral_disk_nvme_perf_tier,
default_ephemeral_disk_volume_type,
default_ephemeral_disk_nvme_perf_tier,
)
# Setup Azure Container Storage labels on the nodepool
nodepool_labels = agentpool.node_labels
if nodepool_labels is None:
nodepool_labels = {}
nodepool_labels[CONST_ACSTOR_IO_ENGINE_LABEL_KEY] = CONST_ACSTOR_IO_ENGINE_LABEL_VAL
agentpool.node_labels = nodepool_labels
# set intermediates
self.context.set_intermediate("enable_azure_container_storage", True, overwrite_exists=True)
self.context.set_intermediate("azure_container_storage_nodepools", nodepool_list, overwrite_exists=True)
self.context.set_intermediate(
"current_ephemeral_nvme_perf_tier",
default_ephemeral_disk_nvme_perf_tier,
overwrite_exists=True
)
self.context.set_intermediate(
"existing_ephemeral_disk_volume_type",
default_ephemeral_disk_volume_type,
overwrite_exists=True
)
return mc | Set up azure container storage for the Managed Cluster object
:return: ManagedCluster | set_up_azure_container_storage | python | Azure/azure-cli | src/azure-cli/azure/cli/command_modules/acs/managed_cluster_decorator.py | https://github.com/Azure/azure-cli/blob/master/src/azure-cli/azure/cli/command_modules/acs/managed_cluster_decorator.py | MIT |
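
The set_intermediate/get_intermediate calls above carry state from profile construction to the post-creation steps (see postprocessing_after_mc_created further down). The actual AKSManagedClusterContext implementation is not shown in this file; the sketch below only illustrates the assumed semantics of a guarded key-value store with an overwrite_exists switch, and is not the real class.

class _IntermediateStore:
    """Minimal sketch of the assumed intermediate-store behavior; illustrative only."""

    def __init__(self):
        self._data = {}

    def set_intermediate(self, key, value, overwrite_exists=False):
        # only replace an existing value when explicitly asked to
        if key not in self._data or overwrite_exists:
            self._data[key] = value

    def get_intermediate(self, key, default_value=None):
        return self._data.get(key, default_value)


store = _IntermediateStore()
store.set_intermediate("enable_azure_container_storage", True, overwrite_exists=True)
print(store.get_intermediate("enable_azure_container_storage", default_value=False))  # True
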
def set_up_sku(self, mc: ManagedCluster) -> ManagedCluster:
"""Set up sku (uptime sla) for the ManagedCluster object.
:return: the ManagedCluster object
"""
self._ensure_mc(mc)
if self.context.get_tier() == CONST_MANAGED_CLUSTER_SKU_TIER_STANDARD:
mc.sku = self.models.ManagedClusterSKU(
name="Base",
tier="Standard"
)
if self.context.get_tier() == CONST_MANAGED_CLUSTER_SKU_TIER_PREMIUM:
mc.sku = self.models.ManagedClusterSKU(
name="Base",
tier="Premium"
)
return mc | Set up sku (uptime sla) for the ManagedCluster object.
:return: the ManagedCluster object | set_up_sku | python | Azure/azure-cli | src/azure-cli/azure/cli/command_modules/acs/managed_cluster_decorator.py | https://github.com/Azure/azure-cli/blob/master/src/azure-cli/azure/cli/command_modules/acs/managed_cluster_decorator.py | MIT |
def set_up_extended_location(self, mc: ManagedCluster) -> ManagedCluster:
"""Set up extended location (edge zone) for the ManagedCluster object.
:return: the ManagedCluster object
"""
self._ensure_mc(mc)
edge_zone = self.context.get_edge_zone()
if edge_zone:
mc.extended_location = self.models.ExtendedLocation(
name=edge_zone,
type=self.models.ExtendedLocationTypes.EDGE_ZONE
)
return mc | Set up extended location (edge zone) for the ManagedCluster object.
:return: the ManagedCluster object | set_up_extended_location | python | Azure/azure-cli | src/azure-cli/azure/cli/command_modules/acs/managed_cluster_decorator.py | https://github.com/Azure/azure-cli/blob/master/src/azure-cli/azure/cli/command_modules/acs/managed_cluster_decorator.py | MIT |
def set_up_node_resource_group(self, mc: ManagedCluster) -> ManagedCluster:
"""Set up node resource group for the ManagedCluster object.
:return: the ManagedCluster object
"""
self._ensure_mc(mc)
mc.node_resource_group = self.context.get_node_resource_group()
return mc | Set up node resource group for the ManagedCluster object.
:return: the ManagedCluster object | set_up_node_resource_group | python | Azure/azure-cli | src/azure-cli/azure/cli/command_modules/acs/managed_cluster_decorator.py | https://github.com/Azure/azure-cli/blob/master/src/azure-cli/azure/cli/command_modules/acs/managed_cluster_decorator.py | MIT |
def set_up_k8s_support_plan(self, mc: ManagedCluster) -> ManagedCluster:
"""Set up supportPlan for the ManagedCluster object.
:return: the ManagedCluster object
"""
self._ensure_mc(mc)
support_plan = self.context.get_k8s_support_plan()
if support_plan == KubernetesSupportPlan.AKS_LONG_TERM_SUPPORT:
if mc is None or mc.sku is None or mc.sku.tier.lower() != CONST_MANAGED_CLUSTER_SKU_TIER_PREMIUM.lower():
raise AzCLIError("Long term support is only available for premium tier clusters.")
mc.support_plan = support_plan
return mc | Set up supportPlan for the ManagedCluster object.
:return: the ManagedCluster object | set_up_k8s_support_plan | python | Azure/azure-cli | src/azure-cli/azure/cli/command_modules/acs/managed_cluster_decorator.py | https://github.com/Azure/azure-cli/blob/master/src/azure-cli/azure/cli/command_modules/acs/managed_cluster_decorator.py | MIT |
def set_up_azure_monitor_profile(self, mc: ManagedCluster) -> ManagedCluster:
"""Set up azure monitor profile for the ManagedCluster object.
:return: the ManagedCluster object
"""
self._ensure_mc(mc)
# read the original value passed by the command
ksm_metric_labels_allow_list = self.context.raw_param.get("ksm_metric_labels_allow_list")
ksm_metric_annotations_allow_list = self.context.raw_param.get("ksm_metric_annotations_allow_list")
if ksm_metric_labels_allow_list is None:
ksm_metric_labels_allow_list = ""
if ksm_metric_annotations_allow_list is None:
ksm_metric_annotations_allow_list = ""
if self.context.get_enable_azure_monitor_metrics():
if mc.azure_monitor_profile is None:
mc.azure_monitor_profile = self.models.ManagedClusterAzureMonitorProfile()
mc.azure_monitor_profile.metrics = self.models.ManagedClusterAzureMonitorProfileMetrics(enabled=False)
mc.azure_monitor_profile.metrics.kube_state_metrics = self.models.ManagedClusterAzureMonitorProfileKubeStateMetrics( # pylint:disable=line-too-long
metric_labels_allowlist=str(ksm_metric_labels_allow_list),
metric_annotations_allow_list=str(ksm_metric_annotations_allow_list))
# set intermediate
self.context.set_intermediate("azuremonitormetrics_addon_enabled", True, overwrite_exists=True)
return mc | Set up azure monitor profile for the ManagedCluster object.
:return: the ManagedCluster object | set_up_azure_monitor_profile | python | Azure/azure-cli | src/azure-cli/azure/cli/command_modules/acs/managed_cluster_decorator.py | https://github.com/Azure/azure-cli/blob/master/src/azure-cli/azure/cli/command_modules/acs/managed_cluster_decorator.py | MIT |
def set_up_ingress_web_app_routing(self, mc: ManagedCluster) -> ManagedCluster:
"""Set up the app routing profile in the ingress profile for the ManagedCluster object.
:return: the ManagedCluster object
"""
self._ensure_mc(mc)
addons = self.context.get_enable_addons()
if "web_application_routing" in addons or self.context.get_enable_app_routing():
if mc.ingress_profile is None:
mc.ingress_profile = self.models.ManagedClusterIngressProfile() # pylint: disable=no-member
mc.ingress_profile.web_app_routing = (
self.models.ManagedClusterIngressProfileWebAppRouting(enabled=True) # pylint: disable=no-member
)
if "web_application_routing" in addons:
dns_zone_resource_ids = self.context.get_dns_zone_resource_ids()
mc.ingress_profile.web_app_routing.dns_zone_resource_ids = dns_zone_resource_ids
return mc | Set up the app routing profile in the ingress profile for the ManagedCluster object.
:return: the ManagedCluster object | set_up_ingress_web_app_routing | python | Azure/azure-cli | src/azure-cli/azure/cli/command_modules/acs/managed_cluster_decorator.py | https://github.com/Azure/azure-cli/blob/master/src/azure-cli/azure/cli/command_modules/acs/managed_cluster_decorator.py | MIT |
def set_up_node_resource_group_profile(self, mc: ManagedCluster) -> ManagedCluster:
"""Set up node resource group profile for the ManagedCluster object.
:return: the ManagedCluster object
"""
self._ensure_mc(mc)
node_resource_group_profile = None
nrg_lockdown_restriction_level = self.context.get_nrg_lockdown_restriction_level()
if nrg_lockdown_restriction_level:
            node_resource_group_profile = self.models.ManagedClusterNodeResourceGroupProfile(
                restriction_level=nrg_lockdown_restriction_level
            )
mc.node_resource_group_profile = node_resource_group_profile
return mc | Set up node resource group profile for the ManagedCluster object.
:return: the ManagedCluster object | set_up_node_resource_group_profile | python | Azure/azure-cli | src/azure-cli/azure/cli/command_modules/acs/managed_cluster_decorator.py | https://github.com/Azure/azure-cli/blob/master/src/azure-cli/azure/cli/command_modules/acs/managed_cluster_decorator.py | MIT |
def construct_mc_profile_default(self, bypass_restore_defaults: bool = False) -> ManagedCluster:
"""The overall controller used to construct the default ManagedCluster profile.
The completely constructed ManagedCluster object will later be passed as a parameter to the underlying SDK
(mgmt-containerservice) to send the actual request.
:return: the ManagedCluster object
"""
# initialize the ManagedCluster object
mc = self.init_mc()
# DO NOT MOVE: remove defaults
self._remove_defaults_in_mc(mc)
# set up agentpool profile
mc = self.set_up_agentpool_profile(mc)
# set up misc direct mc properties
mc = self.set_up_mc_properties(mc)
# set up linux profile (for ssh access)
mc = self.set_up_linux_profile(mc)
# set up windows profile
mc = self.set_up_windows_profile(mc)
# set up service principal profile
mc = self.set_up_service_principal_profile(mc)
        # add role assignment for vnet subnet
self.process_add_role_assignment_for_vnet_subnet(mc)
# attach acr (add role assignment for acr)
self.process_attach_acr(mc)
# set up network profile
mc = self.set_up_network_profile(mc)
# set up addon profiles
mc = self.set_up_addon_profiles(mc)
# set up aad profile
mc = self.set_up_aad_profile(mc)
# set up oidc issuer profile
mc = self.set_up_oidc_issuer_profile(mc)
# set up api server access profile and fqdn subdomain
mc = self.set_up_api_server_access_profile(mc)
# set up identity
mc = self.set_up_identity(mc)
# set up identity profile
mc = self.set_up_identity_profile(mc)
# set up auto upgrade profile
mc = self.set_up_auto_upgrade_profile(mc)
# set up auto scaler profile
mc = self.set_up_auto_scaler_profile(mc)
# set up sku
mc = self.set_up_sku(mc)
# set up extended location
mc = self.set_up_extended_location(mc)
# set up node resource group
mc = self.set_up_node_resource_group(mc)
# set up defender
mc = self.set_up_defender(mc)
# set up workload identity profile
mc = self.set_up_workload_identity_profile(mc)
# set up storage profile
mc = self.set_up_storage_profile(mc)
        # set up azure keyvault kms
mc = self.set_up_azure_keyvault_kms(mc)
# set up image cleaner
mc = self.set_up_image_cleaner(mc)
# set up http proxy config
mc = self.set_up_http_proxy_config(mc)
# set up workload autoscaler profile
mc = self.set_up_workload_auto_scaler_profile(mc)
# set up app routing profile
mc = self.set_up_ingress_web_app_routing(mc)
        # set up k8s support plan
mc = self.set_up_k8s_support_plan(mc)
# set up azure monitor metrics profile
mc = self.set_up_azure_monitor_profile(mc)
# set up azure service mesh profile
mc = self.set_up_azure_service_mesh_profile(mc)
# set up for azure container storage
mc = self.set_up_azure_container_storage(mc)
# set up metrics profile
mc = self.set_up_metrics_profile(mc)
# set up node resource group profile
mc = self.set_up_node_resource_group_profile(mc)
# DO NOT MOVE: keep this at the bottom, restore defaults
if not bypass_restore_defaults:
mc = self._restore_defaults_in_mc(mc)
return mc | The overall controller used to construct the default ManagedCluster profile.
The completely constructed ManagedCluster object will later be passed as a parameter to the underlying SDK
(mgmt-containerservice) to send the actual request.
:return: the ManagedCluster object | construct_mc_profile_default | python | Azure/azure-cli | src/azure-cli/azure/cli/command_modules/acs/managed_cluster_decorator.py | https://github.com/Azure/azure-cli/blob/master/src/azure-cli/azure/cli/command_modules/acs/managed_cluster_decorator.py | MIT |
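
construct_mc_profile_default is a linear pipeline: every set_up_* step receives the partially built ManagedCluster, fills in one profile, and returns it, so the ordering comments above are the only sequencing mechanism. A stripped-down sketch of the same control flow, using plain dictionaries instead of the real decorator classes (illustrative only; the step names and values are placeholders):

def build_profile(initial, steps):
    obj = initial
    for step in steps:
        obj = step(obj)  # each step mirrors set_up_*(mc): take the object, mutate it, return it
    return obj


def set_up_sku_step(mc):
    mc["sku"] = {"name": "Base", "tier": "Standard"}
    return mc


def set_up_node_resource_group_step(mc):
    mc["node_resource_group"] = "MC_my-rg_my-aks_eastus"  # example value only
    return mc


mc = build_profile({}, [set_up_sku_step, set_up_node_resource_group_step])
print(mc)
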
def check_is_postprocessing_required(self, mc: ManagedCluster) -> bool:
"""Helper function to check if postprocessing is required after sending a PUT request to create the cluster.
:return: bool
"""
        # some addons require a post-cluster-creation role assignment
monitoring_addon_enabled = self.context.get_intermediate("monitoring_addon_enabled", default_value=False)
ingress_appgw_addon_enabled = self.context.get_intermediate("ingress_appgw_addon_enabled", default_value=False)
virtual_node_addon_enabled = self.context.get_intermediate("virtual_node_addon_enabled", default_value=False)
azuremonitormetrics_addon_enabled = self.context.get_intermediate(
"azuremonitormetrics_addon_enabled",
default_value=False
)
enable_managed_identity = self.context.get_enable_managed_identity()
attach_acr = self.context.get_attach_acr()
need_grant_vnet_permission_to_cluster_identity = self.context.get_intermediate(
"need_post_creation_vnet_permission_granting", default_value=False
)
enable_azure_container_storage = self.context.get_intermediate(
"enable_azure_container_storage",
default_value=False
)
# pylint: disable=too-many-boolean-expressions
if (
monitoring_addon_enabled or
ingress_appgw_addon_enabled or
virtual_node_addon_enabled or
azuremonitormetrics_addon_enabled or
(enable_managed_identity and attach_acr) or
need_grant_vnet_permission_to_cluster_identity or
enable_azure_container_storage
):
return True
return False | Helper function to check if postprocessing is required after sending a PUT request to create the cluster.
:return: bool | check_is_postprocessing_required | python | Azure/azure-cli | src/azure-cli/azure/cli/command_modules/acs/managed_cluster_decorator.py | https://github.com/Azure/azure-cli/blob/master/src/azure-cli/azure/cli/command_modules/acs/managed_cluster_decorator.py | MIT |
def immediate_processing_after_request(self, mc: ManagedCluster) -> None:
"""Immediate processing performed when the cluster has not finished creating after a PUT request to the cluster
has been sent.
:return: None
"""
# vnet
need_grant_vnet_permission_to_cluster_identity = self.context.get_intermediate(
"need_post_creation_vnet_permission_granting", default_value=False
)
if need_grant_vnet_permission_to_cluster_identity:
            # Grant vnet permission to the system-assigned identity RIGHT AFTER the cluster is created (PUT), which
            # reduces the latency before the role assignment takes effect
instant_cluster = self.client.get(self.context.get_resource_group_name(), self.context.get_name())
if not self.context.external_functions.add_role_assignment(
self.cmd,
"Network Contributor",
instant_cluster.identity.principal_id,
scope=self.context.get_vnet_subnet_id(),
is_service_principal=False,
):
logger.warning(
"Could not create a role assignment for subnet. Are you an Owner on this subscription?"
) | Immediate processing performed when the cluster has not finished creating after a PUT request to the cluster
has been sent.
:return: None | immediate_processing_after_request | python | Azure/azure-cli | src/azure-cli/azure/cli/command_modules/acs/managed_cluster_decorator.py | https://github.com/Azure/azure-cli/blob/master/src/azure-cli/azure/cli/command_modules/acs/managed_cluster_decorator.py | MIT |
def postprocessing_after_mc_created(self, cluster: ManagedCluster) -> None:
"""Postprocessing performed after the cluster is created.
:return: None
"""
# monitoring addon
monitoring_addon_enabled = self.context.get_intermediate("monitoring_addon_enabled", default_value=False)
if monitoring_addon_enabled:
enable_msi_auth_for_monitoring = self.context.get_enable_msi_auth_for_monitoring()
if not enable_msi_auth_for_monitoring:
# add cluster spn/msi Monitoring Metrics Publisher role assignment to publish metrics to MDM
                # MDM metrics are supported only in the Azure public cloud, so add the role assignment only in that cloud
cloud_name = self.cmd.cli_ctx.cloud.name
if cloud_name.lower() == "azurecloud":
from azure.mgmt.core.tools import resource_id
cluster_resource_id = resource_id(
subscription=self.context.get_subscription_id(),
resource_group=self.context.get_resource_group_name(),
namespace="Microsoft.ContainerService",
type="managedClusters",
name=self.context.get_name(),
)
self.context.external_functions.add_monitoring_role_assignment(
cluster, cluster_resource_id, self.cmd
)
elif self.context.raw_param.get("enable_addons") is not None:
# Create the DCR Association here
addon_consts = self.context.get_addon_consts()
CONST_MONITORING_ADDON_NAME = addon_consts.get("CONST_MONITORING_ADDON_NAME")
self.context.external_functions.ensure_container_insights_for_monitoring(
self.cmd,
cluster.addon_profiles[CONST_MONITORING_ADDON_NAME],
self.context.get_subscription_id(),
self.context.get_resource_group_name(),
self.context.get_name(),
self.context.get_location(),
remove_monitoring=False,
aad_route=self.context.get_enable_msi_auth_for_monitoring(),
create_dcr=False,
create_dcra=True,
enable_syslog=self.context.get_enable_syslog(),
data_collection_settings=self.context.get_data_collection_settings(),
is_private_cluster=self.context.get_enable_private_cluster(),
ampls_resource_id=self.context.get_ampls_resource_id(),
enable_high_log_scale_mode=self.context.get_enable_high_log_scale_mode(),
)
# ingress appgw addon
ingress_appgw_addon_enabled = self.context.get_intermediate("ingress_appgw_addon_enabled", default_value=False)
if ingress_appgw_addon_enabled:
self.context.external_functions.add_ingress_appgw_addon_role_assignment(cluster, self.cmd)
# virtual node addon
virtual_node_addon_enabled = self.context.get_intermediate("virtual_node_addon_enabled", default_value=False)
if virtual_node_addon_enabled:
self.context.external_functions.add_virtual_node_role_assignment(
self.cmd, cluster, self.context.get_vnet_subnet_id()
)
# attach acr
enable_managed_identity = self.context.get_enable_managed_identity()
attach_acr = self.context.get_attach_acr()
if enable_managed_identity and attach_acr:
# Attach ACR to cluster enabled managed identity
if cluster.identity_profile is None or cluster.identity_profile["kubeletidentity"] is None:
logger.warning(
"Your cluster is successfully created, but we failed to attach "
"acr to it, you can manually grant permission to the identity "
"named <ClUSTER_NAME>-agentpool in MC_ resource group to give "
"it permission to pull from ACR."
)
else:
kubelet_identity_object_id = cluster.identity_profile["kubeletidentity"].object_id
self.context.external_functions.ensure_aks_acr(
self.cmd,
assignee=kubelet_identity_object_id,
acr_name_or_id=attach_acr,
subscription_id=self.context.get_subscription_id(),
is_service_principal=False,
)
# azure monitor metrics addon (v2)
azuremonitormetrics_addon_enabled = self.context.get_intermediate(
"azuremonitormetrics_addon_enabled",
default_value=False
)
if azuremonitormetrics_addon_enabled:
# Create the DC* objects, AMW, recording rules and grafana link here
self.context.external_functions.ensure_azure_monitor_profile_prerequisites(
self.cmd,
self.context.get_subscription_id(),
self.context.get_resource_group_name(),
self.context.get_name(),
self.context.get_location(),
self.__raw_parameters,
self.context.get_disable_azure_monitor_metrics(),
True
)
# enable azure container storage
enable_azure_container_storage = self.context.get_intermediate("enable_azure_container_storage")
if enable_azure_container_storage:
if cluster.identity_profile is None or cluster.identity_profile["kubeletidentity"] is None:
logger.warning(
"Unexpected error getting kubelet's identity for the cluster. "
"Unable to perform the azure container storage operation."
)
return
# Get the node_resource_group from the cluster object since
# `mc` in `context` still doesn't have the updated node_resource_group.
if cluster.node_resource_group is None:
logger.warning(
"Unexpected error getting cluster's node resource group. "
"Unable to perform the azure container storage operation."
)
return
pool_name = self.context.raw_param.get("storage_pool_name")
pool_type = self.context.raw_param.get("enable_azure_container_storage")
pool_option = self.context.raw_param.get("storage_pool_option")
pool_sku = self.context.raw_param.get("storage_pool_sku")
pool_size = self.context.raw_param.get("storage_pool_size")
ephemeral_disk_volume_type = self.context.raw_param.get("ephemeral_disk_volume_type")
ephemeral_disk_nvme_perf_tier = self.context.raw_param.get("ephemeral_disk_nvme_perf_tier")
existing_ephemeral_disk_volume_type = self.context.get_intermediate("existing_ephemeral_disk_volume_type")
existing_ephemeral_nvme_perf_tier = self.context.get_intermediate("current_ephemeral_nvme_perf_tier")
kubelet_identity_object_id = cluster.identity_profile["kubeletidentity"].object_id
node_resource_group = cluster.node_resource_group
agent_pool_vm_sizes = []
if len(cluster.agent_pool_profiles) > 0:
# Cluster creation has only 1 agentpool
agentpool_profile = cluster.agent_pool_profiles[0]
agent_pool_vm_sizes.append(agentpool_profile.vm_size)
self.context.external_functions.perform_enable_azure_container_storage(
self.cmd,
self.context.get_subscription_id(),
self.context.get_resource_group_name(),
self.context.get_name(),
node_resource_group,
kubelet_identity_object_id,
pool_name,
pool_type,
pool_size,
pool_sku,
pool_option,
agent_pool_vm_sizes,
ephemeral_disk_volume_type,
ephemeral_disk_nvme_perf_tier,
True,
existing_ephemeral_disk_volume_type,
existing_ephemeral_nvme_perf_tier,
) | Postprocessing performed after the cluster is created.
:return: None | postprocessing_after_mc_created | python | Azure/azure-cli | src/azure-cli/azure/cli/command_modules/acs/managed_cluster_decorator.py | https://github.com/Azure/azure-cli/blob/master/src/azure-cli/azure/cli/command_modules/acs/managed_cluster_decorator.py | MIT |
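
The monitoring branch above builds the cluster's ARM resource ID with azure.mgmt.core.tools.resource_id before granting the Monitoring Metrics Publisher role. For readers unfamiliar with that helper, the resulting string has the standard ARM shape; the hand-rolled equivalent below is illustrative only and the example IDs are placeholders.

def cluster_resource_id(subscription, resource_group, name):
    # same shape that azure.mgmt.core.tools.resource_id produces for a managed cluster
    return (
        f"/subscriptions/{subscription}"
        f"/resourceGroups/{resource_group}"
        f"/providers/Microsoft.ContainerService/managedClusters/{name}"
    )


print(cluster_resource_id("00000000-0000-0000-0000-000000000000", "my-rg", "my-aks"))
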
def create_mc(self, mc: ManagedCluster) -> ManagedCluster:
"""Send request to create a real managed cluster.
:return: the ManagedCluster object
"""
self._ensure_mc(mc)
# Due to SPN replication latency, we do a few retries here
max_retry = 30
error_msg = ""
for _ in range(0, max_retry):
try:
cluster = self.put_mc(mc)
return cluster
            # CloudError was raised with the track 1 SDK; since the adoption of the track 2 SDK,
            # HttpResponseError is raised instead
except HttpResponseError as ex:
error_msg = str(ex)
if "not found in Active Directory tenant" in ex.message:
time.sleep(3)
else:
raise map_azure_error_to_cli_error(ex)
raise AzCLIError("Maximum number of retries exceeded. " + error_msg) | Send request to create a real managed cluster.
:return: the ManagedCluster object | create_mc | python | Azure/azure-cli | src/azure-cli/azure/cli/command_modules/acs/managed_cluster_decorator.py | https://github.com/Azure/azure-cli/blob/master/src/azure-cli/azure/cli/command_modules/acs/managed_cluster_decorator.py | MIT |
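
create_mc retries the PUT for a bounded number of attempts, sleeping between tries, but only when the failure message looks like AAD/SPN propagation delay; any other HttpResponseError is mapped and re-raised immediately. The generic sketch below reproduces that shape without the Azure SDK types (RuntimeError stands in for HttpResponseError; it is not the real client code):

import time


def retry_on_transient(operation, transient_marker, max_retry=30, delay=3):
    last_error = ""
    for _ in range(max_retry):
        try:
            return operation()
        except RuntimeError as ex:  # stand-in for HttpResponseError
            last_error = str(ex)
            if transient_marker in last_error:
                time.sleep(delay)  # wait for replication/propagation, then try again
            else:
                raise  # non-transient errors are surfaced immediately
    raise RuntimeError("Maximum number of retries exceeded. " + last_error)
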
def __init__(
self, cmd: AzCliCommand, client: ContainerServiceClient, raw_parameters: Dict, resource_type: ResourceType
):
"""Internal controller of aks_update.
Break down the all-in-one aks_update function into several relatively independent functions (some of them have
a certain order dependency) that only focus on a specific profile or process a specific piece of logic.
In addition, an overall control function is provided. By calling the aforementioned independent functions one
by one, a complete ManagedCluster object is gradually updated and finally requests are sent to update an
existing cluster.
"""
super().__init__(cmd, client)
self.__raw_parameters = raw_parameters
self.resource_type = resource_type
self.init_models()
self.init_context()
self.agentpool_decorator_mode = AgentPoolDecoratorMode.MANAGED_CLUSTER
self.init_agentpool_decorator_context() | Internal controller of aks_update.
Break down the all-in-one aks_update function into several relatively independent functions (some of them have
a certain order dependency) that only focus on a specific profile or process a specific piece of logic.
In addition, an overall control function is provided. By calling the aforementioned independent functions one
by one, a complete ManagedCluster object is gradually updated and finally requests are sent to update an
existing cluster. | __init__ | python | Azure/azure-cli | src/azure-cli/azure/cli/command_modules/acs/managed_cluster_decorator.py | https://github.com/Azure/azure-cli/blob/master/src/azure-cli/azure/cli/command_modules/acs/managed_cluster_decorator.py | MIT |
def init_models(self) -> None:
"""Initialize an AKSManagedClusterModels object to store the models.
:return: None
"""
self.models = AKSManagedClusterModels(self.cmd, self.resource_type) | Initialize an AKSManagedClusterModels object to store the models.
:return: None | init_models | python | Azure/azure-cli | src/azure-cli/azure/cli/command_modules/acs/managed_cluster_decorator.py | https://github.com/Azure/azure-cli/blob/master/src/azure-cli/azure/cli/command_modules/acs/managed_cluster_decorator.py | MIT |
def init_context(self) -> None:
"""Initialize an AKSManagedClusterContext object to store the context in the process of assemble the
ManagedCluster object.
:return: None
"""
self.context = AKSManagedClusterContext(
self.cmd, AKSManagedClusterParamDict(self.__raw_parameters), self.models, DecoratorMode.UPDATE
        ) | Initialize an AKSManagedClusterContext object to store the context in the process of assembling the
ManagedCluster object.
:return: None | init_context | python | Azure/azure-cli | src/azure-cli/azure/cli/command_modules/acs/managed_cluster_decorator.py | https://github.com/Azure/azure-cli/blob/master/src/azure-cli/azure/cli/command_modules/acs/managed_cluster_decorator.py | MIT |
def init_agentpool_decorator_context(self) -> None:
"""Initialize an AKSAgentPoolAddDecorator object to assemble the AgentPool profile.
:return: None
"""
self.agentpool_decorator = AKSAgentPoolUpdateDecorator(
self.cmd, self.client, self.__raw_parameters, self.resource_type, self.agentpool_decorator_mode
)
self.agentpool_context = self.agentpool_decorator.context
        self.context.attach_agentpool_context(self.agentpool_context) | Initialize an AKSAgentPoolUpdateDecorator object to assemble the AgentPool profile.
:return: None | init_agentpool_decorator_context | python | Azure/azure-cli | src/azure-cli/azure/cli/command_modules/acs/managed_cluster_decorator.py | https://github.com/Azure/azure-cli/blob/master/src/azure-cli/azure/cli/command_modules/acs/managed_cluster_decorator.py | MIT |
def check_raw_parameters(self):
"""Helper function to check whether any parameters are set.
        If the values of all the parameters are the default values, the command execution will be terminated early
        and a RequiredArgumentMissingError will be raised. Neither the request to fetch nor the request to update the
        ManagedCluster object will be sent.
:return: None
"""
# exclude some irrelevant or mandatory parameters
excluded_keys = ("cmd", "client", "resource_group_name", "name")
# check whether the remaining parameters are set
# the default value None or False (and other empty values, like empty string) will be considered as not set
is_changed = any(v for k, v in self.context.raw_param.items() if k not in excluded_keys)
# special cases
# some parameters support the use of empty string or dictionary to update/remove previously set values
is_default = (
self.context.get_cluster_autoscaler_profile() is None and
self.context.get_api_server_authorized_ip_ranges() is None and
self.context.get_nodepool_labels() is None and
self.context.get_nodepool_taints() is None and
self.context.get_load_balancer_managed_outbound_ip_count() is None and
self.context.get_load_balancer_managed_outbound_ipv6_count() is None and
self.context.get_load_balancer_idle_timeout() is None and
self.context.get_load_balancer_outbound_ports() is None and
self.context.get_nat_gateway_managed_outbound_ip_count() is None and
self.context.get_nat_gateway_idle_timeout() is None
)
if not is_changed and is_default:
            reconcile_prompt = 'No argument was specified to update. Would you like to reconcile to the current settings?'
            if not prompt_y_n(reconcile_prompt, default="n"):
                # build the error message listing the options that could have been specified
option_names = [
'"{}"'.format(format_parameter_name_to_option_name(x))
for x in self.context.raw_param.keys()
if x not in excluded_keys
]
error_msg = "Please specify one or more of {}.".format(
" or ".join(option_names)
)
raise RequiredArgumentMissingError(error_msg) | Helper function to check whether any parameters are set.
If the values of all the parameters are the default values, the command execution will be terminated early
and a RequiredArgumentMissingError will be raised. Neither the request to fetch nor the request to update the
ManagedCluster object will be sent.
:return: None | check_raw_parameters | python | Azure/azure-cli | src/azure-cli/azure/cli/command_modules/acs/managed_cluster_decorator.py | https://github.com/Azure/azure-cli/blob/master/src/azure-cli/azure/cli/command_modules/acs/managed_cluster_decorator.py | MIT |
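
The error message above is assembled from raw parameter names via format_parameter_name_to_option_name, which is defined elsewhere in the module tree. Assuming that helper simply maps a Python parameter name to its CLI flag (an assumption, not a quote of the real implementation), the conversion looks like this:

def format_parameter_name_to_option_name(parameter_name: str) -> str:
    # assumed behavior: underscores become hyphens and the "--" flag prefix is added
    return "--" + parameter_name.replace("_", "-")


print(format_parameter_name_to_option_name("load_balancer_outbound_ports"))  # --load-balancer-outbound-ports
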
def _ensure_mc(self, mc: ManagedCluster) -> None:
"""Internal function to ensure that the incoming `mc` object is valid and the same as the attached `mc` object
in the context.
        If the incoming `mc` is not valid or is inconsistent with the `mc` in the context, raise a CLIInternalError.
:return: None
"""
if not isinstance(mc, self.models.ManagedCluster):
raise CLIInternalError(
"Unexpected mc object with type '{}'.".format(type(mc))
)
if self.context.mc != mc:
raise CLIInternalError(
"Inconsistent state detected. The incoming `mc` is not the same as the `mc` in the context."
) | Internal function to ensure that the incoming `mc` object is valid and the same as the attached `mc` object
in the context.
If the incoming `mc` is not valid or is inconsistent with the `mc` in the context, raise a CLIInternalError.
:return: None | _ensure_mc | python | Azure/azure-cli | src/azure-cli/azure/cli/command_modules/acs/managed_cluster_decorator.py | https://github.com/Azure/azure-cli/blob/master/src/azure-cli/azure/cli/command_modules/acs/managed_cluster_decorator.py | MIT |
def fetch_mc(self) -> ManagedCluster:
"""Get the ManagedCluster object currently in use and attach it to internal context.
        Internally send a request using the ContainerServiceClient with the cluster name and resource group name.
:return: the ManagedCluster object
"""
mc = self.client.get(self.context.get_resource_group_name(), self.context.get_name())
# attach mc to AKSContext
self.context.attach_mc(mc)
return mc | Get the ManagedCluster object currently in use and attach it to internal context.
Internally send a request using the ContainerServiceClient with the cluster name and resource group name.
:return: the ManagedCluster object | fetch_mc | python | Azure/azure-cli | src/azure-cli/azure/cli/command_modules/acs/managed_cluster_decorator.py | https://github.com/Azure/azure-cli/blob/master/src/azure-cli/azure/cli/command_modules/acs/managed_cluster_decorator.py | MIT |
def update_agentpool_profile(self, mc: ManagedCluster) -> ManagedCluster:
"""Update agentpool profile for the ManagedCluster object.
:return: the ManagedCluster object
"""
self._ensure_mc(mc)
if not mc.agent_pool_profiles:
raise UnknownError(
"Encounter an unexpected error while getting agent pool profiles from the cluster in the process of "
"updating agentpool profile."
)
agentpool_profile = self.agentpool_decorator.update_agentpool_profile_default(mc.agent_pool_profiles)
mc.agent_pool_profiles[0] = agentpool_profile
# update nodepool labels for all nodepools
nodepool_labels = self.context.get_nodepool_labels()
if nodepool_labels is not None:
for agent_profile in mc.agent_pool_profiles:
agent_profile.node_labels = nodepool_labels
# update nodepool taints for all nodepools
nodepool_taints = self.context.get_nodepool_taints()
if nodepool_taints is not None:
for agent_profile in mc.agent_pool_profiles:
agent_profile.node_taints = nodepool_taints
return mc | Update agentpool profile for the ManagedCluster object.
:return: the ManagedCluster object | update_agentpool_profile | python | Azure/azure-cli | src/azure-cli/azure/cli/command_modules/acs/managed_cluster_decorator.py | https://github.com/Azure/azure-cli/blob/master/src/azure-cli/azure/cli/command_modules/acs/managed_cluster_decorator.py | MIT |
def update_auto_scaler_profile(self, mc):
"""Update autoscaler profile for the ManagedCluster object.
:return: the ManagedCluster object
"""
self._ensure_mc(mc)
cluster_autoscaler_profile = self.context.get_cluster_autoscaler_profile()
if cluster_autoscaler_profile is not None:
# update profile (may clear profile with empty dictionary)
mc.auto_scaler_profile = cluster_autoscaler_profile
return mc | Update autoscaler profile for the ManagedCluster object.
:return: the ManagedCluster object | update_auto_scaler_profile | python | Azure/azure-cli | src/azure-cli/azure/cli/command_modules/acs/managed_cluster_decorator.py | https://github.com/Azure/azure-cli/blob/master/src/azure-cli/azure/cli/command_modules/acs/managed_cluster_decorator.py | MIT |
def update_tags(self, mc: ManagedCluster) -> ManagedCluster:
"""Update tags for the ManagedCluster object.
:return: the ManagedCluster object
"""
self._ensure_mc(mc)
tags = self.context.get_tags()
if tags is not None:
mc.tags = tags
return mc | Update tags for the ManagedCluster object.
:return: the ManagedCluster object | update_tags | python | Azure/azure-cli | src/azure-cli/azure/cli/command_modules/acs/managed_cluster_decorator.py | https://github.com/Azure/azure-cli/blob/master/src/azure-cli/azure/cli/command_modules/acs/managed_cluster_decorator.py | MIT |
def update_upgrade_settings(self, mc: ManagedCluster) -> ManagedCluster:
"""Update upgrade settings for the ManagedCluster object.
:return: the ManagedCluster object
"""
self._ensure_mc(mc)
existing_until = None
        if (
            mc.upgrade_settings is not None and
            mc.upgrade_settings.override_settings is not None and
            mc.upgrade_settings.override_settings.until is not None
        ):
existing_until = mc.upgrade_settings.override_settings.until
force_upgrade = self.context.get_force_upgrade()
override_until = self.context.get_upgrade_override_until()
if force_upgrade is not None or override_until is not None:
if mc.upgrade_settings is None:
mc.upgrade_settings = self.models.ClusterUpgradeSettings()
if mc.upgrade_settings.override_settings is None:
mc.upgrade_settings.override_settings = self.models.UpgradeOverrideSettings()
# sets force_upgrade
if force_upgrade is not None:
mc.upgrade_settings.override_settings.force_upgrade = force_upgrade
# sets until
if override_until is not None:
try:
mc.upgrade_settings.override_settings.until = parse(override_until)
except Exception: # pylint: disable=broad-except
raise InvalidArgumentValueError(
f"{override_until} is not a valid datatime format."
)
elif force_upgrade:
default_extended_until = datetime.datetime.utcnow() + datetime.timedelta(days=3)
if existing_until is None or existing_until.timestamp() < default_extended_until.timestamp():
mc.upgrade_settings.override_settings.until = default_extended_until
return mc | Update upgrade settings for the ManagedCluster object.
:return: the ManagedCluster object | update_upgrade_settings | python | Azure/azure-cli | src/azure-cli/azure/cli/command_modules/acs/managed_cluster_decorator.py | https://github.com/Azure/azure-cli/blob/master/src/azure-cli/azure/cli/command_modules/acs/managed_cluster_decorator.py | MIT |
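
Two details in update_upgrade_settings are easy to miss: --upgrade-override-until is parsed with the permissive parse function (presumably python-dateutil's parser, which the sketch below assumes), and passing --force-upgrade without an explicit until extends the override window to roughly three days from now unless a later value is already set. A standalone sketch of that resolution logic, not the real decorator method:

import datetime

from dateutil.parser import parse  # assumed to be the parse used at module level


def resolve_until(override_until, force_upgrade, existing_until=None):
    if override_until is not None:
        return parse(override_until)  # raises on unparseable input
    if force_upgrade:
        default_extended = datetime.datetime.utcnow() + datetime.timedelta(days=3)
        if existing_until is None or existing_until.timestamp() < default_extended.timestamp():
            return default_extended
    return existing_until


print(resolve_until("2024-12-31T12:00:00Z", force_upgrade=False))
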
def process_attach_detach_acr(self, mc: ManagedCluster) -> None:
"""Attach or detach acr for the cluster.
The function "ensure_aks_acr" will be called to create or delete an AcrPull role assignment for the acr, which
internally used AuthorizationManagementClient to send the request.
:return: None
"""
self._ensure_mc(mc)
subscription_id = self.context.get_subscription_id()
assignee, is_service_principal = self.context.get_assignee_from_identity_or_sp_profile()
attach_acr = self.context.get_attach_acr()
detach_acr = self.context.get_detach_acr()
if attach_acr:
self.context.external_functions.ensure_aks_acr(
self.cmd,
assignee=assignee,
acr_name_or_id=attach_acr,
subscription_id=subscription_id,
is_service_principal=is_service_principal,
)
if detach_acr:
self.context.external_functions.ensure_aks_acr(
self.cmd,
assignee=assignee,
acr_name_or_id=detach_acr,
subscription_id=subscription_id,
detach=True,
is_service_principal=is_service_principal,
) | Attach or detach acr for the cluster.
The function "ensure_aks_acr" will be called to create or delete an AcrPull role assignment for the acr, which
internally used AuthorizationManagementClient to send the request.
:return: None | process_attach_detach_acr | python | Azure/azure-cli | src/azure-cli/azure/cli/command_modules/acs/managed_cluster_decorator.py | https://github.com/Azure/azure-cli/blob/master/src/azure-cli/azure/cli/command_modules/acs/managed_cluster_decorator.py | MIT |
def update_azure_service_mesh_profile(self, mc: ManagedCluster) -> ManagedCluster:
"""Update azure service mesh profile for the ManagedCluster object.
"""
self._ensure_mc(mc)
mc.service_mesh_profile = self.context.update_azure_service_mesh_profile()
return mc | Update azure service mesh profile for the ManagedCluster object. | update_azure_service_mesh_profile | python | Azure/azure-cli | src/azure-cli/azure/cli/command_modules/acs/managed_cluster_decorator.py | https://github.com/Azure/azure-cli/blob/master/src/azure-cli/azure/cli/command_modules/acs/managed_cluster_decorator.py | MIT |
def update_sku(self, mc: ManagedCluster) -> ManagedCluster:
"""Update sku (uptime sla) for the ManagedCluster object.
:return: the ManagedCluster object
"""
self._ensure_mc(mc)
        # Premium without LTS is allowed (LTS without Premium is not)
if self.context.get_tier() == CONST_MANAGED_CLUSTER_SKU_TIER_PREMIUM:
mc.sku = self.models.ManagedClusterSKU(
name="Base",
tier="Premium"
)
if self.context.get_tier() == CONST_MANAGED_CLUSTER_SKU_TIER_STANDARD:
mc.sku = self.models.ManagedClusterSKU(
name="Base",
tier="Standard"
)
if self.context.get_tier() == CONST_MANAGED_CLUSTER_SKU_TIER_FREE:
mc.sku = self.models.ManagedClusterSKU(
name="Base",
tier="Free"
)
return mc | Update sku (uptime sla) for the ManagedCluster object.
:return: the ManagedCluster object | update_sku | python | Azure/azure-cli | src/azure-cli/azure/cli/command_modules/acs/managed_cluster_decorator.py | https://github.com/Azure/azure-cli/blob/master/src/azure-cli/azure/cli/command_modules/acs/managed_cluster_decorator.py | MIT |
def update_outbound_type_in_network_profile(self, mc: ManagedCluster) -> ManagedCluster:
"""Update outbound type of network profile for the ManagedCluster object.
:return: the ManagedCluster object
"""
self._ensure_mc(mc)
        outbound_type = self.context.get_outbound_type()
        if outbound_type:
            mc.network_profile.outbound_type = outbound_type
return mc | Update outbound type of network profile for the ManagedCluster object.
:return: the ManagedCluster object | update_outbound_type_in_network_profile | python | Azure/azure-cli | src/azure-cli/azure/cli/command_modules/acs/managed_cluster_decorator.py | https://github.com/Azure/azure-cli/blob/master/src/azure-cli/azure/cli/command_modules/acs/managed_cluster_decorator.py | MIT |
def update_load_balancer_profile(self, mc: ManagedCluster) -> ManagedCluster:
"""Update load balancer profile for the ManagedCluster object.
:return: the ManagedCluster object
"""
self._ensure_mc(mc)
if not mc.network_profile:
raise UnknownError(
"Encounter an unexpected error while getting network profile from the cluster in the process of "
"updating its load balancer profile."
)
outbound_type = self.context.get_outbound_type()
if outbound_type and outbound_type != CONST_OUTBOUND_TYPE_LOAD_BALANCER:
mc.network_profile.load_balancer_profile = None
else:
            # The internal function "_update_load_balancer_profile" checks whether each provided parameter has been
            # assigned; if so, the corresponding field of the profile is modified, otherwise it remains unchanged.
mc.network_profile.load_balancer_profile = _update_load_balancer_profile(
managed_outbound_ip_count=self.context.get_load_balancer_managed_outbound_ip_count(),
managed_outbound_ipv6_count=self.context.get_load_balancer_managed_outbound_ipv6_count(),
outbound_ips=self.context.get_load_balancer_outbound_ips(),
outbound_ip_prefixes=self.context.get_load_balancer_outbound_ip_prefixes(),
outbound_ports=self.context.get_load_balancer_outbound_ports(),
idle_timeout=self.context.get_load_balancer_idle_timeout(),
backend_pool_type=self.context.get_load_balancer_backend_pool_type(),
profile=mc.network_profile.load_balancer_profile,
models=self.models.load_balancer_models)
return mc | Update load balancer profile for the ManagedCluster object.
:return: the ManagedCluster object | update_load_balancer_profile | python | Azure/azure-cli | src/azure-cli/azure/cli/command_modules/acs/managed_cluster_decorator.py | https://github.com/Azure/azure-cli/blob/master/src/azure-cli/azure/cli/command_modules/acs/managed_cluster_decorator.py | MIT |
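
The gating at the top mirrors update_nat_gateway_profile below: only the outbound profile that matches the requested outbound type is kept, so switching away from the load balancer clears its profile instead of sending stale settings to the service. Reduced to a dictionary-based sketch (the constant value is an assumption, not quoted from the consts module):

CONST_OUTBOUND_TYPE_LOAD_BALANCER = "loadBalancer"  # assumed constant value


def reconcile_lb_profile(network_profile, outbound_type):
    if outbound_type and outbound_type != CONST_OUTBOUND_TYPE_LOAD_BALANCER:
        network_profile["load_balancer_profile"] = None  # moving off SLB: drop its config
    return network_profile


print(reconcile_lb_profile({"load_balancer_profile": {"outbound_ports": 100}}, "managedNATGateway"))
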
def update_nat_gateway_profile(self, mc: ManagedCluster) -> ManagedCluster:
"""Update nat gateway profile for the ManagedCluster object.
:return: the ManagedCluster object
"""
self._ensure_mc(mc)
if not mc.network_profile:
raise UnknownError(
"Unexpectedly get an empty network profile in the process of updating nat gateway profile."
)
outbound_type = self.context.get_outbound_type()
if outbound_type and outbound_type != CONST_OUTBOUND_TYPE_MANAGED_NAT_GATEWAY:
mc.network_profile.nat_gateway_profile = None
else:
mc.network_profile.nat_gateway_profile = _update_nat_gateway_profile(
managed_outbound_ip_count=self.context.get_nat_gateway_managed_outbound_ip_count(),
idle_timeout=self.context.get_nat_gateway_idle_timeout(),
profile=mc.network_profile.nat_gateway_profile,
models=self.models.nat_gateway_models,
)
return mc | Update nat gateway profile for the ManagedCluster object.
:return: the ManagedCluster object | update_nat_gateway_profile | python | Azure/azure-cli | src/azure-cli/azure/cli/command_modules/acs/managed_cluster_decorator.py | https://github.com/Azure/azure-cli/blob/master/src/azure-cli/azure/cli/command_modules/acs/managed_cluster_decorator.py | MIT |
def update_disable_local_accounts(self, mc: ManagedCluster) -> ManagedCluster:
"""Update disable/enable local accounts for the ManagedCluster object.
:return: the ManagedCluster object
"""
self._ensure_mc(mc)
if self.context.get_disable_local_accounts():
mc.disable_local_accounts = True
if self.context.get_enable_local_accounts():
mc.disable_local_accounts = False
return mc | Update disable/enable local accounts for the ManagedCluster object.
:return: the ManagedCluster object | update_disable_local_accounts | python | Azure/azure-cli | src/azure-cli/azure/cli/command_modules/acs/managed_cluster_decorator.py | https://github.com/Azure/azure-cli/blob/master/src/azure-cli/azure/cli/command_modules/acs/managed_cluster_decorator.py | MIT |
def update_api_server_access_profile(self, mc: ManagedCluster) -> ManagedCluster:
"""Update api server access profile and fqdn subdomain for the ManagedCluster object.
:return: the ManagedCluster object
"""
self._ensure_mc(mc)
if mc.api_server_access_profile is None:
profile_holder = self.models.ManagedClusterAPIServerAccessProfile()
else:
profile_holder = mc.api_server_access_profile
api_server_authorized_ip_ranges = self.context.get_api_server_authorized_ip_ranges()
disable_public_fqdn = self.context.get_disable_public_fqdn()
enable_public_fqdn = self.context.get_enable_public_fqdn()
private_dns_zone = self.context.get_private_dns_zone()
if api_server_authorized_ip_ranges is not None:
# empty string is valid as it disables ip whitelisting
profile_holder.authorized_ip_ranges = api_server_authorized_ip_ranges
if disable_public_fqdn:
profile_holder.enable_private_cluster_public_fqdn = False
if enable_public_fqdn:
profile_holder.enable_private_cluster_public_fqdn = True
if private_dns_zone is not None:
profile_holder.private_dns_zone = private_dns_zone
# keep api_server_access_profile empty if none of its properties are updated
if (
profile_holder != mc.api_server_access_profile and
profile_holder == self.models.ManagedClusterAPIServerAccessProfile()
):
profile_holder = None
mc.api_server_access_profile = profile_holder
return mc | Update api server access profile and fqdn subdomain for the ManagedCluster object.
:return: the ManagedCluster object | update_api_server_access_profile | python | Azure/azure-cli | src/azure-cli/azure/cli/command_modules/acs/managed_cluster_decorator.py | https://github.com/Azure/azure-cli/blob/master/src/azure-cli/azure/cli/command_modules/acs/managed_cluster_decorator.py | MIT |
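
The closing comparison is the interesting part: a freshly created profile holder that is still equal to a default-constructed instance (and is not the object already attached to the cluster) gets discarded, so an update command that touches none of these options leaves api_server_access_profile unset. The same idea with a plain dataclass, illustrative only and not the SDK model:

from dataclasses import dataclass
from typing import List, Optional


@dataclass
class ApiServerAccessProfile:
    authorized_ip_ranges: Optional[List[str]] = None
    enable_private_cluster_public_fqdn: Optional[bool] = None
    private_dns_zone: Optional[str] = None


def finalize(existing, updated):
    # discard the holder when it is brand new and nothing was actually set on it
    if updated != existing and updated == ApiServerAccessProfile():
        return None
    return updated


print(finalize(None, ApiServerAccessProfile()))  # None: no API server options were supplied
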
def update_windows_profile(self, mc: ManagedCluster) -> ManagedCluster:
"""Update windows profile for the ManagedCluster object.
:return: the ManagedCluster object
"""
self._ensure_mc(mc)
enable_ahub = self.context.get_enable_ahub()
disable_ahub = self.context.get_disable_ahub()
windows_admin_password = self.context.get_windows_admin_password()
enable_windows_gmsa = self.context.get_enable_windows_gmsa()
disable_windows_gmsa = self.context.get_disable_windows_gmsa()
        if (
            any([enable_ahub, disable_ahub, windows_admin_password, enable_windows_gmsa, disable_windows_gmsa]) and
            not mc.windows_profile
        ):
            # these options can only be applied to an existing windows profile
            raise UnknownError(
                "Encountered an unexpected error while getting the windows profile from the cluster in the process "
                "of updating it."
            )
if enable_ahub:
mc.windows_profile.license_type = 'Windows_Server'
if disable_ahub:
mc.windows_profile.license_type = 'None'
if windows_admin_password:
mc.windows_profile.admin_password = windows_admin_password
if enable_windows_gmsa:
gmsa_dns_server, gmsa_root_domain_name = self.context.get_gmsa_dns_server_and_root_domain_name()
mc.windows_profile.gmsa_profile = self.models.WindowsGmsaProfile(
enabled=True,
dns_server=gmsa_dns_server,
root_domain_name=gmsa_root_domain_name,
)
if disable_windows_gmsa:
mc.windows_profile.gmsa_profile = self.models.WindowsGmsaProfile(
enabled=False,
)
return mc | Update windows profile for the ManagedCluster object.
:return: the ManagedCluster object | update_windows_profile | python | Azure/azure-cli | src/azure-cli/azure/cli/command_modules/acs/managed_cluster_decorator.py | https://github.com/Azure/azure-cli/blob/master/src/azure-cli/azure/cli/command_modules/acs/managed_cluster_decorator.py | MIT |
def update_aad_profile(self, mc: ManagedCluster) -> ManagedCluster:
"""Update aad profile for the ManagedCluster object.
:return: the ManagedCluster object
"""
self._ensure_mc(mc)
if self.context.get_enable_aad():
mc.aad_profile = self.models.ManagedClusterAADProfile(
managed=True
)
aad_tenant_id = self.context.get_aad_tenant_id()
aad_admin_group_object_ids = self.context.get_aad_admin_group_object_ids()
enable_azure_rbac = self.context.get_enable_azure_rbac()
disable_azure_rbac = self.context.get_disable_azure_rbac()
if aad_tenant_id is not None:
mc.aad_profile.tenant_id = aad_tenant_id
if aad_admin_group_object_ids is not None:
# ids -> i_ds due to track 2 naming issue
mc.aad_profile.admin_group_object_i_ds = aad_admin_group_object_ids
if enable_azure_rbac:
mc.aad_profile.enable_azure_rbac = True
if disable_azure_rbac:
mc.aad_profile.enable_azure_rbac = False
return mc | Update aad profile for the ManagedCluster object.
:return: the ManagedCluster object | update_aad_profile | python | Azure/azure-cli | src/azure-cli/azure/cli/command_modules/acs/managed_cluster_decorator.py | https://github.com/Azure/azure-cli/blob/master/src/azure-cli/azure/cli/command_modules/acs/managed_cluster_decorator.py | MIT |
def update_oidc_issuer_profile(self, mc: ManagedCluster) -> ManagedCluster:
"""Update OIDC issuer profile for the ManagedCluster object.
:return: the ManagedCluster object
"""
self._ensure_mc(mc)
oidc_issuer_profile = self.context.get_oidc_issuer_profile()
if oidc_issuer_profile is not None:
mc.oidc_issuer_profile = oidc_issuer_profile
return mc | Update OIDC issuer profile for the ManagedCluster object.
:return: the ManagedCluster object | update_oidc_issuer_profile | python | Azure/azure-cli | src/azure-cli/azure/cli/command_modules/acs/managed_cluster_decorator.py | https://github.com/Azure/azure-cli/blob/master/src/azure-cli/azure/cli/command_modules/acs/managed_cluster_decorator.py | MIT |
def update_auto_upgrade_profile(self, mc: ManagedCluster) -> ManagedCluster:
"""Update auto upgrade profile for the ManagedCluster object.
:return: the ManagedCluster object
"""
self._ensure_mc(mc)
auto_upgrade_channel = self.context.get_auto_upgrade_channel()
if auto_upgrade_channel is not None:
if mc.auto_upgrade_profile is None:
mc.auto_upgrade_profile = self.models.ManagedClusterAutoUpgradeProfile()
mc.auto_upgrade_profile.upgrade_channel = auto_upgrade_channel
node_os_upgrade_channel = self.context.get_node_os_upgrade_channel()
if node_os_upgrade_channel is not None:
if mc.auto_upgrade_profile is None:
mc.auto_upgrade_profile = self.models.ManagedClusterAutoUpgradeProfile()
mc.auto_upgrade_profile.node_os_upgrade_channel = node_os_upgrade_channel
return mc | Update auto upgrade profile for the ManagedCluster object.
:return: the ManagedCluster object | update_auto_upgrade_profile | python | Azure/azure-cli | src/azure-cli/azure/cli/command_modules/acs/managed_cluster_decorator.py | https://github.com/Azure/azure-cli/blob/master/src/azure-cli/azure/cli/command_modules/acs/managed_cluster_decorator.py | MIT |
def update_network_profile(self, mc: ManagedCluster) -> ManagedCluster:
"""Update ip families settings for the ManagedCluster object.
:return: the ManagedCluster object
"""
self._ensure_mc(mc)
ip_families = self.context.get_ip_families()
if ip_families:
mc.network_profile.ip_families = ip_families
self.update_network_plugin_settings(mc)
return mc | Update ip families settings for the ManagedCluster object.
:return: the ManagedCluster object | update_network_profile | python | Azure/azure-cli | src/azure-cli/azure/cli/command_modules/acs/managed_cluster_decorator.py | https://github.com/Azure/azure-cli/blob/master/src/azure-cli/azure/cli/command_modules/acs/managed_cluster_decorator.py | MIT |
def update_network_plugin_settings(self, mc: ManagedCluster) -> ManagedCluster:
"""Update network plugin settings of network profile for the ManagedCluster object.
:return: the ManagedCluster object
"""
self._ensure_mc(mc)
network_plugin_mode = self.context.get_network_plugin_mode()
if network_plugin_mode:
mc.network_profile.network_plugin_mode = network_plugin_mode
network_plugin = self.context.get_network_plugin()
if network_plugin:
mc.network_profile.network_plugin = network_plugin
(
pod_cidr,
_,
_,
_,
_
) = self.context.get_pod_cidr_and_service_cidr_and_dns_service_ip_and_docker_bridge_address_and_network_policy()
network_dataplane = self.context.get_network_dataplane()
if network_dataplane:
mc.network_profile.network_dataplane = network_dataplane
if pod_cidr:
mc.network_profile.pod_cidr = pod_cidr
network_policy = self.context.get_network_policy()
if network_policy:
mc.network_profile.network_policy = network_policy
return mc | Update network plugin settings of network profile for the ManagedCluster object.
:return: the ManagedCluster object | update_network_plugin_settings | python | Azure/azure-cli | src/azure-cli/azure/cli/command_modules/acs/managed_cluster_decorator.py | https://github.com/Azure/azure-cli/blob/master/src/azure-cli/azure/cli/command_modules/acs/managed_cluster_decorator.py | MIT |
def update_network_profile_advanced_networking(self, mc: ManagedCluster) -> ManagedCluster:
"""Update advanced networking settings of network profile for the ManagedCluster object.
:return: the ManagedCluster object
"""
self._ensure_mc(mc)
(acns_enabled, acns_observability, acns_security) = self.context.get_acns_enablement()
if acns_enabled is not None:
acns = self.models.AdvancedNetworking(
enabled=acns_enabled,
)
if acns_observability is not None:
acns.observability = self.models.AdvancedNetworkingObservability(
enabled=acns_observability,
)
if acns_security is not None:
acns.security = self.models.AdvancedNetworkingSecurity(
enabled=acns_security,
)
if acns_enabled is not None:
mc.network_profile.advanced_networking = acns
return mc | Update advanced networking settings of network profile for the ManagedCluster object.
:return: the ManagedCluster object | update_network_profile_advanced_networking | python | Azure/azure-cli | src/azure-cli/azure/cli/command_modules/acs/managed_cluster_decorator.py | https://github.com/Azure/azure-cli/blob/master/src/azure-cli/azure/cli/command_modules/acs/managed_cluster_decorator.py | MIT |
def update_http_proxy_config(self, mc: ManagedCluster) -> ManagedCluster:
"""Set up http proxy config for the ManagedCluster object.
:return: the ManagedCluster object
"""
self._ensure_mc(mc)
mc.http_proxy_config = self.context.get_http_proxy_config()
return mc | Set up http proxy config for the ManagedCluster object.
:return: the ManagedCluster object | update_http_proxy_config | python | Azure/azure-cli | src/azure-cli/azure/cli/command_modules/acs/managed_cluster_decorator.py | https://github.com/Azure/azure-cli/blob/master/src/azure-cli/azure/cli/command_modules/acs/managed_cluster_decorator.py | MIT |
def update_identity(self, mc: ManagedCluster) -> ManagedCluster:
"""Update identity for the ManagedCluster object.
:return: the ManagedCluster object
"""
self._ensure_mc(mc)
current_identity_type = "spn"
current_user_assigned_identity = ""
if mc.identity is not None:
current_identity_type = mc.identity.type.casefold()
if mc.identity.user_assigned_identities is not None and len(mc.identity.user_assigned_identities) > 0:
current_user_assigned_identity = list(mc.identity.user_assigned_identities.keys())[0]
goal_identity_type = current_identity_type
assign_identity = self.context.get_assign_identity()
if self.context.get_enable_managed_identity():
if not assign_identity:
goal_identity_type = "systemassigned"
else:
goal_identity_type = "userassigned"
is_update_identity = ((current_identity_type != goal_identity_type) or
(current_identity_type == goal_identity_type and
current_identity_type == "userassigned" and
assign_identity is not None and
current_user_assigned_identity != assign_identity))
if is_update_identity:
if current_identity_type == "spn":
msg = (
"Your cluster is using service principal, and you are going to update "
"the cluster to use {} managed identity.\nAfter updating, your "
"cluster's control plane and addon pods will switch to use managed "
"identity, but kubelet will KEEP USING SERVICE PRINCIPAL "
"until you upgrade your agentpool.\n"
"Are you sure you want to perform this operation?"
).format(goal_identity_type)
elif current_identity_type != goal_identity_type:
msg = (
"Your cluster is already using {} managed identity, and you are going to "
"update the cluster to use {} managed identity.\n"
"Are you sure you want to perform this operation?"
).format(current_identity_type, goal_identity_type)
else:
msg = (
"Your cluster is already using userassigned managed identity, current control plane identity is {},"
"and you are going to update the cluster identity to {}.\n"
"Are you sure you want to perform this operation?"
).format(current_user_assigned_identity, assign_identity)
# gracefully exit if user does not confirm
if not self.context.get_yes() and not prompt_y_n(msg, default="n"):
raise DecoratorEarlyExitException
# update identity
if goal_identity_type == "systemassigned":
identity = self.models.ManagedClusterIdentity(
type="SystemAssigned"
)
elif goal_identity_type == "userassigned":
user_assigned_identity = {
assign_identity: self.models.ManagedServiceIdentityUserAssignedIdentitiesValue()
}
identity = self.models.ManagedClusterIdentity(
type="UserAssigned",
user_assigned_identities=user_assigned_identity
)
mc.identity = identity
return mc | Update identity for the ManagedCluster object.
:return: the ManagedCluster object | update_identity | python | Azure/azure-cli | src/azure-cli/azure/cli/command_modules/acs/managed_cluster_decorator.py | https://github.com/Azure/azure-cli/blob/master/src/azure-cli/azure/cli/command_modules/acs/managed_cluster_decorator.py | MIT |
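
The heart of update_identity is the migration decision: the current identity type (treated as service principal when no identity block exists) is compared against the goal type derived from --enable-managed-identity and --assign-identity, and for user-assigned identities the attached resource ID is compared as well. Reduced to a pure function for illustration (the example IDs are placeholders):

def needs_identity_update(current_type, current_uai, enable_managed_identity, assign_identity):
    goal_type = current_type
    if enable_managed_identity:
        goal_type = "userassigned" if assign_identity else "systemassigned"
    return (
        current_type != goal_type or
        (goal_type == "userassigned" and assign_identity is not None and current_uai != assign_identity)
    )


print(needs_identity_update("spn", "", True, None))                       # True: SPN -> system-assigned
print(needs_identity_update("userassigned", "/old/id", True, "/new/id"))  # True: rotate the control plane identity
print(needs_identity_update("systemassigned", "", True, None))            # False: nothing to change
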
def update_azure_keyvault_secrets_provider_addon_profile(
self,
azure_keyvault_secrets_provider_addon_profile: ManagedClusterAddonProfile,
) -> ManagedClusterAddonProfile:
"""Update azure keyvault secrets provider addon profile.
        :return: the ManagedClusterAddonProfile object
"""
# determine the value of constants
addon_consts = self.context.get_addon_consts()
CONST_SECRET_ROTATION_ENABLED = addon_consts.get(
"CONST_SECRET_ROTATION_ENABLED"
)
CONST_ROTATION_POLL_INTERVAL = addon_consts.get(
"CONST_ROTATION_POLL_INTERVAL"
)
if self.context.get_enable_secret_rotation():
azure_keyvault_secrets_provider_addon_profile = (
self.ensure_azure_keyvault_secrets_provider_addon_profile(
azure_keyvault_secrets_provider_addon_profile
)
)
azure_keyvault_secrets_provider_addon_profile.config[
CONST_SECRET_ROTATION_ENABLED
] = "true"
if self.context.get_disable_secret_rotation():
azure_keyvault_secrets_provider_addon_profile = (
self.ensure_azure_keyvault_secrets_provider_addon_profile(
azure_keyvault_secrets_provider_addon_profile
)
)
azure_keyvault_secrets_provider_addon_profile.config[
CONST_SECRET_ROTATION_ENABLED
] = "false"
if self.context.get_rotation_poll_interval() is not None:
azure_keyvault_secrets_provider_addon_profile = (
self.ensure_azure_keyvault_secrets_provider_addon_profile(
azure_keyvault_secrets_provider_addon_profile
)
)
azure_keyvault_secrets_provider_addon_profile.config[
CONST_ROTATION_POLL_INTERVAL
] = self.context.get_rotation_poll_interval()
return azure_keyvault_secrets_provider_addon_profile | Update azure keyvault secrets provider addon profile.
:return: the ManagedClusterAddonProfile object | update_azure_keyvault_secrets_provider_addon_profile | python | Azure/azure-cli | src/azure-cli/azure/cli/command_modules/acs/managed_cluster_decorator.py | https://github.com/Azure/azure-cli/blob/master/src/azure-cli/azure/cli/command_modules/acs/managed_cluster_decorator.py | MIT
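
The secrets-provider settings live as string flags inside the addon's config map, which is why the code writes the literal strings "true" and "false" rather than booleans. The sketch below shows the resulting shape; the key names are assumptions inferred from the constant names, since the real values come from get_addon_consts():

CONST_SECRET_ROTATION_ENABLED = "enableSecretRotation"  # assumed value
CONST_ROTATION_POLL_INTERVAL = "rotationPollInterval"   # assumed value

config = {}
config[CONST_SECRET_ROTATION_ENABLED] = "true"  # --enable-secret-rotation
config[CONST_ROTATION_POLL_INTERVAL] = "2m"     # --rotation-poll-interval 2m
print(config)
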
def update_addon_profiles(self, mc: ManagedCluster) -> ManagedCluster:
"""Update addon profiles for the ManagedCluster object.
:return: the ManagedCluster object
"""
self._ensure_mc(mc)
# determine the value of constants
addon_consts = self.context.get_addon_consts()
CONST_MONITORING_ADDON_NAME = addon_consts.get(
"CONST_MONITORING_ADDON_NAME"
)
CONST_INGRESS_APPGW_ADDON_NAME = addon_consts.get(
"CONST_INGRESS_APPGW_ADDON_NAME"
)
CONST_VIRTUAL_NODE_ADDON_NAME = addon_consts.get(
"CONST_VIRTUAL_NODE_ADDON_NAME"
)
CONST_AZURE_KEYVAULT_SECRETS_PROVIDER_ADDON_NAME = addon_consts.get(
"CONST_AZURE_KEYVAULT_SECRETS_PROVIDER_ADDON_NAME"
)
azure_keyvault_secrets_provider_addon_profile = None
if mc.addon_profiles is not None:
monitoring_addon_enabled = (
CONST_MONITORING_ADDON_NAME in mc.addon_profiles and
mc.addon_profiles[CONST_MONITORING_ADDON_NAME].enabled
)
ingress_appgw_addon_enabled = (
CONST_INGRESS_APPGW_ADDON_NAME in mc.addon_profiles and
mc.addon_profiles[CONST_INGRESS_APPGW_ADDON_NAME].enabled
)
virtual_node_addon_enabled = (
CONST_VIRTUAL_NODE_ADDON_NAME + self.context.get_virtual_node_addon_os_type() in mc.addon_profiles and
mc.addon_profiles[CONST_VIRTUAL_NODE_ADDON_NAME + self.context.get_virtual_node_addon_os_type()].enabled
)
# set intermediates, used later to ensure role assignments
self.context.set_intermediate(
"monitoring_addon_enabled", monitoring_addon_enabled, overwrite_exists=True
)
self.context.set_intermediate(
"ingress_appgw_addon_enabled", ingress_appgw_addon_enabled, overwrite_exists=True
)
self.context.set_intermediate(
"virtual_node_addon_enabled", virtual_node_addon_enabled, overwrite_exists=True
)
# get azure keyvault secrets provider profile
azure_keyvault_secrets_provider_addon_profile = mc.addon_profiles.get(
CONST_AZURE_KEYVAULT_SECRETS_PROVIDER_ADDON_NAME
)
# update azure keyvault secrets provider profile
azure_keyvault_secrets_provider_addon_profile = (
self.update_azure_keyvault_secrets_provider_addon_profile(
azure_keyvault_secrets_provider_addon_profile
)
)
if azure_keyvault_secrets_provider_addon_profile:
# mc.addon_profiles should not be None if azure_keyvault_secrets_provider_addon_profile is not None
mc.addon_profiles[
CONST_AZURE_KEYVAULT_SECRETS_PROVIDER_ADDON_NAME
] = azure_keyvault_secrets_provider_addon_profile
return mc | Update addon profiles for the ManagedCluster object.
:return: the ManagedCluster object | update_addon_profiles | python | Azure/azure-cli | src/azure-cli/azure/cli/command_modules/acs/managed_cluster_decorator.py | https://github.com/Azure/azure-cli/blob/master/src/azure-cli/azure/cli/command_modules/acs/managed_cluster_decorator.py | MIT |
def update_storage_profile(self, mc: ManagedCluster) -> ManagedCluster:
"""Update storage profile for the ManagedCluster object.
:return: the ManagedCluster object
"""
self._ensure_mc(mc)
mc.storage_profile = self.context.get_storage_profile()
return mc | Update storage profile for the ManagedCluster object.
:return: the ManagedCluster object | update_storage_profile | python | Azure/azure-cli | src/azure-cli/azure/cli/command_modules/acs/managed_cluster_decorator.py | https://github.com/Azure/azure-cli/blob/master/src/azure-cli/azure/cli/command_modules/acs/managed_cluster_decorator.py | MIT |
def update_defender(self, mc: ManagedCluster) -> ManagedCluster:
"""Update defender for the ManagedCluster object.
:return: the ManagedCluster object
"""
self._ensure_mc(mc)
defender = self.context.get_defender_config()
if defender:
if mc.security_profile is None:
mc.security_profile = self.models.ManagedClusterSecurityProfile()
mc.security_profile.defender = defender
return mc | Update defender for the ManagedCluster object.
:return: the ManagedCluster object | update_defender | python | Azure/azure-cli | src/azure-cli/azure/cli/command_modules/acs/managed_cluster_decorator.py | https://github.com/Azure/azure-cli/blob/master/src/azure-cli/azure/cli/command_modules/acs/managed_cluster_decorator.py | MIT |
def update_workload_identity_profile(self, mc: ManagedCluster) -> ManagedCluster:
"""Update workload identity profile for the ManagedCluster object.
:return: the ManagedCluster object
"""
self._ensure_mc(mc)
profile = self.context.get_workload_identity_profile()
if profile:
if mc.security_profile is None:
mc.security_profile = self.models.ManagedClusterSecurityProfile()
mc.security_profile.workload_identity = profile
return mc | Update workload identity profile for the ManagedCluster object.
:return: the ManagedCluster object | update_workload_identity_profile | python | Azure/azure-cli | src/azure-cli/azure/cli/command_modules/acs/managed_cluster_decorator.py | https://github.com/Azure/azure-cli/blob/master/src/azure-cli/azure/cli/command_modules/acs/managed_cluster_decorator.py | MIT |
def update_k8s_support_plan(self, mc: ManagedCluster) -> ManagedCluster:
"""Update supportPlan for the ManagedCluster object.
:return: the ManagedCluster object
"""
self._ensure_mc(mc)
support_plan = self.context.get_k8s_support_plan()
if support_plan == KubernetesSupportPlan.AKS_LONG_TERM_SUPPORT:
if mc is None or mc.sku is None or mc.sku.tier.lower() != CONST_MANAGED_CLUSTER_SKU_TIER_PREMIUM.lower():
raise AzCLIError("Long term support is only available for premium tier clusters.")
mc.support_plan = support_plan
return mc | Update supportPlan for the ManagedCluster object.
:return: the ManagedCluster object | update_k8s_support_plan | python | Azure/azure-cli | src/azure-cli/azure/cli/command_modules/acs/managed_cluster_decorator.py | https://github.com/Azure/azure-cli/blob/master/src/azure-cli/azure/cli/command_modules/acs/managed_cluster_decorator.py | MIT |
def update_azure_keyvault_kms(self, mc: ManagedCluster) -> ManagedCluster:
"""Update security profile azureKeyvaultKms for the ManagedCluster object.
:return: the ManagedCluster object
"""
self._ensure_mc(mc)
if self.context.get_enable_azure_keyvault_kms():
# get kms profile
if mc.security_profile is None:
mc.security_profile = self.models.ManagedClusterSecurityProfile()
azure_key_vault_kms_profile = mc.security_profile.azure_key_vault_kms
if azure_key_vault_kms_profile is None:
azure_key_vault_kms_profile = self.models.AzureKeyVaultKms()
mc.security_profile.azure_key_vault_kms = azure_key_vault_kms_profile
# set enabled
azure_key_vault_kms_profile.enabled = True
# set key id
azure_key_vault_kms_profile.key_id = self.context.get_azure_keyvault_kms_key_id()
# set network access, should never be None for now, can be safely assigned, temp fix for rp
# the value is obtained from user input or backfilled from existing mc or to default value
azure_key_vault_kms_profile.key_vault_network_access = (
self.context.get_azure_keyvault_kms_key_vault_network_access()
)
# set key vault resource id
if azure_key_vault_kms_profile.key_vault_network_access == CONST_AZURE_KEYVAULT_NETWORK_ACCESS_PRIVATE:
azure_key_vault_kms_profile.key_vault_resource_id = (
self.context.get_azure_keyvault_kms_key_vault_resource_id()
)
else:
azure_key_vault_kms_profile.key_vault_resource_id = ""
if self.context.get_disable_azure_keyvault_kms():
# get kms profile
if mc.security_profile is None:
mc.security_profile = self.models.ManagedClusterSecurityProfile()
azure_key_vault_kms_profile = mc.security_profile.azure_key_vault_kms
if azure_key_vault_kms_profile is None:
azure_key_vault_kms_profile = self.models.AzureKeyVaultKms()
mc.security_profile.azure_key_vault_kms = azure_key_vault_kms_profile
# set enabled to False
azure_key_vault_kms_profile.enabled = False
return mc | Update security profile azureKeyvaultKms for the ManagedCluster object.
:return: the ManagedCluster object | update_azure_keyvault_kms | python | Azure/azure-cli | src/azure-cli/azure/cli/command_modules/acs/managed_cluster_decorator.py | https://github.com/Azure/azure-cli/blob/master/src/azure-cli/azure/cli/command_modules/acs/managed_cluster_decorator.py | MIT |
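An illustrative sketch (not from the source) of the security profile shape update_azure_keyvault_kms produces when KMS is enabled with private vault access; the key ID, vault resource ID, and the literal "Private" network-access value are assumptions.
# Illustrative only: expected azureKeyVaultKms settings after enabling KMS.
def sketch_kms_enabled(models):
    profile = models.ManagedClusterSecurityProfile()
    kms = models.AzureKeyVaultKms()
    kms.enabled = True
    kms.key_id = "https://contoso-kv.vault.azure.net/keys/kms-key/0123456789abcdef"  # hypothetical
    kms.key_vault_network_access = "Private"  # assumed literal behind CONST_AZURE_KEYVAULT_NETWORK_ACCESS_PRIVATE
    kms.key_vault_resource_id = (
        "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.KeyVault/vaults/contoso-kv"
    )  # hypothetical
    profile.azure_key_vault_kms = kms
    return profile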
def update_image_cleaner(self, mc: ManagedCluster) -> ManagedCluster:
"""Update security profile imageCleaner for the ManagedCluster object.
:return: the ManagedCluster object
"""
self._ensure_mc(mc)
enable_image_cleaner = self.context.get_enable_image_cleaner()
disable_image_cleaner = self.context.get_disable_image_cleaner()
interval_hours = self.context.get_image_cleaner_interval_hours()
# no image cleaner related changes
if not enable_image_cleaner and not disable_image_cleaner and interval_hours is None:
return mc
if mc.security_profile is None:
mc.security_profile = self.models.ManagedClusterSecurityProfile()
image_cleaner_profile = mc.security_profile.image_cleaner
if image_cleaner_profile is None:
image_cleaner_profile = self.models.ManagedClusterSecurityProfileImageCleaner()
mc.security_profile.image_cleaner = image_cleaner_profile
# init the image cleaner profile
image_cleaner_profile.enabled = False
image_cleaner_profile.interval_hours = 7 * 24
if enable_image_cleaner:
image_cleaner_profile.enabled = True
if disable_image_cleaner:
image_cleaner_profile.enabled = False
if interval_hours is not None:
image_cleaner_profile.interval_hours = interval_hours
return mc | Update security profile imageCleaner for the ManagedCluster object.
:return: the ManagedCluster object | update_image_cleaner | python | Azure/azure-cli | src/azure-cli/azure/cli/command_modules/acs/managed_cluster_decorator.py | https://github.com/Azure/azure-cli/blob/master/src/azure-cli/azure/cli/command_modules/acs/managed_cluster_decorator.py | MIT |
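A small sketch (not from the source) of how the three image-cleaner inputs above resolve to a profile; it mirrors the defaulting logic, where the interval falls back to 7 * 24 = 168 hours.
# Illustrative only: flag resolution for the image cleaner profile.
def sketch_image_cleaner(models, enable, disable, interval_hours):
    profile = models.ManagedClusterSecurityProfileImageCleaner()
    profile.enabled = False
    profile.interval_hours = 7 * 24  # default: 168 hours
    if enable:
        profile.enabled = True
    if disable:
        profile.enabled = False
    if interval_hours is not None:
        profile.interval_hours = interval_hours
    return profile
# e.g. sketch_image_cleaner(models, enable=False, disable=False, interval_hours=48)
# yields enabled=False with interval_hours=48.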
def update_app_routing_profile(self, mc: ManagedCluster) -> ManagedCluster:
"""Update app routing profile for the ManagedCluster object.
:return: the ManagedCluster object
"""
self._ensure_mc(mc)
# get parameters from context
enable_app_routing = self.context.get_enable_app_routing()
enable_keyvault_secret_provider = self.context.get_enable_kv()
dns_zone_resource_ids = self.context.get_dns_zone_resource_ids_from_input()
# update ManagedCluster object with app routing settings
mc.ingress_profile = (
mc.ingress_profile or
self.models.ManagedClusterIngressProfile() # pylint: disable=no-member
)
mc.ingress_profile.web_app_routing = (
mc.ingress_profile.web_app_routing or
self.models.ManagedClusterIngressProfileWebAppRouting() # pylint: disable=no-member
)
if enable_app_routing is not None:
if mc.ingress_profile.web_app_routing.enabled == enable_app_routing:
error_message = (
"App Routing is already enabled.\n"
if enable_app_routing
else "App Routing is already disabled.\n"
)
raise CLIError(error_message)
mc.ingress_profile.web_app_routing.enabled = enable_app_routing
# enable keyvault secret provider addon
if enable_keyvault_secret_provider:
self._enable_keyvault_secret_provider_addon(mc)
# modify DNS zone resource IDs
if dns_zone_resource_ids:
self._update_dns_zone_resource_ids(mc, dns_zone_resource_ids)
return mc | Update app routing profile for the ManagedCluster object.
:return: the ManagedCluster object | update_app_routing_profile | python | Azure/azure-cli | src/azure-cli/azure/cli/command_modules/acs/managed_cluster_decorator.py | https://github.com/Azure/azure-cli/blob/master/src/azure-cli/azure/cli/command_modules/acs/managed_cluster_decorator.py | MIT |
def update_node_resource_group_profile(self, mc: ManagedCluster) -> ManagedCluster:
"""Update node resource group profile for the ManagedCluster object.
:return: the ManagedCluster object
"""
self._ensure_mc(mc)
nrg_lockdown_restriction_level = self.context.get_nrg_lockdown_restriction_level()
if nrg_lockdown_restriction_level is not None:
if mc.node_resource_group_profile is None:
mc.node_resource_group_profile = (
self.models.ManagedClusterNodeResourceGroupProfile() # pylint: disable=no-member
)
mc.node_resource_group_profile.restriction_level = nrg_lockdown_restriction_level
return mc | Update node resource group profile for the ManagedCluster object.
:return: the ManagedCluster object | update_node_resource_group_profile | python | Azure/azure-cli | src/azure-cli/azure/cli/command_modules/acs/managed_cluster_decorator.py | https://github.com/Azure/azure-cli/blob/master/src/azure-cli/azure/cli/command_modules/acs/managed_cluster_decorator.py | MIT |
def _enable_keyvault_secret_provider_addon(self, mc: ManagedCluster) -> None:
"""Helper function to enable keyvault secret provider addon for the ManagedCluster object.
:return: None
"""
addon_consts = self.context.get_addon_consts()
CONST_SECRET_ROTATION_ENABLED = addon_consts.get(
"CONST_SECRET_ROTATION_ENABLED"
)
CONST_ROTATION_POLL_INTERVAL = addon_consts.get(
"CONST_ROTATION_POLL_INTERVAL"
)
CONST_AZURE_KEYVAULT_SECRETS_PROVIDER_ADDON_NAME = addon_consts.get(
"CONST_AZURE_KEYVAULT_SECRETS_PROVIDER_ADDON_NAME"
)
mc.addon_profiles = mc.addon_profiles or {}
if not mc.addon_profiles.get(CONST_AZURE_KEYVAULT_SECRETS_PROVIDER_ADDON_NAME):
mc.addon_profiles[
CONST_AZURE_KEYVAULT_SECRETS_PROVIDER_ADDON_NAME
] = self.models.ManagedClusterAddonProfile( # pylint: disable=no-member
enabled=True,
config={
CONST_SECRET_ROTATION_ENABLED: "false",
CONST_ROTATION_POLL_INTERVAL: "2m",
},
)
elif not mc.addon_profiles[CONST_AZURE_KEYVAULT_SECRETS_PROVIDER_ADDON_NAME].enabled:
mc.addon_profiles[CONST_AZURE_KEYVAULT_SECRETS_PROVIDER_ADDON_NAME].enabled = True
mc.addon_profiles[CONST_AZURE_KEYVAULT_SECRETS_PROVIDER_ADDON_NAME].config = {
CONST_SECRET_ROTATION_ENABLED: "false",
CONST_ROTATION_POLL_INTERVAL: "2m",
} | Helper function to enable keyvault secret provider addon for the ManagedCluster object.
:return: None | _enable_keyvault_secret_provider_addon | python | Azure/azure-cli | src/azure-cli/azure/cli/command_modules/acs/managed_cluster_decorator.py | https://github.com/Azure/azure-cli/blob/master/src/azure-cli/azure/cli/command_modules/acs/managed_cluster_decorator.py | MIT |
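For reference, a sketch (not from the source) of the addon config the helper above writes when it (re)enables the secrets provider; the literal config keys are assumptions standing in for the two constants.
# Illustrative only: default config written by _enable_keyvault_secret_provider_addon.
assumed_default_config = {
    "enableSecretRotation": "false",   # assumed literal for CONST_SECRET_ROTATION_ENABLED
    "rotationPollInterval": "2m",      # assumed literal for CONST_ROTATION_POLL_INTERVAL
}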
def _update_dns_zone_resource_ids(self, mc: ManagedCluster, dns_zone_resource_ids) -> None:
"""Helper function to update dns zone resource ids in app routing addon.
:return: None
"""
add_dns_zone = self.context.get_add_dns_zone()
delete_dns_zone = self.context.get_delete_dns_zone()
update_dns_zone = self.context.get_update_dns_zone()
attach_zones = self.context.get_attach_zones()
if mc.ingress_profile and mc.ingress_profile.web_app_routing and mc.ingress_profile.web_app_routing.enabled:
if add_dns_zone:
mc.ingress_profile.web_app_routing.dns_zone_resource_ids = (
mc.ingress_profile.web_app_routing.dns_zone_resource_ids or []
)
for dns_zone_id in dns_zone_resource_ids:
if dns_zone_id not in mc.ingress_profile.web_app_routing.dns_zone_resource_ids:
mc.ingress_profile.web_app_routing.dns_zone_resource_ids.append(dns_zone_id)
if attach_zones:
try:
is_private_dns_zone = (
parse_resource_id(dns_zone_id).get("type").lower() == "privatednszones"
)
role = CONST_PRIVATE_DNS_ZONE_CONTRIBUTOR_ROLE if is_private_dns_zone else \
CONST_DNS_ZONE_CONTRIBUTOR_ROLE
if not add_role_assignment(
self.cmd,
role,
mc.ingress_profile.web_app_routing.identity.object_id,
False,
scope=dns_zone_id
):
logger.warning(
'Could not create a role assignment for App Routing. '
'Are you an Owner on this subscription?')
except Exception as ex:
raise CLIError('Error in granting dns zone permissions to managed identity.\n') from ex
elif delete_dns_zone:
if mc.ingress_profile.web_app_routing.dns_zone_resource_ids:
dns_zone_resource_ids = [
x
for x in mc.ingress_profile.web_app_routing.dns_zone_resource_ids
if x not in dns_zone_resource_ids
]
mc.ingress_profile.web_app_routing.dns_zone_resource_ids = dns_zone_resource_ids
else:
raise CLIError('No DNS zone is used by App Routing.\n')
elif update_dns_zone:
mc.ingress_profile.web_app_routing.dns_zone_resource_ids = dns_zone_resource_ids
if attach_zones:
try:
for dns_zone in dns_zone_resource_ids:
is_private_dns_zone = parse_resource_id(dns_zone).get("type").lower() == "privatednszones"
role = CONST_PRIVATE_DNS_ZONE_CONTRIBUTOR_ROLE if is_private_dns_zone else \
CONST_DNS_ZONE_CONTRIBUTOR_ROLE
if not add_role_assignment(
self.cmd,
role,
mc.ingress_profile.web_app_routing.identity.object_id,
False,
scope=dns_zone,
):
logger.warning(
'Could not create a role assignment for App Routing. '
'Are you an Owner on this subscription?')
except Exception as ex:
                        raise CLIError('Error in granting dns zone permissions to managed identity.\n') from ex
else:
raise CLIError('App Routing must be enabled to modify DNS zone resource IDs.\n') | Helper function to update dns zone resource ids in app routing addon.
:return: None | _update_dns_zone_resource_ids | python | Azure/azure-cli | src/azure-cli/azure/cli/command_modules/acs/managed_cluster_decorator.py | https://github.com/Azure/azure-cli/blob/master/src/azure-cli/azure/cli/command_modules/acs/managed_cluster_decorator.py | MIT |
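A quick sketch (not from the source) of the private-DNS-zone detection used above; it assumes parse_resource_id comes from azure.mgmt.core.tools and uses a hypothetical zone resource ID.
# Illustrative only: how the "privatednszones" check behaves.
from azure.mgmt.core.tools import parse_resource_id  # assumed import location

zone_id = (
    "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/rg/"
    "providers/Microsoft.Network/privateDnsZones/contoso.internal"
)  # hypothetical
parsed = parse_resource_id(zone_id)
is_private_dns_zone = parsed.get("type", "").lower() == "privatednszones"
# True here, so the private DNS zone contributor role would be assigned at this scope.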
def update_identity_profile(self, mc: ManagedCluster) -> ManagedCluster:
"""Update identity profile for the ManagedCluster object.
:return: the ManagedCluster object
"""
self._ensure_mc(mc)
assign_kubelet_identity = self.context.get_assign_kubelet_identity()
if assign_kubelet_identity:
identity_profile = {
'kubeletidentity': self.models.UserAssignedIdentity(
resource_id=assign_kubelet_identity,
)
}
user_assigned_identity = self.context.get_assign_identity()
if not user_assigned_identity:
user_assigned_identity = self.context.get_user_assignd_identity_from_mc()
cluster_identity_object_id = self.context.get_user_assigned_identity_object_id(user_assigned_identity)
# ensure the cluster identity has "Managed Identity Operator" role at the scope of kubelet identity
self.context.external_functions.ensure_cluster_identity_permission_on_kubelet_identity(
self.cmd,
cluster_identity_object_id,
assign_kubelet_identity)
mc.identity_profile = identity_profile
return mc | Update identity profile for the ManagedCluster object.
:return: the ManagedCluster object | update_identity_profile | python | Azure/azure-cli | src/azure-cli/azure/cli/command_modules/acs/managed_cluster_decorator.py | https://github.com/Azure/azure-cli/blob/master/src/azure-cli/azure/cli/command_modules/acs/managed_cluster_decorator.py | MIT |
def update_workload_auto_scaler_profile(self, mc: ManagedCluster) -> ManagedCluster:
"""Update workload auto-scaler profile for the ManagedCluster object.
:return: the ManagedCluster object
"""
self._ensure_mc(mc)
if self.context.get_enable_keda():
if mc.workload_auto_scaler_profile is None:
mc.workload_auto_scaler_profile = self.models.ManagedClusterWorkloadAutoScalerProfile()
mc.workload_auto_scaler_profile.keda = self.models.ManagedClusterWorkloadAutoScalerProfileKeda(enabled=True)
if self.context.get_disable_keda():
if mc.workload_auto_scaler_profile is None:
mc.workload_auto_scaler_profile = self.models.ManagedClusterWorkloadAutoScalerProfile()
mc.workload_auto_scaler_profile.keda = self.models.ManagedClusterWorkloadAutoScalerProfileKeda(
enabled=False
)
if self.context.get_enable_vpa():
if mc.workload_auto_scaler_profile is None:
mc.workload_auto_scaler_profile = self.models.ManagedClusterWorkloadAutoScalerProfile()
if mc.workload_auto_scaler_profile.vertical_pod_autoscaler is None:
                mc.workload_auto_scaler_profile.vertical_pod_autoscaler = (
                    self.models.ManagedClusterWorkloadAutoScalerProfileVerticalPodAutoscaler()
                )
# set enabled
mc.workload_auto_scaler_profile.vertical_pod_autoscaler.enabled = True
if self.context.get_disable_vpa():
if mc.workload_auto_scaler_profile is None:
mc.workload_auto_scaler_profile = self.models.ManagedClusterWorkloadAutoScalerProfile()
if mc.workload_auto_scaler_profile.vertical_pod_autoscaler is None:
                mc.workload_auto_scaler_profile.vertical_pod_autoscaler = (
                    self.models.ManagedClusterWorkloadAutoScalerProfileVerticalPodAutoscaler()
                )
# set disabled
mc.workload_auto_scaler_profile.vertical_pod_autoscaler.enabled = False
return mc | Update workload auto-scaler profile for the ManagedCluster object.
:return: the ManagedCluster object | update_workload_auto_scaler_profile | python | Azure/azure-cli | src/azure-cli/azure/cli/command_modules/acs/managed_cluster_decorator.py | https://github.com/Azure/azure-cli/blob/master/src/azure-cli/azure/cli/command_modules/acs/managed_cluster_decorator.py | MIT |
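An illustrative sketch (not from the source) of the workload auto-scaler profile produced when both KEDA and VPA are enabled, using the model classes referenced above.
# Illustrative only: profile shape with KEDA and VPA both enabled.
def sketch_workload_autoscaler(models):
    profile = models.ManagedClusterWorkloadAutoScalerProfile()
    profile.keda = models.ManagedClusterWorkloadAutoScalerProfileKeda(enabled=True)
    vpa = models.ManagedClusterWorkloadAutoScalerProfileVerticalPodAutoscaler()
    vpa.enabled = True
    profile.vertical_pod_autoscaler = vpa
    return profile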
def update_azure_monitor_profile(self, mc: ManagedCluster) -> ManagedCluster:
"""Update azure monitor profile for the ManagedCluster object.
:return: the ManagedCluster object
"""
self._ensure_mc(mc)
# read the original value passed by the command
ksm_metric_labels_allow_list = self.context.raw_param.get("ksm_metric_labels_allow_list")
ksm_metric_annotations_allow_list = self.context.raw_param.get("ksm_metric_annotations_allow_list")
if ksm_metric_labels_allow_list is None:
ksm_metric_labels_allow_list = ""
if ksm_metric_annotations_allow_list is None:
ksm_metric_annotations_allow_list = ""
if self.context.get_enable_azure_monitor_metrics():
if mc.azure_monitor_profile is None:
mc.azure_monitor_profile = self.models.ManagedClusterAzureMonitorProfile()
mc.azure_monitor_profile.metrics = self.models.ManagedClusterAzureMonitorProfileMetrics(enabled=True)
mc.azure_monitor_profile.metrics.kube_state_metrics = self.models.ManagedClusterAzureMonitorProfileKubeStateMetrics( # pylint:disable=line-too-long
metric_labels_allowlist=str(ksm_metric_labels_allow_list),
metric_annotations_allow_list=str(ksm_metric_annotations_allow_list))
if self.context.get_disable_azure_monitor_metrics():
if mc.azure_monitor_profile is None:
mc.azure_monitor_profile = self.models.ManagedClusterAzureMonitorProfile()
mc.azure_monitor_profile.metrics = self.models.ManagedClusterAzureMonitorProfileMetrics(enabled=False)
if (
self.context.raw_param.get("enable_azure_monitor_metrics") or
self.context.raw_param.get("disable_azure_monitor_metrics")
):
self.context.external_functions.ensure_azure_monitor_profile_prerequisites(
self.cmd,
self.context.get_subscription_id(),
self.context.get_resource_group_name(),
self.context.get_name(),
self.context.get_location(),
self.__raw_parameters,
self.context.get_disable_azure_monitor_metrics(),
False)
return mc | Update azure monitor profile for the ManagedCluster object.
:return: the ManagedCluster object | update_azure_monitor_profile | python | Azure/azure-cli | src/azure-cli/azure/cli/command_modules/acs/managed_cluster_decorator.py | https://github.com/Azure/azure-cli/blob/master/src/azure-cli/azure/cli/command_modules/acs/managed_cluster_decorator.py | MIT |
def update_azure_container_storage(self, mc: ManagedCluster) -> ManagedCluster:
"""Update azure container storage for the Managed Cluster object
:return: ManagedCluster
"""
self._ensure_mc(mc)
# read the azure container storage values passed
enable_pool_type = self.context.raw_param.get("enable_azure_container_storage")
disable_pool_type = self.context.raw_param.get("disable_azure_container_storage")
enable_azure_container_storage = enable_pool_type is not None
disable_azure_container_storage = disable_pool_type is not None
nodepool_list = self.context.raw_param.get("azure_container_storage_nodepools")
ephemeral_disk_volume_type = self.context.raw_param.get("ephemeral_disk_volume_type")
ephemeral_disk_nvme_perf_tier = self.context.raw_param.get("ephemeral_disk_nvme_perf_tier")
if enable_azure_container_storage and disable_azure_container_storage:
raise MutuallyExclusiveArgumentError(
'Conflicting flags. Cannot set --enable-azure-container-storage '
'and --disable-azure-container-storage together.'
)
if (ephemeral_disk_volume_type is not None or ephemeral_disk_nvme_perf_tier is not None) and \
not enable_azure_container_storage:
params_defined_arr = []
if ephemeral_disk_volume_type is not None:
params_defined_arr.append('--ephemeral-disk-volume-type')
if ephemeral_disk_nvme_perf_tier is not None:
params_defined_arr.append('--ephemeral-disk-nvme-perf-tier')
            params_defined = ' and '.join(params_defined_arr)
raise RequiredArgumentMissingError(
f'Cannot set {params_defined} without the parameter --enable-azure-container-storage.'
)
# pylint: disable=too-many-nested-blocks
if enable_azure_container_storage or disable_azure_container_storage:
# Require the agent pool profiles for azure container storage
# operations. Raise exception if not found.
if not mc.agent_pool_profiles:
raise UnknownError(
"Encounter an unexpected error while getting agent pool profiles from the cluster "
"in the process of updating agentpool profile."
)
storagepool_name = self.context.raw_param.get("storage_pool_name")
pool_option = self.context.raw_param.get("storage_pool_option")
pool_sku = self.context.raw_param.get("storage_pool_sku")
pool_size = self.context.raw_param.get("storage_pool_size")
agentpool_details = {}
            from azure.cli.command_modules.acs.azurecontainerstorage._helpers import (
                get_extension_installed_and_cluster_configs,
            )
(
is_extension_installed,
is_azureDisk_enabled,
is_elasticSan_enabled,
is_ephemeralDisk_localssd_enabled,
is_ephemeralDisk_nvme_enabled,
current_core_value,
existing_ephemeral_disk_volume_type,
existing_perf_tier,
) = get_extension_installed_and_cluster_configs(
self.cmd,
self.context.get_resource_group_name(),
self.context.get_name(),
mc.agent_pool_profiles,
)
from azure.cli.command_modules.acs.azurecontainerstorage._helpers import generate_vm_sku_cache_for_region
generate_vm_sku_cache_for_region(self.cmd.cli_ctx, self.context.get_location())
if enable_azure_container_storage:
from azure.cli.command_modules.acs.azurecontainerstorage._consts import (
CONST_ACSTOR_IO_ENGINE_LABEL_KEY,
CONST_ACSTOR_IO_ENGINE_LABEL_VAL
)
labelled_nodepool_arr = []
for agentpool in mc.agent_pool_profiles:
pool_details = {}
nodepool_name = agentpool.name
pool_details["vm_size"] = agentpool.vm_size
pool_details["count"] = agentpool.count
pool_details["os_type"] = agentpool.os_type
pool_details["mode"] = agentpool.mode
pool_details["node_taints"] = agentpool.node_taints
pool_details["zoned"] = agentpool.availability_zones is not None
if agentpool.node_labels is not None:
node_labels = agentpool.node_labels
if node_labels is not None and \
node_labels.get(CONST_ACSTOR_IO_ENGINE_LABEL_KEY) is not None and \
nodepool_name is not None:
labelled_nodepool_arr.append(nodepool_name)
pool_details["node_labels"] = node_labels
agentpool_details[nodepool_name] = pool_details
            # In case of a new installation, if the nodepool list is not defined
# then check for all the nodepools which are marked with acstor io-engine
# labels and include them for installation. If none of the nodepools are
# labelled, either pick nodepool1 as default, or if only
# one nodepool exists, choose the only nodepool by default.
if not is_extension_installed:
if nodepool_list is None:
nodepool_list = ""
if len(labelled_nodepool_arr) > 0:
nodepool_list = ','.join(labelled_nodepool_arr)
elif len(agentpool_details) == 1:
nodepool_list = ','.join(agentpool_details.keys())
from azure.cli.command_modules.acs.azurecontainerstorage._validators import (
validate_enable_azure_container_storage_params
)
validate_enable_azure_container_storage_params(
enable_pool_type,
storagepool_name,
pool_sku,
pool_option,
pool_size,
nodepool_list,
agentpool_details,
is_extension_installed,
is_azureDisk_enabled,
is_elasticSan_enabled,
is_ephemeralDisk_localssd_enabled,
is_ephemeralDisk_nvme_enabled,
ephemeral_disk_volume_type,
ephemeral_disk_nvme_perf_tier,
existing_ephemeral_disk_volume_type,
existing_perf_tier,
)
if is_ephemeralDisk_nvme_enabled and ephemeral_disk_nvme_perf_tier is not None:
msg = (
"Changing ephemeralDisk NVMe performance tier may result in a temporary "
"interruption to the applications using Azure Container Storage. Do you "
"want to continue with this operation?"
)
if not (self.context.get_yes() or prompt_y_n(msg, default="n")):
raise DecoratorEarlyExitException()
# If the extension is already installed,
# we expect that the Azure Container Storage
            # nodes are already labelled. Use those labels
# to generate the nodepool_list.
if is_extension_installed:
nodepool_list = ','.join(labelled_nodepool_arr)
else:
# Set Azure Container Storage labels on the required nodepools.
nodepool_list_arr = nodepool_list.split(',')
for agentpool in mc.agent_pool_profiles:
labels = agentpool.node_labels
if agentpool.name in nodepool_list_arr:
if labels is None:
labels = {}
labels[CONST_ACSTOR_IO_ENGINE_LABEL_KEY] = CONST_ACSTOR_IO_ENGINE_LABEL_VAL
else:
# Remove residual Azure Container Storage labels
                        # from any other nodepools where it's not intended
if labels is not None:
labels.pop(CONST_ACSTOR_IO_ENGINE_LABEL_KEY, None)
agentpool.node_labels = labels
# set intermediates
self.context.set_intermediate("azure_container_storage_nodepools", nodepool_list, overwrite_exists=True)
self.context.set_intermediate("enable_azure_container_storage", True, overwrite_exists=True)
if disable_azure_container_storage:
from azure.cli.command_modules.acs.azurecontainerstorage._validators import (
validate_disable_azure_container_storage_params
)
validate_disable_azure_container_storage_params(
disable_pool_type,
storagepool_name,
pool_sku,
pool_option,
pool_size,
nodepool_list,
is_extension_installed,
is_azureDisk_enabled,
is_elasticSan_enabled,
is_ephemeralDisk_localssd_enabled,
is_ephemeralDisk_nvme_enabled,
ephemeral_disk_volume_type,
ephemeral_disk_nvme_perf_tier,
)
pre_disable_validate = False
msg = (
"Disabling Azure Container Storage will forcefully delete all the storage pools in the cluster and "
"affect the applications using these storage pools. Forceful deletion of storage pools can also "
"lead to leaking of storage resources which are being consumed. Do you want to validate whether "
"any of the storage pools are being used before disabling Azure Container Storage?"
)
from azure.cli.command_modules.acs.azurecontainerstorage._consts import (
CONST_ACSTOR_ALL,
)
if disable_pool_type != CONST_ACSTOR_ALL:
msg = (
f"Disabling Azure Container Storage for storage pool type {disable_pool_type} "
"will forcefully delete all the storage pools of the same type and affect the "
"applications using these storage pools. Forceful deletion of storage pools can "
"also lead to leaking of storage resources which are being consumed. Do you want to "
f"validate whether any of the storage pools of type {disable_pool_type} are being used "
"before disabling Azure Container Storage?"
)
if self.context.get_yes() or prompt_y_n(msg, default="y"):
pre_disable_validate = True
# set intermediate
self.context.set_intermediate("disable_azure_container_storage", True, overwrite_exists=True)
self.context.set_intermediate(
"pre_disable_validate_azure_container_storage",
pre_disable_validate,
overwrite_exists=True
)
# Set intermediates
self.context.set_intermediate("is_extension_installed", is_extension_installed, overwrite_exists=True)
self.context.set_intermediate("is_azureDisk_enabled", is_azureDisk_enabled, overwrite_exists=True)
self.context.set_intermediate("is_elasticSan_enabled", is_elasticSan_enabled, overwrite_exists=True)
self.context.set_intermediate("current_core_value", current_core_value, overwrite_exists=True)
self.context.set_intermediate(
"current_ephemeral_nvme_perf_tier",
existing_perf_tier,
overwrite_exists=True
)
self.context.set_intermediate(
"existing_ephemeral_disk_volume_type",
existing_ephemeral_disk_volume_type,
overwrite_exists=True
)
self.context.set_intermediate(
"is_ephemeralDisk_nvme_enabled",
is_ephemeralDisk_nvme_enabled,
overwrite_exists=True
)
self.context.set_intermediate(
"is_ephemeralDisk_localssd_enabled",
is_ephemeralDisk_localssd_enabled,
overwrite_exists=True
)
self.context.set_intermediate("current_core_value", current_core_value, overwrite_exists=True)
return mc | Update azure container storage for the Managed Cluster object
:return: ManagedCluster | update_azure_container_storage | python | Azure/azure-cli | src/azure-cli/azure/cli/command_modules/acs/managed_cluster_decorator.py | https://github.com/Azure/azure-cli/blob/master/src/azure-cli/azure/cli/command_modules/acs/managed_cluster_decorator.py | MIT |
def update_metrics_profile(self, mc: ManagedCluster) -> ManagedCluster:
"""Updates the metricsProfile field of the managed cluster
:return: the ManagedCluster object
"""
self._ensure_mc(mc)
mc = self.update_cost_analysis(mc)
return mc | Updates the metricsProfile field of the managed cluster
:return: the ManagedCluster object | update_metrics_profile | python | Azure/azure-cli | src/azure-cli/azure/cli/command_modules/acs/managed_cluster_decorator.py | https://github.com/Azure/azure-cli/blob/master/src/azure-cli/azure/cli/command_modules/acs/managed_cluster_decorator.py | MIT |
def update_mc_profile_default(self) -> ManagedCluster:
"""The overall controller used to update the default ManagedCluster profile.
The completely updated ManagedCluster object will later be passed as a parameter to the underlying SDK
(mgmt-containerservice) to send the actual request.
:return: the ManagedCluster object
"""
# check raw parameters
        # prompt y/n if no options are specified to ask the user whether to perform a reconcile operation
self.check_raw_parameters()
# fetch the ManagedCluster object
mc = self.fetch_mc()
# update agentpool profile by the agentpool decorator
mc = self.update_agentpool_profile(mc)
# update auto scaler profile
mc = self.update_auto_scaler_profile(mc)
# update tags
mc = self.update_tags(mc)
# attach or detach acr (add or delete role assignment for acr)
self.process_attach_detach_acr(mc)
# update sku (uptime sla)
mc = self.update_sku(mc)
# update outbound type
mc = self.update_outbound_type_in_network_profile(mc)
# update load balancer profile
mc = self.update_load_balancer_profile(mc)
# update nat gateway profile
mc = self.update_nat_gateway_profile(mc)
# update disable/enable local accounts
mc = self.update_disable_local_accounts(mc)
# update api server access profile
mc = self.update_api_server_access_profile(mc)
# update windows profile
mc = self.update_windows_profile(mc)
# update network plugin settings
mc = self.update_network_plugin_settings(mc)
# update network profile settings
mc = self.update_network_profile(mc)
# update network profile with acns
mc = self.update_network_profile_advanced_networking(mc)
# update aad profile
mc = self.update_aad_profile(mc)
# update oidc issuer profile
mc = self.update_oidc_issuer_profile(mc)
# update auto upgrade profile
mc = self.update_auto_upgrade_profile(mc)
# update identity
mc = self.update_identity(mc)
# update addon profiles
mc = self.update_addon_profiles(mc)
# update defender
mc = self.update_defender(mc)
# update workload identity profile
mc = self.update_workload_identity_profile(mc)
        # update storage profile
mc = self.update_storage_profile(mc)
        # update azure keyvault kms
mc = self.update_azure_keyvault_kms(mc)
# update image cleaner
mc = self.update_image_cleaner(mc)
        # update identity profile
mc = self.update_identity_profile(mc)
# set up http proxy config
mc = self.update_http_proxy_config(mc)
# update workload autoscaler profile
mc = self.update_workload_auto_scaler_profile(mc)
# update kubernetes support plan
mc = self.update_k8s_support_plan(mc)
# update azure monitor metrics profile
mc = self.update_azure_monitor_profile(mc)
# update azure container storage
mc = self.update_azure_container_storage(mc)
# update cluster upgrade settings
mc = self.update_upgrade_settings(mc)
# update metrics profile
mc = self.update_metrics_profile(mc)
# update node resource group profile
mc = self.update_node_resource_group_profile(mc)
return mc | The overall controller used to update the default ManagedCluster profile.
The completely updated ManagedCluster object will later be passed as a parameter to the underlying SDK
(mgmt-containerservice) to send the actual request.
:return: the ManagedCluster object | update_mc_profile_default | python | Azure/azure-cli | src/azure-cli/azure/cli/command_modules/acs/managed_cluster_decorator.py | https://github.com/Azure/azure-cli/blob/master/src/azure-cli/azure/cli/command_modules/acs/managed_cluster_decorator.py | MIT |
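Finally, a hedged sketch (not part of the source) of how a command handler might drive this pipeline; the decorator class name and the update_mc call that sends the request are assumptions based on the surrounding module, and cmd/client/raw_parameters come from the CLI command layer.
# Illustrative only: exercising the update pipeline end to end.
def sketch_aks_update(cmd, client, raw_parameters, resource_type):
    # class name assumed from the surrounding module
    updater = AKSManagedClusterUpdateDecorator(cmd, client, raw_parameters, resource_type)
    mc = updater.update_mc_profile_default()  # assemble the updated ManagedCluster
    return updater.update_mc(mc)              # send the request (method name assumed)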