Dataset columns (name, dtype/statistic, min, max):

  body_hash               stringlengths   64    64
  body                    stringlengths   23    109k
  docstring               stringlengths   1     57k
  path                    stringlengths   4     198
  name                    stringlengths   1     115
  repository_name         stringlengths   7     111
  repository_stars        float64         0     191k
  lang                    stringclasses   1 value
  body_without_docstring  stringlengths   14    108k
  unified                 stringlengths   45    133k
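Each record below fills these columns for one Python function extracted from pulumi/pulumi-kubernetes-crds. A minimal sketch of inspecting such a dataset with the Hugging Face `datasets` library follows; the dataset identifier is a made-up placeholder, and the final assertion only reflects the layout visible in the rows shown below (unified = body_without_docstring + <|docstring|> + docstring + <|endoftext|>).

```python
# Sketch only: "org/pulumi-crds-docstrings" is a hypothetical placeholder id,
# not the real dataset name.
from datasets import load_dataset

ds = load_dataset("org/pulumi-crds-docstrings", split="train")

for row in ds.select(range(3)):
    print(row["repository_name"], row["path"])
    print(row["name"], "->", row["docstring"][:80])
    # In the rows shown below, "unified" is the docstring-stripped body followed
    # by the docstring, wrapped in <|docstring|> ... <|endoftext|> markers.
    assert row["unified"].startswith(row["body_without_docstring"])
```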
6cf7c8938f7ceca38c2736fba43ed8ef4a46a14b2f0c1c1a3e2847900d0ba1cc
@property @pulumi.getter def key(self) -> pulumi.Input[str]: '\n The label key that the selector applies to.\n ' return pulumi.get(self, 'key')
The label key that the selector applies to.
operators/ibm-block-csi-operator-community/python/pulumi_pulumi_kubernetes_crds_operators_ibm_block_csi_operator_community/csi/v1/_inputs.py
key
pulumi/pulumi-kubernetes-crds
0
python
@property @pulumi.getter def key(self) -> pulumi.Input[str]: '\n \n ' return pulumi.get(self, 'key')
@property @pulumi.getter def key(self) -> pulumi.Input[str]: '\n \n ' return pulumi.get(self, 'key')<|docstring|>The label key that the selector applies to.<|endoftext|>
026834ef706ec81b0dac6449994d81a644e4e6273652b199546c01310ae542ba
@property @pulumi.getter def operator(self) -> pulumi.Input[str]: "\n Represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists, DoesNotExist. Gt, and Lt.\n " return pulumi.get(self, 'operator')
Represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists, DoesNotExist. Gt, and Lt.
operators/ibm-block-csi-operator-community/python/pulumi_pulumi_kubernetes_crds_operators_ibm_block_csi_operator_community/csi/v1/_inputs.py
operator
pulumi/pulumi-kubernetes-crds
0
python
@property @pulumi.getter def operator(self) -> pulumi.Input[str]: "\n \n " return pulumi.get(self, 'operator')
@property @pulumi.getter def operator(self) -> pulumi.Input[str]: "\n \n " return pulumi.get(self, 'operator')<|docstring|>Represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists, DoesNotExist. Gt, and Lt.<|endoftext|>
1156c9e0af0cbb14c25f97914f22f3802e172355689d41403da88d3ce78227b5
@property @pulumi.getter def values(self) -> Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]: '\n An array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. If the operator is Gt or Lt, the values array must have a single element, which will be interpreted as an integer. This array is replaced during a strategic merge patch.\n ' return pulumi.get(self, 'values')
An array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. If the operator is Gt or Lt, the values array must have a single element, which will be interpreted as an integer. This array is replaced during a strategic merge patch.
operators/ibm-block-csi-operator-community/python/pulumi_pulumi_kubernetes_crds_operators_ibm_block_csi_operator_community/csi/v1/_inputs.py
values
pulumi/pulumi-kubernetes-crds
0
python
@property @pulumi.getter def values(self) -> Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]: '\n \n ' return pulumi.get(self, 'values')
@property @pulumi.getter def values(self) -> Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]: '\n \n ' return pulumi.get(self, 'values')<|docstring|>An array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. If the operator is Gt or Lt, the values array must have a single element, which will be interpreted as an integer. This array is replaced during a strategic merge patch.<|endoftext|>
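The key, operator, and values getters above document how the operator constrains the values array (In/NotIn: non-empty; Exists/DoesNotExist: empty; Gt/Lt: exactly one element, interpreted as an integer). A small hypothetical helper, not part of the generated SDK, that encodes exactly those rules:

```python
# Hypothetical validation helper; it only encodes the rules quoted in the
# docstrings above.
from typing import Optional, Sequence


def validate_requirement(operator: str, values: Optional[Sequence[str]]) -> None:
    vals = list(values or [])
    if operator in ("In", "NotIn"):
        if not vals:
            raise ValueError(f"{operator} requires a non-empty values array")
    elif operator in ("Exists", "DoesNotExist"):
        if vals:
            raise ValueError(f"{operator} requires an empty values array")
    elif operator in ("Gt", "Lt"):
        if len(vals) != 1:
            raise ValueError(f"{operator} requires exactly one value")
        int(vals[0])  # the single element is interpreted as an integer
    else:
        raise ValueError(f"unknown operator {operator!r}")


validate_requirement("In", ["ssd", "nvme"])
validate_requirement("Exists", None)
validate_requirement("Gt", ["4"])
```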
57c9e894aa291b9c40773ebd68d1e6baed160d9adcd1f724cc2823e7fb689af0
def __init__(__self__, *, preferred_during_scheduling_ignored_during_execution: Optional[pulumi.Input[Sequence[pulumi.Input['IBMBlockCSISpecNodeAffinityPodAffinityPreferredDuringSchedulingIgnoredDuringExecutionArgs']]]]=None, required_during_scheduling_ignored_during_execution: Optional[pulumi.Input[Sequence[pulumi.Input['IBMBlockCSISpecNodeAffinityPodAffinityRequiredDuringSchedulingIgnoredDuringExecutionArgs']]]]=None): '\n Describes pod affinity scheduling rules (e.g. co-locate this pod in the same node, zone, etc. as some other pod(s)).\n :param pulumi.Input[Sequence[pulumi.Input[\'IBMBlockCSISpecNodeAffinityPodAffinityPreferredDuringSchedulingIgnoredDuringExecutionArgs\']]] preferred_during_scheduling_ignored_during_execution: The scheduler will prefer to schedule pods to nodes that satisfy the affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding "weight" to the sum if the node has pods which matches the corresponding podAffinityTerm; the node(s) with the highest sum are the most preferred.\n :param pulumi.Input[Sequence[pulumi.Input[\'IBMBlockCSISpecNodeAffinityPodAffinityRequiredDuringSchedulingIgnoredDuringExecutionArgs\']]] required_during_scheduling_ignored_during_execution: If the affinity requirements specified by this field are not met at scheduling time, the pod will not be scheduled onto the node. If the affinity requirements specified by this field cease to be met at some point during pod execution (e.g. due to a pod label update), the system may or may not try to eventually evict the pod from its node. When there are multiple elements, the lists of nodes corresponding to each podAffinityTerm are intersected, i.e. all terms must be satisfied.\n ' if (preferred_during_scheduling_ignored_during_execution is not None): pulumi.set(__self__, 'preferred_during_scheduling_ignored_during_execution', preferred_during_scheduling_ignored_during_execution) if (required_during_scheduling_ignored_during_execution is not None): pulumi.set(__self__, 'required_during_scheduling_ignored_during_execution', required_during_scheduling_ignored_during_execution)
Describes pod affinity scheduling rules (e.g. co-locate this pod in the same node, zone, etc. as some other pod(s)). :param pulumi.Input[Sequence[pulumi.Input['IBMBlockCSISpecNodeAffinityPodAffinityPreferredDuringSchedulingIgnoredDuringExecutionArgs']]] preferred_during_scheduling_ignored_during_execution: The scheduler will prefer to schedule pods to nodes that satisfy the affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding "weight" to the sum if the node has pods which matches the corresponding podAffinityTerm; the node(s) with the highest sum are the most preferred. :param pulumi.Input[Sequence[pulumi.Input['IBMBlockCSISpecNodeAffinityPodAffinityRequiredDuringSchedulingIgnoredDuringExecutionArgs']]] required_during_scheduling_ignored_during_execution: If the affinity requirements specified by this field are not met at scheduling time, the pod will not be scheduled onto the node. If the affinity requirements specified by this field cease to be met at some point during pod execution (e.g. due to a pod label update), the system may or may not try to eventually evict the pod from its node. When there are multiple elements, the lists of nodes corresponding to each podAffinityTerm are intersected, i.e. all terms must be satisfied.
operators/ibm-block-csi-operator-community/python/pulumi_pulumi_kubernetes_crds_operators_ibm_block_csi_operator_community/csi/v1/_inputs.py
__init__
pulumi/pulumi-kubernetes-crds
0
python
def __init__(__self__, *, preferred_during_scheduling_ignored_during_execution: Optional[pulumi.Input[Sequence[pulumi.Input['IBMBlockCSISpecNodeAffinityPodAffinityPreferredDuringSchedulingIgnoredDuringExecutionArgs']]]]=None, required_during_scheduling_ignored_during_execution: Optional[pulumi.Input[Sequence[pulumi.Input['IBMBlockCSISpecNodeAffinityPodAffinityRequiredDuringSchedulingIgnoredDuringExecutionArgs']]]]=None): '\n Describes pod affinity scheduling rules (e.g. co-locate this pod in the same node, zone, etc. as some other pod(s)).\n :param pulumi.Input[Sequence[pulumi.Input[\'IBMBlockCSISpecNodeAffinityPodAffinityPreferredDuringSchedulingIgnoredDuringExecutionArgs\']]] preferred_during_scheduling_ignored_during_execution: The scheduler will prefer to schedule pods to nodes that satisfy the affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding "weight" to the sum if the node has pods which matches the corresponding podAffinityTerm; the node(s) with the highest sum are the most preferred.\n :param pulumi.Input[Sequence[pulumi.Input[\'IBMBlockCSISpecNodeAffinityPodAffinityRequiredDuringSchedulingIgnoredDuringExecutionArgs\']]] required_during_scheduling_ignored_during_execution: If the affinity requirements specified by this field are not met at scheduling time, the pod will not be scheduled onto the node. If the affinity requirements specified by this field cease to be met at some point during pod execution (e.g. due to a pod label update), the system may or may not try to eventually evict the pod from its node. When there are multiple elements, the lists of nodes corresponding to each podAffinityTerm are intersected, i.e. all terms must be satisfied.\n ' if (preferred_during_scheduling_ignored_during_execution is not None): pulumi.set(__self__, 'preferred_during_scheduling_ignored_during_execution', preferred_during_scheduling_ignored_during_execution) if (required_during_scheduling_ignored_during_execution is not None): pulumi.set(__self__, 'required_during_scheduling_ignored_during_execution', required_during_scheduling_ignored_during_execution)
def __init__(__self__, *, preferred_during_scheduling_ignored_during_execution: Optional[pulumi.Input[Sequence[pulumi.Input['IBMBlockCSISpecNodeAffinityPodAffinityPreferredDuringSchedulingIgnoredDuringExecutionArgs']]]]=None, required_during_scheduling_ignored_during_execution: Optional[pulumi.Input[Sequence[pulumi.Input['IBMBlockCSISpecNodeAffinityPodAffinityRequiredDuringSchedulingIgnoredDuringExecutionArgs']]]]=None): '\n Describes pod affinity scheduling rules (e.g. co-locate this pod in the same node, zone, etc. as some other pod(s)).\n :param pulumi.Input[Sequence[pulumi.Input[\'IBMBlockCSISpecNodeAffinityPodAffinityPreferredDuringSchedulingIgnoredDuringExecutionArgs\']]] preferred_during_scheduling_ignored_during_execution: The scheduler will prefer to schedule pods to nodes that satisfy the affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding "weight" to the sum if the node has pods which matches the corresponding podAffinityTerm; the node(s) with the highest sum are the most preferred.\n :param pulumi.Input[Sequence[pulumi.Input[\'IBMBlockCSISpecNodeAffinityPodAffinityRequiredDuringSchedulingIgnoredDuringExecutionArgs\']]] required_during_scheduling_ignored_during_execution: If the affinity requirements specified by this field are not met at scheduling time, the pod will not be scheduled onto the node. If the affinity requirements specified by this field cease to be met at some point during pod execution (e.g. due to a pod label update), the system may or may not try to eventually evict the pod from its node. When there are multiple elements, the lists of nodes corresponding to each podAffinityTerm are intersected, i.e. all terms must be satisfied.\n ' if (preferred_during_scheduling_ignored_during_execution is not None): pulumi.set(__self__, 'preferred_during_scheduling_ignored_during_execution', preferred_during_scheduling_ignored_during_execution) if (required_during_scheduling_ignored_during_execution is not None): pulumi.set(__self__, 'required_during_scheduling_ignored_during_execution', required_during_scheduling_ignored_during_execution)<|docstring|>Describes pod affinity scheduling rules (e.g. co-locate this pod in the same node, zone, etc. as some other pod(s)). :param pulumi.Input[Sequence[pulumi.Input['IBMBlockCSISpecNodeAffinityPodAffinityPreferredDuringSchedulingIgnoredDuringExecutionArgs']]] preferred_during_scheduling_ignored_during_execution: The scheduler will prefer to schedule pods to nodes that satisfy the affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding "weight" to the sum if the node has pods which matches the corresponding podAffinityTerm; the node(s) with the highest sum are the most preferred. 
:param pulumi.Input[Sequence[pulumi.Input['IBMBlockCSISpecNodeAffinityPodAffinityRequiredDuringSchedulingIgnoredDuringExecutionArgs']]] required_during_scheduling_ignored_during_execution: If the affinity requirements specified by this field are not met at scheduling time, the pod will not be scheduled onto the node. If the affinity requirements specified by this field cease to be met at some point during pod execution (e.g. due to a pod label update), the system may or may not try to eventually evict the pod from its node. When there are multiple elements, the lists of nodes corresponding to each podAffinityTerm are intersected, i.e. all terms must be satisfied.<|endoftext|>
837111f6474521c650fe424bbf103b07e8d58bbc33e85a92a366063efa7a2a38
@property @pulumi.getter(name='preferredDuringSchedulingIgnoredDuringExecution') def preferred_during_scheduling_ignored_during_execution(self) -> Optional[pulumi.Input[Sequence[pulumi.Input['IBMBlockCSISpecNodeAffinityPodAffinityPreferredDuringSchedulingIgnoredDuringExecutionArgs']]]]: '\n The scheduler will prefer to schedule pods to nodes that satisfy the affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding "weight" to the sum if the node has pods which matches the corresponding podAffinityTerm; the node(s) with the highest sum are the most preferred.\n ' return pulumi.get(self, 'preferred_during_scheduling_ignored_during_execution')
The scheduler will prefer to schedule pods to nodes that satisfy the affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding "weight" to the sum if the node has pods which matches the corresponding podAffinityTerm; the node(s) with the highest sum are the most preferred.
operators/ibm-block-csi-operator-community/python/pulumi_pulumi_kubernetes_crds_operators_ibm_block_csi_operator_community/csi/v1/_inputs.py
preferred_during_scheduling_ignored_during_execution
pulumi/pulumi-kubernetes-crds
0
python
@property @pulumi.getter(name='preferredDuringSchedulingIgnoredDuringExecution') def preferred_during_scheduling_ignored_during_execution(self) -> Optional[pulumi.Input[Sequence[pulumi.Input['IBMBlockCSISpecNodeAffinityPodAffinityPreferredDuringSchedulingIgnoredDuringExecutionArgs']]]]: '\n \n ' return pulumi.get(self, 'preferred_during_scheduling_ignored_during_execution')
@property @pulumi.getter(name='preferredDuringSchedulingIgnoredDuringExecution') def preferred_during_scheduling_ignored_during_execution(self) -> Optional[pulumi.Input[Sequence[pulumi.Input['IBMBlockCSISpecNodeAffinityPodAffinityPreferredDuringSchedulingIgnoredDuringExecutionArgs']]]]: '\n \n ' return pulumi.get(self, 'preferred_during_scheduling_ignored_during_execution')<|docstring|>The scheduler will prefer to schedule pods to nodes that satisfy the affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding "weight" to the sum if the node has pods which matches the corresponding podAffinityTerm; the node(s) with the highest sum are the most preferred.<|endoftext|>
bb1da76b1a1f6312c26174555c95998b360d9a130f64b4758b094b6a8afff3cb
@property @pulumi.getter(name='requiredDuringSchedulingIgnoredDuringExecution') def required_during_scheduling_ignored_during_execution(self) -> Optional[pulumi.Input[Sequence[pulumi.Input['IBMBlockCSISpecNodeAffinityPodAffinityRequiredDuringSchedulingIgnoredDuringExecutionArgs']]]]: '\n If the affinity requirements specified by this field are not met at scheduling time, the pod will not be scheduled onto the node. If the affinity requirements specified by this field cease to be met at some point during pod execution (e.g. due to a pod label update), the system may or may not try to eventually evict the pod from its node. When there are multiple elements, the lists of nodes corresponding to each podAffinityTerm are intersected, i.e. all terms must be satisfied.\n ' return pulumi.get(self, 'required_during_scheduling_ignored_during_execution')
If the affinity requirements specified by this field are not met at scheduling time, the pod will not be scheduled onto the node. If the affinity requirements specified by this field cease to be met at some point during pod execution (e.g. due to a pod label update), the system may or may not try to eventually evict the pod from its node. When there are multiple elements, the lists of nodes corresponding to each podAffinityTerm are intersected, i.e. all terms must be satisfied.
operators/ibm-block-csi-operator-community/python/pulumi_pulumi_kubernetes_crds_operators_ibm_block_csi_operator_community/csi/v1/_inputs.py
required_during_scheduling_ignored_during_execution
pulumi/pulumi-kubernetes-crds
0
python
@property @pulumi.getter(name='requiredDuringSchedulingIgnoredDuringExecution') def required_during_scheduling_ignored_during_execution(self) -> Optional[pulumi.Input[Sequence[pulumi.Input['IBMBlockCSISpecNodeAffinityPodAffinityRequiredDuringSchedulingIgnoredDuringExecutionArgs']]]]: '\n \n ' return pulumi.get(self, 'required_during_scheduling_ignored_during_execution')
@property @pulumi.getter(name='requiredDuringSchedulingIgnoredDuringExecution') def required_during_scheduling_ignored_during_execution(self) -> Optional[pulumi.Input[Sequence[pulumi.Input['IBMBlockCSISpecNodeAffinityPodAffinityRequiredDuringSchedulingIgnoredDuringExecutionArgs']]]]: '\n \n ' return pulumi.get(self, 'required_during_scheduling_ignored_during_execution')<|docstring|>If the affinity requirements specified by this field are not met at scheduling time, the pod will not be scheduled onto the node. If the affinity requirements specified by this field cease to be met at some point during pod execution (e.g. due to a pod label update), the system may or may not try to eventually evict the pod from its node. When there are multiple elements, the lists of nodes corresponding to each podAffinityTerm are intersected, i.e. all terms must be satisfied.<|endoftext|>
8cd6dfb1e2062325c56a0d0f9083f53db00af1b6e957176ce64e7d54512232e3
def __init__(__self__, *, pod_affinity_term: pulumi.Input['IBMBlockCSISpecNodeAffinityPodAffinityPreferredDuringSchedulingIgnoredDuringExecutionPodAffinityTermArgs'], weight: pulumi.Input[int]): "\n The weights of all of the matched WeightedPodAffinityTerm fields are added per-node to find the most preferred node(s)\n :param pulumi.Input['IBMBlockCSISpecNodeAffinityPodAffinityPreferredDuringSchedulingIgnoredDuringExecutionPodAffinityTermArgs'] pod_affinity_term: Required. A pod affinity term, associated with the corresponding weight.\n :param pulumi.Input[int] weight: weight associated with matching the corresponding podAffinityTerm, in the range 1-100.\n " pulumi.set(__self__, 'pod_affinity_term', pod_affinity_term) pulumi.set(__self__, 'weight', weight)
The weights of all of the matched WeightedPodAffinityTerm fields are added per-node to find the most preferred node(s) :param pulumi.Input['IBMBlockCSISpecNodeAffinityPodAffinityPreferredDuringSchedulingIgnoredDuringExecutionPodAffinityTermArgs'] pod_affinity_term: Required. A pod affinity term, associated with the corresponding weight. :param pulumi.Input[int] weight: weight associated with matching the corresponding podAffinityTerm, in the range 1-100.
operators/ibm-block-csi-operator-community/python/pulumi_pulumi_kubernetes_crds_operators_ibm_block_csi_operator_community/csi/v1/_inputs.py
__init__
pulumi/pulumi-kubernetes-crds
0
python
def __init__(__self__, *, pod_affinity_term: pulumi.Input['IBMBlockCSISpecNodeAffinityPodAffinityPreferredDuringSchedulingIgnoredDuringExecutionPodAffinityTermArgs'], weight: pulumi.Input[int]): "\n The weights of all of the matched WeightedPodAffinityTerm fields are added per-node to find the most preferred node(s)\n :param pulumi.Input['IBMBlockCSISpecNodeAffinityPodAffinityPreferredDuringSchedulingIgnoredDuringExecutionPodAffinityTermArgs'] pod_affinity_term: Required. A pod affinity term, associated with the corresponding weight.\n :param pulumi.Input[int] weight: weight associated with matching the corresponding podAffinityTerm, in the range 1-100.\n " pulumi.set(__self__, 'pod_affinity_term', pod_affinity_term) pulumi.set(__self__, 'weight', weight)
def __init__(__self__, *, pod_affinity_term: pulumi.Input['IBMBlockCSISpecNodeAffinityPodAffinityPreferredDuringSchedulingIgnoredDuringExecutionPodAffinityTermArgs'], weight: pulumi.Input[int]): "\n The weights of all of the matched WeightedPodAffinityTerm fields are added per-node to find the most preferred node(s)\n :param pulumi.Input['IBMBlockCSISpecNodeAffinityPodAffinityPreferredDuringSchedulingIgnoredDuringExecutionPodAffinityTermArgs'] pod_affinity_term: Required. A pod affinity term, associated with the corresponding weight.\n :param pulumi.Input[int] weight: weight associated with matching the corresponding podAffinityTerm, in the range 1-100.\n " pulumi.set(__self__, 'pod_affinity_term', pod_affinity_term) pulumi.set(__self__, 'weight', weight)<|docstring|>The weights of all of the matched WeightedPodAffinityTerm fields are added per-node to find the most preferred node(s) :param pulumi.Input['IBMBlockCSISpecNodeAffinityPodAffinityPreferredDuringSchedulingIgnoredDuringExecutionPodAffinityTermArgs'] pod_affinity_term: Required. A pod affinity term, associated with the corresponding weight. :param pulumi.Input[int] weight: weight associated with matching the corresponding podAffinityTerm, in the range 1-100.<|endoftext|>
7c0ea4baa6aef8ddf9ca57a39b02b8d8ed0755a1d1100abc1182070798b18980
@property @pulumi.getter(name='podAffinityTerm') def pod_affinity_term(self) -> pulumi.Input['IBMBlockCSISpecNodeAffinityPodAffinityPreferredDuringSchedulingIgnoredDuringExecutionPodAffinityTermArgs']: '\n Required. A pod affinity term, associated with the corresponding weight.\n ' return pulumi.get(self, 'pod_affinity_term')
Required. A pod affinity term, associated with the corresponding weight.
operators/ibm-block-csi-operator-community/python/pulumi_pulumi_kubernetes_crds_operators_ibm_block_csi_operator_community/csi/v1/_inputs.py
pod_affinity_term
pulumi/pulumi-kubernetes-crds
0
python
@property @pulumi.getter(name='podAffinityTerm') def pod_affinity_term(self) -> pulumi.Input['IBMBlockCSISpecNodeAffinityPodAffinityPreferredDuringSchedulingIgnoredDuringExecutionPodAffinityTermArgs']: '\n \n ' return pulumi.get(self, 'pod_affinity_term')
@property @pulumi.getter(name='podAffinityTerm') def pod_affinity_term(self) -> pulumi.Input['IBMBlockCSISpecNodeAffinityPodAffinityPreferredDuringSchedulingIgnoredDuringExecutionPodAffinityTermArgs']: '\n \n ' return pulumi.get(self, 'pod_affinity_term')<|docstring|>Required. A pod affinity term, associated with the corresponding weight.<|endoftext|>
cbda27b0cdf343ea47c23ff8abbfa8a96b716aaf51975c7c3ed2982e4a3adb2f
@property @pulumi.getter def weight(self) -> pulumi.Input[int]: '\n weight associated with matching the corresponding podAffinityTerm, in the range 1-100.\n ' return pulumi.get(self, 'weight')
weight associated with matching the corresponding podAffinityTerm, in the range 1-100.
operators/ibm-block-csi-operator-community/python/pulumi_pulumi_kubernetes_crds_operators_ibm_block_csi_operator_community/csi/v1/_inputs.py
weight
pulumi/pulumi-kubernetes-crds
0
python
@property @pulumi.getter def weight(self) -> pulumi.Input[int]: '\n \n ' return pulumi.get(self, 'weight')
@property @pulumi.getter def weight(self) -> pulumi.Input[int]: '\n \n ' return pulumi.get(self, 'weight')<|docstring|>weight associated with matching the corresponding podAffinityTerm, in the range 1-100.<|endoftext|>
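The pod_affinity_term and weight getters above belong to the weighted (preferred) term: a required pod affinity term paired with a weight in the 1-100 range. A construction sketch, assuming the generated package from the `path` column is installed, that the import path can be inferred from that path, and that these getters belong to the `...PreferredDuringSchedulingIgnoredDuringExecutionArgs` class named in the annotations earlier in this file:

```python
# Sketch only: import path inferred from the `path` column; class names taken
# from the type annotations in this file.
from pulumi_pulumi_kubernetes_crds_operators_ibm_block_csi_operator_community.csi.v1 import _inputs

PreferredTerm = _inputs.IBMBlockCSISpecNodeAffinityPodAffinityPreferredDuringSchedulingIgnoredDuringExecutionArgs
PodAffinityTerm = _inputs.IBMBlockCSISpecNodeAffinityPodAffinityPreferredDuringSchedulingIgnoredDuringExecutionPodAffinityTermArgs

# weight must be in the 1-100 range; topology_key is required on the term.
preferred = PreferredTerm(
    weight=50,
    pod_affinity_term=PodAffinityTerm(topology_key="kubernetes.io/hostname"),
)
```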
84fa5243527043a1fd72de1c908ae42c5f1419f5b209d54d35f47f1b03a4bb51
def __init__(__self__, *, topology_key: pulumi.Input[str], label_selector: Optional[pulumi.Input['IBMBlockCSISpecNodeAffinityPodAffinityPreferredDuringSchedulingIgnoredDuringExecutionPodAffinityTermLabelSelectorArgs']]=None, namespaces: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]=None): '\n Required. A pod affinity term, associated with the corresponding weight.\n :param pulumi.Input[str] topology_key: This pod should be co-located (affinity) or not co-located (anti-affinity) with the pods matching the labelSelector in the specified namespaces, where co-located is defined as running on a node whose value of the label with key topologyKey matches that of any node on which any of the selected pods is running. Empty topologyKey is not allowed.\n :param pulumi.Input[\'IBMBlockCSISpecNodeAffinityPodAffinityPreferredDuringSchedulingIgnoredDuringExecutionPodAffinityTermLabelSelectorArgs\'] label_selector: A label query over a set of resources, in this case pods.\n :param pulumi.Input[Sequence[pulumi.Input[str]]] namespaces: namespaces specifies which namespaces the labelSelector applies to (matches against); null or empty list means "this pod\'s namespace"\n ' pulumi.set(__self__, 'topology_key', topology_key) if (label_selector is not None): pulumi.set(__self__, 'label_selector', label_selector) if (namespaces is not None): pulumi.set(__self__, 'namespaces', namespaces)
Required. A pod affinity term, associated with the corresponding weight. :param pulumi.Input[str] topology_key: This pod should be co-located (affinity) or not co-located (anti-affinity) with the pods matching the labelSelector in the specified namespaces, where co-located is defined as running on a node whose value of the label with key topologyKey matches that of any node on which any of the selected pods is running. Empty topologyKey is not allowed. :param pulumi.Input['IBMBlockCSISpecNodeAffinityPodAffinityPreferredDuringSchedulingIgnoredDuringExecutionPodAffinityTermLabelSelectorArgs'] label_selector: A label query over a set of resources, in this case pods. :param pulumi.Input[Sequence[pulumi.Input[str]]] namespaces: namespaces specifies which namespaces the labelSelector applies to (matches against); null or empty list means "this pod's namespace"
operators/ibm-block-csi-operator-community/python/pulumi_pulumi_kubernetes_crds_operators_ibm_block_csi_operator_community/csi/v1/_inputs.py
__init__
pulumi/pulumi-kubernetes-crds
0
python
def __init__(__self__, *, topology_key: pulumi.Input[str], label_selector: Optional[pulumi.Input['IBMBlockCSISpecNodeAffinityPodAffinityPreferredDuringSchedulingIgnoredDuringExecutionPodAffinityTermLabelSelectorArgs']]=None, namespaces: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]=None): '\n Required. A pod affinity term, associated with the corresponding weight.\n :param pulumi.Input[str] topology_key: This pod should be co-located (affinity) or not co-located (anti-affinity) with the pods matching the labelSelector in the specified namespaces, where co-located is defined as running on a node whose value of the label with key topologyKey matches that of any node on which any of the selected pods is running. Empty topologyKey is not allowed.\n :param pulumi.Input[\'IBMBlockCSISpecNodeAffinityPodAffinityPreferredDuringSchedulingIgnoredDuringExecutionPodAffinityTermLabelSelectorArgs\'] label_selector: A label query over a set of resources, in this case pods.\n :param pulumi.Input[Sequence[pulumi.Input[str]]] namespaces: namespaces specifies which namespaces the labelSelector applies to (matches against); null or empty list means "this pod\'s namespace"\n ' pulumi.set(__self__, 'topology_key', topology_key) if (label_selector is not None): pulumi.set(__self__, 'label_selector', label_selector) if (namespaces is not None): pulumi.set(__self__, 'namespaces', namespaces)
def __init__(__self__, *, topology_key: pulumi.Input[str], label_selector: Optional[pulumi.Input['IBMBlockCSISpecNodeAffinityPodAffinityPreferredDuringSchedulingIgnoredDuringExecutionPodAffinityTermLabelSelectorArgs']]=None, namespaces: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]=None): '\n Required. A pod affinity term, associated with the corresponding weight.\n :param pulumi.Input[str] topology_key: This pod should be co-located (affinity) or not co-located (anti-affinity) with the pods matching the labelSelector in the specified namespaces, where co-located is defined as running on a node whose value of the label with key topologyKey matches that of any node on which any of the selected pods is running. Empty topologyKey is not allowed.\n :param pulumi.Input[\'IBMBlockCSISpecNodeAffinityPodAffinityPreferredDuringSchedulingIgnoredDuringExecutionPodAffinityTermLabelSelectorArgs\'] label_selector: A label query over a set of resources, in this case pods.\n :param pulumi.Input[Sequence[pulumi.Input[str]]] namespaces: namespaces specifies which namespaces the labelSelector applies to (matches against); null or empty list means "this pod\'s namespace"\n ' pulumi.set(__self__, 'topology_key', topology_key) if (label_selector is not None): pulumi.set(__self__, 'label_selector', label_selector) if (namespaces is not None): pulumi.set(__self__, 'namespaces', namespaces)<|docstring|>Required. A pod affinity term, associated with the corresponding weight. :param pulumi.Input[str] topology_key: This pod should be co-located (affinity) or not co-located (anti-affinity) with the pods matching the labelSelector in the specified namespaces, where co-located is defined as running on a node whose value of the label with key topologyKey matches that of any node on which any of the selected pods is running. Empty topologyKey is not allowed. :param pulumi.Input['IBMBlockCSISpecNodeAffinityPodAffinityPreferredDuringSchedulingIgnoredDuringExecutionPodAffinityTermLabelSelectorArgs'] label_selector: A label query over a set of resources, in this case pods. :param pulumi.Input[Sequence[pulumi.Input[str]]] namespaces: namespaces specifies which namespaces the labelSelector applies to (matches against); null or empty list means "this pod's namespace"<|endoftext|>
1a7fd32bfbac3e1d1c7b3da8ebc96670c998ac5294448293ae6cf957310d8ab3
@property @pulumi.getter(name='topologyKey') def topology_key(self) -> pulumi.Input[str]: '\n This pod should be co-located (affinity) or not co-located (anti-affinity) with the pods matching the labelSelector in the specified namespaces, where co-located is defined as running on a node whose value of the label with key topologyKey matches that of any node on which any of the selected pods is running. Empty topologyKey is not allowed.\n ' return pulumi.get(self, 'topology_key')
This pod should be co-located (affinity) or not co-located (anti-affinity) with the pods matching the labelSelector in the specified namespaces, where co-located is defined as running on a node whose value of the label with key topologyKey matches that of any node on which any of the selected pods is running. Empty topologyKey is not allowed.
operators/ibm-block-csi-operator-community/python/pulumi_pulumi_kubernetes_crds_operators_ibm_block_csi_operator_community/csi/v1/_inputs.py
topology_key
pulumi/pulumi-kubernetes-crds
0
python
@property @pulumi.getter(name='topologyKey') def topology_key(self) -> pulumi.Input[str]: '\n \n ' return pulumi.get(self, 'topology_key')
@property @pulumi.getter(name='topologyKey') def topology_key(self) -> pulumi.Input[str]: '\n \n ' return pulumi.get(self, 'topology_key')<|docstring|>This pod should be co-located (affinity) or not co-located (anti-affinity) with the pods matching the labelSelector in the specified namespaces, where co-located is defined as running on a node whose value of the label with key topologyKey matches that of any node on which any of the selected pods is running. Empty topologyKey is not allowed.<|endoftext|>
ed9d0baa91509a37d44c08760ddbb4ebdde598a0fac17398e60e9fa10da23b90
@property @pulumi.getter(name='labelSelector') def label_selector(self) -> Optional[pulumi.Input['IBMBlockCSISpecNodeAffinityPodAffinityPreferredDuringSchedulingIgnoredDuringExecutionPodAffinityTermLabelSelectorArgs']]: '\n A label query over a set of resources, in this case pods.\n ' return pulumi.get(self, 'label_selector')
A label query over a set of resources, in this case pods.
operators/ibm-block-csi-operator-community/python/pulumi_pulumi_kubernetes_crds_operators_ibm_block_csi_operator_community/csi/v1/_inputs.py
label_selector
pulumi/pulumi-kubernetes-crds
0
python
@property @pulumi.getter(name='labelSelector') def label_selector(self) -> Optional[pulumi.Input['IBMBlockCSISpecNodeAffinityPodAffinityPreferredDuringSchedulingIgnoredDuringExecutionPodAffinityTermLabelSelectorArgs']]: '\n \n ' return pulumi.get(self, 'label_selector')
@property @pulumi.getter(name='labelSelector') def label_selector(self) -> Optional[pulumi.Input['IBMBlockCSISpecNodeAffinityPodAffinityPreferredDuringSchedulingIgnoredDuringExecutionPodAffinityTermLabelSelectorArgs']]: '\n \n ' return pulumi.get(self, 'label_selector')<|docstring|>A label query over a set of resources, in this case pods.<|endoftext|>
cafaaebd0b2bde715d14eccb6c5349f21790206a44c76155207e11f20235f51a
@property @pulumi.getter def namespaces(self) -> Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]: '\n namespaces specifies which namespaces the labelSelector applies to (matches against); null or empty list means "this pod\'s namespace"\n ' return pulumi.get(self, 'namespaces')
namespaces specifies which namespaces the labelSelector applies to (matches against); null or empty list means "this pod's namespace"
operators/ibm-block-csi-operator-community/python/pulumi_pulumi_kubernetes_crds_operators_ibm_block_csi_operator_community/csi/v1/_inputs.py
namespaces
pulumi/pulumi-kubernetes-crds
0
python
@property @pulumi.getter def namespaces(self) -> Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]: '\n namespaces specifies which namespaces the labelSelector applies to (matches against); null or empty list means "this pod\'s namespace"\n ' return pulumi.get(self, 'namespaces')
@property @pulumi.getter def namespaces(self) -> Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]: '\n namespaces specifies which namespaces the labelSelector applies to (matches against); null or empty list means "this pod\'s namespace"\n ' return pulumi.get(self, 'namespaces')<|docstring|>namespaces specifies which namespaces the labelSelector applies to (matches against); null or empty list means "this pod's namespace"<|endoftext|>
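A pod affinity term requires topologyKey and optionally takes a label selector and a namespaces list (null or an empty list meaning the pod's own namespace). A sketch under the same import assumption as above; the label key and value are illustrative only:

```python
# Sketch only: same inferred import path as the previous example; label values
# are illustrative, not taken from the operator.
from pulumi_pulumi_kubernetes_crds_operators_ibm_block_csi_operator_community.csi.v1 import _inputs

Term = _inputs.IBMBlockCSISpecNodeAffinityPodAffinityPreferredDuringSchedulingIgnoredDuringExecutionPodAffinityTermArgs
Selector = _inputs.IBMBlockCSISpecNodeAffinityPodAffinityPreferredDuringSchedulingIgnoredDuringExecutionPodAffinityTermLabelSelectorArgs

term = Term(
    topology_key="topology.kubernetes.io/zone",  # required; an empty topologyKey is not allowed
    label_selector=Selector(match_labels={"app": "example"}),
    namespaces=None,  # null or empty list means "this pod's namespace"
)
```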
bd67f66d19745ee0efd8236807b7aa8721b3fe938af84960d711ca2cf5a3b7e6
def __init__(__self__, *, match_expressions: Optional[pulumi.Input[Sequence[pulumi.Input['IBMBlockCSISpecNodeAffinityPodAffinityPreferredDuringSchedulingIgnoredDuringExecutionPodAffinityTermLabelSelectorMatchExpressionsArgs']]]]=None, match_labels: Optional[pulumi.Input[Mapping[(str, pulumi.Input[str])]]]=None): '\n A label query over a set of resources, in this case pods.\n :param pulumi.Input[Sequence[pulumi.Input[\'IBMBlockCSISpecNodeAffinityPodAffinityPreferredDuringSchedulingIgnoredDuringExecutionPodAffinityTermLabelSelectorMatchExpressionsArgs\']]] match_expressions: matchExpressions is a list of label selector requirements. The requirements are ANDed.\n :param pulumi.Input[Mapping[str, pulumi.Input[str]]] match_labels: matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed.\n ' if (match_expressions is not None): pulumi.set(__self__, 'match_expressions', match_expressions) if (match_labels is not None): pulumi.set(__self__, 'match_labels', match_labels)
A label query over a set of resources, in this case pods. :param pulumi.Input[Sequence[pulumi.Input['IBMBlockCSISpecNodeAffinityPodAffinityPreferredDuringSchedulingIgnoredDuringExecutionPodAffinityTermLabelSelectorMatchExpressionsArgs']]] match_expressions: matchExpressions is a list of label selector requirements. The requirements are ANDed. :param pulumi.Input[Mapping[str, pulumi.Input[str]]] match_labels: matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed.
operators/ibm-block-csi-operator-community/python/pulumi_pulumi_kubernetes_crds_operators_ibm_block_csi_operator_community/csi/v1/_inputs.py
__init__
pulumi/pulumi-kubernetes-crds
0
python
def __init__(__self__, *, match_expressions: Optional[pulumi.Input[Sequence[pulumi.Input['IBMBlockCSISpecNodeAffinityPodAffinityPreferredDuringSchedulingIgnoredDuringExecutionPodAffinityTermLabelSelectorMatchExpressionsArgs']]]]=None, match_labels: Optional[pulumi.Input[Mapping[(str, pulumi.Input[str])]]]=None): '\n A label query over a set of resources, in this case pods.\n :param pulumi.Input[Sequence[pulumi.Input[\'IBMBlockCSISpecNodeAffinityPodAffinityPreferredDuringSchedulingIgnoredDuringExecutionPodAffinityTermLabelSelectorMatchExpressionsArgs\']]] match_expressions: matchExpressions is a list of label selector requirements. The requirements are ANDed.\n :param pulumi.Input[Mapping[str, pulumi.Input[str]]] match_labels: matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed.\n ' if (match_expressions is not None): pulumi.set(__self__, 'match_expressions', match_expressions) if (match_labels is not None): pulumi.set(__self__, 'match_labels', match_labels)
def __init__(__self__, *, match_expressions: Optional[pulumi.Input[Sequence[pulumi.Input['IBMBlockCSISpecNodeAffinityPodAffinityPreferredDuringSchedulingIgnoredDuringExecutionPodAffinityTermLabelSelectorMatchExpressionsArgs']]]]=None, match_labels: Optional[pulumi.Input[Mapping[(str, pulumi.Input[str])]]]=None): '\n A label query over a set of resources, in this case pods.\n :param pulumi.Input[Sequence[pulumi.Input[\'IBMBlockCSISpecNodeAffinityPodAffinityPreferredDuringSchedulingIgnoredDuringExecutionPodAffinityTermLabelSelectorMatchExpressionsArgs\']]] match_expressions: matchExpressions is a list of label selector requirements. The requirements are ANDed.\n :param pulumi.Input[Mapping[str, pulumi.Input[str]]] match_labels: matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed.\n ' if (match_expressions is not None): pulumi.set(__self__, 'match_expressions', match_expressions) if (match_labels is not None): pulumi.set(__self__, 'match_labels', match_labels)<|docstring|>A label query over a set of resources, in this case pods. :param pulumi.Input[Sequence[pulumi.Input['IBMBlockCSISpecNodeAffinityPodAffinityPreferredDuringSchedulingIgnoredDuringExecutionPodAffinityTermLabelSelectorMatchExpressionsArgs']]] match_expressions: matchExpressions is a list of label selector requirements. The requirements are ANDed. :param pulumi.Input[Mapping[str, pulumi.Input[str]]] match_labels: matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed.<|endoftext|>
0c2aeb6bbe462ee058516bd6abbdcf2907548a7f01973ff2b0c4aa0659feb180
@property @pulumi.getter(name='matchExpressions') def match_expressions(self) -> Optional[pulumi.Input[Sequence[pulumi.Input['IBMBlockCSISpecNodeAffinityPodAffinityPreferredDuringSchedulingIgnoredDuringExecutionPodAffinityTermLabelSelectorMatchExpressionsArgs']]]]: '\n matchExpressions is a list of label selector requirements. The requirements are ANDed.\n ' return pulumi.get(self, 'match_expressions')
matchExpressions is a list of label selector requirements. The requirements are ANDed.
operators/ibm-block-csi-operator-community/python/pulumi_pulumi_kubernetes_crds_operators_ibm_block_csi_operator_community/csi/v1/_inputs.py
match_expressions
pulumi/pulumi-kubernetes-crds
0
python
@property @pulumi.getter(name='matchExpressions') def match_expressions(self) -> Optional[pulumi.Input[Sequence[pulumi.Input['IBMBlockCSISpecNodeAffinityPodAffinityPreferredDuringSchedulingIgnoredDuringExecutionPodAffinityTermLabelSelectorMatchExpressionsArgs']]]]: '\n \n ' return pulumi.get(self, 'match_expressions')
@property @pulumi.getter(name='matchExpressions') def match_expressions(self) -> Optional[pulumi.Input[Sequence[pulumi.Input['IBMBlockCSISpecNodeAffinityPodAffinityPreferredDuringSchedulingIgnoredDuringExecutionPodAffinityTermLabelSelectorMatchExpressionsArgs']]]]: '\n \n ' return pulumi.get(self, 'match_expressions')<|docstring|>matchExpressions is a list of label selector requirements. The requirements are ANDed.<|endoftext|>
c8d28c22c7664eea6a56b076ef07546eb4ceb965de2e9c944999569f79409a76
@property @pulumi.getter(name='matchLabels') def match_labels(self) -> Optional[pulumi.Input[Mapping[(str, pulumi.Input[str])]]]: '\n matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed.\n ' return pulumi.get(self, 'match_labels')
matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed.
operators/ibm-block-csi-operator-community/python/pulumi_pulumi_kubernetes_crds_operators_ibm_block_csi_operator_community/csi/v1/_inputs.py
match_labels
pulumi/pulumi-kubernetes-crds
0
python
@property @pulumi.getter(name='matchLabels') def match_labels(self) -> Optional[pulumi.Input[Mapping[(str, pulumi.Input[str])]]]: '\n \n ' return pulumi.get(self, 'match_labels')
@property @pulumi.getter(name='matchLabels') def match_labels(self) -> Optional[pulumi.Input[Mapping[(str, pulumi.Input[str])]]]: '\n \n ' return pulumi.get(self, 'match_labels')<|docstring|>matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed.<|endoftext|>
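The matchLabels docstring above states that a single {key,value} pair is shorthand for a matchExpressions entry whose key is that key, whose operator is In, and whose values array holds just that value. Under the same import assumption (and with an illustrative label), the two selectors below are therefore equivalent:

```python
# Sketch only: same inferred import path; "app"/"example" are illustrative.
from pulumi_pulumi_kubernetes_crds_operators_ibm_block_csi_operator_community.csi.v1 import _inputs

Selector = _inputs.IBMBlockCSISpecNodeAffinityPodAffinityPreferredDuringSchedulingIgnoredDuringExecutionPodAffinityTermLabelSelectorArgs
Requirement = _inputs.IBMBlockCSISpecNodeAffinityPodAffinityPreferredDuringSchedulingIgnoredDuringExecutionPodAffinityTermLabelSelectorMatchExpressionsArgs

# shorthand form
by_labels = Selector(match_labels={"app": "example"})

# expanded, equivalent form: key "app", operator "In", single value "example"
by_expressions = Selector(
    match_expressions=[Requirement(key="app", operator="In", values=["example"])]
)
```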
c920819fc0b4fc3fe978eacb8fbb8d5f1500cc63d1fed6bf3274b88c2f2ce017
def __init__(__self__, *, key: pulumi.Input[str], operator: pulumi.Input[str], values: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]=None): "\n A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values.\n :param pulumi.Input[str] key: key is the label key that the selector applies to.\n :param pulumi.Input[str] operator: operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist.\n :param pulumi.Input[Sequence[pulumi.Input[str]]] values: values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch.\n " pulumi.set(__self__, 'key', key) pulumi.set(__self__, 'operator', operator) if (values is not None): pulumi.set(__self__, 'values', values)
A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. :param pulumi.Input[str] key: key is the label key that the selector applies to. :param pulumi.Input[str] operator: operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. :param pulumi.Input[Sequence[pulumi.Input[str]]] values: values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch.
operators/ibm-block-csi-operator-community/python/pulumi_pulumi_kubernetes_crds_operators_ibm_block_csi_operator_community/csi/v1/_inputs.py
__init__
pulumi/pulumi-kubernetes-crds
0
python
def __init__(__self__, *, key: pulumi.Input[str], operator: pulumi.Input[str], values: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]=None): "\n A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values.\n :param pulumi.Input[str] key: key is the label key that the selector applies to.\n :param pulumi.Input[str] operator: operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist.\n :param pulumi.Input[Sequence[pulumi.Input[str]]] values: values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch.\n " pulumi.set(__self__, 'key', key) pulumi.set(__self__, 'operator', operator) if (values is not None): pulumi.set(__self__, 'values', values)
def __init__(__self__, *, key: pulumi.Input[str], operator: pulumi.Input[str], values: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]=None): "\n A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values.\n :param pulumi.Input[str] key: key is the label key that the selector applies to.\n :param pulumi.Input[str] operator: operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist.\n :param pulumi.Input[Sequence[pulumi.Input[str]]] values: values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch.\n " pulumi.set(__self__, 'key', key) pulumi.set(__self__, 'operator', operator) if (values is not None): pulumi.set(__self__, 'values', values)<|docstring|>A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. :param pulumi.Input[str] key: key is the label key that the selector applies to. :param pulumi.Input[str] operator: operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. :param pulumi.Input[Sequence[pulumi.Input[str]]] values: values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch.<|endoftext|>
873b01de6c7b4b390af7286e21dc936c9a738bde5f172c97010521d8ead34b94
@property @pulumi.getter def key(self) -> pulumi.Input[str]: '\n key is the label key that the selector applies to.\n ' return pulumi.get(self, 'key')
key is the label key that the selector applies to.
operators/ibm-block-csi-operator-community/python/pulumi_pulumi_kubernetes_crds_operators_ibm_block_csi_operator_community/csi/v1/_inputs.py
key
pulumi/pulumi-kubernetes-crds
0
python
@property @pulumi.getter def key(self) -> pulumi.Input[str]: '\n \n ' return pulumi.get(self, 'key')
@property @pulumi.getter def key(self) -> pulumi.Input[str]: '\n \n ' return pulumi.get(self, 'key')<|docstring|>key is the label key that the selector applies to.<|endoftext|>
0c4e49e08176a0283877710e6179293ccac4fbb9a05b090e0d01b0bc406ff63f
@property @pulumi.getter def operator(self) -> pulumi.Input[str]: "\n operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist.\n " return pulumi.get(self, 'operator')
operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist.
operators/ibm-block-csi-operator-community/python/pulumi_pulumi_kubernetes_crds_operators_ibm_block_csi_operator_community/csi/v1/_inputs.py
operator
pulumi/pulumi-kubernetes-crds
0
python
@property @pulumi.getter def operator(self) -> pulumi.Input[str]: "\n \n " return pulumi.get(self, 'operator')
@property @pulumi.getter def operator(self) -> pulumi.Input[str]: "\n \n " return pulumi.get(self, 'operator')<|docstring|>operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist.<|endoftext|>
8890282f9ab13c97f11910757bc10d16f9baa2504cd65573fd75ceafa57e1daf
@property @pulumi.getter def values(self) -> Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]: '\n values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch.\n ' return pulumi.get(self, 'values')
values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch.
operators/ibm-block-csi-operator-community/python/pulumi_pulumi_kubernetes_crds_operators_ibm_block_csi_operator_community/csi/v1/_inputs.py
values
pulumi/pulumi-kubernetes-crds
0
python
@property @pulumi.getter def values(self) -> Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]: '\n \n ' return pulumi.get(self, 'values')
@property @pulumi.getter def values(self) -> Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]: '\n \n ' return pulumi.get(self, 'values')<|docstring|>values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch.<|endoftext|>
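For this label selector requirement, In and NotIn need a non-empty values array while Exists and DoesNotExist need it empty; unlike the variant near the top of this file, there is no Gt/Lt here. A sketch of both cases, same import assumption and illustrative keys:

```python
# Sketch only: same inferred import path; keys and values are illustrative.
from pulumi_pulumi_kubernetes_crds_operators_ibm_block_csi_operator_community.csi.v1 import _inputs

Requirement = _inputs.IBMBlockCSISpecNodeAffinityPodAffinityPreferredDuringSchedulingIgnoredDuringExecutionPodAffinityTermLabelSelectorMatchExpressionsArgs

# In/NotIn require a non-empty values array ...
tier_in = Requirement(key="tier", operator="In", values=["frontend", "backend"])

# ... while Exists/DoesNotExist require values to be omitted or empty.
has_release = Requirement(key="release", operator="Exists")
```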
3583cae28aa148b6c7a5ba4dbbbf0664eb1f613ae973b19d42a03d502814d12f
def __init__(__self__, *, topology_key: pulumi.Input[str], label_selector: Optional[pulumi.Input['IBMBlockCSISpecNodeAffinityPodAffinityRequiredDuringSchedulingIgnoredDuringExecutionLabelSelectorArgs']]=None, namespaces: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]=None): '\n Defines a set of pods (namely those matching the labelSelector relative to the given namespace(s)) that this pod should be co-located (affinity) or not co-located (anti-affinity) with, where co-located is defined as running on a node whose value of the label with key <topologyKey> matches that of any node on which a pod of the set of pods is running\n :param pulumi.Input[str] topology_key: This pod should be co-located (affinity) or not co-located (anti-affinity) with the pods matching the labelSelector in the specified namespaces, where co-located is defined as running on a node whose value of the label with key topologyKey matches that of any node on which any of the selected pods is running. Empty topologyKey is not allowed.\n :param pulumi.Input[\'IBMBlockCSISpecNodeAffinityPodAffinityRequiredDuringSchedulingIgnoredDuringExecutionLabelSelectorArgs\'] label_selector: A label query over a set of resources, in this case pods.\n :param pulumi.Input[Sequence[pulumi.Input[str]]] namespaces: namespaces specifies which namespaces the labelSelector applies to (matches against); null or empty list means "this pod\'s namespace"\n ' pulumi.set(__self__, 'topology_key', topology_key) if (label_selector is not None): pulumi.set(__self__, 'label_selector', label_selector) if (namespaces is not None): pulumi.set(__self__, 'namespaces', namespaces)
Defines a set of pods (namely those matching the labelSelector relative to the given namespace(s)) that this pod should be co-located (affinity) or not co-located (anti-affinity) with, where co-located is defined as running on a node whose value of the label with key <topologyKey> matches that of any node on which a pod of the set of pods is running :param pulumi.Input[str] topology_key: This pod should be co-located (affinity) or not co-located (anti-affinity) with the pods matching the labelSelector in the specified namespaces, where co-located is defined as running on a node whose value of the label with key topologyKey matches that of any node on which any of the selected pods is running. Empty topologyKey is not allowed. :param pulumi.Input['IBMBlockCSISpecNodeAffinityPodAffinityRequiredDuringSchedulingIgnoredDuringExecutionLabelSelectorArgs'] label_selector: A label query over a set of resources, in this case pods. :param pulumi.Input[Sequence[pulumi.Input[str]]] namespaces: namespaces specifies which namespaces the labelSelector applies to (matches against); null or empty list means "this pod's namespace"
operators/ibm-block-csi-operator-community/python/pulumi_pulumi_kubernetes_crds_operators_ibm_block_csi_operator_community/csi/v1/_inputs.py
__init__
pulumi/pulumi-kubernetes-crds
0
python
def __init__(__self__, *, topology_key: pulumi.Input[str], label_selector: Optional[pulumi.Input['IBMBlockCSISpecNodeAffinityPodAffinityRequiredDuringSchedulingIgnoredDuringExecutionLabelSelectorArgs']]=None, namespaces: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]=None): '\n Defines a set of pods (namely those matching the labelSelector relative to the given namespace(s)) that this pod should be co-located (affinity) or not co-located (anti-affinity) with, where co-located is defined as running on a node whose value of the label with key <topologyKey> matches that of any node on which a pod of the set of pods is running\n :param pulumi.Input[str] topology_key: This pod should be co-located (affinity) or not co-located (anti-affinity) with the pods matching the labelSelector in the specified namespaces, where co-located is defined as running on a node whose value of the label with key topologyKey matches that of any node on which any of the selected pods is running. Empty topologyKey is not allowed.\n :param pulumi.Input[\'IBMBlockCSISpecNodeAffinityPodAffinityRequiredDuringSchedulingIgnoredDuringExecutionLabelSelectorArgs\'] label_selector: A label query over a set of resources, in this case pods.\n :param pulumi.Input[Sequence[pulumi.Input[str]]] namespaces: namespaces specifies which namespaces the labelSelector applies to (matches against); null or empty list means "this pod\'s namespace"\n ' pulumi.set(__self__, 'topology_key', topology_key) if (label_selector is not None): pulumi.set(__self__, 'label_selector', label_selector) if (namespaces is not None): pulumi.set(__self__, 'namespaces', namespaces)
def __init__(__self__, *, topology_key: pulumi.Input[str], label_selector: Optional[pulumi.Input['IBMBlockCSISpecNodeAffinityPodAffinityRequiredDuringSchedulingIgnoredDuringExecutionLabelSelectorArgs']]=None, namespaces: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]=None): '\n Defines a set of pods (namely those matching the labelSelector relative to the given namespace(s)) that this pod should be co-located (affinity) or not co-located (anti-affinity) with, where co-located is defined as running on a node whose value of the label with key <topologyKey> matches that of any node on which a pod of the set of pods is running\n :param pulumi.Input[str] topology_key: This pod should be co-located (affinity) or not co-located (anti-affinity) with the pods matching the labelSelector in the specified namespaces, where co-located is defined as running on a node whose value of the label with key topologyKey matches that of any node on which any of the selected pods is running. Empty topologyKey is not allowed.\n :param pulumi.Input[\'IBMBlockCSISpecNodeAffinityPodAffinityRequiredDuringSchedulingIgnoredDuringExecutionLabelSelectorArgs\'] label_selector: A label query over a set of resources, in this case pods.\n :param pulumi.Input[Sequence[pulumi.Input[str]]] namespaces: namespaces specifies which namespaces the labelSelector applies to (matches against); null or empty list means "this pod\'s namespace"\n ' pulumi.set(__self__, 'topology_key', topology_key) if (label_selector is not None): pulumi.set(__self__, 'label_selector', label_selector) if (namespaces is not None): pulumi.set(__self__, 'namespaces', namespaces)<|docstring|>Defines a set of pods (namely those matching the labelSelector relative to the given namespace(s)) that this pod should be co-located (affinity) or not co-located (anti-affinity) with, where co-located is defined as running on a node whose value of the label with key <topologyKey> matches that of any node on which a pod of the set of pods is running :param pulumi.Input[str] topology_key: This pod should be co-located (affinity) or not co-located (anti-affinity) with the pods matching the labelSelector in the specified namespaces, where co-located is defined as running on a node whose value of the label with key topologyKey matches that of any node on which any of the selected pods is running. Empty topologyKey is not allowed. :param pulumi.Input['IBMBlockCSISpecNodeAffinityPodAffinityRequiredDuringSchedulingIgnoredDuringExecutionLabelSelectorArgs'] label_selector: A label query over a set of resources, in this case pods. :param pulumi.Input[Sequence[pulumi.Input[str]]] namespaces: namespaces specifies which namespaces the labelSelector applies to (matches against); null or empty list means "this pod's namespace"<|endoftext|>
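To make the term above concrete, here is a minimal construction sketch. The containing class name (IBMBlockCSISpecNodeAffinityPodAffinityRequiredDuringSchedulingIgnoredDuringExecutionArgs) and the direct import from the _inputs module are inferred from the nested type names and the record's path field, so treat both as assumptions; the topology key, label, and namespace values are placeholders.

# Sketch: building the required pod-affinity term. Only topology_key is mandatory;
# label_selector and namespaces are optional. Class/import names are inferred, values are placeholders.
from pulumi_pulumi_kubernetes_crds_operators_ibm_block_csi_operator_community.csi.v1._inputs import (
    IBMBlockCSISpecNodeAffinityPodAffinityRequiredDuringSchedulingIgnoredDuringExecutionArgs as PodAffinityTermArgs,
    IBMBlockCSISpecNodeAffinityPodAffinityRequiredDuringSchedulingIgnoredDuringExecutionLabelSelectorArgs as LabelSelectorArgs,
)

term = PodAffinityTermArgs(
    topology_key='kubernetes.io/hostname',  # required; an empty topologyKey is not allowed
    label_selector=LabelSelectorArgs(match_labels={'app': 'ibm-block-csi'}),  # optional pod query
    namespaces=['ibm-block-csi'],  # optional; omitting it means "this pod's namespace"
)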
1a7fd32bfbac3e1d1c7b3da8ebc96670c998ac5294448293ae6cf957310d8ab3
@property @pulumi.getter(name='topologyKey') def topology_key(self) -> pulumi.Input[str]: '\n This pod should be co-located (affinity) or not co-located (anti-affinity) with the pods matching the labelSelector in the specified namespaces, where co-located is defined as running on a node whose value of the label with key topologyKey matches that of any node on which any of the selected pods is running. Empty topologyKey is not allowed.\n ' return pulumi.get(self, 'topology_key')
This pod should be co-located (affinity) or not co-located (anti-affinity) with the pods matching the labelSelector in the specified namespaces, where co-located is defined as running on a node whose value of the label with key topologyKey matches that of any node on which any of the selected pods is running. Empty topologyKey is not allowed.
operators/ibm-block-csi-operator-community/python/pulumi_pulumi_kubernetes_crds_operators_ibm_block_csi_operator_community/csi/v1/_inputs.py
topology_key
pulumi/pulumi-kubernetes-crds
0
python
@property @pulumi.getter(name='topologyKey') def topology_key(self) -> pulumi.Input[str]: '\n \n ' return pulumi.get(self, 'topology_key')
@property @pulumi.getter(name='topologyKey') def topology_key(self) -> pulumi.Input[str]: '\n \n ' return pulumi.get(self, 'topology_key')<|docstring|>This pod should be co-located (affinity) or not co-located (anti-affinity) with the pods matching the labelSelector in the specified namespaces, where co-located is defined as running on a node whose value of the label with key topologyKey matches that of any node on which any of the selected pods is running. Empty topologyKey is not allowed.<|endoftext|>
c8690906f620a9e69cf8261d71105632288110e5367d41f7a998519df5370488
@property @pulumi.getter(name='labelSelector') def label_selector(self) -> Optional[pulumi.Input['IBMBlockCSISpecNodeAffinityPodAffinityRequiredDuringSchedulingIgnoredDuringExecutionLabelSelectorArgs']]: '\n A label query over a set of resources, in this case pods.\n ' return pulumi.get(self, 'label_selector')
A label query over a set of resources, in this case pods.
operators/ibm-block-csi-operator-community/python/pulumi_pulumi_kubernetes_crds_operators_ibm_block_csi_operator_community/csi/v1/_inputs.py
label_selector
pulumi/pulumi-kubernetes-crds
0
python
@property @pulumi.getter(name='labelSelector') def label_selector(self) -> Optional[pulumi.Input['IBMBlockCSISpecNodeAffinityPodAffinityRequiredDuringSchedulingIgnoredDuringExecutionLabelSelectorArgs']]: '\n \n ' return pulumi.get(self, 'label_selector')
@property @pulumi.getter(name='labelSelector') def label_selector(self) -> Optional[pulumi.Input['IBMBlockCSISpecNodeAffinityPodAffinityRequiredDuringSchedulingIgnoredDuringExecutionLabelSelectorArgs']]: '\n \n ' return pulumi.get(self, 'label_selector')<|docstring|>A label query over a set of resources, in this case pods.<|endoftext|>
cafaaebd0b2bde715d14eccb6c5349f21790206a44c76155207e11f20235f51a
@property @pulumi.getter def namespaces(self) -> Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]: '\n namespaces specifies which namespaces the labelSelector applies to (matches against); null or empty list means "this pod\'s namespace"\n ' return pulumi.get(self, 'namespaces')
namespaces specifies which namespaces the labelSelector applies to (matches against); null or empty list means "this pod's namespace"
operators/ibm-block-csi-operator-community/python/pulumi_pulumi_kubernetes_crds_operators_ibm_block_csi_operator_community/csi/v1/_inputs.py
namespaces
pulumi/pulumi-kubernetes-crds
0
python
@property @pulumi.getter def namespaces(self) -> Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]: '\n namespaces specifies which namespaces the labelSelector applies to (matches against); null or empty list means "this pod\'s namespace"\n ' return pulumi.get(self, 'namespaces')
@property @pulumi.getter def namespaces(self) -> Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]: '\n namespaces specifies which namespaces the labelSelector applies to (matches against); null or empty list means "this pod\'s namespace"\n ' return pulumi.get(self, 'namespaces')<|docstring|>namespaces specifies which namespaces the labelSelector applies to (matches against); null or empty list means "this pod's namespace"<|endoftext|>
2da373ba6ff5872ba3a92f8f69eeaf9408489d303ee7df8e5f537a3d881eb910
def __init__(__self__, *, match_expressions: Optional[pulumi.Input[Sequence[pulumi.Input['IBMBlockCSISpecNodeAffinityPodAffinityRequiredDuringSchedulingIgnoredDuringExecutionLabelSelectorMatchExpressionsArgs']]]]=None, match_labels: Optional[pulumi.Input[Mapping[(str, pulumi.Input[str])]]]=None): '\n A label query over a set of resources, in this case pods.\n :param pulumi.Input[Sequence[pulumi.Input[\'IBMBlockCSISpecNodeAffinityPodAffinityRequiredDuringSchedulingIgnoredDuringExecutionLabelSelectorMatchExpressionsArgs\']]] match_expressions: matchExpressions is a list of label selector requirements. The requirements are ANDed.\n :param pulumi.Input[Mapping[str, pulumi.Input[str]]] match_labels: matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed.\n ' if (match_expressions is not None): pulumi.set(__self__, 'match_expressions', match_expressions) if (match_labels is not None): pulumi.set(__self__, 'match_labels', match_labels)
A label query over a set of resources, in this case pods. :param pulumi.Input[Sequence[pulumi.Input['IBMBlockCSISpecNodeAffinityPodAffinityRequiredDuringSchedulingIgnoredDuringExecutionLabelSelectorMatchExpressionsArgs']]] match_expressions: matchExpressions is a list of label selector requirements. The requirements are ANDed. :param pulumi.Input[Mapping[str, pulumi.Input[str]]] match_labels: matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed.
operators/ibm-block-csi-operator-community/python/pulumi_pulumi_kubernetes_crds_operators_ibm_block_csi_operator_community/csi/v1/_inputs.py
__init__
pulumi/pulumi-kubernetes-crds
0
python
def __init__(__self__, *, match_expressions: Optional[pulumi.Input[Sequence[pulumi.Input['IBMBlockCSISpecNodeAffinityPodAffinityRequiredDuringSchedulingIgnoredDuringExecutionLabelSelectorMatchExpressionsArgs']]]]=None, match_labels: Optional[pulumi.Input[Mapping[(str, pulumi.Input[str])]]]=None): '\n A label query over a set of resources, in this case pods.\n :param pulumi.Input[Sequence[pulumi.Input[\'IBMBlockCSISpecNodeAffinityPodAffinityRequiredDuringSchedulingIgnoredDuringExecutionLabelSelectorMatchExpressionsArgs\']]] match_expressions: matchExpressions is a list of label selector requirements. The requirements are ANDed.\n :param pulumi.Input[Mapping[str, pulumi.Input[str]]] match_labels: matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed.\n ' if (match_expressions is not None): pulumi.set(__self__, 'match_expressions', match_expressions) if (match_labels is not None): pulumi.set(__self__, 'match_labels', match_labels)
def __init__(__self__, *, match_expressions: Optional[pulumi.Input[Sequence[pulumi.Input['IBMBlockCSISpecNodeAffinityPodAffinityRequiredDuringSchedulingIgnoredDuringExecutionLabelSelectorMatchExpressionsArgs']]]]=None, match_labels: Optional[pulumi.Input[Mapping[(str, pulumi.Input[str])]]]=None): '\n A label query over a set of resources, in this case pods.\n :param pulumi.Input[Sequence[pulumi.Input[\'IBMBlockCSISpecNodeAffinityPodAffinityRequiredDuringSchedulingIgnoredDuringExecutionLabelSelectorMatchExpressionsArgs\']]] match_expressions: matchExpressions is a list of label selector requirements. The requirements are ANDed.\n :param pulumi.Input[Mapping[str, pulumi.Input[str]]] match_labels: matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed.\n ' if (match_expressions is not None): pulumi.set(__self__, 'match_expressions', match_expressions) if (match_labels is not None): pulumi.set(__self__, 'match_labels', match_labels)<|docstring|>A label query over a set of resources, in this case pods. :param pulumi.Input[Sequence[pulumi.Input['IBMBlockCSISpecNodeAffinityPodAffinityRequiredDuringSchedulingIgnoredDuringExecutionLabelSelectorMatchExpressionsArgs']]] match_expressions: matchExpressions is a list of label selector requirements. The requirements are ANDed. :param pulumi.Input[Mapping[str, pulumi.Input[str]]] match_labels: matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed.<|endoftext|>
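A small sketch of how this selector might be filled in. The class name is taken verbatim from the label_selector annotation above; the direct import from _inputs (per the path field) and the label values are assumptions.

# Sketch: a label selector matching pods labeled app=ibm-block-csi.
# matchLabels entries are ANDed with any matchExpressions supplied alongside them.
from pulumi_pulumi_kubernetes_crds_operators_ibm_block_csi_operator_community.csi.v1._inputs import (
    IBMBlockCSISpecNodeAffinityPodAffinityRequiredDuringSchedulingIgnoredDuringExecutionLabelSelectorArgs as LabelSelectorArgs,
)

selector = LabelSelectorArgs(match_labels={'app': 'ibm-block-csi'})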
819b8b54560bba6298723276bb087c7cfc45550e150aa5bdc3c6272d07b28c27
@property @pulumi.getter(name='matchExpressions') def match_expressions(self) -> Optional[pulumi.Input[Sequence[pulumi.Input['IBMBlockCSISpecNodeAffinityPodAffinityRequiredDuringSchedulingIgnoredDuringExecutionLabelSelectorMatchExpressionsArgs']]]]: '\n matchExpressions is a list of label selector requirements. The requirements are ANDed.\n ' return pulumi.get(self, 'match_expressions')
matchExpressions is a list of label selector requirements. The requirements are ANDed.
operators/ibm-block-csi-operator-community/python/pulumi_pulumi_kubernetes_crds_operators_ibm_block_csi_operator_community/csi/v1/_inputs.py
match_expressions
pulumi/pulumi-kubernetes-crds
0
python
@property @pulumi.getter(name='matchExpressions') def match_expressions(self) -> Optional[pulumi.Input[Sequence[pulumi.Input['IBMBlockCSISpecNodeAffinityPodAffinityRequiredDuringSchedulingIgnoredDuringExecutionLabelSelectorMatchExpressionsArgs']]]]: '\n \n ' return pulumi.get(self, 'match_expressions')
@property @pulumi.getter(name='matchExpressions') def match_expressions(self) -> Optional[pulumi.Input[Sequence[pulumi.Input['IBMBlockCSISpecNodeAffinityPodAffinityRequiredDuringSchedulingIgnoredDuringExecutionLabelSelectorMatchExpressionsArgs']]]]: '\n \n ' return pulumi.get(self, 'match_expressions')<|docstring|>matchExpressions is a list of label selector requirements. The requirements are ANDed.<|endoftext|>
c8d28c22c7664eea6a56b076ef07546eb4ceb965de2e9c944999569f79409a76
@property @pulumi.getter(name='matchLabels') def match_labels(self) -> Optional[pulumi.Input[Mapping[(str, pulumi.Input[str])]]]: '\n matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed.\n ' return pulumi.get(self, 'match_labels')
matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed.
operators/ibm-block-csi-operator-community/python/pulumi_pulumi_kubernetes_crds_operators_ibm_block_csi_operator_community/csi/v1/_inputs.py
match_labels
pulumi/pulumi-kubernetes-crds
0
python
@property @pulumi.getter(name='matchLabels') def match_labels(self) -> Optional[pulumi.Input[Mapping[(str, pulumi.Input[str])]]]: '\n \n ' return pulumi.get(self, 'match_labels')
@property @pulumi.getter(name='matchLabels') def match_labels(self) -> Optional[pulumi.Input[Mapping[(str, pulumi.Input[str])]]]: '\n \n ' return pulumi.get(self, 'match_labels')<|docstring|>matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed.<|endoftext|>
c920819fc0b4fc3fe978eacb8fbb8d5f1500cc63d1fed6bf3274b88c2f2ce017
def __init__(__self__, *, key: pulumi.Input[str], operator: pulumi.Input[str], values: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]=None): "\n A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values.\n :param pulumi.Input[str] key: key is the label key that the selector applies to.\n :param pulumi.Input[str] operator: operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist.\n :param pulumi.Input[Sequence[pulumi.Input[str]]] values: values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch.\n " pulumi.set(__self__, 'key', key) pulumi.set(__self__, 'operator', operator) if (values is not None): pulumi.set(__self__, 'values', values)
A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. :param pulumi.Input[str] key: key is the label key that the selector applies to. :param pulumi.Input[str] operator: operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. :param pulumi.Input[Sequence[pulumi.Input[str]]] values: values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch.
operators/ibm-block-csi-operator-community/python/pulumi_pulumi_kubernetes_crds_operators_ibm_block_csi_operator_community/csi/v1/_inputs.py
__init__
pulumi/pulumi-kubernetes-crds
0
python
def __init__(__self__, *, key: pulumi.Input[str], operator: pulumi.Input[str], values: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]=None): "\n A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values.\n :param pulumi.Input[str] key: key is the label key that the selector applies to.\n :param pulumi.Input[str] operator: operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist.\n :param pulumi.Input[Sequence[pulumi.Input[str]]] values: values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch.\n " pulumi.set(__self__, 'key', key) pulumi.set(__self__, 'operator', operator) if (values is not None): pulumi.set(__self__, 'values', values)
def __init__(__self__, *, key: pulumi.Input[str], operator: pulumi.Input[str], values: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]=None): "\n A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values.\n :param pulumi.Input[str] key: key is the label key that the selector applies to.\n :param pulumi.Input[str] operator: operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist.\n :param pulumi.Input[Sequence[pulumi.Input[str]]] values: values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch.\n " pulumi.set(__self__, 'key', key) pulumi.set(__self__, 'operator', operator) if (values is not None): pulumi.set(__self__, 'values', values)<|docstring|>A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. :param pulumi.Input[str] key: key is the label key that the selector applies to. :param pulumi.Input[str] operator: operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. :param pulumi.Input[Sequence[pulumi.Input[str]]] values: values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch.<|endoftext|>
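A sketch of a single requirement under the rules just described: with In or NotIn the values list must be non-empty, and with Exists or DoesNotExist it must be empty. The class name comes from the match_expressions annotation above; the import path and the key/values shown are assumptions.

# Sketch: an "In" requirement; key/operator are required, values depends on the operator.
from pulumi_pulumi_kubernetes_crds_operators_ibm_block_csi_operator_community.csi.v1._inputs import (
    IBMBlockCSISpecNodeAffinityPodAffinityRequiredDuringSchedulingIgnoredDuringExecutionLabelSelectorMatchExpressionsArgs as MatchExpressionArgs,
)

requirement = MatchExpressionArgs(key='app', operator='In', values=['ibm-block-csi'])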
873b01de6c7b4b390af7286e21dc936c9a738bde5f172c97010521d8ead34b94
@property @pulumi.getter def key(self) -> pulumi.Input[str]: '\n key is the label key that the selector applies to.\n ' return pulumi.get(self, 'key')
key is the label key that the selector applies to.
operators/ibm-block-csi-operator-community/python/pulumi_pulumi_kubernetes_crds_operators_ibm_block_csi_operator_community/csi/v1/_inputs.py
key
pulumi/pulumi-kubernetes-crds
0
python
@property @pulumi.getter def key(self) -> pulumi.Input[str]: '\n \n ' return pulumi.get(self, 'key')
@property @pulumi.getter def key(self) -> pulumi.Input[str]: '\n \n ' return pulumi.get(self, 'key')<|docstring|>key is the label key that the selector applies to.<|endoftext|>
0c4e49e08176a0283877710e6179293ccac4fbb9a05b090e0d01b0bc406ff63f
@property @pulumi.getter def operator(self) -> pulumi.Input[str]: "\n operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist.\n " return pulumi.get(self, 'operator')
operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist.
operators/ibm-block-csi-operator-community/python/pulumi_pulumi_kubernetes_crds_operators_ibm_block_csi_operator_community/csi/v1/_inputs.py
operator
pulumi/pulumi-kubernetes-crds
0
python
@property @pulumi.getter def operator(self) -> pulumi.Input[str]: "\n \n " return pulumi.get(self, 'operator')
@property @pulumi.getter def operator(self) -> pulumi.Input[str]: "\n \n " return pulumi.get(self, 'operator')<|docstring|>operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist.<|endoftext|>
8890282f9ab13c97f11910757bc10d16f9baa2504cd65573fd75ceafa57e1daf
@property @pulumi.getter def values(self) -> Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]: '\n values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch.\n ' return pulumi.get(self, 'values')
values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch.
operators/ibm-block-csi-operator-community/python/pulumi_pulumi_kubernetes_crds_operators_ibm_block_csi_operator_community/csi/v1/_inputs.py
values
pulumi/pulumi-kubernetes-crds
0
python
@property @pulumi.getter def values(self) -> Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]: '\n \n ' return pulumi.get(self, 'values')
@property @pulumi.getter def values(self) -> Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]: '\n \n ' return pulumi.get(self, 'values')<|docstring|>values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch.<|endoftext|>
adbbfbe2fc9bd80658860a8b45b0220560ead0893f98c3f57a489f97ce1ba563
def __init__(__self__, *, preferred_during_scheduling_ignored_during_execution: Optional[pulumi.Input[Sequence[pulumi.Input['IBMBlockCSISpecNodeAffinityPodAntiAffinityPreferredDuringSchedulingIgnoredDuringExecutionArgs']]]]=None, required_during_scheduling_ignored_during_execution: Optional[pulumi.Input[Sequence[pulumi.Input['IBMBlockCSISpecNodeAffinityPodAntiAffinityRequiredDuringSchedulingIgnoredDuringExecutionArgs']]]]=None): '\n Describes pod anti-affinity scheduling rules (e.g. avoid putting this pod in the same node, zone, etc. as some other pod(s)).\n :param pulumi.Input[Sequence[pulumi.Input[\'IBMBlockCSISpecNodeAffinityPodAntiAffinityPreferredDuringSchedulingIgnoredDuringExecutionArgs\']]] preferred_during_scheduling_ignored_during_execution: The scheduler will prefer to schedule pods to nodes that satisfy the anti-affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling anti-affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding "weight" to the sum if the node has pods which matches the corresponding podAffinityTerm; the node(s) with the highest sum are the most preferred.\n :param pulumi.Input[Sequence[pulumi.Input[\'IBMBlockCSISpecNodeAffinityPodAntiAffinityRequiredDuringSchedulingIgnoredDuringExecutionArgs\']]] required_during_scheduling_ignored_during_execution: If the anti-affinity requirements specified by this field are not met at scheduling time, the pod will not be scheduled onto the node. If the anti-affinity requirements specified by this field cease to be met at some point during pod execution (e.g. due to a pod label update), the system may or may not try to eventually evict the pod from its node. When there are multiple elements, the lists of nodes corresponding to each podAffinityTerm are intersected, i.e. all terms must be satisfied.\n ' if (preferred_during_scheduling_ignored_during_execution is not None): pulumi.set(__self__, 'preferred_during_scheduling_ignored_during_execution', preferred_during_scheduling_ignored_during_execution) if (required_during_scheduling_ignored_during_execution is not None): pulumi.set(__self__, 'required_during_scheduling_ignored_during_execution', required_during_scheduling_ignored_during_execution)
Describes pod anti-affinity scheduling rules (e.g. avoid putting this pod in the same node, zone, etc. as some other pod(s)). :param pulumi.Input[Sequence[pulumi.Input['IBMBlockCSISpecNodeAffinityPodAntiAffinityPreferredDuringSchedulingIgnoredDuringExecutionArgs']]] preferred_during_scheduling_ignored_during_execution: The scheduler will prefer to schedule pods to nodes that satisfy the anti-affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling anti-affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding "weight" to the sum if the node has pods which matches the corresponding podAffinityTerm; the node(s) with the highest sum are the most preferred. :param pulumi.Input[Sequence[pulumi.Input['IBMBlockCSISpecNodeAffinityPodAntiAffinityRequiredDuringSchedulingIgnoredDuringExecutionArgs']]] required_during_scheduling_ignored_during_execution: If the anti-affinity requirements specified by this field are not met at scheduling time, the pod will not be scheduled onto the node. If the anti-affinity requirements specified by this field cease to be met at some point during pod execution (e.g. due to a pod label update), the system may or may not try to eventually evict the pod from its node. When there are multiple elements, the lists of nodes corresponding to each podAffinityTerm are intersected, i.e. all terms must be satisfied.
operators/ibm-block-csi-operator-community/python/pulumi_pulumi_kubernetes_crds_operators_ibm_block_csi_operator_community/csi/v1/_inputs.py
__init__
pulumi/pulumi-kubernetes-crds
0
python
def __init__(__self__, *, preferred_during_scheduling_ignored_during_execution: Optional[pulumi.Input[Sequence[pulumi.Input['IBMBlockCSISpecNodeAffinityPodAntiAffinityPreferredDuringSchedulingIgnoredDuringExecutionArgs']]]]=None, required_during_scheduling_ignored_during_execution: Optional[pulumi.Input[Sequence[pulumi.Input['IBMBlockCSISpecNodeAffinityPodAntiAffinityRequiredDuringSchedulingIgnoredDuringExecutionArgs']]]]=None): '\n Describes pod anti-affinity scheduling rules (e.g. avoid putting this pod in the same node, zone, etc. as some other pod(s)).\n :param pulumi.Input[Sequence[pulumi.Input[\'IBMBlockCSISpecNodeAffinityPodAntiAffinityPreferredDuringSchedulingIgnoredDuringExecutionArgs\']]] preferred_during_scheduling_ignored_during_execution: The scheduler will prefer to schedule pods to nodes that satisfy the anti-affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling anti-affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding "weight" to the sum if the node has pods which matches the corresponding podAffinityTerm; the node(s) with the highest sum are the most preferred.\n :param pulumi.Input[Sequence[pulumi.Input[\'IBMBlockCSISpecNodeAffinityPodAntiAffinityRequiredDuringSchedulingIgnoredDuringExecutionArgs\']]] required_during_scheduling_ignored_during_execution: If the anti-affinity requirements specified by this field are not met at scheduling time, the pod will not be scheduled onto the node. If the anti-affinity requirements specified by this field cease to be met at some point during pod execution (e.g. due to a pod label update), the system may or may not try to eventually evict the pod from its node. When there are multiple elements, the lists of nodes corresponding to each podAffinityTerm are intersected, i.e. all terms must be satisfied.\n ' if (preferred_during_scheduling_ignored_during_execution is not None): pulumi.set(__self__, 'preferred_during_scheduling_ignored_during_execution', preferred_during_scheduling_ignored_during_execution) if (required_during_scheduling_ignored_during_execution is not None): pulumi.set(__self__, 'required_during_scheduling_ignored_during_execution', required_during_scheduling_ignored_during_execution)
def __init__(__self__, *, preferred_during_scheduling_ignored_during_execution: Optional[pulumi.Input[Sequence[pulumi.Input['IBMBlockCSISpecNodeAffinityPodAntiAffinityPreferredDuringSchedulingIgnoredDuringExecutionArgs']]]]=None, required_during_scheduling_ignored_during_execution: Optional[pulumi.Input[Sequence[pulumi.Input['IBMBlockCSISpecNodeAffinityPodAntiAffinityRequiredDuringSchedulingIgnoredDuringExecutionArgs']]]]=None): '\n Describes pod anti-affinity scheduling rules (e.g. avoid putting this pod in the same node, zone, etc. as some other pod(s)).\n :param pulumi.Input[Sequence[pulumi.Input[\'IBMBlockCSISpecNodeAffinityPodAntiAffinityPreferredDuringSchedulingIgnoredDuringExecutionArgs\']]] preferred_during_scheduling_ignored_during_execution: The scheduler will prefer to schedule pods to nodes that satisfy the anti-affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling anti-affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding "weight" to the sum if the node has pods which matches the corresponding podAffinityTerm; the node(s) with the highest sum are the most preferred.\n :param pulumi.Input[Sequence[pulumi.Input[\'IBMBlockCSISpecNodeAffinityPodAntiAffinityRequiredDuringSchedulingIgnoredDuringExecutionArgs\']]] required_during_scheduling_ignored_during_execution: If the anti-affinity requirements specified by this field are not met at scheduling time, the pod will not be scheduled onto the node. If the anti-affinity requirements specified by this field cease to be met at some point during pod execution (e.g. due to a pod label update), the system may or may not try to eventually evict the pod from its node. When there are multiple elements, the lists of nodes corresponding to each podAffinityTerm are intersected, i.e. all terms must be satisfied.\n ' if (preferred_during_scheduling_ignored_during_execution is not None): pulumi.set(__self__, 'preferred_during_scheduling_ignored_during_execution', preferred_during_scheduling_ignored_during_execution) if (required_during_scheduling_ignored_during_execution is not None): pulumi.set(__self__, 'required_during_scheduling_ignored_during_execution', required_during_scheduling_ignored_during_execution)<|docstring|>Describes pod anti-affinity scheduling rules (e.g. avoid putting this pod in the same node, zone, etc. as some other pod(s)). :param pulumi.Input[Sequence[pulumi.Input['IBMBlockCSISpecNodeAffinityPodAntiAffinityPreferredDuringSchedulingIgnoredDuringExecutionArgs']]] preferred_during_scheduling_ignored_during_execution: The scheduler will prefer to schedule pods to nodes that satisfy the anti-affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling anti-affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding "weight" to the sum if the node has pods which matches the corresponding podAffinityTerm; the node(s) with the highest sum are the most preferred. :param pulumi.Input[Sequence[pulumi.Input['IBMBlockCSISpecNodeAffinityPodAntiAffinityRequiredDuringSchedulingIgnoredDuringExecutionArgs']]] required_during_scheduling_ignored_during_execution: If the anti-affinity requirements specified by this field are not met at scheduling time, the pod will not be scheduled onto the node. If the anti-affinity requirements specified by this field cease to be met at some point during pod execution (e.g. due to a pod label update), the system may or may not try to eventually evict the pod from its node. When there are multiple elements, the lists of nodes corresponding to each podAffinityTerm are intersected, i.e. all terms must be satisfied.<|endoftext|>
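A rough wiring sketch for hard anti-affinity under this record. The container class name (IBMBlockCSISpecNodeAffinityPodAntiAffinityArgs) is inferred from the generator's naming convention, and the required term is assumed to take the same topology_key/label_selector/namespaces fields as the affinity term earlier in this file; the import path and values are likewise assumptions.

# Sketch: hard (required) anti-affinity only; the preferred list could be added alongside it.
# Container class name and required-term fields are inferred, not confirmed by this record.
from pulumi_pulumi_kubernetes_crds_operators_ibm_block_csi_operator_community.csi.v1._inputs import (
    IBMBlockCSISpecNodeAffinityPodAntiAffinityArgs as PodAntiAffinityArgs,
    IBMBlockCSISpecNodeAffinityPodAntiAffinityRequiredDuringSchedulingIgnoredDuringExecutionArgs as RequiredTermArgs,
)

anti_affinity = PodAntiAffinityArgs(
    required_during_scheduling_ignored_during_execution=[
        RequiredTermArgs(topology_key='kubernetes.io/hostname'),  # never co-schedule on the same node
    ],
)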
7575258af588226f6ef672d376c9683ec61f3c038ae42ce50a68f7260aac333b
@property @pulumi.getter(name='preferredDuringSchedulingIgnoredDuringExecution') def preferred_during_scheduling_ignored_during_execution(self) -> Optional[pulumi.Input[Sequence[pulumi.Input['IBMBlockCSISpecNodeAffinityPodAntiAffinityPreferredDuringSchedulingIgnoredDuringExecutionArgs']]]]: '\n The scheduler will prefer to schedule pods to nodes that satisfy the anti-affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling anti-affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding "weight" to the sum if the node has pods which matches the corresponding podAffinityTerm; the node(s) with the highest sum are the most preferred.\n ' return pulumi.get(self, 'preferred_during_scheduling_ignored_during_execution')
The scheduler will prefer to schedule pods to nodes that satisfy the anti-affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling anti-affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding "weight" to the sum if the node has pods which matches the corresponding podAffinityTerm; the node(s) with the highest sum are the most preferred.
operators/ibm-block-csi-operator-community/python/pulumi_pulumi_kubernetes_crds_operators_ibm_block_csi_operator_community/csi/v1/_inputs.py
preferred_during_scheduling_ignored_during_execution
pulumi/pulumi-kubernetes-crds
0
python
@property @pulumi.getter(name='preferredDuringSchedulingIgnoredDuringExecution') def preferred_during_scheduling_ignored_during_execution(self) -> Optional[pulumi.Input[Sequence[pulumi.Input['IBMBlockCSISpecNodeAffinityPodAntiAffinityPreferredDuringSchedulingIgnoredDuringExecutionArgs']]]]: '\n \n ' return pulumi.get(self, 'preferred_during_scheduling_ignored_during_execution')
@property @pulumi.getter(name='preferredDuringSchedulingIgnoredDuringExecution') def preferred_during_scheduling_ignored_during_execution(self) -> Optional[pulumi.Input[Sequence[pulumi.Input['IBMBlockCSISpecNodeAffinityPodAntiAffinityPreferredDuringSchedulingIgnoredDuringExecutionArgs']]]]: '\n \n ' return pulumi.get(self, 'preferred_during_scheduling_ignored_during_execution')<|docstring|>The scheduler will prefer to schedule pods to nodes that satisfy the anti-affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling anti-affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding "weight" to the sum if the node has pods which matches the corresponding podAffinityTerm; the node(s) with the highest sum are the most preferred.<|endoftext|>
5c1146b48c150ec9fb8928a34af3a2bca87a23bb4929b7ca8d2a390637320a34
@property @pulumi.getter(name='requiredDuringSchedulingIgnoredDuringExecution') def required_during_scheduling_ignored_during_execution(self) -> Optional[pulumi.Input[Sequence[pulumi.Input['IBMBlockCSISpecNodeAffinityPodAntiAffinityRequiredDuringSchedulingIgnoredDuringExecutionArgs']]]]: '\n If the anti-affinity requirements specified by this field are not met at scheduling time, the pod will not be scheduled onto the node. If the anti-affinity requirements specified by this field cease to be met at some point during pod execution (e.g. due to a pod label update), the system may or may not try to eventually evict the pod from its node. When there are multiple elements, the lists of nodes corresponding to each podAffinityTerm are intersected, i.e. all terms must be satisfied.\n ' return pulumi.get(self, 'required_during_scheduling_ignored_during_execution')
If the anti-affinity requirements specified by this field are not met at scheduling time, the pod will not be scheduled onto the node. If the anti-affinity requirements specified by this field cease to be met at some point during pod execution (e.g. due to a pod label update), the system may or may not try to eventually evict the pod from its node. When there are multiple elements, the lists of nodes corresponding to each podAffinityTerm are intersected, i.e. all terms must be satisfied.
operators/ibm-block-csi-operator-community/python/pulumi_pulumi_kubernetes_crds_operators_ibm_block_csi_operator_community/csi/v1/_inputs.py
required_during_scheduling_ignored_during_execution
pulumi/pulumi-kubernetes-crds
0
python
@property @pulumi.getter(name='requiredDuringSchedulingIgnoredDuringExecution') def required_during_scheduling_ignored_during_execution(self) -> Optional[pulumi.Input[Sequence[pulumi.Input['IBMBlockCSISpecNodeAffinityPodAntiAffinityRequiredDuringSchedulingIgnoredDuringExecutionArgs']]]]: '\n \n ' return pulumi.get(self, 'required_during_scheduling_ignored_during_execution')
@property @pulumi.getter(name='requiredDuringSchedulingIgnoredDuringExecution') def required_during_scheduling_ignored_during_execution(self) -> Optional[pulumi.Input[Sequence[pulumi.Input['IBMBlockCSISpecNodeAffinityPodAntiAffinityRequiredDuringSchedulingIgnoredDuringExecutionArgs']]]]: '\n \n ' return pulumi.get(self, 'required_during_scheduling_ignored_during_execution')<|docstring|>If the anti-affinity requirements specified by this field are not met at scheduling time, the pod will not be scheduled onto the node. If the anti-affinity requirements specified by this field cease to be met at some point during pod execution (e.g. due to a pod label update), the system may or may not try to eventually evict the pod from its node. When there are multiple elements, the lists of nodes corresponding to each podAffinityTerm are intersected, i.e. all terms must be satisfied.<|endoftext|>
e6e63d85afa1517211f8bd8be14ed4ec9ff00b4be7461b842ad8d37ef9192733
def __init__(__self__, *, pod_affinity_term: pulumi.Input['IBMBlockCSISpecNodeAffinityPodAntiAffinityPreferredDuringSchedulingIgnoredDuringExecutionPodAffinityTermArgs'], weight: pulumi.Input[int]): "\n The weights of all of the matched WeightedPodAffinityTerm fields are added per-node to find the most preferred node(s)\n :param pulumi.Input['IBMBlockCSISpecNodeAffinityPodAntiAffinityPreferredDuringSchedulingIgnoredDuringExecutionPodAffinityTermArgs'] pod_affinity_term: Required. A pod affinity term, associated with the corresponding weight.\n :param pulumi.Input[int] weight: weight associated with matching the corresponding podAffinityTerm, in the range 1-100.\n " pulumi.set(__self__, 'pod_affinity_term', pod_affinity_term) pulumi.set(__self__, 'weight', weight)
The weights of all of the matched WeightedPodAffinityTerm fields are added per-node to find the most preferred node(s) :param pulumi.Input['IBMBlockCSISpecNodeAffinityPodAntiAffinityPreferredDuringSchedulingIgnoredDuringExecutionPodAffinityTermArgs'] pod_affinity_term: Required. A pod affinity term, associated with the corresponding weight. :param pulumi.Input[int] weight: weight associated with matching the corresponding podAffinityTerm, in the range 1-100.
operators/ibm-block-csi-operator-community/python/pulumi_pulumi_kubernetes_crds_operators_ibm_block_csi_operator_community/csi/v1/_inputs.py
__init__
pulumi/pulumi-kubernetes-crds
0
python
def __init__(__self__, *, pod_affinity_term: pulumi.Input['IBMBlockCSISpecNodeAffinityPodAntiAffinityPreferredDuringSchedulingIgnoredDuringExecutionPodAffinityTermArgs'], weight: pulumi.Input[int]): "\n The weights of all of the matched WeightedPodAffinityTerm fields are added per-node to find the most preferred node(s)\n :param pulumi.Input['IBMBlockCSISpecNodeAffinityPodAntiAffinityPreferredDuringSchedulingIgnoredDuringExecutionPodAffinityTermArgs'] pod_affinity_term: Required. A pod affinity term, associated with the corresponding weight.\n :param pulumi.Input[int] weight: weight associated with matching the corresponding podAffinityTerm, in the range 1-100.\n " pulumi.set(__self__, 'pod_affinity_term', pod_affinity_term) pulumi.set(__self__, 'weight', weight)
def __init__(__self__, *, pod_affinity_term: pulumi.Input['IBMBlockCSISpecNodeAffinityPodAntiAffinityPreferredDuringSchedulingIgnoredDuringExecutionPodAffinityTermArgs'], weight: pulumi.Input[int]): "\n The weights of all of the matched WeightedPodAffinityTerm fields are added per-node to find the most preferred node(s)\n :param pulumi.Input['IBMBlockCSISpecNodeAffinityPodAntiAffinityPreferredDuringSchedulingIgnoredDuringExecutionPodAffinityTermArgs'] pod_affinity_term: Required. A pod affinity term, associated with the corresponding weight.\n :param pulumi.Input[int] weight: weight associated with matching the corresponding podAffinityTerm, in the range 1-100.\n " pulumi.set(__self__, 'pod_affinity_term', pod_affinity_term) pulumi.set(__self__, 'weight', weight)<|docstring|>The weights of all of the matched WeightedPodAffinityTerm fields are added per-node to find the most preferred node(s) :param pulumi.Input['IBMBlockCSISpecNodeAffinityPodAntiAffinityPreferredDuringSchedulingIgnoredDuringExecutionPodAffinityTermArgs'] pod_affinity_term: Required. A pod affinity term, associated with the corresponding weight. :param pulumi.Input[int] weight: weight associated with matching the corresponding podAffinityTerm, in the range 1-100.<|endoftext|>
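A sketch of one preferred (soft) entry: a weight in the range 1-100 plus the term it scores. Both class names are taken from the type annotations in the surrounding records; the direct _inputs import and the weight/topology values are assumptions.

# Sketch: a weighted anti-affinity entry; weight must be 1-100 and both fields are required.
from pulumi_pulumi_kubernetes_crds_operators_ibm_block_csi_operator_community.csi.v1._inputs import (
    IBMBlockCSISpecNodeAffinityPodAntiAffinityPreferredDuringSchedulingIgnoredDuringExecutionArgs as WeightedTermArgs,
    IBMBlockCSISpecNodeAffinityPodAntiAffinityPreferredDuringSchedulingIgnoredDuringExecutionPodAffinityTermArgs as PreferredTermArgs,
)

weighted = WeightedTermArgs(
    weight=50,  # contributes to the per-node score when the term matches
    pod_affinity_term=PreferredTermArgs(topology_key='topology.kubernetes.io/zone'),
)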
0475de0cf469d11e6db63a719029da0ae408c96ec67ba699235cbb5a1bcc446e
@property @pulumi.getter(name='podAffinityTerm') def pod_affinity_term(self) -> pulumi.Input['IBMBlockCSISpecNodeAffinityPodAntiAffinityPreferredDuringSchedulingIgnoredDuringExecutionPodAffinityTermArgs']: '\n Required. A pod affinity term, associated with the corresponding weight.\n ' return pulumi.get(self, 'pod_affinity_term')
Required. A pod affinity term, associated with the corresponding weight.
operators/ibm-block-csi-operator-community/python/pulumi_pulumi_kubernetes_crds_operators_ibm_block_csi_operator_community/csi/v1/_inputs.py
pod_affinity_term
pulumi/pulumi-kubernetes-crds
0
python
@property @pulumi.getter(name='podAffinityTerm') def pod_affinity_term(self) -> pulumi.Input['IBMBlockCSISpecNodeAffinityPodAntiAffinityPreferredDuringSchedulingIgnoredDuringExecutionPodAffinityTermArgs']: '\n \n ' return pulumi.get(self, 'pod_affinity_term')
@property @pulumi.getter(name='podAffinityTerm') def pod_affinity_term(self) -> pulumi.Input['IBMBlockCSISpecNodeAffinityPodAntiAffinityPreferredDuringSchedulingIgnoredDuringExecutionPodAffinityTermArgs']: '\n \n ' return pulumi.get(self, 'pod_affinity_term')<|docstring|>Required. A pod affinity term, associated with the corresponding weight.<|endoftext|>
cbda27b0cdf343ea47c23ff8abbfa8a96b716aaf51975c7c3ed2982e4a3adb2f
@property @pulumi.getter def weight(self) -> pulumi.Input[int]: '\n weight associated with matching the corresponding podAffinityTerm, in the range 1-100.\n ' return pulumi.get(self, 'weight')
weight associated with matching the corresponding podAffinityTerm, in the range 1-100.
operators/ibm-block-csi-operator-community/python/pulumi_pulumi_kubernetes_crds_operators_ibm_block_csi_operator_community/csi/v1/_inputs.py
weight
pulumi/pulumi-kubernetes-crds
0
python
@property @pulumi.getter def weight(self) -> pulumi.Input[int]: '\n \n ' return pulumi.get(self, 'weight')
@property @pulumi.getter def weight(self) -> pulumi.Input[int]: '\n \n ' return pulumi.get(self, 'weight')<|docstring|>weight associated with matching the corresponding podAffinityTerm, in the range 1-100.<|endoftext|>
6a78c646cbcbd8eda2aebb7d6aea3faf060c8017dc164b9763f62d1683fbfad2
def __init__(__self__, *, topology_key: pulumi.Input[str], label_selector: Optional[pulumi.Input['IBMBlockCSISpecNodeAffinityPodAntiAffinityPreferredDuringSchedulingIgnoredDuringExecutionPodAffinityTermLabelSelectorArgs']]=None, namespaces: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]=None): '\n Required. A pod affinity term, associated with the corresponding weight.\n :param pulumi.Input[str] topology_key: This pod should be co-located (affinity) or not co-located (anti-affinity) with the pods matching the labelSelector in the specified namespaces, where co-located is defined as running on a node whose value of the label with key topologyKey matches that of any node on which any of the selected pods is running. Empty topologyKey is not allowed.\n :param pulumi.Input[\'IBMBlockCSISpecNodeAffinityPodAntiAffinityPreferredDuringSchedulingIgnoredDuringExecutionPodAffinityTermLabelSelectorArgs\'] label_selector: A label query over a set of resources, in this case pods.\n :param pulumi.Input[Sequence[pulumi.Input[str]]] namespaces: namespaces specifies which namespaces the labelSelector applies to (matches against); null or empty list means "this pod\'s namespace"\n ' pulumi.set(__self__, 'topology_key', topology_key) if (label_selector is not None): pulumi.set(__self__, 'label_selector', label_selector) if (namespaces is not None): pulumi.set(__self__, 'namespaces', namespaces)
Required. A pod affinity term, associated with the corresponding weight. :param pulumi.Input[str] topology_key: This pod should be co-located (affinity) or not co-located (anti-affinity) with the pods matching the labelSelector in the specified namespaces, where co-located is defined as running on a node whose value of the label with key topologyKey matches that of any node on which any of the selected pods is running. Empty topologyKey is not allowed. :param pulumi.Input['IBMBlockCSISpecNodeAffinityPodAntiAffinityPreferredDuringSchedulingIgnoredDuringExecutionPodAffinityTermLabelSelectorArgs'] label_selector: A label query over a set of resources, in this case pods. :param pulumi.Input[Sequence[pulumi.Input[str]]] namespaces: namespaces specifies which namespaces the labelSelector applies to (matches against); null or empty list means "this pod's namespace"
operators/ibm-block-csi-operator-community/python/pulumi_pulumi_kubernetes_crds_operators_ibm_block_csi_operator_community/csi/v1/_inputs.py
__init__
pulumi/pulumi-kubernetes-crds
0
python
def __init__(__self__, *, topology_key: pulumi.Input[str], label_selector: Optional[pulumi.Input['IBMBlockCSISpecNodeAffinityPodAntiAffinityPreferredDuringSchedulingIgnoredDuringExecutionPodAffinityTermLabelSelectorArgs']]=None, namespaces: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]=None): '\n Required. A pod affinity term, associated with the corresponding weight.\n :param pulumi.Input[str] topology_key: This pod should be co-located (affinity) or not co-located (anti-affinity) with the pods matching the labelSelector in the specified namespaces, where co-located is defined as running on a node whose value of the label with key topologyKey matches that of any node on which any of the selected pods is running. Empty topologyKey is not allowed.\n :param pulumi.Input[\'IBMBlockCSISpecNodeAffinityPodAntiAffinityPreferredDuringSchedulingIgnoredDuringExecutionPodAffinityTermLabelSelectorArgs\'] label_selector: A label query over a set of resources, in this case pods.\n :param pulumi.Input[Sequence[pulumi.Input[str]]] namespaces: namespaces specifies which namespaces the labelSelector applies to (matches against); null or empty list means "this pod\'s namespace"\n ' pulumi.set(__self__, 'topology_key', topology_key) if (label_selector is not None): pulumi.set(__self__, 'label_selector', label_selector) if (namespaces is not None): pulumi.set(__self__, 'namespaces', namespaces)
def __init__(__self__, *, topology_key: pulumi.Input[str], label_selector: Optional[pulumi.Input['IBMBlockCSISpecNodeAffinityPodAntiAffinityPreferredDuringSchedulingIgnoredDuringExecutionPodAffinityTermLabelSelectorArgs']]=None, namespaces: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]=None): '\n Required. A pod affinity term, associated with the corresponding weight.\n :param pulumi.Input[str] topology_key: This pod should be co-located (affinity) or not co-located (anti-affinity) with the pods matching the labelSelector in the specified namespaces, where co-located is defined as running on a node whose value of the label with key topologyKey matches that of any node on which any of the selected pods is running. Empty topologyKey is not allowed.\n :param pulumi.Input[\'IBMBlockCSISpecNodeAffinityPodAntiAffinityPreferredDuringSchedulingIgnoredDuringExecutionPodAffinityTermLabelSelectorArgs\'] label_selector: A label query over a set of resources, in this case pods.\n :param pulumi.Input[Sequence[pulumi.Input[str]]] namespaces: namespaces specifies which namespaces the labelSelector applies to (matches against); null or empty list means "this pod\'s namespace"\n ' pulumi.set(__self__, 'topology_key', topology_key) if (label_selector is not None): pulumi.set(__self__, 'label_selector', label_selector) if (namespaces is not None): pulumi.set(__self__, 'namespaces', namespaces)<|docstring|>Required. A pod affinity term, associated with the corresponding weight. :param pulumi.Input[str] topology_key: This pod should be co-located (affinity) or not co-located (anti-affinity) with the pods matching the labelSelector in the specified namespaces, where co-located is defined as running on a node whose value of the label with key topologyKey matches that of any node on which any of the selected pods is running. Empty topologyKey is not allowed. :param pulumi.Input['IBMBlockCSISpecNodeAffinityPodAntiAffinityPreferredDuringSchedulingIgnoredDuringExecutionPodAffinityTermLabelSelectorArgs'] label_selector: A label query over a set of resources, in this case pods. :param pulumi.Input[Sequence[pulumi.Input[str]]] namespaces: namespaces specifies which namespaces the labelSelector applies to (matches against); null or empty list means "this pod's namespace"<|endoftext|>
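A sketch of this term scoped to an explicit namespace list; leaving namespaces unset (or empty) falls back to the pod's own namespace. The class name is taken from the pod_affinity_term annotation above; the import path and the values shown are assumptions.

# Sketch: the term behind a weighted entry, restricted to a specific namespace.
from pulumi_pulumi_kubernetes_crds_operators_ibm_block_csi_operator_community.csi.v1._inputs import (
    IBMBlockCSISpecNodeAffinityPodAntiAffinityPreferredDuringSchedulingIgnoredDuringExecutionPodAffinityTermArgs as PreferredTermArgs,
)

term = PreferredTermArgs(
    topology_key='topology.kubernetes.io/zone',  # required
    namespaces=['ibm-block-csi'],  # optional; limits where the labelSelector is matched
)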
1a7fd32bfbac3e1d1c7b3da8ebc96670c998ac5294448293ae6cf957310d8ab3
@property @pulumi.getter(name='topologyKey') def topology_key(self) -> pulumi.Input[str]: '\n This pod should be co-located (affinity) or not co-located (anti-affinity) with the pods matching the labelSelector in the specified namespaces, where co-located is defined as running on a node whose value of the label with key topologyKey matches that of any node on which any of the selected pods is running. Empty topologyKey is not allowed.\n ' return pulumi.get(self, 'topology_key')
This pod should be co-located (affinity) or not co-located (anti-affinity) with the pods matching the labelSelector in the specified namespaces, where co-located is defined as running on a node whose value of the label with key topologyKey matches that of any node on which any of the selected pods is running. Empty topologyKey is not allowed.
operators/ibm-block-csi-operator-community/python/pulumi_pulumi_kubernetes_crds_operators_ibm_block_csi_operator_community/csi/v1/_inputs.py
topology_key
pulumi/pulumi-kubernetes-crds
0
python
@property @pulumi.getter(name='topologyKey') def topology_key(self) -> pulumi.Input[str]: '\n \n ' return pulumi.get(self, 'topology_key')
@property @pulumi.getter(name='topologyKey') def topology_key(self) -> pulumi.Input[str]: '\n \n ' return pulumi.get(self, 'topology_key')<|docstring|>This pod should be co-located (affinity) or not co-located (anti-affinity) with the pods matching the labelSelector in the specified namespaces, where co-located is defined as running on a node whose value of the label with key topologyKey matches that of any node on which any of the selected pods is running. Empty topologyKey is not allowed.<|endoftext|>
04de335ddac1ae44b28cacfee3557f4e8ea883a129d1a490749a19b4e706500d
@property @pulumi.getter(name='labelSelector') def label_selector(self) -> Optional[pulumi.Input['IBMBlockCSISpecNodeAffinityPodAntiAffinityPreferredDuringSchedulingIgnoredDuringExecutionPodAffinityTermLabelSelectorArgs']]: '\n A label query over a set of resources, in this case pods.\n ' return pulumi.get(self, 'label_selector')
A label query over a set of resources, in this case pods.
operators/ibm-block-csi-operator-community/python/pulumi_pulumi_kubernetes_crds_operators_ibm_block_csi_operator_community/csi/v1/_inputs.py
label_selector
pulumi/pulumi-kubernetes-crds
0
python
@property @pulumi.getter(name='labelSelector') def label_selector(self) -> Optional[pulumi.Input['IBMBlockCSISpecNodeAffinityPodAntiAffinityPreferredDuringSchedulingIgnoredDuringExecutionPodAffinityTermLabelSelectorArgs']]: '\n \n ' return pulumi.get(self, 'label_selector')
@property @pulumi.getter(name='labelSelector') def label_selector(self) -> Optional[pulumi.Input['IBMBlockCSISpecNodeAffinityPodAntiAffinityPreferredDuringSchedulingIgnoredDuringExecutionPodAffinityTermLabelSelectorArgs']]: '\n \n ' return pulumi.get(self, 'label_selector')<|docstring|>A label query over a set of resources, in this case pods.<|endoftext|>
cafaaebd0b2bde715d14eccb6c5349f21790206a44c76155207e11f20235f51a
@property @pulumi.getter def namespaces(self) -> Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]: '\n namespaces specifies which namespaces the labelSelector applies to (matches against); null or empty list means "this pod\'s namespace"\n ' return pulumi.get(self, 'namespaces')
namespaces specifies which namespaces the labelSelector applies to (matches against); null or empty list means "this pod's namespace"
operators/ibm-block-csi-operator-community/python/pulumi_pulumi_kubernetes_crds_operators_ibm_block_csi_operator_community/csi/v1/_inputs.py
namespaces
pulumi/pulumi-kubernetes-crds
0
python
@property @pulumi.getter def namespaces(self) -> Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]: '\n namespaces specifies which namespaces the labelSelector applies to (matches against); null or empty list means "this pod\'s namespace"\n ' return pulumi.get(self, 'namespaces')
@property @pulumi.getter def namespaces(self) -> Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]: '\n namespaces specifies which namespaces the labelSelector applies to (matches against); null or empty list means "this pod\'s namespace"\n ' return pulumi.get(self, 'namespaces')<|docstring|>namespaces specifies which namespaces the labelSelector applies to (matches against); null or empty list means "this pod's namespace"<|endoftext|>
51a800b0e2ffa5ccce37807330db2f1e80b06e1b98ecc81785961dd151d0360f
def __init__(__self__, *, match_expressions: Optional[pulumi.Input[Sequence[pulumi.Input['IBMBlockCSISpecNodeAffinityPodAntiAffinityPreferredDuringSchedulingIgnoredDuringExecutionPodAffinityTermLabelSelectorMatchExpressionsArgs']]]]=None, match_labels: Optional[pulumi.Input[Mapping[(str, pulumi.Input[str])]]]=None): '\n A label query over a set of resources, in this case pods.\n :param pulumi.Input[Sequence[pulumi.Input[\'IBMBlockCSISpecNodeAffinityPodAntiAffinityPreferredDuringSchedulingIgnoredDuringExecutionPodAffinityTermLabelSelectorMatchExpressionsArgs\']]] match_expressions: matchExpressions is a list of label selector requirements. The requirements are ANDed.\n :param pulumi.Input[Mapping[str, pulumi.Input[str]]] match_labels: matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed.\n ' if (match_expressions is not None): pulumi.set(__self__, 'match_expressions', match_expressions) if (match_labels is not None): pulumi.set(__self__, 'match_labels', match_labels)
A label query over a set of resources, in this case pods. :param pulumi.Input[Sequence[pulumi.Input['IBMBlockCSISpecNodeAffinityPodAntiAffinityPreferredDuringSchedulingIgnoredDuringExecutionPodAffinityTermLabelSelectorMatchExpressionsArgs']]] match_expressions: matchExpressions is a list of label selector requirements. The requirements are ANDed. :param pulumi.Input[Mapping[str, pulumi.Input[str]]] match_labels: matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed.
operators/ibm-block-csi-operator-community/python/pulumi_pulumi_kubernetes_crds_operators_ibm_block_csi_operator_community/csi/v1/_inputs.py
__init__
pulumi/pulumi-kubernetes-crds
0
python
def __init__(__self__, *, match_expressions: Optional[pulumi.Input[Sequence[pulumi.Input['IBMBlockCSISpecNodeAffinityPodAntiAffinityPreferredDuringSchedulingIgnoredDuringExecutionPodAffinityTermLabelSelectorMatchExpressionsArgs']]]]=None, match_labels: Optional[pulumi.Input[Mapping[(str, pulumi.Input[str])]]]=None): '\n A label query over a set of resources, in this case pods.\n :param pulumi.Input[Sequence[pulumi.Input[\'IBMBlockCSISpecNodeAffinityPodAntiAffinityPreferredDuringSchedulingIgnoredDuringExecutionPodAffinityTermLabelSelectorMatchExpressionsArgs\']]] match_expressions: matchExpressions is a list of label selector requirements. The requirements are ANDed.\n :param pulumi.Input[Mapping[str, pulumi.Input[str]]] match_labels: matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed.\n ' if (match_expressions is not None): pulumi.set(__self__, 'match_expressions', match_expressions) if (match_labels is not None): pulumi.set(__self__, 'match_labels', match_labels)
def __init__(__self__, *, match_expressions: Optional[pulumi.Input[Sequence[pulumi.Input['IBMBlockCSISpecNodeAffinityPodAntiAffinityPreferredDuringSchedulingIgnoredDuringExecutionPodAffinityTermLabelSelectorMatchExpressionsArgs']]]]=None, match_labels: Optional[pulumi.Input[Mapping[(str, pulumi.Input[str])]]]=None): '\n A label query over a set of resources, in this case pods.\n :param pulumi.Input[Sequence[pulumi.Input[\'IBMBlockCSISpecNodeAffinityPodAntiAffinityPreferredDuringSchedulingIgnoredDuringExecutionPodAffinityTermLabelSelectorMatchExpressionsArgs\']]] match_expressions: matchExpressions is a list of label selector requirements. The requirements are ANDed.\n :param pulumi.Input[Mapping[str, pulumi.Input[str]]] match_labels: matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed.\n ' if (match_expressions is not None): pulumi.set(__self__, 'match_expressions', match_expressions) if (match_labels is not None): pulumi.set(__self__, 'match_labels', match_labels)<|docstring|>A label query over a set of resources, in this case pods. :param pulumi.Input[Sequence[pulumi.Input['IBMBlockCSISpecNodeAffinityPodAntiAffinityPreferredDuringSchedulingIgnoredDuringExecutionPodAffinityTermLabelSelectorMatchExpressionsArgs']]] match_expressions: matchExpressions is a list of label selector requirements. The requirements are ANDed. :param pulumi.Input[Mapping[str, pulumi.Input[str]]] match_labels: matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed.<|endoftext|>
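As an aside for readers of this dataset, here is a minimal sketch of how the label-selector inputs described in the record above might be composed. The two Args class names are copied verbatim from the records; the module import path is only inferred from the records' path field and the selector values are illustrative, so treat the snippet as an assumption-laden sketch rather than documented usage.

# Sketch only: import path inferred from the records' path field; adjust to the actual package layout.
from pulumi_pulumi_kubernetes_crds_operators_ibm_block_csi_operator_community.csi.v1._inputs import (
    IBMBlockCSISpecNodeAffinityPodAntiAffinityPreferredDuringSchedulingIgnoredDuringExecutionPodAffinityTermLabelSelectorArgs as LabelSelectorArgs,
    IBMBlockCSISpecNodeAffinityPodAntiAffinityPreferredDuringSchedulingIgnoredDuringExecutionPodAffinityTermLabelSelectorMatchExpressionsArgs as MatchExpressionsArgs,
)

# matchLabels and matchExpressions are ANDed together, per the docstring above.
selector = LabelSelectorArgs(
    match_labels={"app": "ibm-block-csi"},  # illustrative label, not an operator default
    match_expressions=[
        MatchExpressionsArgs(key="tier", operator="In", values=["frontend", "backend"]),
    ],
)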
886acdf5ca0a2e0c009efe5d93b1aa72689b7b802fcd9c159268ffd73deae4c9
@property @pulumi.getter(name='matchExpressions') def match_expressions(self) -> Optional[pulumi.Input[Sequence[pulumi.Input['IBMBlockCSISpecNodeAffinityPodAntiAffinityPreferredDuringSchedulingIgnoredDuringExecutionPodAffinityTermLabelSelectorMatchExpressionsArgs']]]]: '\n matchExpressions is a list of label selector requirements. The requirements are ANDed.\n ' return pulumi.get(self, 'match_expressions')
matchExpressions is a list of label selector requirements. The requirements are ANDed.
operators/ibm-block-csi-operator-community/python/pulumi_pulumi_kubernetes_crds_operators_ibm_block_csi_operator_community/csi/v1/_inputs.py
match_expressions
pulumi/pulumi-kubernetes-crds
0
python
@property @pulumi.getter(name='matchExpressions') def match_expressions(self) -> Optional[pulumi.Input[Sequence[pulumi.Input['IBMBlockCSISpecNodeAffinityPodAntiAffinityPreferredDuringSchedulingIgnoredDuringExecutionPodAffinityTermLabelSelectorMatchExpressionsArgs']]]]: '\n \n ' return pulumi.get(self, 'match_expressions')
@property @pulumi.getter(name='matchExpressions') def match_expressions(self) -> Optional[pulumi.Input[Sequence[pulumi.Input['IBMBlockCSISpecNodeAffinityPodAntiAffinityPreferredDuringSchedulingIgnoredDuringExecutionPodAffinityTermLabelSelectorMatchExpressionsArgs']]]]: '\n \n ' return pulumi.get(self, 'match_expressions')<|docstring|>matchExpressions is a list of label selector requirements. The requirements are ANDed.<|endoftext|>
c8d28c22c7664eea6a56b076ef07546eb4ceb965de2e9c944999569f79409a76
@property @pulumi.getter(name='matchLabels') def match_labels(self) -> Optional[pulumi.Input[Mapping[(str, pulumi.Input[str])]]]: '\n matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed.\n ' return pulumi.get(self, 'match_labels')
matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed.
operators/ibm-block-csi-operator-community/python/pulumi_pulumi_kubernetes_crds_operators_ibm_block_csi_operator_community/csi/v1/_inputs.py
match_labels
pulumi/pulumi-kubernetes-crds
0
python
@property @pulumi.getter(name='matchLabels') def match_labels(self) -> Optional[pulumi.Input[Mapping[(str, pulumi.Input[str])]]]: '\n \n ' return pulumi.get(self, 'match_labels')
@property @pulumi.getter(name='matchLabels') def match_labels(self) -> Optional[pulumi.Input[Mapping[(str, pulumi.Input[str])]]]: '\n \n ' return pulumi.get(self, 'match_labels')<|docstring|>matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed.<|endoftext|>
c920819fc0b4fc3fe978eacb8fbb8d5f1500cc63d1fed6bf3274b88c2f2ce017
def __init__(__self__, *, key: pulumi.Input[str], operator: pulumi.Input[str], values: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]=None): "\n A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values.\n :param pulumi.Input[str] key: key is the label key that the selector applies to.\n :param pulumi.Input[str] operator: operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist.\n :param pulumi.Input[Sequence[pulumi.Input[str]]] values: values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch.\n " pulumi.set(__self__, 'key', key) pulumi.set(__self__, 'operator', operator) if (values is not None): pulumi.set(__self__, 'values', values)
A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. :param pulumi.Input[str] key: key is the label key that the selector applies to. :param pulumi.Input[str] operator: operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. :param pulumi.Input[Sequence[pulumi.Input[str]]] values: values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch.
operators/ibm-block-csi-operator-community/python/pulumi_pulumi_kubernetes_crds_operators_ibm_block_csi_operator_community/csi/v1/_inputs.py
__init__
pulumi/pulumi-kubernetes-crds
0
python
def __init__(__self__, *, key: pulumi.Input[str], operator: pulumi.Input[str], values: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]=None): "\n A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values.\n :param pulumi.Input[str] key: key is the label key that the selector applies to.\n :param pulumi.Input[str] operator: operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist.\n :param pulumi.Input[Sequence[pulumi.Input[str]]] values: values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch.\n " pulumi.set(__self__, 'key', key) pulumi.set(__self__, 'operator', operator) if (values is not None): pulumi.set(__self__, 'values', values)
def __init__(__self__, *, key: pulumi.Input[str], operator: pulumi.Input[str], values: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]=None): "\n A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values.\n :param pulumi.Input[str] key: key is the label key that the selector applies to.\n :param pulumi.Input[str] operator: operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist.\n :param pulumi.Input[Sequence[pulumi.Input[str]]] values: values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch.\n " pulumi.set(__self__, 'key', key) pulumi.set(__self__, 'operator', operator) if (values is not None): pulumi.set(__self__, 'values', values)<|docstring|>A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. :param pulumi.Input[str] key: key is the label key that the selector applies to. :param pulumi.Input[str] operator: operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. :param pulumi.Input[Sequence[pulumi.Input[str]]] values: values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch.<|endoftext|>
873b01de6c7b4b390af7286e21dc936c9a738bde5f172c97010521d8ead34b94
@property @pulumi.getter def key(self) -> pulumi.Input[str]: '\n key is the label key that the selector applies to.\n ' return pulumi.get(self, 'key')
key is the label key that the selector applies to.
operators/ibm-block-csi-operator-community/python/pulumi_pulumi_kubernetes_crds_operators_ibm_block_csi_operator_community/csi/v1/_inputs.py
key
pulumi/pulumi-kubernetes-crds
0
python
@property @pulumi.getter def key(self) -> pulumi.Input[str]: '\n \n ' return pulumi.get(self, 'key')
@property @pulumi.getter def key(self) -> pulumi.Input[str]: '\n \n ' return pulumi.get(self, 'key')<|docstring|>key is the label key that the selector applies to.<|endoftext|>
0c4e49e08176a0283877710e6179293ccac4fbb9a05b090e0d01b0bc406ff63f
@property @pulumi.getter def operator(self) -> pulumi.Input[str]: "\n operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist.\n " return pulumi.get(self, 'operator')
operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist.
operators/ibm-block-csi-operator-community/python/pulumi_pulumi_kubernetes_crds_operators_ibm_block_csi_operator_community/csi/v1/_inputs.py
operator
pulumi/pulumi-kubernetes-crds
0
python
@property @pulumi.getter def operator(self) -> pulumi.Input[str]: "\n \n " return pulumi.get(self, 'operator')
@property @pulumi.getter def operator(self) -> pulumi.Input[str]: "\n \n " return pulumi.get(self, 'operator')<|docstring|>operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist.<|endoftext|>
8890282f9ab13c97f11910757bc10d16f9baa2504cd65573fd75ceafa57e1daf
@property @pulumi.getter def values(self) -> Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]: '\n values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch.\n ' return pulumi.get(self, 'values')
values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch.
operators/ibm-block-csi-operator-community/python/pulumi_pulumi_kubernetes_crds_operators_ibm_block_csi_operator_community/csi/v1/_inputs.py
values
pulumi/pulumi-kubernetes-crds
0
python
@property @pulumi.getter def values(self) -> Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]: '\n \n ' return pulumi.get(self, 'values')
@property @pulumi.getter def values(self) -> Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]: '\n \n ' return pulumi.get(self, 'values')<|docstring|>values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch.<|endoftext|>
eab227fb83678e98d68eb939c590a18837d1be6e9dfa2dd02d2da6e31e02fe54
def __init__(__self__, *, topology_key: pulumi.Input[str], label_selector: Optional[pulumi.Input['IBMBlockCSISpecNodeAffinityPodAntiAffinityRequiredDuringSchedulingIgnoredDuringExecutionLabelSelectorArgs']]=None, namespaces: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]=None): '\n Defines a set of pods (namely those matching the labelSelector relative to the given namespace(s)) that this pod should be co-located (affinity) or not co-located (anti-affinity) with, where co-located is defined as running on a node whose value of the label with key <topologyKey> matches that of any node on which a pod of the set of pods is running\n :param pulumi.Input[str] topology_key: This pod should be co-located (affinity) or not co-located (anti-affinity) with the pods matching the labelSelector in the specified namespaces, where co-located is defined as running on a node whose value of the label with key topologyKey matches that of any node on which any of the selected pods is running. Empty topologyKey is not allowed.\n :param pulumi.Input[\'IBMBlockCSISpecNodeAffinityPodAntiAffinityRequiredDuringSchedulingIgnoredDuringExecutionLabelSelectorArgs\'] label_selector: A label query over a set of resources, in this case pods.\n :param pulumi.Input[Sequence[pulumi.Input[str]]] namespaces: namespaces specifies which namespaces the labelSelector applies to (matches against); null or empty list means "this pod\'s namespace"\n ' pulumi.set(__self__, 'topology_key', topology_key) if (label_selector is not None): pulumi.set(__self__, 'label_selector', label_selector) if (namespaces is not None): pulumi.set(__self__, 'namespaces', namespaces)
Defines a set of pods (namely those matching the labelSelector relative to the given namespace(s)) that this pod should be co-located (affinity) or not co-located (anti-affinity) with, where co-located is defined as running on a node whose value of the label with key <topologyKey> matches that of any node on which a pod of the set of pods is running :param pulumi.Input[str] topology_key: This pod should be co-located (affinity) or not co-located (anti-affinity) with the pods matching the labelSelector in the specified namespaces, where co-located is defined as running on a node whose value of the label with key topologyKey matches that of any node on which any of the selected pods is running. Empty topologyKey is not allowed. :param pulumi.Input['IBMBlockCSISpecNodeAffinityPodAntiAffinityRequiredDuringSchedulingIgnoredDuringExecutionLabelSelectorArgs'] label_selector: A label query over a set of resources, in this case pods. :param pulumi.Input[Sequence[pulumi.Input[str]]] namespaces: namespaces specifies which namespaces the labelSelector applies to (matches against); null or empty list means "this pod's namespace"
operators/ibm-block-csi-operator-community/python/pulumi_pulumi_kubernetes_crds_operators_ibm_block_csi_operator_community/csi/v1/_inputs.py
__init__
pulumi/pulumi-kubernetes-crds
0
python
def __init__(__self__, *, topology_key: pulumi.Input[str], label_selector: Optional[pulumi.Input['IBMBlockCSISpecNodeAffinityPodAntiAffinityRequiredDuringSchedulingIgnoredDuringExecutionLabelSelectorArgs']]=None, namespaces: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]=None): '\n Defines a set of pods (namely those matching the labelSelector relative to the given namespace(s)) that this pod should be co-located (affinity) or not co-located (anti-affinity) with, where co-located is defined as running on a node whose value of the label with key <topologyKey> matches that of any node on which a pod of the set of pods is running\n :param pulumi.Input[str] topology_key: This pod should be co-located (affinity) or not co-located (anti-affinity) with the pods matching the labelSelector in the specified namespaces, where co-located is defined as running on a node whose value of the label with key topologyKey matches that of any node on which any of the selected pods is running. Empty topologyKey is not allowed.\n :param pulumi.Input[\'IBMBlockCSISpecNodeAffinityPodAntiAffinityRequiredDuringSchedulingIgnoredDuringExecutionLabelSelectorArgs\'] label_selector: A label query over a set of resources, in this case pods.\n :param pulumi.Input[Sequence[pulumi.Input[str]]] namespaces: namespaces specifies which namespaces the labelSelector applies to (matches against); null or empty list means "this pod\'s namespace"\n ' pulumi.set(__self__, 'topology_key', topology_key) if (label_selector is not None): pulumi.set(__self__, 'label_selector', label_selector) if (namespaces is not None): pulumi.set(__self__, 'namespaces', namespaces)
def __init__(__self__, *, topology_key: pulumi.Input[str], label_selector: Optional[pulumi.Input['IBMBlockCSISpecNodeAffinityPodAntiAffinityRequiredDuringSchedulingIgnoredDuringExecutionLabelSelectorArgs']]=None, namespaces: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]=None): '\n Defines a set of pods (namely those matching the labelSelector relative to the given namespace(s)) that this pod should be co-located (affinity) or not co-located (anti-affinity) with, where co-located is defined as running on a node whose value of the label with key <topologyKey> matches that of any node on which a pod of the set of pods is running\n :param pulumi.Input[str] topology_key: This pod should be co-located (affinity) or not co-located (anti-affinity) with the pods matching the labelSelector in the specified namespaces, where co-located is defined as running on a node whose value of the label with key topologyKey matches that of any node on which any of the selected pods is running. Empty topologyKey is not allowed.\n :param pulumi.Input[\'IBMBlockCSISpecNodeAffinityPodAntiAffinityRequiredDuringSchedulingIgnoredDuringExecutionLabelSelectorArgs\'] label_selector: A label query over a set of resources, in this case pods.\n :param pulumi.Input[Sequence[pulumi.Input[str]]] namespaces: namespaces specifies which namespaces the labelSelector applies to (matches against); null or empty list means "this pod\'s namespace"\n ' pulumi.set(__self__, 'topology_key', topology_key) if (label_selector is not None): pulumi.set(__self__, 'label_selector', label_selector) if (namespaces is not None): pulumi.set(__self__, 'namespaces', namespaces)<|docstring|>Defines a set of pods (namely those matching the labelSelector relative to the given namespace(s)) that this pod should be co-located (affinity) or not co-located (anti-affinity) with, where co-located is defined as running on a node whose value of the label with key <topologyKey> matches that of any node on which a pod of the set of pods is running :param pulumi.Input[str] topology_key: This pod should be co-located (affinity) or not co-located (anti-affinity) with the pods matching the labelSelector in the specified namespaces, where co-located is defined as running on a node whose value of the label with key topologyKey matches that of any node on which any of the selected pods is running. Empty topologyKey is not allowed. :param pulumi.Input['IBMBlockCSISpecNodeAffinityPodAntiAffinityRequiredDuringSchedulingIgnoredDuringExecutionLabelSelectorArgs'] label_selector: A label query over a set of resources, in this case pods. :param pulumi.Input[Sequence[pulumi.Input[str]]] namespaces: namespaces specifies which namespaces the labelSelector applies to (matches against); null or empty list means "this pod's namespace"<|endoftext|>
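A short sketch of the pod anti-affinity term whose constructor appears in the record above. Only the nested LabelSelectorArgs class name is confirmed by these records; the enclosing generated class is not named here, so the snippet only prepares its keyword arguments, and the label values are illustrative.

# Sketch only: import path inferred from the records' path field.
from pulumi_pulumi_kubernetes_crds_operators_ibm_block_csi_operator_community.csi.v1._inputs import (
    IBMBlockCSISpecNodeAffinityPodAntiAffinityRequiredDuringSchedulingIgnoredDuringExecutionLabelSelectorArgs as RequiredLabelSelectorArgs,
)

# Keyword arguments matching the __init__ above; topology_key is required and must be non-empty.
term_kwargs = dict(
    topology_key="kubernetes.io/hostname",  # anti-affinity per node: avoid co-locating matching pods
    label_selector=RequiredLabelSelectorArgs(
        match_labels={"app": "ibm-block-csi-controller"},  # illustrative selector
    ),
    namespaces=None,  # None or an empty list means "this pod's namespace"
)
# Pass these to the enclosing generated Args class (not named in these records), e.g. SomeTermArgs(**term_kwargs).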
1a7fd32bfbac3e1d1c7b3da8ebc96670c998ac5294448293ae6cf957310d8ab3
@property @pulumi.getter(name='topologyKey') def topology_key(self) -> pulumi.Input[str]: '\n This pod should be co-located (affinity) or not co-located (anti-affinity) with the pods matching the labelSelector in the specified namespaces, where co-located is defined as running on a node whose value of the label with key topologyKey matches that of any node on which any of the selected pods is running. Empty topologyKey is not allowed.\n ' return pulumi.get(self, 'topology_key')
This pod should be co-located (affinity) or not co-located (anti-affinity) with the pods matching the labelSelector in the specified namespaces, where co-located is defined as running on a node whose value of the label with key topologyKey matches that of any node on which any of the selected pods is running. Empty topologyKey is not allowed.
operators/ibm-block-csi-operator-community/python/pulumi_pulumi_kubernetes_crds_operators_ibm_block_csi_operator_community/csi/v1/_inputs.py
topology_key
pulumi/pulumi-kubernetes-crds
0
python
@property @pulumi.getter(name='topologyKey') def topology_key(self) -> pulumi.Input[str]: '\n \n ' return pulumi.get(self, 'topology_key')
@property @pulumi.getter(name='topologyKey') def topology_key(self) -> pulumi.Input[str]: '\n \n ' return pulumi.get(self, 'topology_key')<|docstring|>This pod should be co-located (affinity) or not co-located (anti-affinity) with the pods matching the labelSelector in the specified namespaces, where co-located is defined as running on a node whose value of the label with key topologyKey matches that of any node on which any of the selected pods is running. Empty topologyKey is not allowed.<|endoftext|>
65e00caed452f14bbf473cde43911b9d4cacbf1e448d70b263f16faecd91378c
@property @pulumi.getter(name='labelSelector') def label_selector(self) -> Optional[pulumi.Input['IBMBlockCSISpecNodeAffinityPodAntiAffinityRequiredDuringSchedulingIgnoredDuringExecutionLabelSelectorArgs']]: '\n A label query over a set of resources, in this case pods.\n ' return pulumi.get(self, 'label_selector')
A label query over a set of resources, in this case pods.
operators/ibm-block-csi-operator-community/python/pulumi_pulumi_kubernetes_crds_operators_ibm_block_csi_operator_community/csi/v1/_inputs.py
label_selector
pulumi/pulumi-kubernetes-crds
0
python
@property @pulumi.getter(name='labelSelector') def label_selector(self) -> Optional[pulumi.Input['IBMBlockCSISpecNodeAffinityPodAntiAffinityRequiredDuringSchedulingIgnoredDuringExecutionLabelSelectorArgs']]: '\n \n ' return pulumi.get(self, 'label_selector')
@property @pulumi.getter(name='labelSelector') def label_selector(self) -> Optional[pulumi.Input['IBMBlockCSISpecNodeAffinityPodAntiAffinityRequiredDuringSchedulingIgnoredDuringExecutionLabelSelectorArgs']]: '\n \n ' return pulumi.get(self, 'label_selector')<|docstring|>A label query over a set of resources, in this case pods.<|endoftext|>
cafaaebd0b2bde715d14eccb6c5349f21790206a44c76155207e11f20235f51a
@property @pulumi.getter def namespaces(self) -> Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]: '\n namespaces specifies which namespaces the labelSelector applies to (matches against); null or empty list means "this pod\'s namespace"\n ' return pulumi.get(self, 'namespaces')
namespaces specifies which namespaces the labelSelector applies to (matches against); null or empty list means "this pod's namespace"
operators/ibm-block-csi-operator-community/python/pulumi_pulumi_kubernetes_crds_operators_ibm_block_csi_operator_community/csi/v1/_inputs.py
namespaces
pulumi/pulumi-kubernetes-crds
0
python
@property @pulumi.getter def namespaces(self) -> Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]: '\n namespaces specifies which namespaces the labelSelector applies to (matches against); null or empty list means "this pod\'s namespace"\n ' return pulumi.get(self, 'namespaces')
@property @pulumi.getter def namespaces(self) -> Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]: '\n namespaces specifies which namespaces the labelSelector applies to (matches against); null or empty list means "this pod\'s namespace"\n ' return pulumi.get(self, 'namespaces')<|docstring|>namespaces specifies which namespaces the labelSelector applies to (matches against); null or empty list means "this pod's namespace"<|endoftext|>
fa1feea552b72930b93955053e336bb1aab037767350442d7d0ab50ba9d82fe2
def __init__(__self__, *, match_expressions: Optional[pulumi.Input[Sequence[pulumi.Input['IBMBlockCSISpecNodeAffinityPodAntiAffinityRequiredDuringSchedulingIgnoredDuringExecutionLabelSelectorMatchExpressionsArgs']]]]=None, match_labels: Optional[pulumi.Input[Mapping[(str, pulumi.Input[str])]]]=None): '\n A label query over a set of resources, in this case pods.\n :param pulumi.Input[Sequence[pulumi.Input[\'IBMBlockCSISpecNodeAffinityPodAntiAffinityRequiredDuringSchedulingIgnoredDuringExecutionLabelSelectorMatchExpressionsArgs\']]] match_expressions: matchExpressions is a list of label selector requirements. The requirements are ANDed.\n :param pulumi.Input[Mapping[str, pulumi.Input[str]]] match_labels: matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed.\n ' if (match_expressions is not None): pulumi.set(__self__, 'match_expressions', match_expressions) if (match_labels is not None): pulumi.set(__self__, 'match_labels', match_labels)
A label query over a set of resources, in this case pods. :param pulumi.Input[Sequence[pulumi.Input['IBMBlockCSISpecNodeAffinityPodAntiAffinityRequiredDuringSchedulingIgnoredDuringExecutionLabelSelectorMatchExpressionsArgs']]] match_expressions: matchExpressions is a list of label selector requirements. The requirements are ANDed. :param pulumi.Input[Mapping[str, pulumi.Input[str]]] match_labels: matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed.
operators/ibm-block-csi-operator-community/python/pulumi_pulumi_kubernetes_crds_operators_ibm_block_csi_operator_community/csi/v1/_inputs.py
__init__
pulumi/pulumi-kubernetes-crds
0
python
def __init__(__self__, *, match_expressions: Optional[pulumi.Input[Sequence[pulumi.Input['IBMBlockCSISpecNodeAffinityPodAntiAffinityRequiredDuringSchedulingIgnoredDuringExecutionLabelSelectorMatchExpressionsArgs']]]]=None, match_labels: Optional[pulumi.Input[Mapping[(str, pulumi.Input[str])]]]=None): '\n A label query over a set of resources, in this case pods.\n :param pulumi.Input[Sequence[pulumi.Input[\'IBMBlockCSISpecNodeAffinityPodAntiAffinityRequiredDuringSchedulingIgnoredDuringExecutionLabelSelectorMatchExpressionsArgs\']]] match_expressions: matchExpressions is a list of label selector requirements. The requirements are ANDed.\n :param pulumi.Input[Mapping[str, pulumi.Input[str]]] match_labels: matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed.\n ' if (match_expressions is not None): pulumi.set(__self__, 'match_expressions', match_expressions) if (match_labels is not None): pulumi.set(__self__, 'match_labels', match_labels)
def __init__(__self__, *, match_expressions: Optional[pulumi.Input[Sequence[pulumi.Input['IBMBlockCSISpecNodeAffinityPodAntiAffinityRequiredDuringSchedulingIgnoredDuringExecutionLabelSelectorMatchExpressionsArgs']]]]=None, match_labels: Optional[pulumi.Input[Mapping[(str, pulumi.Input[str])]]]=None): '\n A label query over a set of resources, in this case pods.\n :param pulumi.Input[Sequence[pulumi.Input[\'IBMBlockCSISpecNodeAffinityPodAntiAffinityRequiredDuringSchedulingIgnoredDuringExecutionLabelSelectorMatchExpressionsArgs\']]] match_expressions: matchExpressions is a list of label selector requirements. The requirements are ANDed.\n :param pulumi.Input[Mapping[str, pulumi.Input[str]]] match_labels: matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed.\n ' if (match_expressions is not None): pulumi.set(__self__, 'match_expressions', match_expressions) if (match_labels is not None): pulumi.set(__self__, 'match_labels', match_labels)<|docstring|>A label query over a set of resources, in this case pods. :param pulumi.Input[Sequence[pulumi.Input['IBMBlockCSISpecNodeAffinityPodAntiAffinityRequiredDuringSchedulingIgnoredDuringExecutionLabelSelectorMatchExpressionsArgs']]] match_expressions: matchExpressions is a list of label selector requirements. The requirements are ANDed. :param pulumi.Input[Mapping[str, pulumi.Input[str]]] match_labels: matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed.<|endoftext|>
5c7e1d947682c5bdbf7b53e6b0119c6fcd145037f32f31354316460da8ef6b45
@property @pulumi.getter(name='matchExpressions') def match_expressions(self) -> Optional[pulumi.Input[Sequence[pulumi.Input['IBMBlockCSISpecNodeAffinityPodAntiAffinityRequiredDuringSchedulingIgnoredDuringExecutionLabelSelectorMatchExpressionsArgs']]]]: '\n matchExpressions is a list of label selector requirements. The requirements are ANDed.\n ' return pulumi.get(self, 'match_expressions')
matchExpressions is a list of label selector requirements. The requirements are ANDed.
operators/ibm-block-csi-operator-community/python/pulumi_pulumi_kubernetes_crds_operators_ibm_block_csi_operator_community/csi/v1/_inputs.py
match_expressions
pulumi/pulumi-kubernetes-crds
0
python
@property @pulumi.getter(name='matchExpressions') def match_expressions(self) -> Optional[pulumi.Input[Sequence[pulumi.Input['IBMBlockCSISpecNodeAffinityPodAntiAffinityRequiredDuringSchedulingIgnoredDuringExecutionLabelSelectorMatchExpressionsArgs']]]]: '\n \n ' return pulumi.get(self, 'match_expressions')
@property @pulumi.getter(name='matchExpressions') def match_expressions(self) -> Optional[pulumi.Input[Sequence[pulumi.Input['IBMBlockCSISpecNodeAffinityPodAntiAffinityRequiredDuringSchedulingIgnoredDuringExecutionLabelSelectorMatchExpressionsArgs']]]]: '\n \n ' return pulumi.get(self, 'match_expressions')<|docstring|>matchExpressions is a list of label selector requirements. The requirements are ANDed.<|endoftext|>
c8d28c22c7664eea6a56b076ef07546eb4ceb965de2e9c944999569f79409a76
@property @pulumi.getter(name='matchLabels') def match_labels(self) -> Optional[pulumi.Input[Mapping[(str, pulumi.Input[str])]]]: '\n matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed.\n ' return pulumi.get(self, 'match_labels')
matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed.
operators/ibm-block-csi-operator-community/python/pulumi_pulumi_kubernetes_crds_operators_ibm_block_csi_operator_community/csi/v1/_inputs.py
match_labels
pulumi/pulumi-kubernetes-crds
0
python
@property @pulumi.getter(name='matchLabels') def match_labels(self) -> Optional[pulumi.Input[Mapping[(str, pulumi.Input[str])]]]: '\n \n ' return pulumi.get(self, 'match_labels')
@property @pulumi.getter(name='matchLabels') def match_labels(self) -> Optional[pulumi.Input[Mapping[(str, pulumi.Input[str])]]]: '\n \n ' return pulumi.get(self, 'match_labels')<|docstring|>matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed.<|endoftext|>
c920819fc0b4fc3fe978eacb8fbb8d5f1500cc63d1fed6bf3274b88c2f2ce017
def __init__(__self__, *, key: pulumi.Input[str], operator: pulumi.Input[str], values: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]=None): "\n A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values.\n :param pulumi.Input[str] key: key is the label key that the selector applies to.\n :param pulumi.Input[str] operator: operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist.\n :param pulumi.Input[Sequence[pulumi.Input[str]]] values: values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch.\n " pulumi.set(__self__, 'key', key) pulumi.set(__self__, 'operator', operator) if (values is not None): pulumi.set(__self__, 'values', values)
A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. :param pulumi.Input[str] key: key is the label key that the selector applies to. :param pulumi.Input[str] operator: operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. :param pulumi.Input[Sequence[pulumi.Input[str]]] values: values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch.
operators/ibm-block-csi-operator-community/python/pulumi_pulumi_kubernetes_crds_operators_ibm_block_csi_operator_community/csi/v1/_inputs.py
__init__
pulumi/pulumi-kubernetes-crds
0
python
def __init__(__self__, *, key: pulumi.Input[str], operator: pulumi.Input[str], values: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]=None): "\n A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values.\n :param pulumi.Input[str] key: key is the label key that the selector applies to.\n :param pulumi.Input[str] operator: operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist.\n :param pulumi.Input[Sequence[pulumi.Input[str]]] values: values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch.\n " pulumi.set(__self__, 'key', key) pulumi.set(__self__, 'operator', operator) if (values is not None): pulumi.set(__self__, 'values', values)
def __init__(__self__, *, key: pulumi.Input[str], operator: pulumi.Input[str], values: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]=None): "\n A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values.\n :param pulumi.Input[str] key: key is the label key that the selector applies to.\n :param pulumi.Input[str] operator: operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist.\n :param pulumi.Input[Sequence[pulumi.Input[str]]] values: values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch.\n " pulumi.set(__self__, 'key', key) pulumi.set(__self__, 'operator', operator) if (values is not None): pulumi.set(__self__, 'values', values)<|docstring|>A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. :param pulumi.Input[str] key: key is the label key that the selector applies to. :param pulumi.Input[str] operator: operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. :param pulumi.Input[Sequence[pulumi.Input[str]]] values: values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch.<|endoftext|>
873b01de6c7b4b390af7286e21dc936c9a738bde5f172c97010521d8ead34b94
@property @pulumi.getter def key(self) -> pulumi.Input[str]: '\n key is the label key that the selector applies to.\n ' return pulumi.get(self, 'key')
key is the label key that the selector applies to.
operators/ibm-block-csi-operator-community/python/pulumi_pulumi_kubernetes_crds_operators_ibm_block_csi_operator_community/csi/v1/_inputs.py
key
pulumi/pulumi-kubernetes-crds
0
python
@property @pulumi.getter def key(self) -> pulumi.Input[str]: '\n \n ' return pulumi.get(self, 'key')
@property @pulumi.getter def key(self) -> pulumi.Input[str]: '\n \n ' return pulumi.get(self, 'key')<|docstring|>key is the label key that the selector applies to.<|endoftext|>
0c4e49e08176a0283877710e6179293ccac4fbb9a05b090e0d01b0bc406ff63f
@property @pulumi.getter def operator(self) -> pulumi.Input[str]: "\n operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist.\n " return pulumi.get(self, 'operator')
operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist.
operators/ibm-block-csi-operator-community/python/pulumi_pulumi_kubernetes_crds_operators_ibm_block_csi_operator_community/csi/v1/_inputs.py
operator
pulumi/pulumi-kubernetes-crds
0
python
@property @pulumi.getter def operator(self) -> pulumi.Input[str]: "\n \n " return pulumi.get(self, 'operator')
@property @pulumi.getter def operator(self) -> pulumi.Input[str]: "\n \n " return pulumi.get(self, 'operator')<|docstring|>operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist.<|endoftext|>
8890282f9ab13c97f11910757bc10d16f9baa2504cd65573fd75ceafa57e1daf
@property @pulumi.getter def values(self) -> Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]: '\n values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch.\n ' return pulumi.get(self, 'values')
values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch.
operators/ibm-block-csi-operator-community/python/pulumi_pulumi_kubernetes_crds_operators_ibm_block_csi_operator_community/csi/v1/_inputs.py
values
pulumi/pulumi-kubernetes-crds
0
python
@property @pulumi.getter def values(self) -> Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]: '\n \n ' return pulumi.get(self, 'values')
@property @pulumi.getter def values(self) -> Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]: '\n \n ' return pulumi.get(self, 'values')<|docstring|>values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch.<|endoftext|>
792282a676c40c879cfb934a9fbf7817889dade407a6d6ec03318c357d63111d
def __init__(__self__, *, effect: Optional[pulumi.Input[str]]=None, key: Optional[pulumi.Input[str]]=None, operator: Optional[pulumi.Input[str]]=None, toleration_seconds: Optional[pulumi.Input[int]]=None, value: Optional[pulumi.Input[str]]=None): "\n The pod this Toleration is attached to tolerates any taint that matches the triple <key,value,effect> using the matching operator <operator>.\n :param pulumi.Input[str] effect: Effect indicates the taint effect to match. Empty means match all taint effects. When specified, allowed values are NoSchedule, PreferNoSchedule and NoExecute.\n :param pulumi.Input[str] key: Key is the taint key that the toleration applies to. Empty means match all taint keys. If the key is empty, operator must be Exists; this combination means to match all values and all keys.\n :param pulumi.Input[str] operator: Operator represents a key's relationship to the value. Valid operators are Exists and Equal. Defaults to Equal. Exists is equivalent to wildcard for value, so that a pod can tolerate all taints of a particular category.\n :param pulumi.Input[int] toleration_seconds: TolerationSeconds represents the period of time the toleration (which must be of effect NoExecute, otherwise this field is ignored) tolerates the taint. By default, it is not set, which means tolerate the taint forever (do not evict). Zero and negative values will be treated as 0 (evict immediately) by the system.\n :param pulumi.Input[str] value: Value is the taint value the toleration matches to. If the operator is Exists, the value should be empty, otherwise just a regular string.\n " if (effect is not None): pulumi.set(__self__, 'effect', effect) if (key is not None): pulumi.set(__self__, 'key', key) if (operator is not None): pulumi.set(__self__, 'operator', operator) if (toleration_seconds is not None): pulumi.set(__self__, 'toleration_seconds', toleration_seconds) if (value is not None): pulumi.set(__self__, 'value', value)
The pod this Toleration is attached to tolerates any taint that matches the triple <key,value,effect> using the matching operator <operator>. :param pulumi.Input[str] effect: Effect indicates the taint effect to match. Empty means match all taint effects. When specified, allowed values are NoSchedule, PreferNoSchedule and NoExecute. :param pulumi.Input[str] key: Key is the taint key that the toleration applies to. Empty means match all taint keys. If the key is empty, operator must be Exists; this combination means to match all values and all keys. :param pulumi.Input[str] operator: Operator represents a key's relationship to the value. Valid operators are Exists and Equal. Defaults to Equal. Exists is equivalent to wildcard for value, so that a pod can tolerate all taints of a particular category. :param pulumi.Input[int] toleration_seconds: TolerationSeconds represents the period of time the toleration (which must be of effect NoExecute, otherwise this field is ignored) tolerates the taint. By default, it is not set, which means tolerate the taint forever (do not evict). Zero and negative values will be treated as 0 (evict immediately) by the system. :param pulumi.Input[str] value: Value is the taint value the toleration matches to. If the operator is Exists, the value should be empty, otherwise just a regular string.
operators/ibm-block-csi-operator-community/python/pulumi_pulumi_kubernetes_crds_operators_ibm_block_csi_operator_community/csi/v1/_inputs.py
__init__
pulumi/pulumi-kubernetes-crds
0
python
def __init__(__self__, *, effect: Optional[pulumi.Input[str]]=None, key: Optional[pulumi.Input[str]]=None, operator: Optional[pulumi.Input[str]]=None, toleration_seconds: Optional[pulumi.Input[int]]=None, value: Optional[pulumi.Input[str]]=None): "\n The pod this Toleration is attached to tolerates any taint that matches the triple <key,value,effect> using the matching operator <operator>.\n :param pulumi.Input[str] effect: Effect indicates the taint effect to match. Empty means match all taint effects. When specified, allowed values are NoSchedule, PreferNoSchedule and NoExecute.\n :param pulumi.Input[str] key: Key is the taint key that the toleration applies to. Empty means match all taint keys. If the key is empty, operator must be Exists; this combination means to match all values and all keys.\n :param pulumi.Input[str] operator: Operator represents a key's relationship to the value. Valid operators are Exists and Equal. Defaults to Equal. Exists is equivalent to wildcard for value, so that a pod can tolerate all taints of a particular category.\n :param pulumi.Input[int] toleration_seconds: TolerationSeconds represents the period of time the toleration (which must be of effect NoExecute, otherwise this field is ignored) tolerates the taint. By default, it is not set, which means tolerate the taint forever (do not evict). Zero and negative values will be treated as 0 (evict immediately) by the system.\n :param pulumi.Input[str] value: Value is the taint value the toleration matches to. If the operator is Exists, the value should be empty, otherwise just a regular string.\n " if (effect is not None): pulumi.set(__self__, 'effect', effect) if (key is not None): pulumi.set(__self__, 'key', key) if (operator is not None): pulumi.set(__self__, 'operator', operator) if (toleration_seconds is not None): pulumi.set(__self__, 'toleration_seconds', toleration_seconds) if (value is not None): pulumi.set(__self__, 'value', value)
def __init__(__self__, *, effect: Optional[pulumi.Input[str]]=None, key: Optional[pulumi.Input[str]]=None, operator: Optional[pulumi.Input[str]]=None, toleration_seconds: Optional[pulumi.Input[int]]=None, value: Optional[pulumi.Input[str]]=None): "\n The pod this Toleration is attached to tolerates any taint that matches the triple <key,value,effect> using the matching operator <operator>.\n :param pulumi.Input[str] effect: Effect indicates the taint effect to match. Empty means match all taint effects. When specified, allowed values are NoSchedule, PreferNoSchedule and NoExecute.\n :param pulumi.Input[str] key: Key is the taint key that the toleration applies to. Empty means match all taint keys. If the key is empty, operator must be Exists; this combination means to match all values and all keys.\n :param pulumi.Input[str] operator: Operator represents a key's relationship to the value. Valid operators are Exists and Equal. Defaults to Equal. Exists is equivalent to wildcard for value, so that a pod can tolerate all taints of a particular category.\n :param pulumi.Input[int] toleration_seconds: TolerationSeconds represents the period of time the toleration (which must be of effect NoExecute, otherwise this field is ignored) tolerates the taint. By default, it is not set, which means tolerate the taint forever (do not evict). Zero and negative values will be treated as 0 (evict immediately) by the system.\n :param pulumi.Input[str] value: Value is the taint value the toleration matches to. If the operator is Exists, the value should be empty, otherwise just a regular string.\n " if (effect is not None): pulumi.set(__self__, 'effect', effect) if (key is not None): pulumi.set(__self__, 'key', key) if (operator is not None): pulumi.set(__self__, 'operator', operator) if (toleration_seconds is not None): pulumi.set(__self__, 'toleration_seconds', toleration_seconds) if (value is not None): pulumi.set(__self__, 'value', value)<|docstring|>The pod this Toleration is attached to tolerates any taint that matches the triple <key,value,effect> using the matching operator <operator>. :param pulumi.Input[str] effect: Effect indicates the taint effect to match. Empty means match all taint effects. When specified, allowed values are NoSchedule, PreferNoSchedule and NoExecute. :param pulumi.Input[str] key: Key is the taint key that the toleration applies to. Empty means match all taint keys. If the key is empty, operator must be Exists; this combination means to match all values and all keys. :param pulumi.Input[str] operator: Operator represents a key's relationship to the value. Valid operators are Exists and Equal. Defaults to Equal. Exists is equivalent to wildcard for value, so that a pod can tolerate all taints of a particular category. :param pulumi.Input[int] toleration_seconds: TolerationSeconds represents the period of time the toleration (which must be of effect NoExecute, otherwise this field is ignored) tolerates the taint. By default, it is not set, which means tolerate the taint forever (do not evict). Zero and negative values will be treated as 0 (evict immediately) by the system. :param pulumi.Input[str] value: Value is the taint value the toleration matches to. If the operator is Exists, the value should be empty, otherwise just a regular string.<|endoftext|>
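For context, a minimal sketch of the toleration fields documented in the record above. The enclosing generated Args class is not named in these records, so only the keyword arguments are shown; the taint key and timeout are illustrative values, not operator defaults.

# Keyword arguments matching the __init__ above.
toleration_kwargs = dict(
    key="node.kubernetes.io/memory-pressure",  # illustrative taint key
    operator="Exists",        # Exists tolerates any value for this key (value must then be left empty)
    effect="NoExecute",       # tolerationSeconds only applies to the NoExecute effect
    toleration_seconds=300,   # stay on the tainted node for 5 minutes, then be evicted
)
# Pass these to the generated toleration Args class once its name is identified in the package.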
96948b7e3758758871da06382ec6963c7b7bad0427c696a8cbf58506fa7334ab
@property @pulumi.getter def effect(self) -> Optional[pulumi.Input[str]]: '\n Effect indicates the taint effect to match. Empty means match all taint effects. When specified, allowed values are NoSchedule, PreferNoSchedule and NoExecute.\n ' return pulumi.get(self, 'effect')
Effect indicates the taint effect to match. Empty means match all taint effects. When specified, allowed values are NoSchedule, PreferNoSchedule and NoExecute.
operators/ibm-block-csi-operator-community/python/pulumi_pulumi_kubernetes_crds_operators_ibm_block_csi_operator_community/csi/v1/_inputs.py
effect
pulumi/pulumi-kubernetes-crds
0
python
@property @pulumi.getter def effect(self) -> Optional[pulumi.Input[str]]: '\n \n ' return pulumi.get(self, 'effect')
@property @pulumi.getter def effect(self) -> Optional[pulumi.Input[str]]: '\n \n ' return pulumi.get(self, 'effect')<|docstring|>Effect indicates the taint effect to match. Empty means match all taint effects. When specified, allowed values are NoSchedule, PreferNoSchedule and NoExecute.<|endoftext|>
56a17d7ed8f5e92d7706a8ffd2c55cb6cc3a6609d8cce8250c66fcf5c3ab27bb
@property @pulumi.getter def key(self) -> Optional[pulumi.Input[str]]: '\n Key is the taint key that the toleration applies to. Empty means match all taint keys. If the key is empty, operator must be Exists; this combination means to match all values and all keys.\n ' return pulumi.get(self, 'key')
Key is the taint key that the toleration applies to. Empty means match all taint keys. If the key is empty, operator must be Exists; this combination means to match all values and all keys.
operators/ibm-block-csi-operator-community/python/pulumi_pulumi_kubernetes_crds_operators_ibm_block_csi_operator_community/csi/v1/_inputs.py
key
pulumi/pulumi-kubernetes-crds
0
python
@property @pulumi.getter def key(self) -> Optional[pulumi.Input[str]]: '\n \n ' return pulumi.get(self, 'key')
@property @pulumi.getter def key(self) -> Optional[pulumi.Input[str]]: '\n \n ' return pulumi.get(self, 'key')<|docstring|>Key is the taint key that the toleration applies to. Empty means match all taint keys. If the key is empty, operator must be Exists; this combination means to match all values and all keys.<|endoftext|>
3c398ae7390efb94fe34d30e32ac9bcbaf5751730356d5b920a042bfe235940f
@property @pulumi.getter def operator(self) -> Optional[pulumi.Input[str]]: "\n Operator represents a key's relationship to the value. Valid operators are Exists and Equal. Defaults to Equal. Exists is equivalent to wildcard for value, so that a pod can tolerate all taints of a particular category.\n " return pulumi.get(self, 'operator')
Operator represents a key's relationship to the value. Valid operators are Exists and Equal. Defaults to Equal. Exists is equivalent to wildcard for value, so that a pod can tolerate all taints of a particular category.
operators/ibm-block-csi-operator-community/python/pulumi_pulumi_kubernetes_crds_operators_ibm_block_csi_operator_community/csi/v1/_inputs.py
operator
pulumi/pulumi-kubernetes-crds
0
python
@property @pulumi.getter def operator(self) -> Optional[pulumi.Input[str]]: "\n \n " return pulumi.get(self, 'operator')
@property @pulumi.getter def operator(self) -> Optional[pulumi.Input[str]]: "\n \n " return pulumi.get(self, 'operator')<|docstring|>Operator represents a key's relationship to the value. Valid operators are Exists and Equal. Defaults to Equal. Exists is equivalent to wildcard for value, so that a pod can tolerate all taints of a particular category.<|endoftext|>
6904f5505ae78f512faf3af88f5928c30059929ce52f340f02f9ded6c50e1848
@property @pulumi.getter(name='tolerationSeconds') def toleration_seconds(self) -> Optional[pulumi.Input[int]]: '\n TolerationSeconds represents the period of time the toleration (which must be of effect NoExecute, otherwise this field is ignored) tolerates the taint. By default, it is not set, which means tolerate the taint forever (do not evict). Zero and negative values will be treated as 0 (evict immediately) by the system.\n ' return pulumi.get(self, 'toleration_seconds')
TolerationSeconds represents the period of time the toleration (which must be of effect NoExecute, otherwise this field is ignored) tolerates the taint. By default, it is not set, which means tolerate the taint forever (do not evict). Zero and negative values will be treated as 0 (evict immediately) by the system.
operators/ibm-block-csi-operator-community/python/pulumi_pulumi_kubernetes_crds_operators_ibm_block_csi_operator_community/csi/v1/_inputs.py
toleration_seconds
pulumi/pulumi-kubernetes-crds
0
python
@property @pulumi.getter(name='tolerationSeconds') def toleration_seconds(self) -> Optional[pulumi.Input[int]]: '\n \n ' return pulumi.get(self, 'toleration_seconds')
@property @pulumi.getter(name='tolerationSeconds') def toleration_seconds(self) -> Optional[pulumi.Input[int]]: '\n \n ' return pulumi.get(self, 'toleration_seconds')<|docstring|>TolerationSeconds represents the period of time the toleration (which must be of effect NoExecute, otherwise this field is ignored) tolerates the taint. By default, it is not set, which means tolerate the taint forever (do not evict). Zero and negative values will be treated as 0 (evict immediately) by the system.<|endoftext|>
b0fe514b57d90ee8e06cda410198addb22eb038e54215645c2c74e93b62fc20e
@property @pulumi.getter def value(self) -> Optional[pulumi.Input[str]]: '\n Value is the taint value the toleration matches to. If the operator is Exists, the value should be empty, otherwise just a regular string.\n ' return pulumi.get(self, 'value')
Value is the taint value the toleration matches to. If the operator is Exists, the value should be empty, otherwise just a regular string.
operators/ibm-block-csi-operator-community/python/pulumi_pulumi_kubernetes_crds_operators_ibm_block_csi_operator_community/csi/v1/_inputs.py
value
pulumi/pulumi-kubernetes-crds
0
python
@property @pulumi.getter def value(self) -> Optional[pulumi.Input[str]]: '\n \n ' return pulumi.get(self, 'value')
@property @pulumi.getter def value(self) -> Optional[pulumi.Input[str]]: '\n \n ' return pulumi.get(self, 'value')<|docstring|>Value is the taint value the toleration matches to. If the operator is Exists, the value should be empty, otherwise just a regular string.<|endoftext|>
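The three toleration getters in the records above (operator, tolerationSeconds, value) map one-to-one onto the fields of a Kubernetes pod toleration. A minimal sketch of such a toleration in wire format follows; the key and effect entries are standard toleration fields assumed here for completeness rather than taken from these records, and every value is a placeholder.

# Illustrative toleration in wire format; all values are placeholders.
toleration = {
    'key': 'node.kubernetes.io/unreachable',   # assumed field, not shown in these records
    'operator': 'Exists',                      # Exists acts as a wildcard for the taint value
    'effect': 'NoExecute',                     # assumed field, not shown in these records
    'tolerationSeconds': 300,                  # only honoured when effect is NoExecute
}
# 'value' is omitted because operator is Exists; with operator Equal it would
# carry the taint value to match, per the value getter above.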
96c56414d6c976ea59bca3d4bdc116ca4bddbcfb02206dc949e0d31c8f32aa01
def __init__(__self__, *, name: pulumi.Input[str], repository: pulumi.Input[str], tag: pulumi.Input[str], image_pull_policy: Optional[pulumi.Input[str]]=None): '\n :param pulumi.Input[str] name: The name of the csi sidecar image\n :param pulumi.Input[str] repository: The repository of the csi sidecar image\n :param pulumi.Input[str] tag: The tag of the csi sidecar image\n :param pulumi.Input[str] image_pull_policy: The pullPolicy of the csi sidecar image\n ' pulumi.set(__self__, 'name', name) pulumi.set(__self__, 'repository', repository) pulumi.set(__self__, 'tag', tag) if (image_pull_policy is not None): pulumi.set(__self__, 'image_pull_policy', image_pull_policy)
:param pulumi.Input[str] name: The name of the csi sidecar image :param pulumi.Input[str] repository: The repository of the csi sidecar image :param pulumi.Input[str] tag: The tag of the csi sidecar image :param pulumi.Input[str] image_pull_policy: The pullPolicy of the csi sidecar image
operators/ibm-block-csi-operator-community/python/pulumi_pulumi_kubernetes_crds_operators_ibm_block_csi_operator_community/csi/v1/_inputs.py
__init__
pulumi/pulumi-kubernetes-crds
0
python
def __init__(__self__, *, name: pulumi.Input[str], repository: pulumi.Input[str], tag: pulumi.Input[str], image_pull_policy: Optional[pulumi.Input[str]]=None): '\n :param pulumi.Input[str] name: The name of the csi sidecar image\n :param pulumi.Input[str] repository: The repository of the csi sidecar image\n :param pulumi.Input[str] tag: The tag of the csi sidecar image\n :param pulumi.Input[str] image_pull_policy: The pullPolicy of the csi sidecar image\n ' pulumi.set(__self__, 'name', name) pulumi.set(__self__, 'repository', repository) pulumi.set(__self__, 'tag', tag) if (image_pull_policy is not None): pulumi.set(__self__, 'image_pull_policy', image_pull_policy)
def __init__(__self__, *, name: pulumi.Input[str], repository: pulumi.Input[str], tag: pulumi.Input[str], image_pull_policy: Optional[pulumi.Input[str]]=None): '\n :param pulumi.Input[str] name: The name of the csi sidecar image\n :param pulumi.Input[str] repository: The repository of the csi sidecar image\n :param pulumi.Input[str] tag: The tag of the csi sidecar image\n :param pulumi.Input[str] image_pull_policy: The pullPolicy of the csi sidecar image\n ' pulumi.set(__self__, 'name', name) pulumi.set(__self__, 'repository', repository) pulumi.set(__self__, 'tag', tag) if (image_pull_policy is not None): pulumi.set(__self__, 'image_pull_policy', image_pull_policy)<|docstring|>:param pulumi.Input[str] name: The name of the csi sidecar image :param pulumi.Input[str] repository: The repository of the csi sidecar image :param pulumi.Input[str] tag: The tag of the csi sidecar image :param pulumi.Input[str] image_pull_policy: The pullPolicy of the csi sidecar image<|endoftext|>
995aa8dc2ffc64a30940b58978b5187b273cfebd81ecb60c34ee30ad4e4b4640
@property @pulumi.getter def name(self) -> pulumi.Input[str]: '\n The name of the csi sidecar image\n ' return pulumi.get(self, 'name')
The name of the csi sidecar image
operators/ibm-block-csi-operator-community/python/pulumi_pulumi_kubernetes_crds_operators_ibm_block_csi_operator_community/csi/v1/_inputs.py
name
pulumi/pulumi-kubernetes-crds
0
python
@property @pulumi.getter def name(self) -> pulumi.Input[str]: '\n \n ' return pulumi.get(self, 'name')
@property @pulumi.getter def name(self) -> pulumi.Input[str]: '\n \n ' return pulumi.get(self, 'name')<|docstring|>The name of the csi sidecar image<|endoftext|>
c4af764cc3c228af428be5db4e0ff708e605cf5918f11043c36ed7d09c2d6488
@property @pulumi.getter def repository(self) -> pulumi.Input[str]: '\n The repository of the csi sidecar image\n ' return pulumi.get(self, 'repository')
The repository of the csi sidecar image
operators/ibm-block-csi-operator-community/python/pulumi_pulumi_kubernetes_crds_operators_ibm_block_csi_operator_community/csi/v1/_inputs.py
repository
pulumi/pulumi-kubernetes-crds
0
python
@property @pulumi.getter def repository(self) -> pulumi.Input[str]: '\n \n ' return pulumi.get(self, 'repository')
@property @pulumi.getter def repository(self) -> pulumi.Input[str]: '\n \n ' return pulumi.get(self, 'repository')<|docstring|>The repository of the csi sidecar image<|endoftext|>
62829ebcd6093edeb2d50b15c2fa9aeee4501bf9e627c8f119c8304d5798a303
@property @pulumi.getter def tag(self) -> pulumi.Input[str]: '\n The tag of the csi sidecar image\n ' return pulumi.get(self, 'tag')
The tag of the csi sidecar image
operators/ibm-block-csi-operator-community/python/pulumi_pulumi_kubernetes_crds_operators_ibm_block_csi_operator_community/csi/v1/_inputs.py
tag
pulumi/pulumi-kubernetes-crds
0
python
@property @pulumi.getter def tag(self) -> pulumi.Input[str]: '\n \n ' return pulumi.get(self, 'tag')
@property @pulumi.getter def tag(self) -> pulumi.Input[str]: '\n \n ' return pulumi.get(self, 'tag')<|docstring|>The tag of the csi sidecar image<|endoftext|>
03d48c8084798eaf07d29f41a9b9263b29fd38f8c64a030a944bcb49e5803cb1
@property @pulumi.getter(name='imagePullPolicy') def image_pull_policy(self) -> Optional[pulumi.Input[str]]: '\n The pullPolicy of the csi sidecar image\n ' return pulumi.get(self, 'image_pull_policy')
The pullPolicy of the csi sidecar image
operators/ibm-block-csi-operator-community/python/pulumi_pulumi_kubernetes_crds_operators_ibm_block_csi_operator_community/csi/v1/_inputs.py
image_pull_policy
pulumi/pulumi-kubernetes-crds
0
python
@property @pulumi.getter(name='imagePullPolicy') def image_pull_policy(self) -> Optional[pulumi.Input[str]]: '\n \n ' return pulumi.get(self, 'image_pull_policy')
@property @pulumi.getter(name='imagePullPolicy') def image_pull_policy(self) -> Optional[pulumi.Input[str]]: '\n \n ' return pulumi.get(self, 'image_pull_policy')<|docstring|>The pullPolicy of the csi sidecar image<|endoftext|>
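The four getters above (name, repository, tag and the optional imagePullPolicy, whose wire name comes from the @pulumi.getter(name=...) decorator) together describe one CSI sidecar image. A small sketch of such an entry in wire format, with placeholder values rather than the operator's real defaults:

# Placeholder sidecar image entry using the wire-format keys from the getters above.
sidecar_image = {
    'name': 'csi-provisioner',
    'repository': 'k8s.gcr.io/sig-storage/csi-provisioner',   # placeholder registry path
    'tag': 'v2.0.0',
    'imagePullPolicy': 'IfNotPresent',                         # optional field
}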
ebde0a257f57d193268d6ca2f13646e84699b14640005b1554ce458e663c71b3
def __init__(__self__, *, controller_ready: pulumi.Input[bool], node_ready: pulumi.Input[bool], phase: pulumi.Input[str], version: pulumi.Input[str]): '\n IBMBlockCSIStatus defines the observed state of IBMBlockCSI\n :param pulumi.Input[str] phase: Phase is the driver running phase\n :param pulumi.Input[str] version: Version is the current driver version\n ' pulumi.set(__self__, 'controller_ready', controller_ready) pulumi.set(__self__, 'node_ready', node_ready) pulumi.set(__self__, 'phase', phase) pulumi.set(__self__, 'version', version)
IBMBlockCSIStatus defines the observed state of IBMBlockCSI :param pulumi.Input[str] phase: Phase is the driver running phase :param pulumi.Input[str] version: Version is the current driver version
operators/ibm-block-csi-operator-community/python/pulumi_pulumi_kubernetes_crds_operators_ibm_block_csi_operator_community/csi/v1/_inputs.py
__init__
pulumi/pulumi-kubernetes-crds
0
python
def __init__(__self__, *, controller_ready: pulumi.Input[bool], node_ready: pulumi.Input[bool], phase: pulumi.Input[str], version: pulumi.Input[str]): '\n IBMBlockCSIStatus defines the observed state of IBMBlockCSI\n :param pulumi.Input[str] phase: Phase is the driver running phase\n :param pulumi.Input[str] version: Version is the current driver version\n ' pulumi.set(__self__, 'controller_ready', controller_ready) pulumi.set(__self__, 'node_ready', node_ready) pulumi.set(__self__, 'phase', phase) pulumi.set(__self__, 'version', version)
def __init__(__self__, *, controller_ready: pulumi.Input[bool], node_ready: pulumi.Input[bool], phase: pulumi.Input[str], version: pulumi.Input[str]): '\n IBMBlockCSIStatus defines the observed state of IBMBlockCSI\n :param pulumi.Input[str] phase: Phase is the driver running phase\n :param pulumi.Input[str] version: Version is the current driver version\n ' pulumi.set(__self__, 'controller_ready', controller_ready) pulumi.set(__self__, 'node_ready', node_ready) pulumi.set(__self__, 'phase', phase) pulumi.set(__self__, 'version', version)<|docstring|>IBMBlockCSIStatus defines the observed state of IBMBlockCSI :param pulumi.Input[str] phase: Phase is the driver running phase :param pulumi.Input[str] version: Version is the current driver version<|endoftext|>
bd6f8d168a899ef88b3140dabfd7cfecc372a97619ec53cd460c38fa36d489ef
@property @pulumi.getter def phase(self) -> pulumi.Input[str]: '\n Phase is the driver running phase\n ' return pulumi.get(self, 'phase')
Phase is the driver running phase
operators/ibm-block-csi-operator-community/python/pulumi_pulumi_kubernetes_crds_operators_ibm_block_csi_operator_community/csi/v1/_inputs.py
phase
pulumi/pulumi-kubernetes-crds
0
python
@property @pulumi.getter def phase(self) -> pulumi.Input[str]: '\n \n ' return pulumi.get(self, 'phase')
@property @pulumi.getter def phase(self) -> pulumi.Input[str]: '\n \n ' return pulumi.get(self, 'phase')<|docstring|>Phase is the driver running phase<|endoftext|>
7aaf505a0e12d7644f2b2d45fbad3629656e3b21c903bd7f4157383f3418273f
@property @pulumi.getter def version(self) -> pulumi.Input[str]: '\n Version is the current driver version\n ' return pulumi.get(self, 'version')
Version is the current driver version
operators/ibm-block-csi-operator-community/python/pulumi_pulumi_kubernetes_crds_operators_ibm_block_csi_operator_community/csi/v1/_inputs.py
version
pulumi/pulumi-kubernetes-crds
0
python
@property @pulumi.getter def version(self) -> pulumi.Input[str]: '\n \n ' return pulumi.get(self, 'version')
@property @pulumi.getter def version(self) -> pulumi.Input[str]: '\n \n ' return pulumi.get(self, 'version')<|docstring|>Version is the current driver version<|endoftext|>
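The IBMBlockCSIStatus constructor above takes exactly four keyword-only arguments, two of which (phase and version) are exposed by the getters that follow it. A minimal sketch of assembling those arguments; the generated args class itself is not visible in these records, so it appears only as a commented, hypothetical name.

# Keyword arguments matching the IBMBlockCSIStatus __init__ shown above; values are illustrative.
status_kwargs = dict(
    controller_ready=True,
    node_ready=True,
    phase='Running',    # the driver running phase
    version='1.2.0',    # the current driver version
)
# IBMBlockCSIStatusArgs(**status_kwargs)  # hypothetical class name, not taken from these records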
4c51c550d851828b1f6ad62a53f25747bd7fa42f5e8c05d865aac67ce223c4c4
@django_view @ajax_request @admin_cm_permission def cma_ajax_get_table_templates(request): '\n Ajax view fetching template list.\n ' if (request.method == 'GET'): templates = prep_data('admin_cm/template/get_list/', request.session) for item in templates: item['ec2name'] = ec2names_reversed[item['ec2name']] item['memory'] = filesizeformatmb(item['memory']) return messages_ajax.success(templates)
Ajax view fetching template list.
src/wi/views/admin_cm/template.py
cma_ajax_get_table_templates
cc1-cloud/cc1
11
python
@django_view @ajax_request @admin_cm_permission def cma_ajax_get_table_templates(request): '\n \n ' if (request.method == 'GET'): templates = prep_data('admin_cm/template/get_list/', request.session) for item in templates: item['ec2name'] = ec2names_reversed[item['ec2name']] item['memory'] = filesizeformatmb(item['memory']) return messages_ajax.success(templates)
@django_view @ajax_request @admin_cm_permission def cma_ajax_get_table_templates(request): '\n \n ' if (request.method == 'GET'): templates = prep_data('admin_cm/template/get_list/', request.session) for item in templates: item['ec2name'] = ec2names_reversed[item['ec2name']] item['memory'] = filesizeformatmb(item['memory']) return messages_ajax.success(templates)<|docstring|>Ajax view fetching template list.<|endoftext|>
a650b746d44b5c6e2cf26fac0c71cc1c819ad8e3862cf530456a268303661d25
@property def name(self): '\n Returns a room name.\n\n :return: A room name.\n :rtype: str\n ' return self._data.get('room', '')
Returns a room name. :return: A room name. :rtype: str
lc.py
name
liquidthex/nortbot
0
python
@property def name(self): '\n Returns a room name.\n\n :return: A room name.\n :rtype: str\n ' return self._data.get('room', '')
@property def name(self): '\n Returns a room name.\n\n :return: A room name.\n :rtype: str\n ' return self._data.get('room', '')<|docstring|>Returns a room name. :return: A room name. :rtype: str<|endoftext|>
14586989cbfdeb9117dc7af23357997fb17b3cff23b27183a6cc20c223df3ce4
@property def users(self): "\n Returns room users.\n\n :return: Room users.\n :rtype: int\n " return self._data.get('users', 0)
Returns room users. :return: Room users. :rtype: int
lc.py
users
liquidthex/nortbot
0
python
@property def users(self): "\n Returns room users.\n\n :return: Room users.\n :rtype: int\n " return self._data.get('users', 0)
@property def users(self): "\n Returns room users.\n\n :return: Room users.\n :rtype: int\n " return self._data.get('users', 0)<|docstring|>Returns room users. :return: Room users. :rtype: int<|endoftext|>
8094a52ce90846add78f66639fa69ad76f64dca185ea1b7e6ec926056fa1065a
@property def broadcasters(self): '\n Returns room broadcasters.\n\n :return: Room broadcasters.\n :rtype: int\n ' return self._data.get('broadcasters', 0)
Returns room broadcasters. :return: Room broadcasters. :rtype: int
lc.py
broadcasters
liquidthex/nortbot
0
python
@property def broadcasters(self): '\n Returns room broadcasters.\n\n :return: Room broadcasters.\n :rtype: int\n ' return self._data.get('broadcasters', 0)
@property def broadcasters(self): '\n Returns room broadcasters.\n\n :return: Room broadcasters.\n :rtype: int\n ' return self._data.get('broadcasters', 0)<|docstring|>Returns room broadcasters. :return: Room broadcasters. :rtype: int<|endoftext|>
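The three Room properties above read a single payload dict held on self._data and fall back to '' and 0 when a field is missing. A tiny sketch of that payload shape, with invented values; on_update() further down builds Room objects from entries like this.

# Shape of the per-room payload the Room getters read; the values are made up.
room_data = {'room': 'example_room', 'users': 42, 'broadcasters': 3}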
963494d18f88933e3f9d9e384db78bfccdb855d7eb7d92bd4999bd73afa4625e
def __init__(self, bot, watch=None): '\n Initialize the LiveCount class.\n\n :param bot: An instance of NortBot.\n :param watch: A room name to watch.\n :type watch: str | None\n ' self._bot = bot self._watch_rooms = [watch] self._watch_interval = 20 self._last_update_time = 0 self._most_active = Room() self._connected = False self._ws = None
Initialize the LiveCount class. :param bot: An instance of NortBot. :param watch: A room name to watch. :type watch: str | None
lc.py
__init__
liquidthex/nortbot
0
python
def __init__(self, bot, watch=None): '\n Initialize the LiveCount class.\n\n :param bot: An instance of NortBot.\n :param watch: A room name to watch.\n :type watch: str | None\n ' self._bot = bot self._watch_rooms = [watch] self._watch_interval = 20 self._last_update_time = 0 self._most_active = Room() self._connected = False self._ws = None
def __init__(self, bot, watch=None): '\n Initialize the LiveCount class.\n\n :param bot: An instance of NortBot.\n :param watch: A room name to watch.\n :type watch: str | None\n ' self._bot = bot self._watch_rooms = [watch] self._watch_interval = 20 self._last_update_time = 0 self._most_active = Room() self._connected = False self._ws = None<|docstring|>Initialize the LiveCount class. :param bot: An instance of NortBot. :param watch: A room name to watch. :type watch: str | None<|endoftext|>
860ed382194f1cc220d3491fabebd6cb56f7223ef9386d820ab15ca6b0b3f2a2
@property def connected(self): '\n Returns a bool based on the connection state.\n\n :return: True if connected.\n :rtype: bool\n ' return self._connected
Returns a bool based on the connection state. :return: True if connected. :rtype: bool
lc.py
connected
liquidthex/nortbot
0
python
@property def connected(self): '\n Returns a bool based on the connection state.\n\n :return: True if connected.\n :rtype: bool\n ' return self._connected
@property def connected(self): '\n Returns a bool based on the connection state.\n\n :return: True if connected.\n :rtype: bool\n ' return self._connected<|docstring|>Returns a bool based on the connection state. :return: True if connected. :rtype: bool<|endoftext|>
a7d7df66d4ce3328c533e72f8d82ee1b32e6613a9c648f365a728ce94f71fa62
def most_active(self): '\n Returns the most active room.\n\n :return: The room with the most users.\n :rtype: Room\n ' if self.connected: ma = ('Most active: %s, Users: %s, Broadcasters: %s' % (self._most_active.name, self._most_active.users, self._most_active.broadcasters)) self._bot.responder(ma)
Returns the most active room. :return: The room with the most users. :rtype: Room
lc.py
most_active
liquidthex/nortbot
0
python
def most_active(self): '\n Returns the most active room.\n\n :return: The room with the most users.\n :rtype: Room\n ' if self.connected: ma = ('Most active: %s, Users: %s, Broadcasters: %s' % (self._most_active.name, self._most_active.users, self._most_active.broadcasters)) self._bot.responder(ma)
def most_active(self): '\n Returns the most active room.\n\n :return: The room with the most users.\n :rtype: Room\n ' if self.connected: ma = ('Most active: %s, Users: %s, Broadcasters: %s' % (self._most_active.name, self._most_active.users, self._most_active.broadcasters)) self._bot.responder(ma)<|docstring|>Returns the most active room. :return: The room with the most users. :rtype: Room<|endoftext|>
6d7006e74447cc2b7bb133753035b57774d6fe58eb67509ec349ac507a3260b9
def status(self): '\n Show live count status.\n ' if self.connected: stats = [('Live connected: %s' % self.connected), ('Live count watch room %s' % len(self._watch_rooms)), ('Live count interval: %s' % self._watch_interval), ('Most active room: %s' % self.most_active)] self._bot.responder('\n'.join(stats))
Show live count status.
lc.py
status
liquidthex/nortbot
0
python
def status(self): '\n \n ' if self.connected: stats = [('Live connected: %s' % self.connected), ('Live count watch room %s' % len(self._watch_rooms)), ('Live count interval: %s' % self._watch_interval), ('Most active room: %s' % self.most_active)] self._bot.responder('\n'.join(stats))
def status(self): '\n \n ' if self.connected: stats = [('Live connected: %s' % self.connected), ('Live count watch room %s' % len(self._watch_rooms)), ('Live count interval: %s' % self._watch_interval), ('Most active room: %s' % self.most_active)] self._bot.responder('\n'.join(stats))<|docstring|>Show live count status.<|endoftext|>
e6d33924dd3823bb2c540a62b4b0b307d7b7abefcada03e1b6db58adc2062169
def add_watch_room(self, room_name): '\n Set a room name to watch live count for.\n\n :param room_name: The room name to watch.\n :type room_name: str\n ' if self.connected: self._watch_rooms.append(room_name) self._bot.responder(('Added %s to live watch.' % room_name))
Set a room name to watch live count for. :param room_name: The room name to watch. :type room_name: str
lc.py
add_watch_room
liquidthex/nortbot
0
python
def add_watch_room(self, room_name): '\n Set a room name to watch live count for.\n\n :param room_name: The room name to watch.\n :type room_name: str\n ' if self.connected: self._watch_rooms.append(room_name) self._bot.responder(('Added %s to live watch.' % room_name))
def add_watch_room(self, room_name): '\n Set a room name to watch live count for.\n\n :param room_name: The room name to watch.\n :type room_name: str\n ' if self.connected: self._watch_rooms.append(room_name) self._bot.responder(('Added %s to live watch.' % room_name))<|docstring|>Set a room name to watch live count for. :param room_name: The room name to watch. :type room_name: str<|endoftext|>
40eee59fa589897fd440822a12350512b7346aae6005800625da21dc5e7c054a
def remove_watch_room(self, room_name): '\n Remove a room name from the live count watch rooms.\n\n :param room_name: The room name to remove.\n :type room_name: str\n ' if self.connected: if (room_name in self._watch_rooms): self._watch_rooms.remove(room_name) self._bot.responder(('Removed %s from watch rooms.' % room_name)) else: self._bot.responder(('%s is not in the watch rooms.' % room_name))
Remove a room name from the live count watch rooms. :param room_name: The room name to remove. :type room_name: str
lc.py
remove_watch_room
liquidthex/nortbot
0
python
def remove_watch_room(self, room_name): '\n Remove a room name from the live count watch rooms.\n\n :param room_name: The room name to remove.\n :type room_name: str\n ' if self.connected: if (room_name in self._watch_rooms): self._watch_rooms.remove(room_name) self._bot.responder(('Removed %s from watch rooms.' % room_name)) else: self._bot.responder(('%s is not in the watch rooms.' % room_name))
def remove_watch_room(self, room_name): '\n Remove a room name from the live count watch rooms.\n\n :param room_name: The room name to remove.\n :type room_name: str\n ' if self.connected: if (room_name in self._watch_rooms): self._watch_rooms.remove(room_name) self._bot.responder(('Removed %s from watch rooms.' % room_name)) else: self._bot.responder(('%s is not in the watch rooms.' % room_name))<|docstring|>Remove a room name from the live count watch rooms. :param room_name: The room name to remove. :type room_name: str<|endoftext|>
9ad4744b7ac18ebd6be28c579c12e83d0b6492709f41d26f8399dd63e0dccad1
def set_watch_interval(self, interval): '\n Set the watch interval time.\n\n :param interval: Watch interval time in seconds.\n :type interval: int\n ' if self.connected: self._watch_interval = interval self._bot.responder(('Live count watch interval: %s' % interval))
Set the watch interval time. :param interval: Watch interval time in seconds. :type interval: int
lc.py
set_watch_interval
liquidthex/nortbot
0
python
def set_watch_interval(self, interval): '\n Set the watch interval time.\n\n :param interval: Watch interval time in seconds.\n :type interval: int\n ' if self.connected: self._watch_interval = interval self._bot.responder(('Live count watch interval: %s' % interval))
def set_watch_interval(self, interval): '\n Set the watch interval time.\n\n :param interval: Watch interval time in seconds.\n :type interval: int\n ' if self.connected: self._watch_interval = interval self._bot.responder(('Live count watch interval: %s' % interval))<|docstring|>Set the watch interval time. :param interval: Watch interval time in seconds. :type interval: int<|endoftext|>
0c20fa35c561d87e3a4c8e09a4386ddbb130cb2e8d12c73443cd711520af8a5e
def connect(self): '\n Connect to the websocket endpoint.\n ' tc_header = {'User-Agent': 'Mozilla/5.0 (Windows NT 6.1; WOW64; rv:66.0) Gecko/20100101 Firefox/66.0', 'Accept-Language': 'en-US,en;q=0.5', 'Accept-Encoding': 'gzip, deflate, br', 'Sec-WebSocket-Extensions': 'permessage-deflate'} self._ws = websocket.create_connection('wss://lb-stat.tinychat.com/leaderboard', header=tc_header, origin='https://tinychat.com') if self._ws.connected: self._bot.responder('Live count connected.') self._connected = self._ws.connected self._listener() else: log.debug('connection to live counter failed')
Connect to the websocket endpoint.
lc.py
connect
liquidthex/nortbot
0
python
def connect(self): '\n \n ' tc_header = {'User-Agent': 'Mozilla/5.0 (Windows NT 6.1; WOW64; rv:66.0) Gecko/20100101 Firefox/66.0', 'Accept-Language': 'en-US,en;q=0.5', 'Accept-Encoding': 'gzip, deflate, br', 'Sec-WebSocket-Extensions': 'permessage-deflate'} self._ws = websocket.create_connection('wss://lb-stat.tinychat.com/leaderboard', header=tc_header, origin='https://tinychat.com') if self._ws.connected: self._bot.responder('Live count connected.') self._connected = self._ws.connected self._listener() else: log.debug('connection to live counter failed')
def connect(self): '\n \n ' tc_header = {'User-Agent': 'Mozilla/5.0 (Windows NT 6.1; WOW64; rv:66.0) Gecko/20100101 Firefox/66.0', 'Accept-Language': 'en-US,en;q=0.5', 'Accept-Encoding': 'gzip, deflate, br', 'Sec-WebSocket-Extensions': 'permessage-deflate'} self._ws = websocket.create_connection('wss://lb-stat.tinychat.com/leaderboard', header=tc_header, origin='https://tinychat.com') if self._ws.connected: self._bot.responder('Live count connected.') self._connected = self._ws.connected self._listener() else: log.debug('connection to live counter failed')<|docstring|>Connect to the websocket endpoint.<|endoftext|>
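Because connect() performs the websocket handshake and then drops into self._listener() (not shown in these records and assumed to loop until disconnect), a caller would normally run it on a background thread. A hedged wiring sketch; the stub bot supplies only the responder(text) method that these LiveCount methods call, and the import path is taken from the record's path field (lc.py).

import threading

from lc import LiveCount   # module path assumed from the record's path field

class _StubBot(object):
    # LiveCount only calls bot.responder(text) in the methods shown above.
    def responder(self, text):
        print(text)

lc = LiveCount(_StubBot(), watch='some_room')
listener = threading.Thread(target=lc.connect)   # connect() blocks inside _listener()
listener.daemon = True
listener.start()
# Later, from the main thread: lc.most_active(); lc.status(); lc.disconnect()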
34617c37321f948071c10e828e3cf82b08a4d16e0ccc16712e500249efea139a
def disconnect(self): '\n Disconnect from the websocket.\n ' self._bot.responder('Closing live count.') self._connected = False self._ws = None self._watch_rooms = []
Disconnect from the websocket.
lc.py
disconnect
liquidthex/nortbot
0
python
def disconnect(self): '\n \n ' self._bot.responder('Closing live count.') self._connected = False self._ws = None self._watch_rooms = []
def disconnect(self): '\n \n ' self._bot.responder('Closing live count.') self._connected = False self._ws = None self._watch_rooms = []<|docstring|>Disconnect from the websocket.<|endoftext|>
b3960f4ad295092b72415d9a45f5d1198ff57d4e35d8c18b44468237ca241d15
def on_update(self, data): '\n Received whenever the live count gets updated.\n\n :param data: A list containing room updates.\n :type data: list\n ' log.debug(('live count items %s' % len(data))) ts = time.time() rooms = [] for room_data in data: room = Room(**room_data) if (room.users > self._most_active.users): self._most_active = room if (len(self._watch_rooms) > 0): if (room.name in self._watch_rooms): info = ('Watching: %s, Users: %s, Broadcasters: %s' % (room.name, room.users, room.broadcasters)) rooms.append(info) if (len(rooms) > 0): if ((ts - self._last_update_time) >= self._watch_interval): self._last_update_time = ts self._bot.responder('\n'.join(rooms))
Received whenever the live count gets updated. :param data: A list containing room updates. :type data: list
lc.py
on_update
liquidthex/nortbot
0
python
def on_update(self, data): '\n Received whenever the live count gets updated.\n\n :param data: A list containing room updates.\n :type data: list\n ' log.debug(('live count items %s' % len(data))) ts = time.time() rooms = [] for room_data in data: room = Room(**room_data) if (room.users > self._most_active.users): self._most_active = room if (len(self._watch_rooms) > 0): if (room.name in self._watch_rooms): info = ('Watching: %s, Users: %s, Broadcasters: %s' % (room.name, room.users, room.broadcasters)) rooms.append(info) if (len(rooms) > 0): if ((ts - self._last_update_time) >= self._watch_interval): self._last_update_time = ts self._bot.responder('\n'.join(rooms))
def on_update(self, data): '\n Received whenever the live count gets updated.\n\n :param data: A list containing room updates.\n :type data: list\n ' log.debug(('live count items %s' % len(data))) ts = time.time() rooms = [] for room_data in data: room = Room(**room_data) if (room.users > self._most_active.users): self._most_active = room if (len(self._watch_rooms) > 0): if (room.name in self._watch_rooms): info = ('Watching: %s, Users: %s, Broadcasters: %s' % (room.name, room.users, room.broadcasters)) rooms.append(info) if (len(rooms) > 0): if ((ts - self._last_update_time) >= self._watch_interval): self._last_update_time = ts self._bot.responder('\n'.join(rooms))<|docstring|>Received whenever the live count gets updated. :param data: A list containing room updates. :type data: list<|endoftext|>
d5d4fa3d9468d9aaec61eafdba7d1590b84e3cca56b2f527168f23e76dd93565
def getch(timeout=0.01): '\n Retrieves a character from stdin.\n\n Returns None if no character is available within the timeout.\n Blocks if timeout < 0.\n ' if (not sys.stdin.isatty()): return sys.stdin.read(1) fileno = sys.stdin.fileno() old_settings = termios.tcgetattr(fileno) ch = None try: tty.setraw(fileno) rlist = [fileno] if (timeout >= 0): [rlist, _, _] = select(rlist, [], [], timeout) if (fileno in rlist): ch = sys.stdin.read(1) except Exception as ex: print('getch', ex) raise OSError finally: termios.tcsetattr(fileno, termios.TCSADRAIN, old_settings) return ch
Retrieves a character from stdin. Returns None if no character is available within the timeout. Blocks if timeout < 0.
franka_interface/src/franka_dataflow/getch.py
getch
minoring/franka_ros_interface
91
python
def getch(timeout=0.01): '\n Retrieves a character from stdin.\n\n Returns None if no character is available within the timeout.\n Blocks if timeout < 0.\n ' if (not sys.stdin.isatty()): return sys.stdin.read(1) fileno = sys.stdin.fileno() old_settings = termios.tcgetattr(fileno) ch = None try: tty.setraw(fileno) rlist = [fileno] if (timeout >= 0): [rlist, _, _] = select(rlist, [], [], timeout) if (fileno in rlist): ch = sys.stdin.read(1) except Exception as ex: print('getch', ex) raise OSError finally: termios.tcsetattr(fileno, termios.TCSADRAIN, old_settings) return ch
def getch(timeout=0.01): '\n Retrieves a character from stdin.\n\n Returns None if no character is available within the timeout.\n Blocks if timeout < 0.\n ' if (not sys.stdin.isatty()): return sys.stdin.read(1) fileno = sys.stdin.fileno() old_settings = termios.tcgetattr(fileno) ch = None try: tty.setraw(fileno) rlist = [fileno] if (timeout >= 0): [rlist, _, _] = select(rlist, [], [], timeout) if (fileno in rlist): ch = sys.stdin.read(1) except Exception as ex: print('getch', ex) raise OSError finally: termios.tcsetattr(fileno, termios.TCSADRAIN, old_settings) return ch<|docstring|>Retrieves a character from stdin. Returns None if no character is available within the timeout. Blocks if timeout < 0.<|endoftext|>
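getch() polls stdin through select() with a short timeout, so a control loop can check for a key press without stalling. A small usage sketch; the import path is a guess based on the record's path field (franka_interface/src/franka_dataflow/getch.py).

from franka_dataflow.getch import getch   # import path assumed

# Poll roughly 100 times per second and stop on 'q' or Esc.
done = False
while not done:
    c = getch(timeout=0.01)    # returns None when no key arrives within 10 ms
    if c in ('q', '\x1b'):
        done = True
    # ... per-iteration control work would go here ...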
66b3ebbb296b2b132b1c8f3e7385051413bc3efb82b90da10a908733a9dbd700
def transform(self, X, y=None): 'Transforms data X.\n\n Arguments:\n X (ww.DataTable, pd.DataFrame): Data to transform.\n y (ww.DataColumn, pd.Series, optional): Target data.\n\n Returns:\n ww.DataTable: Transformed X\n ' X_ww = infer_feature_types(X) X = _convert_woodwork_types_wrapper(X_ww.to_dataframe()) if (y is not None): y = infer_feature_types(y) y = _convert_woodwork_types_wrapper(y.to_series()) try: X_t = self._component_obj.transform(X, y) except AttributeError: raise MethodPropertyNotFoundError('Transformer requires a transform method or a component_obj that implements transform') X_t_df = pd.DataFrame(X_t, columns=X.columns, index=X.index) return _retain_custom_types_and_initalize_woodwork(X_ww, X_t_df)
Transforms data X. Arguments: X (ww.DataTable, pd.DataFrame): Data to transform. y (ww.DataColumn, pd.Series, optional): Target data. Returns: ww.DataTable: Transformed X
evalml/pipelines/components/transformers/transformer.py
transform
skvorekn/evalml
0
python
def transform(self, X, y=None): 'Transforms data X.\n\n Arguments:\n X (ww.DataTable, pd.DataFrame): Data to transform.\n y (ww.DataColumn, pd.Series, optional): Target data.\n\n Returns:\n ww.DataTable: Transformed X\n ' X_ww = infer_feature_types(X) X = _convert_woodwork_types_wrapper(X_ww.to_dataframe()) if (y is not None): y = infer_feature_types(y) y = _convert_woodwork_types_wrapper(y.to_series()) try: X_t = self._component_obj.transform(X, y) except AttributeError: raise MethodPropertyNotFoundError('Transformer requires a transform method or a component_obj that implements transform') X_t_df = pd.DataFrame(X_t, columns=X.columns, index=X.index) return _retain_custom_types_and_initalize_woodwork(X_ww, X_t_df)
def transform(self, X, y=None): 'Transforms data X.\n\n Arguments:\n X (ww.DataTable, pd.DataFrame): Data to transform.\n y (ww.DataColumn, pd.Series, optional): Target data.\n\n Returns:\n ww.DataTable: Transformed X\n ' X_ww = infer_feature_types(X) X = _convert_woodwork_types_wrapper(X_ww.to_dataframe()) if (y is not None): y = infer_feature_types(y) y = _convert_woodwork_types_wrapper(y.to_series()) try: X_t = self._component_obj.transform(X, y) except AttributeError: raise MethodPropertyNotFoundError('Transformer requires a transform method or a component_obj that implements transform') X_t_df = pd.DataFrame(X_t, columns=X.columns, index=X.index) return _retain_custom_types_and_initalize_woodwork(X_ww, X_t_df)<|docstring|>Transforms data X. Arguments: X (ww.DataTable, pd.DataFrame): Data to transform. y (ww.DataColumn, pd.Series, optional): Target data. Returns: ww.DataTable: Transformed X<|endoftext|>
9b25c23229d816e8bad0c017f90268fcb78a491222347e34ccaca7f807cef230
def fit_transform(self, X, y=None): 'Fits on X and transforms X\n\n Arguments:\n X (ww.DataTable, pd.DataFrame): Data to fit and transform\n y (ww.DataColumn, pd.Series): Target data\n\n Returns:\n ww.DataTable: Transformed X\n ' X_ww = infer_feature_types(X) X_pd = _convert_woodwork_types_wrapper(X_ww.to_dataframe()) if (y is not None): y_ww = infer_feature_types(y) y_pd = _convert_woodwork_types_wrapper(y_ww.to_series()) try: X_t = self._component_obj.fit_transform(X_pd, y_pd) return _retain_custom_types_and_initalize_woodwork(X_ww, X_t) except AttributeError: try: return self.fit(X, y).transform(X, y) except MethodPropertyNotFoundError as e: raise e
Fits on X and transforms X Arguments: X (ww.DataTable, pd.DataFrame): Data to fit and transform y (ww.DataColumn, pd.Series): Target data Returns: ww.DataTable: Transformed X
evalml/pipelines/components/transformers/transformer.py
fit_transform
skvorekn/evalml
0
python
def fit_transform(self, X, y=None): 'Fits on X and transforms X\n\n Arguments:\n X (ww.DataTable, pd.DataFrame): Data to fit and transform\n y (ww.DataColumn, pd.Series): Target data\n\n Returns:\n ww.DataTable: Transformed X\n ' X_ww = infer_feature_types(X) X_pd = _convert_woodwork_types_wrapper(X_ww.to_dataframe()) if (y is not None): y_ww = infer_feature_types(y) y_pd = _convert_woodwork_types_wrapper(y_ww.to_series()) try: X_t = self._component_obj.fit_transform(X_pd, y_pd) return _retain_custom_types_and_initalize_woodwork(X_ww, X_t) except AttributeError: try: return self.fit(X, y).transform(X, y) except MethodPropertyNotFoundError as e: raise e
def fit_transform(self, X, y=None): 'Fits on X and transforms X\n\n Arguments:\n X (ww.DataTable, pd.DataFrame): Data to fit and transform\n y (ww.DataColumn, pd.Series): Target data\n\n Returns:\n ww.DataTable: Transformed X\n ' X_ww = infer_feature_types(X) X_pd = _convert_woodwork_types_wrapper(X_ww.to_dataframe()) if (y is not None): y_ww = infer_feature_types(y) y_pd = _convert_woodwork_types_wrapper(y_ww.to_series()) try: X_t = self._component_obj.fit_transform(X_pd, y_pd) return _retain_custom_types_and_initalize_woodwork(X_ww, X_t) except AttributeError: try: return self.fit(X, y).transform(X, y) except MethodPropertyNotFoundError as e: raise e<|docstring|>Fits on X and transforms X Arguments: X (ww.DataTable, pd.DataFrame): Data to fit and transform y (ww.DataColumn, pd.Series): Target data Returns: ww.DataTable: Transformed X<|endoftext|>
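Both wrapper methods above accept plain pandas objects (converting to and from Woodwork internally) and hand back a ww.DataTable. A hedged usage sketch; it assumes the standard evalml Imputer transformer component is available in this fork and that the legacy DataTable.to_dataframe() call used by the wrappers themselves is still the way back to pandas.

import pandas as pd

from evalml.pipelines.components import Imputer   # assumed to exist in this fork

X = pd.DataFrame({'a': [1.0, None, 3.0], 'b': [0.1, 0.2, None]})
y = pd.Series([0, 1, 0])

imputer = Imputer()
X_t = imputer.fit_transform(X, y)   # returns a Woodwork DataTable
print(X_t.to_dataframe())           # back to pandas for inspection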
96e2c89ad03983038f05fd05f677bc56605d4006173b62757a5f4b53153fbee9
def make_functions_chart(parser, metric='efpkg'): 'Writes XML file for functions chart and generates Krona plot from it\n\n Args:\n parser (:obj:DiamondParser): parser object with annotated reads\n metric (str): scoring metric (efpkg by default)\n ' outfile = os.path.join(parser.options.get_project_dir(parser.sample.sample_id), parser.options.get_output_subdir(parser.sample.sample_id), ((((parser.sample.sample_id + '_') + parser.end) + '_') + parser.options.xml_name)) with open(outfile, 'w') as out: if (metric == 'proteincount'): metric = 'readcount' out.write((((((('<krona key="false">\n' + '\t<attributes magnitude="') + metric) + '">\n') + '\t\t<attribute display="Protein count">') + metric) + '</attribute>\n')) else: out.write((((('<krona key="false">\n' + '\t<attributes magnitude="') + metric) + '">\n') + '\t\t<attribute display="Read count">readcount</attribute>\n')) if (metric != 'readcount'): out.write((((('\t\t<attribute display="Score:' + metric) + '">') + metric) + '</attribute>\n')) out.write(((((('\t\t<attribute display="AAI %" mono="true">identity</attribute>\n' + '\t</attributes>\n') + ' '.join(['\t<color attribute="identity"', 'valueStart="50"', 'valueEnd="100"', 'hueStart="0"', 'hueEnd="240"', 'default="true"></color>\n'])) + '\t<datasets>\n\t\t<dataset>') + parser.sample.sample_id) + '</dataset>\n\t</datasets>\n')) read_count = 0 total_rpkm = 0.0 groups_rpkm = defaultdict(float) groups_counts = defaultdict(set) groups_identity = defaultdict(list) functions_counts = defaultdict(set) functions_rpkm = defaultdict(float) functions_identity = defaultdict(list) for (_, read) in parser.reads.items(): if (read.status == STATUS_GOOD): read_count += 1 for function in read.functions: total_rpkm += read.functions[function] groups_counts[parser.ref_data.lookup_function_group(function)].add(read.read_id) functions_rpkm[function] += read.functions[function] groups_rpkm[parser.ref_data.lookup_function_group(function)] += read.functions[function] functions_counts[function].add(read.read_id) for hit in read.hit_list.hits: for function in hit.functions: functions_identity[function].append(hit.identity) groups_identity[parser.ref_data.lookup_function_group(function)].append(hit.identity) out.write((((('\t<node name="' + parser.sample.sample_id) + '_') + parser.end) + '">\n')) if (metric != 'readcount'): out.write((('\t\t<readcount><val>' + str(read_count)) + '</val></readcount>\n')) out.write((((((('\t\t<' + metric) + '><val>') + str(total_rpkm)) + '</val></') + metric) + '>\n')) for group in groups_rpkm: out.write((('\t\t<node name="' + group) + '">\n')) if (metric != 'readcount'): out.write((('\t\t\t<readcount><val>' + str(len(groups_counts[group]))) + '</val></readcount>\n')) out.write((((((('\t\t\t<' + metric) + '><val>') + str(groups_rpkm[group])) + '</val></') + metric) + '>\n')) if (group in groups_identity): out.write((('\t\t\t<identity><val>' + str((sum(groups_identity[group]) / len(groups_identity[group])))) + '</val></identity>\n')) else: out.write('\t\t\t<identity><val>0.0</val></identity>\n') for function in parser.ref_data.get_functions_in_group(group): if (function in functions_rpkm): out.write((('\t\t\t<node name="' + function) + '">\n')) if (metric != 'readcount'): out.write((('\t\t\t\t<readcount><val>' + str(len(functions_counts[function]))) + '</val></readcount>\n')) out.write((((((('\t\t\t\t<' + metric) + '><val>') + str(functions_rpkm[function])) + '</val></') + metric) + '>\n')) if (function in functions_identity): out.write((('\t\t\t\t<identity><val>' + 
str((sum(functions_identity[function]) / len(functions_identity[function])))) + '</val></identity>\n')) else: out.write('\t\t\t\t<identity><val>0.0</val></identity>\n') out.write('\t\t\t</node>\n') out.write('\t\t</node>\n') out.write('\t</node>\n</krona>') html_file = os.path.join(parser.options.get_project_dir(parser.sample.sample_id), parser.options.get_output_subdir(parser.sample.sample_id), ((((parser.sample.sample_id + '_') + parser.end) + '_') + parser.options.html_name)) run_external_program([parser.config.krona_path, '-o', html_file, outfile])
Writes XML file for functions chart and generates Krona plot from it Args: parser (:obj:DiamondParser): parser object with annotated reads metric (str): scoring metric (efpkg by default)
lib/fama/output/krona_xml_writer.py
make_functions_chart
aekazakov/FamaProfiling
0
python
def make_functions_chart(parser, metric='efpkg'): 'Writes XML file for functions chart and generates Krona plot from it\n\n Args:\n parser (:obj:DiamondParser): parser object with annotated reads\n metric (str): scoring metric (efpkg by default)\n ' outfile = os.path.join(parser.options.get_project_dir(parser.sample.sample_id), parser.options.get_output_subdir(parser.sample.sample_id), ((((parser.sample.sample_id + '_') + parser.end) + '_') + parser.options.xml_name)) with open(outfile, 'w') as out: if (metric == 'proteincount'): metric = 'readcount' out.write((((((('<krona key="false">\n' + '\t<attributes magnitude="') + metric) + '">\n') + '\t\t<attribute display="Protein count">') + metric) + '</attribute>\n')) else: out.write((((('<krona key="false">\n' + '\t<attributes magnitude="') + metric) + '">\n') + '\t\t<attribute display="Read count">readcount</attribute>\n')) if (metric != 'readcount'): out.write((((('\t\t<attribute display="Score:' + metric) + '">') + metric) + '</attribute>\n')) out.write(((((('\t\t<attribute display="AAI %" mono="true">identity</attribute>\n' + '\t</attributes>\n') + ' '.join(['\t<color attribute="identity"', 'valueStart="50"', 'valueEnd="100"', 'hueStart="0"', 'hueEnd="240"', 'default="true"></color>\n'])) + '\t<datasets>\n\t\t<dataset>') + parser.sample.sample_id) + '</dataset>\n\t</datasets>\n')) read_count = 0 total_rpkm = 0.0 groups_rpkm = defaultdict(float) groups_counts = defaultdict(set) groups_identity = defaultdict(list) functions_counts = defaultdict(set) functions_rpkm = defaultdict(float) functions_identity = defaultdict(list) for (_, read) in parser.reads.items(): if (read.status == STATUS_GOOD): read_count += 1 for function in read.functions: total_rpkm += read.functions[function] groups_counts[parser.ref_data.lookup_function_group(function)].add(read.read_id) functions_rpkm[function] += read.functions[function] groups_rpkm[parser.ref_data.lookup_function_group(function)] += read.functions[function] functions_counts[function].add(read.read_id) for hit in read.hit_list.hits: for function in hit.functions: functions_identity[function].append(hit.identity) groups_identity[parser.ref_data.lookup_function_group(function)].append(hit.identity) out.write((((('\t<node name="' + parser.sample.sample_id) + '_') + parser.end) + '">\n')) if (metric != 'readcount'): out.write((('\t\t<readcount><val>' + str(read_count)) + '</val></readcount>\n')) out.write((((((('\t\t<' + metric) + '><val>') + str(total_rpkm)) + '</val></') + metric) + '>\n')) for group in groups_rpkm: out.write((('\t\t<node name="' + group) + '">\n')) if (metric != 'readcount'): out.write((('\t\t\t<readcount><val>' + str(len(groups_counts[group]))) + '</val></readcount>\n')) out.write((((((('\t\t\t<' + metric) + '><val>') + str(groups_rpkm[group])) + '</val></') + metric) + '>\n')) if (group in groups_identity): out.write((('\t\t\t<identity><val>' + str((sum(groups_identity[group]) / len(groups_identity[group])))) + '</val></identity>\n')) else: out.write('\t\t\t<identity><val>0.0</val></identity>\n') for function in parser.ref_data.get_functions_in_group(group): if (function in functions_rpkm): out.write((('\t\t\t<node name="' + function) + '">\n')) if (metric != 'readcount'): out.write((('\t\t\t\t<readcount><val>' + str(len(functions_counts[function]))) + '</val></readcount>\n')) out.write((((((('\t\t\t\t<' + metric) + '><val>') + str(functions_rpkm[function])) + '</val></') + metric) + '>\n')) if (function in functions_identity): out.write((('\t\t\t\t<identity><val>' + 
str((sum(functions_identity[function]) / len(functions_identity[function])))) + '</val></identity>\n')) else: out.write('\t\t\t\t<identity><val>0.0</val></identity>\n') out.write('\t\t\t</node>\n') out.write('\t\t</node>\n') out.write('\t</node>\n</krona>') html_file = os.path.join(parser.options.get_project_dir(parser.sample.sample_id), parser.options.get_output_subdir(parser.sample.sample_id), ((((parser.sample.sample_id + '_') + parser.end) + '_') + parser.options.html_name)) run_external_program([parser.config.krona_path, '-o', html_file, outfile])
def make_functions_chart(parser, metric='efpkg'): 'Writes XML file for functions chart and generates Krona plot from it\n\n Args:\n parser (:obj:DiamondParser): parser object with annotated reads\n metric (str): scoring metric (efpkg by default)\n ' outfile = os.path.join(parser.options.get_project_dir(parser.sample.sample_id), parser.options.get_output_subdir(parser.sample.sample_id), ((((parser.sample.sample_id + '_') + parser.end) + '_') + parser.options.xml_name)) with open(outfile, 'w') as out: if (metric == 'proteincount'): metric = 'readcount' out.write((((((('<krona key="false">\n' + '\t<attributes magnitude="') + metric) + '">\n') + '\t\t<attribute display="Protein count">') + metric) + '</attribute>\n')) else: out.write((((('<krona key="false">\n' + '\t<attributes magnitude="') + metric) + '">\n') + '\t\t<attribute display="Read count">readcount</attribute>\n')) if (metric != 'readcount'): out.write((((('\t\t<attribute display="Score:' + metric) + '">') + metric) + '</attribute>\n')) out.write(((((('\t\t<attribute display="AAI %" mono="true">identity</attribute>\n' + '\t</attributes>\n') + ' '.join(['\t<color attribute="identity"', 'valueStart="50"', 'valueEnd="100"', 'hueStart="0"', 'hueEnd="240"', 'default="true"></color>\n'])) + '\t<datasets>\n\t\t<dataset>') + parser.sample.sample_id) + '</dataset>\n\t</datasets>\n')) read_count = 0 total_rpkm = 0.0 groups_rpkm = defaultdict(float) groups_counts = defaultdict(set) groups_identity = defaultdict(list) functions_counts = defaultdict(set) functions_rpkm = defaultdict(float) functions_identity = defaultdict(list) for (_, read) in parser.reads.items(): if (read.status == STATUS_GOOD): read_count += 1 for function in read.functions: total_rpkm += read.functions[function] groups_counts[parser.ref_data.lookup_function_group(function)].add(read.read_id) functions_rpkm[function] += read.functions[function] groups_rpkm[parser.ref_data.lookup_function_group(function)] += read.functions[function] functions_counts[function].add(read.read_id) for hit in read.hit_list.hits: for function in hit.functions: functions_identity[function].append(hit.identity) groups_identity[parser.ref_data.lookup_function_group(function)].append(hit.identity) out.write((((('\t<node name="' + parser.sample.sample_id) + '_') + parser.end) + '">\n')) if (metric != 'readcount'): out.write((('\t\t<readcount><val>' + str(read_count)) + '</val></readcount>\n')) out.write((((((('\t\t<' + metric) + '><val>') + str(total_rpkm)) + '</val></') + metric) + '>\n')) for group in groups_rpkm: out.write((('\t\t<node name="' + group) + '">\n')) if (metric != 'readcount'): out.write((('\t\t\t<readcount><val>' + str(len(groups_counts[group]))) + '</val></readcount>\n')) out.write((((((('\t\t\t<' + metric) + '><val>') + str(groups_rpkm[group])) + '</val></') + metric) + '>\n')) if (group in groups_identity): out.write((('\t\t\t<identity><val>' + str((sum(groups_identity[group]) / len(groups_identity[group])))) + '</val></identity>\n')) else: out.write('\t\t\t<identity><val>0.0</val></identity>\n') for function in parser.ref_data.get_functions_in_group(group): if (function in functions_rpkm): out.write((('\t\t\t<node name="' + function) + '">\n')) if (metric != 'readcount'): out.write((('\t\t\t\t<readcount><val>' + str(len(functions_counts[function]))) + '</val></readcount>\n')) out.write((((((('\t\t\t\t<' + metric) + '><val>') + str(functions_rpkm[function])) + '</val></') + metric) + '>\n')) if (function in functions_identity): out.write((('\t\t\t\t<identity><val>' + 
str((sum(functions_identity[function]) / len(functions_identity[function])))) + '</val></identity>\n')) else: out.write('\t\t\t\t<identity><val>0.0</val></identity>\n') out.write('\t\t\t</node>\n') out.write('\t\t</node>\n') out.write('\t</node>\n</krona>') html_file = os.path.join(parser.options.get_project_dir(parser.sample.sample_id), parser.options.get_output_subdir(parser.sample.sample_id), ((((parser.sample.sample_id + '_') + parser.end) + '_') + parser.options.html_name)) run_external_program([parser.config.krona_path, '-o', html_file, outfile])<|docstring|>Writes XML file for functions chart and generates Krona plot from it Args: parser (:obj:DiamondParser): parser object with annotated reads metric (str): scoring metric (efpkg by default)<|endoftext|>
1e329a023b4c0824cceea67d0fa57787dc07bbc2947a84001838ddb2adaac801
def get_taxon_xml(tax_profile, taxid, offset, metric='efpkg'): "Returns XML node for a phylogenetic tree node and all its children\n\n Args:\n tax_profile (:obj:TaxonomyProfile): taxonomy profile\n taxid (str): taxonomy identifier of a node of interest\n offset (int): number of starting tabs\n metric (str): scoring metric (default value 'efpkg')\n\n Returns:\n ret_val (str): XML node\n " if (taxid not in tax_profile.tree.data): raise KeyError(taxid, 'not found in the tree!!!') ret_val = (((('\t' * offset) + '<node name="') + tax_profile.tree.data[taxid].name) + '">\n') offset += 1 if tax_profile.tree.data[taxid].attributes: if (metric != 'readcount'): ret_val += (((('\t' * offset) + '<readcount><val>') + format(tax_profile.tree.data[taxid].attributes['count'], '0.0f')) + '</val></readcount>\n') ret_val += (((((((('\t' * offset) + '<') + metric) + '><val>') + format(tax_profile.tree.data[taxid].attributes[metric], '0.2f')) + '</val></') + metric) + '>\n') ret_val += (((('\t' * offset) + '<identity><val>') + format((tax_profile.tree.data[taxid].attributes['identity'] / tax_profile.tree.data[taxid].attributes['hit_count']), '0.1f')) + '</val></identity>\n') else: if (metric != 'readcount'): ret_val += (('\t' * offset) + '<readcount><val>0</val></readcount>\n') ret_val += (((((('\t' * offset) + '<') + metric) + '><val>0.0</val></') + metric) + '>\n') ret_val += (('\t' * offset) + '<identity><val>0.0</val></identity>\n') if tax_profile.tree.data[taxid].children: for child_taxid in tax_profile.tree.data[taxid].children: ret_val += get_taxon_xml(tax_profile, child_taxid, offset, metric) offset -= 1 ret_val += (('\t' * offset) + '</node>\n') return ret_val
Returns XML node for a phylogenetic tree node and all its children Args: tax_profile (:obj:TaxonomyProfile): taxonomy profile taxid (str): taxonomy identifier of a node of interest offset (int): number of starting tabs metric (str): scoring metric (default value 'efpkg') Returns: ret_val (str): XML node
lib/fama/output/krona_xml_writer.py
get_taxon_xml
aekazakov/FamaProfiling
0
python
def get_taxon_xml(tax_profile, taxid, offset, metric='efpkg'): "Returns XML node for a phylogenetic tree node and all its children\n\n Args:\n tax_profile (:obj:TaxonomyProfile): taxonomy profile\n taxid (str): taxonomy identifier of a node of interest\n offset (int): number of starting tabs\n metric (str): scoring metric (default value 'efpkg')\n\n Returns:\n ret_val (str): XML node\n " if (taxid not in tax_profile.tree.data): raise KeyError(taxid, 'not found in the tree!!!') ret_val = (((('\t' * offset) + '<node name="') + tax_profile.tree.data[taxid].name) + '">\n') offset += 1 if tax_profile.tree.data[taxid].attributes: if (metric != 'readcount'): ret_val += (((('\t' * offset) + '<readcount><val>') + format(tax_profile.tree.data[taxid].attributes['count'], '0.0f')) + '</val></readcount>\n') ret_val += (((((((('\t' * offset) + '<') + metric) + '><val>') + format(tax_profile.tree.data[taxid].attributes[metric], '0.2f')) + '</val></') + metric) + '>\n') ret_val += (((('\t' * offset) + '<identity><val>') + format((tax_profile.tree.data[taxid].attributes['identity'] / tax_profile.tree.data[taxid].attributes['hit_count']), '0.1f')) + '</val></identity>\n') else: if (metric != 'readcount'): ret_val += (('\t' * offset) + '<readcount><val>0</val></readcount>\n') ret_val += (((((('\t' * offset) + '<') + metric) + '><val>0.0</val></') + metric) + '>\n') ret_val += (('\t' * offset) + '<identity><val>0.0</val></identity>\n') if tax_profile.tree.data[taxid].children: for child_taxid in tax_profile.tree.data[taxid].children: ret_val += get_taxon_xml(tax_profile, child_taxid, offset, metric) offset -= 1 ret_val += (('\t' * offset) + '</node>\n') return ret_val
def get_taxon_xml(tax_profile, taxid, offset, metric='efpkg'): "Returns XML node for a phylogenetic tree node and all its children\n\n Args:\n tax_profile (:obj:TaxonomyProfile): taxonomy profile\n taxid (str): taxonomy identifier of a node of interest\n offset (int): number of starting tabs\n metric (str): scoring metric (default value 'efpkg')\n\n Returns:\n ret_val (str): XML node\n " if (taxid not in tax_profile.tree.data): raise KeyError(taxid, 'not found in the tree!!!') ret_val = (((('\t' * offset) + '<node name="') + tax_profile.tree.data[taxid].name) + '">\n') offset += 1 if tax_profile.tree.data[taxid].attributes: if (metric != 'readcount'): ret_val += (((('\t' * offset) + '<readcount><val>') + format(tax_profile.tree.data[taxid].attributes['count'], '0.0f')) + '</val></readcount>\n') ret_val += (((((((('\t' * offset) + '<') + metric) + '><val>') + format(tax_profile.tree.data[taxid].attributes[metric], '0.2f')) + '</val></') + metric) + '>\n') ret_val += (((('\t' * offset) + '<identity><val>') + format((tax_profile.tree.data[taxid].attributes['identity'] / tax_profile.tree.data[taxid].attributes['hit_count']), '0.1f')) + '</val></identity>\n') else: if (metric != 'readcount'): ret_val += (('\t' * offset) + '<readcount><val>0</val></readcount>\n') ret_val += (((((('\t' * offset) + '<') + metric) + '><val>0.0</val></') + metric) + '>\n') ret_val += (('\t' * offset) + '<identity><val>0.0</val></identity>\n') if tax_profile.tree.data[taxid].children: for child_taxid in tax_profile.tree.data[taxid].children: ret_val += get_taxon_xml(tax_profile, child_taxid, offset, metric) offset -= 1 ret_val += (('\t' * offset) + '</node>\n') return ret_val<|docstring|>Returns XML node for a phylogenetic tree node and all its children Args: tax_profile (:obj:TaxonomyProfile): taxonomy profile taxid (str): taxonomy identifier of a node of interest offset (int): number of starting tabs metric (str): scoring metric (default value 'efpkg') Returns: ret_val (str): XML node<|endoftext|>
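get_taxon_xml() only reads .name, .attributes and .children from each tree node, so its output can be illustrated with SimpleNamespace stand-ins instead of a real TaxonomyProfile. A self-contained sketch under that assumption; the import path is inferred from the record's path field (lib/fama/...), and the taxonomy IDs and scores are invented.

from types import SimpleNamespace

from fama.output.krona_xml_writer import get_taxon_xml   # import path assumed

child = SimpleNamespace(
    name='Escherichia coli',
    attributes={'count': 10, 'efpkg': 12.5, 'identity': 950.0, 'hit_count': 10},
    children=[])
root = SimpleNamespace(
    name='root',
    attributes={'count': 10, 'efpkg': 12.5, 'identity': 950.0, 'hit_count': 10},
    children=['562'])
tax_profile = SimpleNamespace(tree=SimpleNamespace(data={'1': root, '562': child}))

print(get_taxon_xml(tax_profile, '1', offset=1, metric='efpkg'))
# Prints nested <node name="...">, <readcount>, <efpkg> and <identity> elements
# ready to sit inside a Krona <krona> document of the kind make_functions_chart writes.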
f7c4f1290f218906aa37d74741d63b42d7aabc753a94264b8f5d47a691e5b428
def get_lca_tax_xml(tax_profile, taxid, offset, metric='efpkg'): 'Returns XML node for a phylogenetic tree node and all its children.\n Creates additional child node for a fictional "Unclassified..." taxon\n if not all reads of the current node are mapped to children nodes.\n\n Args:\n tax_profile (:obj:TaxonomyProfile): taxonomy profile\n taxid (str): taxonomy identifier of a node of interest\n offset (int): number of starting tabs\n metric (str): scoring metric (default value \'efpkg\')\n\n Returns:\n ret_val (str): XML node\n ' attribute_values = defaultdict(float) try: ret_val = (((('\t' * offset) + '<node name="') + tax_profile.tree.data[taxid].name) + '">\n') except KeyError: print(taxid, 'not found in the tree data!!!') raise KeyError offset += 1 if tax_profile.tree.data[taxid].attributes: if ((metric != 'readcount') and (metric != 'proteincount')): ret_val += (((('\t' * offset) + '<readcount><val>') + format(tax_profile.tree.data[taxid].attributes['count'], '0.0f')) + '</val></readcount>\n') ret_val += (((((((('\t' * offset) + '<') + metric) + '><val>') + format(tax_profile.tree.data[taxid].attributes[metric], '0.2f')) + '</val></') + metric) + '>\n') ret_val += (((('\t' * offset) + '<identity><val>') + format((tax_profile.tree.data[taxid].attributes['identity'] / tax_profile.tree.data[taxid].attributes['hit_count']), '0.1f')) + '</val></identity>\n') else: if ((metric != 'readcount') and (metric != 'proteincount')): ret_val += (('\t' * offset) + '<readcount><val>0</val></readcount>\n') ret_val += (((((('\t' * offset) + '<') + metric) + '><val>0.0</val></') + metric) + '>\n') ret_val += (('\t' * offset) + '<identity><val>0.0</val></identity>\n') if tax_profile.tree.data[taxid].children: for child_taxid in tax_profile.tree.data[taxid].children: (child_node, child_values) = get_lca_tax_xml(tax_profile, child_taxid, offset, metric) ret_val += child_node for (key, val) in child_values.items(): attribute_values[key] += val if (tax_profile.tree.data[taxid].attributes and (attribute_values['count'] < tax_profile.tree.data[taxid].attributes['count'])): unknown_node = ('Unidentified ' + tax_profile.tree.data[taxid].name) if (offset == 2): unknown_node = 'Unknown' ret_val += (((('\t' * offset) + '<node name="') + unknown_node) + '">\n') offset += 1 if ((metric != 'readcount') and (metric != 'proteincount')): ret_val += (((('\t' * offset) + '<readcount><val>') + format((tax_profile.tree.data[taxid].attributes['count'] - attribute_values['count']), '0.0f')) + '</val></readcount>\n') ret_val += (((((((('\t' * offset) + '<') + metric) + '><val>') + format((tax_profile.tree.data[taxid].attributes[metric] - attribute_values[metric]), '0.2f')) + '</val></') + metric) + '>\n') if (tax_profile.tree.data[taxid].attributes['hit_count'] > attribute_values['hit_count']): ret_val += (((('\t' * offset) + '<identity><val>') + format(((tax_profile.tree.data[taxid].attributes['identity'] - attribute_values['identity']) / (tax_profile.tree.data[taxid].attributes['hit_count'] - attribute_values['hit_count'])), '0.1f')) + '</val></identity>\n') else: ret_val += (('\t' * offset) + '<identity><val>0.0</val></identity>\n') offset -= 1 ret_val += (('\t' * offset) + '</node>\n') offset -= 1 ret_val += (('\t' * offset) + '</node>\n') attribute_values = defaultdict(float) attribute_values[metric] = tax_profile.tree.data[taxid].attributes[metric] attribute_values['count'] = tax_profile.tree.data[taxid].attributes['count'] attribute_values['identity'] = tax_profile.tree.data[taxid].attributes['identity'] 
attribute_values['hit_count'] = tax_profile.tree.data[taxid].attributes['hit_count'] return (ret_val, attribute_values)
Returns XML node for a phylogenetic tree node and all its children. Creates additional child node for a fictional "Unclassified..." taxon if not all reads of the current node are mapped to children nodes. Args: tax_profile (:obj:TaxonomyProfile): taxonomy profile taxid (str): taxonomy identifier of a node of interest offset (int): number of starting tabs metric (str): scoring metric (default value 'efpkg') Returns: ret_val (str): XML node
lib/fama/output/krona_xml_writer.py
get_lca_tax_xml
aekazakov/FamaProfiling
0
python
def get_lca_tax_xml(tax_profile, taxid, offset, metric='efpkg'): 'Returns XML node for a phylogenetic tree node and all its children.\n Creates additional child node for a fictional "Unclassified..." taxon\n if not all reads of the current node are mapped to children nodes.\n\n Args:\n tax_profile (:obj:TaxonomyProfile): taxonomy profile\n taxid (str): taxonomy identifier of a node of interest\n offset (int): number of starting tabs\n metric (str): scoring metric (default value \'efpkg\')\n\n Returns:\n ret_val (str): XML node\n ' attribute_values = defaultdict(float) try: ret_val = (((('\t' * offset) + '<node name="') + tax_profile.tree.data[taxid].name) + '">\n') except KeyError: print(taxid, 'not found in the tree data!!!') raise KeyError offset += 1 if tax_profile.tree.data[taxid].attributes: if ((metric != 'readcount') and (metric != 'proteincount')): ret_val += (((('\t' * offset) + '<readcount><val>') + format(tax_profile.tree.data[taxid].attributes['count'], '0.0f')) + '</val></readcount>\n') ret_val += (((((((('\t' * offset) + '<') + metric) + '><val>') + format(tax_profile.tree.data[taxid].attributes[metric], '0.2f')) + '</val></') + metric) + '>\n') ret_val += (((('\t' * offset) + '<identity><val>') + format((tax_profile.tree.data[taxid].attributes['identity'] / tax_profile.tree.data[taxid].attributes['hit_count']), '0.1f')) + '</val></identity>\n') else: if ((metric != 'readcount') and (metric != 'proteincount')): ret_val += (('\t' * offset) + '<readcount><val>0</val></readcount>\n') ret_val += (((((('\t' * offset) + '<') + metric) + '><val>0.0</val></') + metric) + '>\n') ret_val += (('\t' * offset) + '<identity><val>0.0</val></identity>\n') if tax_profile.tree.data[taxid].children: for child_taxid in tax_profile.tree.data[taxid].children: (child_node, child_values) = get_lca_tax_xml(tax_profile, child_taxid, offset, metric) ret_val += child_node for (key, val) in child_values.items(): attribute_values[key] += val if (tax_profile.tree.data[taxid].attributes and (attribute_values['count'] < tax_profile.tree.data[taxid].attributes['count'])): unknown_node = ('Unidentified ' + tax_profile.tree.data[taxid].name) if (offset == 2): unknown_node = 'Unknown' ret_val += (((('\t' * offset) + '<node name="') + unknown_node) + '">\n') offset += 1 if ((metric != 'readcount') and (metric != 'proteincount')): ret_val += (((('\t' * offset) + '<readcount><val>') + format((tax_profile.tree.data[taxid].attributes['count'] - attribute_values['count']), '0.0f')) + '</val></readcount>\n') ret_val += (((((((('\t' * offset) + '<') + metric) + '><val>') + format((tax_profile.tree.data[taxid].attributes[metric] - attribute_values[metric]), '0.2f')) + '</val></') + metric) + '>\n') if (tax_profile.tree.data[taxid].attributes['hit_count'] > attribute_values['hit_count']): ret_val += (((('\t' * offset) + '<identity><val>') + format(((tax_profile.tree.data[taxid].attributes['identity'] - attribute_values['identity']) / (tax_profile.tree.data[taxid].attributes['hit_count'] - attribute_values['hit_count'])), '0.1f')) + '</val></identity>\n') else: ret_val += (('\t' * offset) + '<identity><val>0.0</val></identity>\n') offset -= 1 ret_val += (('\t' * offset) + '</node>\n') offset -= 1 ret_val += (('\t' * offset) + '</node>\n') attribute_values = defaultdict(float) attribute_values[metric] = tax_profile.tree.data[taxid].attributes[metric] attribute_values['count'] = tax_profile.tree.data[taxid].attributes['count'] attribute_values['identity'] = tax_profile.tree.data[taxid].attributes['identity'] 
attribute_values['hit_count'] = tax_profile.tree.data[taxid].attributes['hit_count'] return (ret_val, attribute_values)
def get_lca_tax_xml(tax_profile, taxid, offset, metric='efpkg'): 'Returns XML node for a phylogenetic tree node and all its children.\n Creates additional child node for a fictional "Unclassified..." taxon\n if not all reads of the current node are mapped to children nodes.\n\n Args:\n tax_profile (:obj:TaxonomyProfile): taxonomy profile\n taxid (str): taxonomy identifier of a node of interest\n offset (int): number of starting tabs\n metric (str): scoring metric (default value \'efpkg\')\n\n Returns:\n ret_val (str): XML node\n ' attribute_values = defaultdict(float) try: ret_val = (((('\t' * offset) + '<node name="') + tax_profile.tree.data[taxid].name) + '">\n') except KeyError: print(taxid, 'not found in the tree data!!!') raise KeyError offset += 1 if tax_profile.tree.data[taxid].attributes: if ((metric != 'readcount') and (metric != 'proteincount')): ret_val += (((('\t' * offset) + '<readcount><val>') + format(tax_profile.tree.data[taxid].attributes['count'], '0.0f')) + '</val></readcount>\n') ret_val += (((((((('\t' * offset) + '<') + metric) + '><val>') + format(tax_profile.tree.data[taxid].attributes[metric], '0.2f')) + '</val></') + metric) + '>\n') ret_val += (((('\t' * offset) + '<identity><val>') + format((tax_profile.tree.data[taxid].attributes['identity'] / tax_profile.tree.data[taxid].attributes['hit_count']), '0.1f')) + '</val></identity>\n') else: if ((metric != 'readcount') and (metric != 'proteincount')): ret_val += (('\t' * offset) + '<readcount><val>0</val></readcount>\n') ret_val += (((((('\t' * offset) + '<') + metric) + '><val>0.0</val></') + metric) + '>\n') ret_val += (('\t' * offset) + '<identity><val>0.0</val></identity>\n') if tax_profile.tree.data[taxid].children: for child_taxid in tax_profile.tree.data[taxid].children: (child_node, child_values) = get_lca_tax_xml(tax_profile, child_taxid, offset, metric) ret_val += child_node for (key, val) in child_values.items(): attribute_values[key] += val if (tax_profile.tree.data[taxid].attributes and (attribute_values['count'] < tax_profile.tree.data[taxid].attributes['count'])): unknown_node = ('Unidentified ' + tax_profile.tree.data[taxid].name) if (offset == 2): unknown_node = 'Unknown' ret_val += (((('\t' * offset) + '<node name="') + unknown_node) + '">\n') offset += 1 if ((metric != 'readcount') and (metric != 'proteincount')): ret_val += (((('\t' * offset) + '<readcount><val>') + format((tax_profile.tree.data[taxid].attributes['count'] - attribute_values['count']), '0.0f')) + '</val></readcount>\n') ret_val += (((((((('\t' * offset) + '<') + metric) + '><val>') + format((tax_profile.tree.data[taxid].attributes[metric] - attribute_values[metric]), '0.2f')) + '</val></') + metric) + '>\n') if (tax_profile.tree.data[taxid].attributes['hit_count'] > attribute_values['hit_count']): ret_val += (((('\t' * offset) + '<identity><val>') + format(((tax_profile.tree.data[taxid].attributes['identity'] - attribute_values['identity']) / (tax_profile.tree.data[taxid].attributes['hit_count'] - attribute_values['hit_count'])), '0.1f')) + '</val></identity>\n') else: ret_val += (('\t' * offset) + '<identity><val>0.0</val></identity>\n') offset -= 1 ret_val += (('\t' * offset) + '</node>\n') offset -= 1 ret_val += (('\t' * offset) + '</node>\n') attribute_values = defaultdict(float) attribute_values[metric] = tax_profile.tree.data[taxid].attributes[metric] attribute_values['count'] = tax_profile.tree.data[taxid].attributes['count'] attribute_values['identity'] = tax_profile.tree.data[taxid].attributes['identity'] 
attribute_values['hit_count'] = tax_profile.tree.data[taxid].attributes['hit_count'] return (ret_val, attribute_values)<|docstring|>Returns XML node for a phylogenetic tree node and all its children. Creates additional child node for a fictional "Unclassified..." taxon if not all reads of the current node are mapped to children nodes. Args: tax_profile (:obj:TaxonomyProfile): taxonomy profile taxid (str): taxonomy identifier of a node of interest offset (int): number of starting tabs metric (str): scoring metric (default value 'efpkg') Returns: ret_val (str): XML node<|endoftext|>
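A minimal usage sketch for get_lca_tax_xml above. The node layout (plain objects exposing name, children and an attributes dict) and the import path are assumptions made for illustration only; the real TaxonomyProfile class and tree nodes are defined elsewhere in Fama.

from types import SimpleNamespace
from fama.output.krona_xml_writer import get_lca_tax_xml  # module path assumed from lib/fama/output/krona_xml_writer.py

def mock_node(name, children=(), **attrs):
    # Only the fields read by get_lca_tax_xml: .name, .children, .attributes
    return SimpleNamespace(name=name, children=list(children), attributes=dict(attrs))

tree = {
    '1': mock_node('root', children=['2'], count=10.0, efpkg=5.0, identity=900.0, hit_count=10.0),
    '2': mock_node('Bacteria', count=7.0, efpkg=3.5, identity=630.0, hit_count=7.0),
}
profile = SimpleNamespace(tree=SimpleNamespace(data=tree))

xml_fragment, totals = get_lca_tax_xml(profile, '1', offset=1, metric='efpkg')
# Three of the ten root reads are not covered by 'Bacteria', so the fragment
# gets an extra "Unknown" child node holding the remainder; totals returns the
# root's own count/efpkg/identity/hit_count for use by a calling parent node.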
6c19fc664aadbc4ffda3b9f2b473b004e625208e2781885effd2819c7b11e900
def make_taxonomy_chart(tax_profile, sample, outfile, krona_path, metric='efpkg'): 'Writes XML file for taxonomy chart of one sample and generates Krona plot from it\n\n Args:\n tax_profile (:obj:TaxonomyProfile): taxonomy profile object\n sample (str): sample identifier\n outfile (str): path for XML output\n krona_path (str): Krona Tools command\n metric (str): scoring metric (efpkg by default)\n ' with open(outfile, 'w') as out: out.write('<krona key="false">\n') out.write((('\t<attributes magnitude="' + metric) + '">\n')) if (metric == 'proteincount'): out.write((('\t\t<attribute display="Protein count">' + metric) + '</attribute>\n')) else: out.write('\t\t<attribute display="Read count">readcount</attribute>\n') if ((metric != 'readcount') and (metric != 'proteincount')): out.write((((('\t\t<attribute display="Score:' + metric) + '">') + metric) + '</attribute>\n')) out.write('\t\t<attribute display="AAI %" mono="true">identity</attribute>\n') out.write('\t</attributes>\n') out.write(('\t<color attribute="identity" valueStart="50" valueEnd="100" hueStart="0"' + ' hueEnd="240" default="true"></color>\n')) out.write('\t<datasets>\n') out.write((('\t\t<dataset>' + sample) + '</dataset>\n')) out.write('\t</datasets>\n') offset = 1 (child_node, _) = get_lca_tax_xml(tax_profile, ROOT_TAXONOMY_ID, offset, metric=metric) out.write(child_node) out.write('</krona>') html_file = (outfile + '.html') krona_cmd = [krona_path, '-o', html_file, outfile] run_external_program(krona_cmd)
Writes XML file for taxonomy chart of one sample and generates Krona plot from it Args: tax_profile (:obj:TaxonomyProfile): taxonomy profile object sample (str): sample identifier outfile (str): path for XML output krona_path (str): Krona Tools command metric (str): scoring metric (efpkg by default)
lib/fama/output/krona_xml_writer.py
make_taxonomy_chart
aekazakov/FamaProfiling
0
python
def make_taxonomy_chart(tax_profile, sample, outfile, krona_path, metric='efpkg'): 'Writes XML file for taxonomy chart of one sample and generates Krona plot from it\n\n Args:\n tax_profile (:obj:TaxonomyProfile): taxonomy profile object\n sample (str): sample identifier\n outfile (str): path for XML output\n krona_path (str): Krona Tools command\n metric (str): scoring metric (efpkg by default)\n ' with open(outfile, 'w') as out: out.write('<krona key="false">\n') out.write((('\t<attributes magnitude="' + metric) + '">\n')) if (metric == 'proteincount'): out.write((('\t\t<attribute display="Protein count">' + metric) + '</attribute>\n')) else: out.write('\t\t<attribute display="Read count">readcount</attribute>\n') if ((metric != 'readcount') and (metric != 'proteincount')): out.write((((('\t\t<attribute display="Score:' + metric) + '">') + metric) + '</attribute>\n')) out.write('\t\t<attribute display="AAI %" mono="true">identity</attribute>\n') out.write('\t</attributes>\n') out.write(('\t<color attribute="identity" valueStart="50" valueEnd="100" hueStart="0"' + ' hueEnd="240" default="true"></color>\n')) out.write('\t<datasets>\n') out.write((('\t\t<dataset>' + sample) + '</dataset>\n')) out.write('\t</datasets>\n') offset = 1 (child_node, _) = get_lca_tax_xml(tax_profile, ROOT_TAXONOMY_ID, offset, metric=metric) out.write(child_node) out.write('</krona>') html_file = (outfile + '.html') krona_cmd = [krona_path, '-o', html_file, outfile] run_external_program(krona_cmd)
def make_taxonomy_chart(tax_profile, sample, outfile, krona_path, metric='efpkg'): 'Writes XML file for taxonomy chart of one sample and generates Krona plot from it\n\n Args:\n tax_profile (:obj:TaxonomyProfile): taxonomy profile object\n sample (str): sample identifier\n outfile (str): path for XML output\n krona_path (str): Krona Tools command\n metric (str): scoring metric (efpkg by default)\n ' with open(outfile, 'w') as out: out.write('<krona key="false">\n') out.write((('\t<attributes magnitude="' + metric) + '">\n')) if (metric == 'proteincount'): out.write((('\t\t<attribute display="Protein count">' + metric) + '</attribute>\n')) else: out.write('\t\t<attribute display="Read count">readcount</attribute>\n') if ((metric != 'readcount') and (metric != 'proteincount')): out.write((((('\t\t<attribute display="Score:' + metric) + '">') + metric) + '</attribute>\n')) out.write('\t\t<attribute display="AAI %" mono="true">identity</attribute>\n') out.write('\t</attributes>\n') out.write(('\t<color attribute="identity" valueStart="50" valueEnd="100" hueStart="0"' + ' hueEnd="240" default="true"></color>\n')) out.write('\t<datasets>\n') out.write((('\t\t<dataset>' + sample) + '</dataset>\n')) out.write('\t</datasets>\n') offset = 1 (child_node, _) = get_lca_tax_xml(tax_profile, ROOT_TAXONOMY_ID, offset, metric=metric) out.write(child_node) out.write('</krona>') html_file = (outfile + '.html') krona_cmd = [krona_path, '-o', html_file, outfile] run_external_program(krona_cmd)<|docstring|>Writes XML file for taxonomy chart of one sample and generates Krona plot from it Args: tax_profile (:obj:TaxonomyProfile): taxonomy profile object sample (str): sample identifier outfile (str): path for XML output krona_path (str): Krona Tools command metric (str): scoring metric (efpkg by default)<|endoftext|>
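A hedged call sketch for make_taxonomy_chart. The profile object, file names and the Krona command (ktImportXML from KronaTools) are placeholders, not values taken from Fama's configuration.

from fama.output.krona_xml_writer import make_taxonomy_chart  # assumed module path

# tax_profile: a TaxonomyProfile built earlier in the pipeline (assumed to exist)
make_taxonomy_chart(tax_profile, sample='sample1',
                    outfile='sample1_taxonomy.xml',
                    krona_path='ktImportXML',   # placeholder Krona command
                    metric='efpkg')
# Writes sample1_taxonomy.xml, then renders sample1_taxonomy.xml.html by calling
# [krona_path, '-o', html_file, outfile] through run_external_program.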
e6ac0df04a67d25d55d64040f3df20019b861974789a269c77179a23b06737d3
def make_taxonomy_series_chart(tax_profile, sample_list, outfile, krona_path, metric='efpkg'): 'Writes XML file for taxonomy chart of multiple samples and generates Krona plot for it.\n Taxonomy profile must have two-level attributes, with function identifier as outer key and\n a metric as inner key.\n\n Args:\n tax_profile (:obj:TaxonomyProfile): taxonomy profile object\n sample_list (list of str): sample identifiers\n outfile (str): path for XML output\n krona_path (str): Krona Tools command\n metric (str): scoring metric (efpkg by default)\n ' with open(outfile, 'w') as out: out.write('<krona key="false">\n') out.write((('\t<attributes magnitude="' + metric) + '">\n')) if (metric == 'proteincount'): out.write((('\t\t<attribute display="Protein count">' + metric) + '</attribute>\n')) else: out.write('\t\t<attribute display="Read count">readcount</attribute>\n') if ((metric != 'readcount') and (metric != 'proteincount')): out.write((((('\t\t<attribute display="Score:' + metric) + '">') + metric) + '</attribute>\n')) out.write('\t\t<attribute display="AAI %" mono="true">identity</attribute>\n') out.write('\t</attributes>\n') out.write(('\t<color attribute="identity" valueStart="50" valueEnd="100" hueStart="0" ' + 'hueEnd="240" default="true"></color>\n')) out.write('\t<datasets>\n') for sample in sample_list: out.write((('\t\t<dataset>' + sample) + '</dataset>\n')) out.write('\t</datasets>\n') offset = 1 (child_nodes, _) = get_lca_dataseries_tax_xml(tax_profile, sample_list, ROOT_TAXONOMY_ID, offset, metric=metric) out.write(child_nodes) out.write('</krona>') html_file = (outfile + '.html') krona_cmd = [krona_path, '-o', html_file, outfile] run_external_program(krona_cmd)
Writes XML file for taxonomy chart of multiple samples and generates Krona plot for it. Taxonomy profile must have two-level attributes, with function identifier as outer key and a metric as inner key. Args: tax_profile (:obj:TaxonomyProfile): taxonomy profile object sample_list (list of str): sample identifiers outfile (str): path for XML output krona_path (str): Krona Tools command metric (str): scoring metric (efpkg by default)
lib/fama/output/krona_xml_writer.py
make_taxonomy_series_chart
aekazakov/FamaProfiling
0
python
def make_taxonomy_series_chart(tax_profile, sample_list, outfile, krona_path, metric='efpkg'): 'Writes XML file for taxonomy chart of multiple samples and generates Krona plot for it.\n Taxonomy profile must have two-level attributes, with function identifier as outer key and\n a metric as inner key.\n\n Args:\n tax_profile (:obj:TaxonomyProfile): taxonomy profile object\n sample_list (list of str): sample identifiers\n outfile (str): path for XML output\n krona_path (str): Krona Tools command\n metric (str): scoring metric (efpkg by default)\n ' with open(outfile, 'w') as out: out.write('<krona key="false">\n') out.write((('\t<attributes magnitude="' + metric) + '">\n')) if (metric == 'proteincount'): out.write((('\t\t<attribute display="Protein count">' + metric) + '</attribute>\n')) else: out.write('\t\t<attribute display="Read count">readcount</attribute>\n') if ((metric != 'readcount') and (metric != 'proteincount')): out.write((((('\t\t<attribute display="Score:' + metric) + '">') + metric) + '</attribute>\n')) out.write('\t\t<attribute display="AAI %" mono="true">identity</attribute>\n') out.write('\t</attributes>\n') out.write(('\t<color attribute="identity" valueStart="50" valueEnd="100" hueStart="0" ' + 'hueEnd="240" default="true"></color>\n')) out.write('\t<datasets>\n') for sample in sample_list: out.write((('\t\t<dataset>' + sample) + '</dataset>\n')) out.write('\t</datasets>\n') offset = 1 (child_nodes, _) = get_lca_dataseries_tax_xml(tax_profile, sample_list, ROOT_TAXONOMY_ID, offset, metric=metric) out.write(child_nodes) out.write('</krona>') html_file = (outfile + '.html') krona_cmd = [krona_path, '-o', html_file, outfile] run_external_program(krona_cmd)
def make_taxonomy_series_chart(tax_profile, sample_list, outfile, krona_path, metric='efpkg'): 'Writes XML file for taxonomy chart of multiple samples and generates Krona plot for it.\n Taxonomy profile must have two-level attributes, with function identifier as outer key and\n a metric as inner key.\n\n Args:\n tax_profile (:obj:TaxonomyProfile): taxonomy profile object\n sample_list (list of str): sample identifiers\n outfile (str): path for XML output\n krona_path (str): Krona Tools command\n metric (str): scoring metric (efpkg by default)\n ' with open(outfile, 'w') as out: out.write('<krona key="false">\n') out.write((('\t<attributes magnitude="' + metric) + '">\n')) if (metric == 'proteincount'): out.write((('\t\t<attribute display="Protein count">' + metric) + '</attribute>\n')) else: out.write('\t\t<attribute display="Read count">readcount</attribute>\n') if ((metric != 'readcount') and (metric != 'proteincount')): out.write((((('\t\t<attribute display="Score:' + metric) + '">') + metric) + '</attribute>\n')) out.write('\t\t<attribute display="AAI %" mono="true">identity</attribute>\n') out.write('\t</attributes>\n') out.write(('\t<color attribute="identity" valueStart="50" valueEnd="100" hueStart="0" ' + 'hueEnd="240" default="true"></color>\n')) out.write('\t<datasets>\n') for sample in sample_list: out.write((('\t\t<dataset>' + sample) + '</dataset>\n')) out.write('\t</datasets>\n') offset = 1 (child_nodes, _) = get_lca_dataseries_tax_xml(tax_profile, sample_list, ROOT_TAXONOMY_ID, offset, metric=metric) out.write(child_nodes) out.write('</krona>') html_file = (outfile + '.html') krona_cmd = [krona_path, '-o', html_file, outfile] run_external_program(krona_cmd)<|docstring|>Writes XML file for taxonomy chart of multiple samples and generates Krona plot for it. Taxonomy profile must have two-level attributes, with function identifier as outer key and a metric as inner key. Args: tax_profile (:obj:TaxonomyProfile): taxonomy profile object sample_list (list of str): sample identifiers outfile (str): path for XML output krona_path (str): Krona Tools command metric (str): scoring metric (efpkg by default)<|endoftext|>
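The multi-sample writer differs from make_taxonomy_chart only in emitting one <dataset> element per sample and one <val> per sample inside every attribute element; a hedged call sketch with invented sample names and paths:

samples = ['sample1', 'sample2', 'sample3']
make_taxonomy_series_chart(tax_profile, samples,
                           outfile='all_samples_taxonomy.xml',   # placeholder path
                           krona_path='ktImportXML',             # placeholder command
                           metric='efpkg')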
ceae01c67116a9e04a9d16bca1751fdd8c41b39f4a240489f731960cfb520589
def get_lca_dataseries_tax_xml(tax_profile, dataseries, taxid, offset, metric='efpkg'): 'Returns XML node for a phylogenetic tree node and all its children.\n Creates additional child node for a fictional "Unclassified..." taxon\n if not all reads of the current node were mapped to children nodes.\n\n Args:\n tax_profile (:obj:TaxonomyProfile): taxonomy profile\n dataseries (list of str): either sample identifiers or function identifiers,\n depending on profile type (functional or taxonomic)\n taxid (str): taxonomy identifier of a node of interest\n offset (int): number of starting tabs\n metric (str): scoring metric (default value \'efpkg\')\n\n Returns:\n ret_val (str): XML node\n attribute_values (defaultdict[str,dict[str,float]]): outer key is\n one of dataseries members, inner key is in [metric, \'count\', \'identity\'\n \'hit_count\'], value is float.\n ' attribute_values = autovivify(2, float) if (taxid not in tax_profile.tree.data): raise KeyError(taxid, 'not found in the tree!!!') ret_val = (((('\t' * offset) + '<node name="') + tax_profile.tree.data[taxid].name) + '">\n') offset += 1 if tax_profile.tree.data[taxid].attributes: if ((metric != 'readcount') and (metric != 'proteincount')): ret_val += (('\t' * offset) + '<readcount>') for datapoint in dataseries: if ((datapoint in tax_profile.tree.data[taxid].attributes) and ('count' in tax_profile.tree.data[taxid].attributes[datapoint])): ret_val += (('<val>' + format(tax_profile.tree.data[taxid].attributes[datapoint]['count'], '0.0f')) + '</val>') else: ret_val += '<val>0</val>' ret_val += '</readcount>\n' ret_val += (((('\t' * offset) + '<') + metric) + '>') for datapoint in dataseries: if ((datapoint in tax_profile.tree.data[taxid].attributes) and (metric in tax_profile.tree.data[taxid].attributes[datapoint])): ret_val += (('<val>' + format(tax_profile.tree.data[taxid].attributes[datapoint][metric], '0.6f')) + '</val>') else: ret_val += '<val>0.0</val>' ret_val += (((('</' + metric) + '>\n') + ('\t' * offset)) + '<identity>') for datapoint in dataseries: if ((datapoint in tax_profile.tree.data[taxid].attributes) and ('identity' in tax_profile.tree.data[taxid].attributes[datapoint])): ret_val += (('<val>' + format((tax_profile.tree.data[taxid].attributes[datapoint]['identity'] / tax_profile.tree.data[taxid].attributes[datapoint]['hit_count']), '0.1f')) + '</val>') else: ret_val += '<val>0.0</val>' ret_val += '</identity>\n' else: if ((metric != 'readcount') and (metric != 'proteincount')): ret_val += (('\t' * offset) + '<readcount>') ret_val += ('<val>0</val>' * len(dataseries)) ret_val += '</readcount>\n' ret_val += (((('\t' * offset) + '<') + metric) + '>') ret_val += ('<val>0.0</val>' * len(dataseries)) ret_val += (((('<' + metric) + '>\n') + ('\t' * offset)) + '<identity>') ret_val += ('<val>0.0</val>' * len(dataseries)) ret_val += '</identity>\n' if tax_profile.tree.data[taxid].children: for child_taxid in tax_profile.tree.data[taxid].children: (child_node, child_values) = get_lca_dataseries_tax_xml(tax_profile, dataseries, child_taxid, offset, metric=metric) ret_val += child_node for datapoint in child_values.keys(): for (key, val) in child_values[datapoint].items(): attribute_values[datapoint][key] += val unidentified_flag = False for datapoint in dataseries: if (datapoint in tax_profile.tree.data[taxid].attributes): if (attribute_values[datapoint]['count'] < tax_profile.tree.data[taxid].attributes[datapoint]['count']): unidentified_flag = True break if unidentified_flag: if (offset == 2): ret_val += (('\t' * offset) + 
'<node name="Unclassified">\n') else: ret_val += (((('\t' * offset) + '<node name="Unclassified ') + tax_profile.tree.data[taxid].name) + '">\n') offset += 1 if ((metric != 'readcount') and (metric != 'proteincount')): ret_val += (('\t' * offset) + '<readcount>') for datapoint in dataseries: if ((datapoint in tax_profile.tree.data[taxid].attributes) and (attribute_values[datapoint]['count'] < tax_profile.tree.data[taxid].attributes[datapoint]['count'])): ret_val += (('<val>' + format((tax_profile.tree.data[taxid].attributes[datapoint]['count'] - attribute_values[datapoint]['count']), '0.0f')) + '</val>') else: ret_val += '<val>0</val>' ret_val += '</readcount>\n' ret_val += (((('\t' * offset) + '<') + metric) + '>') for datapoint in dataseries: if ((datapoint in tax_profile.tree.data[taxid].attributes) and (attribute_values[datapoint]['count'] < tax_profile.tree.data[taxid].attributes[datapoint]['count'])): ret_val += (('<val>' + format((tax_profile.tree.data[taxid].attributes[datapoint][metric] - attribute_values[datapoint][metric]), '0.6f')) + '</val>') else: ret_val += '<val>0.0</val>' ret_val += (('</' + metric) + '>\n') ret_val += (('\t' * offset) + '<identity>') for datapoint in dataseries: if ((datapoint in tax_profile.tree.data[taxid].attributes) and ('hit_count' in tax_profile.tree.data[taxid].attributes[datapoint]) and (attribute_values[datapoint]['hit_count'] < tax_profile.tree.data[taxid].attributes[datapoint]['hit_count'])): ret_val += (('<val>' + format(((tax_profile.tree.data[taxid].attributes[datapoint]['identity'] - attribute_values[datapoint]['identity']) / (tax_profile.tree.data[taxid].attributes[datapoint]['hit_count'] - attribute_values[datapoint]['hit_count'])), '0.1f')) + '</val>') else: ret_val += '<val>0.0</val>' ret_val += '</identity>\n' offset -= 1 ret_val += (('\t' * offset) + '</node>\n') offset -= 1 ret_val += (('\t' * offset) + '</node>\n') attribute_values = autovivify(1) for datapoint in dataseries: if (datapoint in tax_profile.tree.data[taxid].attributes): if (metric in tax_profile.tree.data[taxid].attributes[datapoint]): attribute_values[datapoint][metric] = tax_profile.tree.data[taxid].attributes[datapoint][metric] if ('count' in tax_profile.tree.data[taxid].attributes[datapoint]): attribute_values[datapoint]['count'] = tax_profile.tree.data[taxid].attributes[datapoint]['count'] if ('identity' in tax_profile.tree.data[taxid].attributes[datapoint]): attribute_values[datapoint]['identity'] = tax_profile.tree.data[taxid].attributes[datapoint]['identity'] if ('hit_count' in tax_profile.tree.data[taxid].attributes[datapoint]): attribute_values[datapoint]['hit_count'] = tax_profile.tree.data[taxid].attributes[datapoint]['hit_count'] return (ret_val, attribute_values)
Returns XML node for a phylogenetic tree node and all its children. Creates additional child node for a fictional "Unclassified..." taxon if not all reads of the current node were mapped to children nodes. Args: tax_profile (:obj:TaxonomyProfile): taxonomy profile dataseries (list of str): either sample identifiers or function identifiers, depending on profile type (functional or taxonomic) taxid (str): taxonomy identifier of a node of interest offset (int): number of starting tabs metric (str): scoring metric (default value 'efpkg') Returns: ret_val (str): XML node attribute_values (defaultdict[str,dict[str,float]]): outer key is one of dataseries members, inner key is in [metric, 'count', 'identity' 'hit_count'], value is float.
lib/fama/output/krona_xml_writer.py
get_lca_dataseries_tax_xml
aekazakov/FamaProfiling
0
python
def get_lca_dataseries_tax_xml(tax_profile, dataseries, taxid, offset, metric='efpkg'): 'Returns XML node for a phylogenetic tree node and all its children.\n Creates additional child node for a fictional "Unclassified..." taxon\n if not all reads of the current node were mapped to children nodes.\n\n Args:\n tax_profile (:obj:TaxonomyProfile): taxonomy profile\n dataseries (list of str): either sample identifiers or function identifiers,\n depending on profile type (functional or taxonomic)\n taxid (str): taxonomy identifier of a node of interest\n offset (int): number of starting tabs\n metric (str): scoring metric (default value \'efpkg\')\n\n Returns:\n ret_val (str): XML node\n attribute_values (defaultdict[str,dict[str,float]]): outer key is\n one of dataseries members, inner key is in [metric, \'count\', \'identity\'\n \'hit_count\'], value is float.\n ' attribute_values = autovivify(2, float) if (taxid not in tax_profile.tree.data): raise KeyError(taxid, 'not found in the tree!!!') ret_val = (((('\t' * offset) + '<node name="') + tax_profile.tree.data[taxid].name) + '">\n') offset += 1 if tax_profile.tree.data[taxid].attributes: if ((metric != 'readcount') and (metric != 'proteincount')): ret_val += (('\t' * offset) + '<readcount>') for datapoint in dataseries: if ((datapoint in tax_profile.tree.data[taxid].attributes) and ('count' in tax_profile.tree.data[taxid].attributes[datapoint])): ret_val += (('<val>' + format(tax_profile.tree.data[taxid].attributes[datapoint]['count'], '0.0f')) + '</val>') else: ret_val += '<val>0</val>' ret_val += '</readcount>\n' ret_val += (((('\t' * offset) + '<') + metric) + '>') for datapoint in dataseries: if ((datapoint in tax_profile.tree.data[taxid].attributes) and (metric in tax_profile.tree.data[taxid].attributes[datapoint])): ret_val += (('<val>' + format(tax_profile.tree.data[taxid].attributes[datapoint][metric], '0.6f')) + '</val>') else: ret_val += '<val>0.0</val>' ret_val += (((('</' + metric) + '>\n') + ('\t' * offset)) + '<identity>') for datapoint in dataseries: if ((datapoint in tax_profile.tree.data[taxid].attributes) and ('identity' in tax_profile.tree.data[taxid].attributes[datapoint])): ret_val += (('<val>' + format((tax_profile.tree.data[taxid].attributes[datapoint]['identity'] / tax_profile.tree.data[taxid].attributes[datapoint]['hit_count']), '0.1f')) + '</val>') else: ret_val += '<val>0.0</val>' ret_val += '</identity>\n' else: if ((metric != 'readcount') and (metric != 'proteincount')): ret_val += (('\t' * offset) + '<readcount>') ret_val += ('<val>0</val>' * len(dataseries)) ret_val += '</readcount>\n' ret_val += (((('\t' * offset) + '<') + metric) + '>') ret_val += ('<val>0.0</val>' * len(dataseries)) ret_val += (((('<' + metric) + '>\n') + ('\t' * offset)) + '<identity>') ret_val += ('<val>0.0</val>' * len(dataseries)) ret_val += '</identity>\n' if tax_profile.tree.data[taxid].children: for child_taxid in tax_profile.tree.data[taxid].children: (child_node, child_values) = get_lca_dataseries_tax_xml(tax_profile, dataseries, child_taxid, offset, metric=metric) ret_val += child_node for datapoint in child_values.keys(): for (key, val) in child_values[datapoint].items(): attribute_values[datapoint][key] += val unidentified_flag = False for datapoint in dataseries: if (datapoint in tax_profile.tree.data[taxid].attributes): if (attribute_values[datapoint]['count'] < tax_profile.tree.data[taxid].attributes[datapoint]['count']): unidentified_flag = True break if unidentified_flag: if (offset == 2): ret_val += (('\t' * offset) + 
'<node name="Unclassified">\n') else: ret_val += (((('\t' * offset) + '<node name="Unclassified ') + tax_profile.tree.data[taxid].name) + '">\n') offset += 1 if ((metric != 'readcount') and (metric != 'proteincount')): ret_val += (('\t' * offset) + '<readcount>') for datapoint in dataseries: if ((datapoint in tax_profile.tree.data[taxid].attributes) and (attribute_values[datapoint]['count'] < tax_profile.tree.data[taxid].attributes[datapoint]['count'])): ret_val += (('<val>' + format((tax_profile.tree.data[taxid].attributes[datapoint]['count'] - attribute_values[datapoint]['count']), '0.0f')) + '</val>') else: ret_val += '<val>0</val>' ret_val += '</readcount>\n' ret_val += (((('\t' * offset) + '<') + metric) + '>') for datapoint in dataseries: if ((datapoint in tax_profile.tree.data[taxid].attributes) and (attribute_values[datapoint]['count'] < tax_profile.tree.data[taxid].attributes[datapoint]['count'])): ret_val += (('<val>' + format((tax_profile.tree.data[taxid].attributes[datapoint][metric] - attribute_values[datapoint][metric]), '0.6f')) + '</val>') else: ret_val += '<val>0.0</val>' ret_val += (('</' + metric) + '>\n') ret_val += (('\t' * offset) + '<identity>') for datapoint in dataseries: if ((datapoint in tax_profile.tree.data[taxid].attributes) and ('hit_count' in tax_profile.tree.data[taxid].attributes[datapoint]) and (attribute_values[datapoint]['hit_count'] < tax_profile.tree.data[taxid].attributes[datapoint]['hit_count'])): ret_val += (('<val>' + format(((tax_profile.tree.data[taxid].attributes[datapoint]['identity'] - attribute_values[datapoint]['identity']) / (tax_profile.tree.data[taxid].attributes[datapoint]['hit_count'] - attribute_values[datapoint]['hit_count'])), '0.1f')) + '</val>') else: ret_val += '<val>0.0</val>' ret_val += '</identity>\n' offset -= 1 ret_val += (('\t' * offset) + '</node>\n') offset -= 1 ret_val += (('\t' * offset) + '</node>\n') attribute_values = autovivify(1) for datapoint in dataseries: if (datapoint in tax_profile.tree.data[taxid].attributes): if (metric in tax_profile.tree.data[taxid].attributes[datapoint]): attribute_values[datapoint][metric] = tax_profile.tree.data[taxid].attributes[datapoint][metric] if ('count' in tax_profile.tree.data[taxid].attributes[datapoint]): attribute_values[datapoint]['count'] = tax_profile.tree.data[taxid].attributes[datapoint]['count'] if ('identity' in tax_profile.tree.data[taxid].attributes[datapoint]): attribute_values[datapoint]['identity'] = tax_profile.tree.data[taxid].attributes[datapoint]['identity'] if ('hit_count' in tax_profile.tree.data[taxid].attributes[datapoint]): attribute_values[datapoint]['hit_count'] = tax_profile.tree.data[taxid].attributes[datapoint]['hit_count'] return (ret_val, attribute_values)
def get_lca_dataseries_tax_xml(tax_profile, dataseries, taxid, offset, metric='efpkg'): 'Returns XML node for a phylogenetic tree node and all its children.\n Creates additional child node for a fictional "Unclassified..." taxon\n if not all reads of the current node were mapped to children nodes.\n\n Args:\n tax_profile (:obj:TaxonomyProfile): taxonomy profile\n dataseries (list of str): either sample identifiers or function identifiers,\n depending on profile type (functional or taxonomic)\n taxid (str): taxonomy identifier of a node of interest\n offset (int): number of starting tabs\n metric (str): scoring metric (default value \'efpkg\')\n\n Returns:\n ret_val (str): XML node\n attribute_values (defaultdict[str,dict[str,float]]): outer key is\n one of dataseries members, inner key is in [metric, \'count\', \'identity\'\n \'hit_count\'], value is float.\n ' attribute_values = autovivify(2, float) if (taxid not in tax_profile.tree.data): raise KeyError(taxid, 'not found in the tree!!!') ret_val = (((('\t' * offset) + '<node name="') + tax_profile.tree.data[taxid].name) + '">\n') offset += 1 if tax_profile.tree.data[taxid].attributes: if ((metric != 'readcount') and (metric != 'proteincount')): ret_val += (('\t' * offset) + '<readcount>') for datapoint in dataseries: if ((datapoint in tax_profile.tree.data[taxid].attributes) and ('count' in tax_profile.tree.data[taxid].attributes[datapoint])): ret_val += (('<val>' + format(tax_profile.tree.data[taxid].attributes[datapoint]['count'], '0.0f')) + '</val>') else: ret_val += '<val>0</val>' ret_val += '</readcount>\n' ret_val += (((('\t' * offset) + '<') + metric) + '>') for datapoint in dataseries: if ((datapoint in tax_profile.tree.data[taxid].attributes) and (metric in tax_profile.tree.data[taxid].attributes[datapoint])): ret_val += (('<val>' + format(tax_profile.tree.data[taxid].attributes[datapoint][metric], '0.6f')) + '</val>') else: ret_val += '<val>0.0</val>' ret_val += (((('</' + metric) + '>\n') + ('\t' * offset)) + '<identity>') for datapoint in dataseries: if ((datapoint in tax_profile.tree.data[taxid].attributes) and ('identity' in tax_profile.tree.data[taxid].attributes[datapoint])): ret_val += (('<val>' + format((tax_profile.tree.data[taxid].attributes[datapoint]['identity'] / tax_profile.tree.data[taxid].attributes[datapoint]['hit_count']), '0.1f')) + '</val>') else: ret_val += '<val>0.0</val>' ret_val += '</identity>\n' else: if ((metric != 'readcount') and (metric != 'proteincount')): ret_val += (('\t' * offset) + '<readcount>') ret_val += ('<val>0</val>' * len(dataseries)) ret_val += '</readcount>\n' ret_val += (((('\t' * offset) + '<') + metric) + '>') ret_val += ('<val>0.0</val>' * len(dataseries)) ret_val += (((('<' + metric) + '>\n') + ('\t' * offset)) + '<identity>') ret_val += ('<val>0.0</val>' * len(dataseries)) ret_val += '</identity>\n' if tax_profile.tree.data[taxid].children: for child_taxid in tax_profile.tree.data[taxid].children: (child_node, child_values) = get_lca_dataseries_tax_xml(tax_profile, dataseries, child_taxid, offset, metric=metric) ret_val += child_node for datapoint in child_values.keys(): for (key, val) in child_values[datapoint].items(): attribute_values[datapoint][key] += val unidentified_flag = False for datapoint in dataseries: if (datapoint in tax_profile.tree.data[taxid].attributes): if (attribute_values[datapoint]['count'] < tax_profile.tree.data[taxid].attributes[datapoint]['count']): unidentified_flag = True break if unidentified_flag: if (offset == 2): ret_val += (('\t' * offset) + 
'<node name="Unclassified">\n') else: ret_val += (((('\t' * offset) + '<node name="Unclassified ') + tax_profile.tree.data[taxid].name) + '">\n') offset += 1 if ((metric != 'readcount') and (metric != 'proteincount')): ret_val += (('\t' * offset) + '<readcount>') for datapoint in dataseries: if ((datapoint in tax_profile.tree.data[taxid].attributes) and (attribute_values[datapoint]['count'] < tax_profile.tree.data[taxid].attributes[datapoint]['count'])): ret_val += (('<val>' + format((tax_profile.tree.data[taxid].attributes[datapoint]['count'] - attribute_values[datapoint]['count']), '0.0f')) + '</val>') else: ret_val += '<val>0</val>' ret_val += '</readcount>\n' ret_val += (((('\t' * offset) + '<') + metric) + '>') for datapoint in dataseries: if ((datapoint in tax_profile.tree.data[taxid].attributes) and (attribute_values[datapoint]['count'] < tax_profile.tree.data[taxid].attributes[datapoint]['count'])): ret_val += (('<val>' + format((tax_profile.tree.data[taxid].attributes[datapoint][metric] - attribute_values[datapoint][metric]), '0.6f')) + '</val>') else: ret_val += '<val>0.0</val>' ret_val += (('</' + metric) + '>\n') ret_val += (('\t' * offset) + '<identity>') for datapoint in dataseries: if ((datapoint in tax_profile.tree.data[taxid].attributes) and ('hit_count' in tax_profile.tree.data[taxid].attributes[datapoint]) and (attribute_values[datapoint]['hit_count'] < tax_profile.tree.data[taxid].attributes[datapoint]['hit_count'])): ret_val += (('<val>' + format(((tax_profile.tree.data[taxid].attributes[datapoint]['identity'] - attribute_values[datapoint]['identity']) / (tax_profile.tree.data[taxid].attributes[datapoint]['hit_count'] - attribute_values[datapoint]['hit_count'])), '0.1f')) + '</val>') else: ret_val += '<val>0.0</val>' ret_val += '</identity>\n' offset -= 1 ret_val += (('\t' * offset) + '</node>\n') offset -= 1 ret_val += (('\t' * offset) + '</node>\n') attribute_values = autovivify(1) for datapoint in dataseries: if (datapoint in tax_profile.tree.data[taxid].attributes): if (metric in tax_profile.tree.data[taxid].attributes[datapoint]): attribute_values[datapoint][metric] = tax_profile.tree.data[taxid].attributes[datapoint][metric] if ('count' in tax_profile.tree.data[taxid].attributes[datapoint]): attribute_values[datapoint]['count'] = tax_profile.tree.data[taxid].attributes[datapoint]['count'] if ('identity' in tax_profile.tree.data[taxid].attributes[datapoint]): attribute_values[datapoint]['identity'] = tax_profile.tree.data[taxid].attributes[datapoint]['identity'] if ('hit_count' in tax_profile.tree.data[taxid].attributes[datapoint]): attribute_values[datapoint]['hit_count'] = tax_profile.tree.data[taxid].attributes[datapoint]['hit_count'] return (ret_val, attribute_values)<|docstring|>Returns XML node for a phylogenetic tree node and all its children. Creates additional child node for a fictional "Unclassified..." taxon if not all reads of the current node were mapped to children nodes. Args: tax_profile (:obj:TaxonomyProfile): taxonomy profile dataseries (list of str): either sample identifiers or function identifiers, depending on profile type (functional or taxonomic) taxid (str): taxonomy identifier of a node of interest offset (int): number of starting tabs metric (str): scoring metric (default value 'efpkg') Returns: ret_val (str): XML node attribute_values (defaultdict[str,dict[str,float]]): outer key is one of dataseries members, inner key is in [metric, 'count', 'identity' 'hit_count'], value is float.<|endoftext|>
ca03ab9a80b035626dbe3571ee034261fc61001de4c8ae4eeec55fe18f52402c
def get_dataseries_tax_xml(tax_profile, dataseries, taxid, offset, metric='efpkg'): "Returns XML node for a phylogenetic tree node and all its children.\n\n Args:\n tax_profile (:obj:TaxonomyProfile): taxonomy profile\n dataseries (list of str): either sample identifiers or function identifiers,\n depending on profile type (functional or taxonomic)\n taxid (str): taxonomy identifier of a node of interest\n offset (int): number of starting tabs\n metric (str): scoring metric (default value 'efpkg')\n\n Returns:\n ret_val (str): XML node\n " if (taxid not in tax_profile.tree.data): raise KeyError(taxid, 'not found in the tree!!!') ret_val = (((('\t' * offset) + '<node name="') + tax_profile.tree.data[taxid].name) + '">\n') offset += 1 if tax_profile.tree.data[taxid].attributes: if ((metric != 'readcount') and (metric != 'proteincount')): ret_val += (('\t' * offset) + '<readcount>') for datapoint in dataseries: if (datapoint in tax_profile.tree.data[taxid].attributes): ret_val += (('<val>' + format(tax_profile.tree.data[taxid].attributes[datapoint]['count'], '0.0f')) + '</val>') else: ret_val += '<val>0</val>' ret_val += '</readcount>\n' ret_val += (((('\t' * offset) + '<') + metric) + '>') for datapoint in dataseries: if (datapoint in tax_profile.tree.data[taxid].attributes): ret_val += (('<val>' + format(tax_profile.tree.data[taxid].attributes[datapoint][metric], '0.5f')) + '</val>') else: ret_val += '<val>0.0</val>' ret_val += (((('</' + metric) + '>\n') + ('\t' * offset)) + '<identity>') for datapoint in dataseries: if (datapoint in tax_profile.tree.data[taxid].attributes): ret_val += (('<val>' + format((tax_profile.tree.data[taxid].attributes[datapoint]['identity'] / tax_profile.tree.data[taxid].attributes[datapoint]['hit_count']), '0.1f')) + '</val>') else: ret_val += '<val>0.0</val>' ret_val += '</identity>\n' else: if ((metric != 'readcount') and (metric != 'proteincount')): ret_val += (('\t' * offset) + '<readcount>') ret_val += ('<val>0</val>' * len(dataseries)) ret_val += '</readcount>\n' ret_val += (((('\t' * offset) + '<') + metric) + '>') ret_val += ('<val>0.0</val>' * len(dataseries)) ret_val += (((('<' + metric) + '>\n') + ('\t' * offset)) + '<identity>') ret_val += ('<val>0.0</val>' * len(dataseries)) ret_val += '</identity>\n' if tax_profile.tree.data[taxid].children: for child_taxid in tax_profile.tree.data[taxid].children: ret_val += get_dataseries_tax_xml(tax_profile, dataseries, child_taxid, offset, metric=metric) offset -= 1 ret_val += (('\t' * offset) + '</node>\n') return ret_val
Returns XML node for a phylogenetic tree node and all its children. Args: tax_profile (:obj:TaxonomyProfile): taxonomy profile dataseries (list of str): either sample identifiers or function identifiers, depending on profile type (functional or taxonomic) taxid (str): taxonomy identifier of a node of interest offset (int): number of starting tabs metric (str): scoring metric (default value 'efpkg') Returns: ret_val (str): XML node
lib/fama/output/krona_xml_writer.py
get_dataseries_tax_xml
aekazakov/FamaProfiling
0
python
def get_dataseries_tax_xml(tax_profile, dataseries, taxid, offset, metric='efpkg'): "Returns XML node for a phylogenetic tree node and all its children.\n\n Args:\n tax_profile (:obj:TaxonomyProfile): taxonomy profile\n dataseries (list of str): either sample identifiers or function identifiers,\n depending on profile type (functional or taxonomic)\n taxid (str): taxonomy identifier of a node of interest\n offset (int): number of starting tabs\n metric (str): scoring metric (default value 'efpkg')\n\n Returns:\n ret_val (str): XML node\n " if (taxid not in tax_profile.tree.data): raise KeyError(taxid, 'not found in the tree!!!') ret_val = (((('\t' * offset) + '<node name="') + tax_profile.tree.data[taxid].name) + '">\n') offset += 1 if tax_profile.tree.data[taxid].attributes: if ((metric != 'readcount') and (metric != 'proteincount')): ret_val += (('\t' * offset) + '<readcount>') for datapoint in dataseries: if (datapoint in tax_profile.tree.data[taxid].attributes): ret_val += (('<val>' + format(tax_profile.tree.data[taxid].attributes[datapoint]['count'], '0.0f')) + '</val>') else: ret_val += '<val>0</val>' ret_val += '</readcount>\n' ret_val += (((('\t' * offset) + '<') + metric) + '>') for datapoint in dataseries: if (datapoint in tax_profile.tree.data[taxid].attributes): ret_val += (('<val>' + format(tax_profile.tree.data[taxid].attributes[datapoint][metric], '0.5f')) + '</val>') else: ret_val += '<val>0.0</val>' ret_val += (((('</' + metric) + '>\n') + ('\t' * offset)) + '<identity>') for datapoint in dataseries: if (datapoint in tax_profile.tree.data[taxid].attributes): ret_val += (('<val>' + format((tax_profile.tree.data[taxid].attributes[datapoint]['identity'] / tax_profile.tree.data[taxid].attributes[datapoint]['hit_count']), '0.1f')) + '</val>') else: ret_val += '<val>0.0</val>' ret_val += '</identity>\n' else: if ((metric != 'readcount') and (metric != 'proteincount')): ret_val += (('\t' * offset) + '<readcount>') ret_val += ('<val>0</val>' * len(dataseries)) ret_val += '</readcount>\n' ret_val += (((('\t' * offset) + '<') + metric) + '>') ret_val += ('<val>0.0</val>' * len(dataseries)) ret_val += (((('<' + metric) + '>\n') + ('\t' * offset)) + '<identity>') ret_val += ('<val>0.0</val>' * len(dataseries)) ret_val += '</identity>\n' if tax_profile.tree.data[taxid].children: for child_taxid in tax_profile.tree.data[taxid].children: ret_val += get_dataseries_tax_xml(tax_profile, dataseries, child_taxid, offset, metric=metric) offset -= 1 ret_val += (('\t' * offset) + '</node>\n') return ret_val
def get_dataseries_tax_xml(tax_profile, dataseries, taxid, offset, metric='efpkg'): "Returns XML node for a phylogenetic tree node and all its children.\n\n Args:\n tax_profile (:obj:TaxonomyProfile): taxonomy profile\n dataseries (list of str): either sample identifiers or function identifiers,\n depending on profile type (functional or taxonomic)\n taxid (str): taxonomy identifier of a node of interest\n offset (int): number of starting tabs\n metric (str): scoring metric (default value 'efpkg')\n\n Returns:\n ret_val (str): XML node\n " if (taxid not in tax_profile.tree.data): raise KeyError(taxid, 'not found in the tree!!!') ret_val = (((('\t' * offset) + '<node name="') + tax_profile.tree.data[taxid].name) + '">\n') offset += 1 if tax_profile.tree.data[taxid].attributes: if ((metric != 'readcount') and (metric != 'proteincount')): ret_val += (('\t' * offset) + '<readcount>') for datapoint in dataseries: if (datapoint in tax_profile.tree.data[taxid].attributes): ret_val += (('<val>' + format(tax_profile.tree.data[taxid].attributes[datapoint]['count'], '0.0f')) + '</val>') else: ret_val += '<val>0</val>' ret_val += '</readcount>\n' ret_val += (((('\t' * offset) + '<') + metric) + '>') for datapoint in dataseries: if (datapoint in tax_profile.tree.data[taxid].attributes): ret_val += (('<val>' + format(tax_profile.tree.data[taxid].attributes[datapoint][metric], '0.5f')) + '</val>') else: ret_val += '<val>0.0</val>' ret_val += (((('</' + metric) + '>\n') + ('\t' * offset)) + '<identity>') for datapoint in dataseries: if (datapoint in tax_profile.tree.data[taxid].attributes): ret_val += (('<val>' + format((tax_profile.tree.data[taxid].attributes[datapoint]['identity'] / tax_profile.tree.data[taxid].attributes[datapoint]['hit_count']), '0.1f')) + '</val>') else: ret_val += '<val>0.0</val>' ret_val += '</identity>\n' else: if ((metric != 'readcount') and (metric != 'proteincount')): ret_val += (('\t' * offset) + '<readcount>') ret_val += ('<val>0</val>' * len(dataseries)) ret_val += '</readcount>\n' ret_val += (((('\t' * offset) + '<') + metric) + '>') ret_val += ('<val>0.0</val>' * len(dataseries)) ret_val += (((('<' + metric) + '>\n') + ('\t' * offset)) + '<identity>') ret_val += ('<val>0.0</val>' * len(dataseries)) ret_val += '</identity>\n' if tax_profile.tree.data[taxid].children: for child_taxid in tax_profile.tree.data[taxid].children: ret_val += get_dataseries_tax_xml(tax_profile, dataseries, child_taxid, offset, metric=metric) offset -= 1 ret_val += (('\t' * offset) + '</node>\n') return ret_val<|docstring|>Returns XML node for a phylogenetic tree node and all its children. Args: tax_profile (:obj:TaxonomyProfile): taxonomy profile dataseries (list of str): either sample identifiers or function identifiers, depending on profile type (functional or taxonomic) taxid (str): taxonomy identifier of a node of interest offset (int): number of starting tabs metric (str): scoring metric (default value 'efpkg') Returns: ret_val (str): XML node<|endoftext|>
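Because the fragment is assembled by plain string concatenation, a quick well-formedness check can catch an unbalanced tag early. This sketch assumes a populated profile, the import path used above, and NCBI's root taxid '1'; the chart writers themselves pass the module's ROOT_TAXONOMY_ID constant.

import xml.etree.ElementTree as ET

fragment = get_dataseries_tax_xml(tax_profile, ['function1', 'function2'],
                                  '1', offset=1, metric='efpkg')
root = ET.fromstring(fragment)                  # raises ParseError on unbalanced tags
print(root.get('name'), len(root.findall('.//node')))   # root taxon name, descendant node count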
34579e30607e9fe71a00c4876b6d84f57522dbf755028043c4d3fe14e9a0be26
def make_function_taxonomy_chart(tax_profile, function_list, outfile, krona_path, metric='efpkg'): 'Writes XML file for taxonomy chart of multiple functions in one sample\n and generates Krona plot from it\n\n Args:\n tax_profile (:obj:TaxonomyProfile): taxonomy profile object\n function_list (list of str): function identifiers\n outfile (str): path for XML output\n krona_path (str): Krona Tools command\n metric (str): scoring metric (efpkg by default)\n ' with open(outfile, 'w') as out: out.write('<krona key="false">\n') out.write((('\t<attributes magnitude="' + metric) + '">\n')) if (metric == 'proteincount'): out.write((('\t\t<attribute display="Protein count">' + metric) + '</attribute>\n')) else: out.write('\t\t<attribute display="Read count">readcount</attribute>\n') if ((metric != 'readcount') and (metric != 'proteincount')): out.write((((('\t\t<attribute display="Score:' + metric) + '">') + metric) + '</attribute>\n')) out.write('\t\t<attribute display="Best hit identity %" mono="true">identity</attribute>\n') out.write('\t</attributes>\n') out.write(('\t<color attribute="identity" valueStart="50" valueEnd="100" hueStart="0" ' + 'hueEnd="240" default="true"></color>\n')) out.write('\t<datasets>\n') for function in function_list: out.write((('\t\t<dataset>' + function) + '</dataset>\n')) out.write('\t</datasets>\n') offset = 1 out.write(get_dataseries_tax_xml(tax_profile, function_list, ROOT_TAXONOMY_ID, offset, metric=metric)) out.write('</krona>') html_file = (outfile + '.html') krona_cmd = [krona_path, '-o', html_file, outfile] run_external_program(krona_cmd)
Writes XML file for taxonomy chart of multiple functions in one sample and generates Krona plot from it Args: tax_profile (:obj:TaxonomyProfile): taxonomy profile object function_list (list of str): function identifiers outfile (str): path for XML output krona_path (str): Krona Tools command metric (str): scoring metric (efpkg by default)
lib/fama/output/krona_xml_writer.py
make_function_taxonomy_chart
aekazakov/FamaProfiling
0
python
def make_function_taxonomy_chart(tax_profile, function_list, outfile, krona_path, metric='efpkg'): 'Writes XML file for taxonomy chart of multiple functions in one sample\n and generates Krona plot from it\n\n Args:\n tax_profile (:obj:TaxonomyProfile): taxonomy profile object\n function_list (list of str): function identifiers\n outfile (str): path for XML output\n krona_path (str): Krona Tools command\n metric (str): scoring metric (efpkg by default)\n ' with open(outfile, 'w') as out: out.write('<krona key="false">\n') out.write((('\t<attributes magnitude="' + metric) + '">\n')) if (metric == 'proteincount'): out.write((('\t\t<attribute display="Protein count">' + metric) + '</attribute>\n')) else: out.write('\t\t<attribute display="Read count">readcount</attribute>\n') if ((metric != 'readcount') and (metric != 'proteincount')): out.write((((('\t\t<attribute display="Score:' + metric) + '">') + metric) + '</attribute>\n')) out.write('\t\t<attribute display="Best hit identity %" mono="true">identity</attribute>\n') out.write('\t</attributes>\n') out.write(('\t<color attribute="identity" valueStart="50" valueEnd="100" hueStart="0" ' + 'hueEnd="240" default="true"></color>\n')) out.write('\t<datasets>\n') for function in function_list: out.write((('\t\t<dataset>' + function) + '</dataset>\n')) out.write('\t</datasets>\n') offset = 1 out.write(get_dataseries_tax_xml(tax_profile, function_list, ROOT_TAXONOMY_ID, offset, metric=metric)) out.write('</krona>') html_file = (outfile + '.html') krona_cmd = [krona_path, '-o', html_file, outfile] run_external_program(krona_cmd)
def make_function_taxonomy_chart(tax_profile, function_list, outfile, krona_path, metric='efpkg'): 'Writes XML file for taxonomy chart of multiple functions in one sample\n and generates Krona plot from it\n\n Args:\n tax_profile (:obj:TaxonomyProfile): taxonomy profile object\n function_list (list of str): function identifiers\n outfile (str): path for XML output\n krona_path (str): Krona Tools command\n metric (str): scoring metric (efpkg by default)\n ' with open(outfile, 'w') as out: out.write('<krona key="false">\n') out.write((('\t<attributes magnitude="' + metric) + '">\n')) if (metric == 'proteincount'): out.write((('\t\t<attribute display="Protein count">' + metric) + '</attribute>\n')) else: out.write('\t\t<attribute display="Read count">readcount</attribute>\n') if ((metric != 'readcount') and (metric != 'proteincount')): out.write((((('\t\t<attribute display="Score:' + metric) + '">') + metric) + '</attribute>\n')) out.write('\t\t<attribute display="Best hit identity %" mono="true">identity</attribute>\n') out.write('\t</attributes>\n') out.write(('\t<color attribute="identity" valueStart="50" valueEnd="100" hueStart="0" ' + 'hueEnd="240" default="true"></color>\n')) out.write('\t<datasets>\n') for function in function_list: out.write((('\t\t<dataset>' + function) + '</dataset>\n')) out.write('\t</datasets>\n') offset = 1 out.write(get_dataseries_tax_xml(tax_profile, function_list, ROOT_TAXONOMY_ID, offset, metric=metric)) out.write('</krona>') html_file = (outfile + '.html') krona_cmd = [krona_path, '-o', html_file, outfile] run_external_program(krona_cmd)<|docstring|>Writes XML file for taxonomy chart of multiple functions in one sample and generates Krona plot from it Args: tax_profile (:obj:TaxonomyProfile): taxonomy profile object function_list (list of str): function identifiers outfile (str): path for XML output krona_path (str): Krona Tools command metric (str): scoring metric (efpkg by default)<|endoftext|>
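A hedged call sketch for the per-function chart; the function identifiers and paths are invented. Each function becomes a Krona <dataset>, so switching datasets in the resulting HTML shows the taxonomic breakdown of that single function.

functions = ['UreA', 'UreB', 'UreC']     # placeholder function identifiers
make_function_taxonomy_chart(tax_profile, functions,
                             outfile='sample1_functions_taxonomy.xml',
                             krona_path='ktImportXML',    # placeholder command
                             metric='efpkg')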
b7cb8a295ac51753882354353d5748a28324e115c42bfceb562ba401a3e825ff
def get_genes_xml(gene_data, gene_ids, dataseries, offset, metric): "Returns XML nodes for all predicted gene from one taxon.\n\n Args:\n gene_data (defaultdict[str,defaultdict[str,dict[str,float]]]): outer key is\n gene identifier, middle key is function identifier, inner key is in\n [metric, 'count', 'identity', 'coverage', 'Length', 'Completeness'],\n value is float.\n gene_ids (list of str): gene identifiers\n dataseries (list of str): either sample identifiers or function identifiers,\n depending on profile type (functional or taxonomic)\n offset (int): number of starting tabs\n metric (str): scoring metric\n\n Returns:\n ret_val (str): XML node\n attribute_values (defaultdict[str,dict[str,float]]): outer key is\n one of dataseries members, inner key is in [metric, 'count', 'identity'\n 'hit_count'], value is float.\n " ret_val = '' for gene_id in gene_ids: ret_val += (((('\t' * offset) + '<node name="') + gene_id) + '">\n') offset += 1 if (metric != 'readcount'): ret_val += (('\t' * offset) + '<readcount>') for datapoint in dataseries: if (datapoint in gene_data[gene_id]): ret_val += (('<val>' + gene_data[gene_id][datapoint]['count']) + '</val>') else: ret_val += '<val>0</val>' ret_val += '</readcount>\n' ret_val += (((('\t' * offset) + '<') + metric) + '>') for datapoint in dataseries: if (datapoint in gene_data[gene_id]): ret_val += (('<val>' + gene_data[gene_id][datapoint][metric]) + '</val>') else: ret_val += '<val>0</val>' ret_val += (('</' + metric) + '>\n') ret_val += (('\t' * offset) + '<coverage>') for datapoint in dataseries: if (datapoint in gene_data[gene_id]): ret_val += (('<val>' + gene_data[gene_id][datapoint]['coverage']) + '</val>') else: ret_val += '<val>0</val>' ret_val += '</coverage>\n' ret_val += (('\t' * offset) + '<identity>') for datapoint in dataseries: if (datapoint in gene_data[gene_id]): ret_val += (('<val>' + gene_data[gene_id][datapoint]['identity']) + '</val>') else: ret_val += '<val>0</val>' ret_val += '</identity>\n' ret_val += (('\t' * offset) + '<Length>') for datapoint in dataseries: if (datapoint in gene_data[gene_id]): ret_val += (('<val>' + gene_data[gene_id][datapoint]['Length']) + '</val>') else: ret_val += '<val>0</val>' ret_val += '</Length>\n' ret_val += (('\t' * offset) + '<Completeness>') for datapoint in dataseries: if (datapoint in gene_data[gene_id]): ret_val += (('<val>' + gene_data[gene_id][datapoint]['Completeness']) + '</val>') else: ret_val += '<val>0</val>' ret_val += '</Completeness>\n' ret_val += (('\t' * offset) + '<best_hit>') for datapoint in dataseries: if ((datapoint in gene_data[gene_id]) and ('Best hit' in gene_data[gene_id][datapoint])): ret_val += (((('<val href="' + gene_data[gene_id][datapoint]['Best hit']) + '">') + gene_data[gene_id][datapoint]['Best hit']) + '</val>') else: ret_val += '<val></val>' ret_val += '</best_hit>\n' offset -= 1 ret_val += (('\t' * offset) + '</node>\n') return ret_val
Returns XML nodes for all predicted gene from one taxon. Args: gene_data (defaultdict[str,defaultdict[str,dict[str,float]]]): outer key is gene identifier, middle key is function identifier, inner key is in [metric, 'count', 'identity', 'coverage', 'Length', 'Completeness'], value is float. gene_ids (list of str): gene identifiers dataseries (list of str): either sample identifiers or function identifiers, depending on profile type (functional or taxonomic) offset (int): number of starting tabs metric (str): scoring metric Returns: ret_val (str): XML node attribute_values (defaultdict[str,dict[str,float]]): outer key is one of dataseries members, inner key is in [metric, 'count', 'identity' 'hit_count'], value is float.
lib/fama/output/krona_xml_writer.py
get_genes_xml
aekazakov/FamaProfiling
0
python
def get_genes_xml(gene_data, gene_ids, dataseries, offset, metric): "Returns XML nodes for all predicted gene from one taxon.\n\n Args:\n gene_data (defaultdict[str,defaultdict[str,dict[str,float]]]): outer key is\n gene identifier, middle key is function identifier, inner key is in\n [metric, 'count', 'identity', 'coverage', 'Length', 'Completeness'],\n value is float.\n gene_ids (list of str): gene identifiers\n dataseries (list of str): either sample identifiers or function identifiers,\n depending on profile type (functional or taxonomic)\n offset (int): number of starting tabs\n metric (str): scoring metric\n\n Returns:\n ret_val (str): XML node\n attribute_values (defaultdict[str,dict[str,float]]): outer key is\n one of dataseries members, inner key is in [metric, 'count', 'identity'\n 'hit_count'], value is float.\n " ret_val = '' for gene_id in gene_ids: ret_val += (((('\t' * offset) + '<node name="') + gene_id) + '">\n') offset += 1 if (metric != 'readcount'): ret_val += (('\t' * offset) + '<readcount>') for datapoint in dataseries: if (datapoint in gene_data[gene_id]): ret_val += (('<val>' + gene_data[gene_id][datapoint]['count']) + '</val>') else: ret_val += '<val>0</val>' ret_val += '</readcount>\n' ret_val += (((('\t' * offset) + '<') + metric) + '>') for datapoint in dataseries: if (datapoint in gene_data[gene_id]): ret_val += (('<val>' + gene_data[gene_id][datapoint][metric]) + '</val>') else: ret_val += '<val>0</val>' ret_val += (('</' + metric) + '>\n') ret_val += (('\t' * offset) + '<coverage>') for datapoint in dataseries: if (datapoint in gene_data[gene_id]): ret_val += (('<val>' + gene_data[gene_id][datapoint]['coverage']) + '</val>') else: ret_val += '<val>0</val>' ret_val += '</coverage>\n' ret_val += (('\t' * offset) + '<identity>') for datapoint in dataseries: if (datapoint in gene_data[gene_id]): ret_val += (('<val>' + gene_data[gene_id][datapoint]['identity']) + '</val>') else: ret_val += '<val>0</val>' ret_val += '</identity>\n' ret_val += (('\t' * offset) + '<Length>') for datapoint in dataseries: if (datapoint in gene_data[gene_id]): ret_val += (('<val>' + gene_data[gene_id][datapoint]['Length']) + '</val>') else: ret_val += '<val>0</val>' ret_val += '</Length>\n' ret_val += (('\t' * offset) + '<Completeness>') for datapoint in dataseries: if (datapoint in gene_data[gene_id]): ret_val += (('<val>' + gene_data[gene_id][datapoint]['Completeness']) + '</val>') else: ret_val += '<val>0</val>' ret_val += '</Completeness>\n' ret_val += (('\t' * offset) + '<best_hit>') for datapoint in dataseries: if ((datapoint in gene_data[gene_id]) and ('Best hit' in gene_data[gene_id][datapoint])): ret_val += (((('<val href="' + gene_data[gene_id][datapoint]['Best hit']) + '">') + gene_data[gene_id][datapoint]['Best hit']) + '</val>') else: ret_val += '<val></val>' ret_val += '</best_hit>\n' offset -= 1 ret_val += (('\t' * offset) + '</node>\n') return ret_val
def get_genes_xml(gene_data, gene_ids, dataseries, offset, metric): "Returns XML nodes for all predicted genes from one taxon.\n\n Args:\n gene_data (defaultdict[str,defaultdict[str,dict[str,float]]]): outer key is\n gene identifier, middle key is function identifier, inner key is in\n [metric, 'count', 'identity', 'coverage', 'Length', 'Completeness'],\n value is float.\n gene_ids (list of str): gene identifiers\n dataseries (list of str): either sample identifiers or function identifiers,\n depending on profile type (functional or taxonomic)\n offset (int): number of starting tabs\n metric (str): scoring metric\n\n Returns:\n ret_val (str): XML nodes for all genes, concatenated into one string.\n " ret_val = '' for gene_id in gene_ids: ret_val += (((('\t' * offset) + '<node name="') + gene_id) + '">\n') offset += 1 if (metric != 'readcount'): ret_val += (('\t' * offset) + '<readcount>') for datapoint in dataseries: if (datapoint in gene_data[gene_id]): ret_val += (('<val>' + gene_data[gene_id][datapoint]['count']) + '</val>') else: ret_val += '<val>0</val>' ret_val += '</readcount>\n' ret_val += (((('\t' * offset) + '<') + metric) + '>') for datapoint in dataseries: if (datapoint in gene_data[gene_id]): ret_val += (('<val>' + gene_data[gene_id][datapoint][metric]) + '</val>') else: ret_val += '<val>0</val>' ret_val += (('</' + metric) + '>\n') ret_val += (('\t' * offset) + '<coverage>') for datapoint in dataseries: if (datapoint in gene_data[gene_id]): ret_val += (('<val>' + gene_data[gene_id][datapoint]['coverage']) + '</val>') else: ret_val += '<val>0</val>' ret_val += '</coverage>\n' ret_val += (('\t' * offset) + '<identity>') for datapoint in dataseries: if (datapoint in gene_data[gene_id]): ret_val += (('<val>' + gene_data[gene_id][datapoint]['identity']) + '</val>') else: ret_val += '<val>0</val>' ret_val += '</identity>\n' ret_val += (('\t' * offset) + '<Length>') for datapoint in dataseries: if (datapoint in gene_data[gene_id]): ret_val += (('<val>' + gene_data[gene_id][datapoint]['Length']) + '</val>') else: ret_val += '<val>0</val>' ret_val += '</Length>\n' ret_val += (('\t' * offset) + '<Completeness>') for datapoint in dataseries: if (datapoint in gene_data[gene_id]): ret_val += (('<val>' + gene_data[gene_id][datapoint]['Completeness']) + '</val>') else: ret_val += '<val>0</val>' ret_val += '</Completeness>\n' ret_val += (('\t' * offset) + '<best_hit>') for datapoint in dataseries: if ((datapoint in gene_data[gene_id]) and ('Best hit' in gene_data[gene_id][datapoint])): ret_val += (((('<val href="' + gene_data[gene_id][datapoint]['Best hit']) + '">') + gene_data[gene_id][datapoint]['Best hit']) + '</val>') else: ret_val += '<val></val>' ret_val += '</best_hit>\n' offset -= 1 ret_val += (('\t' * offset) + '</node>\n') return ret_val<|docstring|>Returns XML nodes for all predicted genes from one taxon. Args: gene_data (defaultdict[str,defaultdict[str,dict[str,float]]]): outer key is gene identifier, middle key is function identifier, inner key is in [metric, 'count', 'identity', 'coverage', 'Length', 'Completeness'], value is float.
gene_ids (list of str): gene identifiers dataseries (list of str): either sample identifiers or function identifiers, depending on profile type (functional or taxonomic) offset (int): number of starting tabs metric (str): scoring metric Returns: ret_val (str): XML nodes for all genes, concatenated into one string.<|endoftext|>
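
A minimal usage sketch for the get_genes_xml entry above. The gene identifiers, the sample name 'sample1' and all scores are invented for illustration; 'efpkg' is used as the metric because it appears as the default in the next entry. Note that the function concatenates the stored values into the XML string with +, so this sketch supplies them as pre-formatted strings rather than floats.

from collections import defaultdict

# Hypothetical input: two predicted genes scored against a single sample 'sample1'.
# Values are pre-formatted strings because get_genes_xml concatenates them as-is.
gene_data = defaultdict(lambda: defaultdict(dict))
gene_data['gene_0001']['sample1'] = {
    'count': '12', 'efpkg': '0.0034567', 'coverage': '15.2', 'identity': '87.5',
    'Length': '324', 'Completeness': '100.0',
    'Best hit': 'WP_012345678.1',  # invented protein accession
}
gene_data['gene_0002']['sample1'] = {
    'count': '3', 'efpkg': '0.0008910', 'coverage': '4.7', 'identity': '74.1',
    'Length': '198', 'Completeness': '61.0',
}

# Two leading tabs of indentation, scored by the (assumed) 'efpkg' metric.
# Assumes get_genes_xml from the entry above is defined in the current session.
xml_chunk = get_genes_xml(gene_data, ['gene_0001', 'gene_0002'],
                          dataseries=['sample1'], offset=2, metric='efpkg')
print(xml_chunk)

The output is a flat string of <node> elements ready to be nested inside a larger Krona XML tree by the caller.
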
435a95c0d449a031123c4679314077239c65b578860e7122b4397d50a7895de9
def get_assembly_tax_xml(tax_profile, genes, dataseries, taxid, offset, metric='efpkg'): "Returns XML node for assembly phylogenetic tree node and all its children.\n\n Args:\n tax_profile (:obj:TaxonomyProfile): taxonomy profile\n genes (defaultdict[str,defaultdict[str,dict[str,float]]]): outer key is\n gene identifier, middle key is function identifier, inner key is in\n [metric, 'count', 'identity', 'coverage', 'Length', 'Completeness'],\n value is float (genes[gene_id][function_id][parameter_name] = parameter_value).\n dataseries (list of str): function identifiers\n taxid (str): taxonomy identifier of a node of interest\n offset (int): number of starting tabs\n metric (str): scoring metric (default value 'efpkg')\n\n Returns:\n ret_val (str): XML node\n " if (taxid not in tax_profile.tree.data): raise KeyError(taxid, 'not found in the tree!!!') ret_val = (((((('\t' * offset) + '<node name="') + taxid) + ':') + tax_profile.tree.data[taxid].name) + '">\n') offset += 1 if tax_profile.tree.data[taxid].attributes: if (metric != 'readcount'): ret_val += (('\t' * offset) + '<readcount>') for datapoint in dataseries: if (datapoint in tax_profile.tree.data[taxid].attributes): ret_val += (('<val>' + format(tax_profile.tree.data[taxid].attributes[datapoint]['count'], '0.0f')) + '</val>') else: ret_val += '<val>0</val>' ret_val += '</readcount>\n' ret_val += (((('\t' * offset) + '<') + metric) + '>') for datapoint in dataseries: if (datapoint in tax_profile.tree.data[taxid].attributes): ret_val += (('<val>' + format(tax_profile.tree.data[taxid].attributes[datapoint][metric], '0.7f')) + '</val>') else: ret_val += '<val>0.0</val>' ret_val += (((('</' + metric) + '>\n') + ('\t' * offset)) + '<identity>') for datapoint in dataseries: if ((datapoint in tax_profile.tree.data[taxid].attributes) and (tax_profile.tree.data[taxid].attributes[datapoint]['hit_count'] > 0)): ret_val += (('<val>' + format((tax_profile.tree.data[taxid].attributes[datapoint]['identity'] / tax_profile.tree.data[taxid].attributes[datapoint]['hit_count']), '0.1f')) + '</val>') else: ret_val += '<val>0.0</val>' ret_val += '</identity>\n' gene_ids = set() for datapoint in tax_profile.tree.data[taxid].attributes: if ('genes' in tax_profile.tree.data[taxid].attributes[datapoint]): for gene_id in tax_profile.tree.data[taxid].attributes[datapoint]['genes'].split(' '): gene_ids.add(gene_id) ret_val += get_genes_xml(genes, sorted(gene_ids), dataseries, offset, metric) else: if (metric != 'readcount'): ret_val += (('\t' * offset) + '<readcount>') ret_val += ('<val>0</val>' * len(dataseries)) ret_val += '</readcount>\n' ret_val += (((('\t' * offset) + '<') + metric) + '>') ret_val += ('<val>0.0</val>' * len(dataseries)) ret_val += (((('</' + metric) + '>\n') + ('\t' * offset)) + '<identity>') ret_val += ('<val>0.0%</val>' * len(dataseries)) ret_val += '</identity>\n' if tax_profile.tree.data[taxid].children: for child_taxid in tax_profile.tree.data[taxid].children: ret_val += get_assembly_tax_xml(tax_profile, genes, dataseries, child_taxid, offset, metric) offset -= 1 ret_val += (('\t' * offset) + '</node>\n') return ret_val
Returns XML node for assembly phylogenetic tree node and all its children. Args: tax_profile (:obj:TaxonomyProfile): taxonomy profile genes (defaultdict[str,defaultdict[str,dict[str,float]]]): outer key is gene identifier, middle key is function identifier, inner key is in [metric, 'count', 'identity', 'coverage', 'Length', 'Completeness'], value is float (genes[gene_id][function_id][parameter_name] = parameter_value). dataseries (list of str): function identifiers taxid (str): taxonomy identifier of a node of interest offset (int): number of starting tabs metric (str): scoring metric (default value 'efpkg') Returns: ret_val (str): XML node
lib/fama/output/krona_xml_writer.py
get_assembly_tax_xml
aekazakov/FamaProfiling
0
python
def get_assembly_tax_xml(tax_profile, genes, dataseries, taxid, offset, metric='efpkg'): "Returns XML node for assembly phylogenetic tree node and all its children.\n\n Args:\n tax_profile (:obj:TaxonomyProfile): taxonomy profile\n genes (defaultdict[str,defaultdict[str,dict[str,float]]]): outer key is\n gene identifier, middle key is function identifier, inner key is in\n [metric, 'count', 'identity', 'coverage', 'Length', 'Completeness'],\n value is float (genes[gene_id][function_id][parameter_name] = parameter_value).\n dataseries (list of str): function identifiers\n taxid (str): taxonomy identifier of a node of interest\n offset (int): number of starting tabs\n metric (str): scoring metric (default value 'efpkg')\n\n Returns:\n ret_val (str): XML node\n " if (taxid not in tax_profile.tree.data): raise KeyError(taxid, 'not found in the tree!!!') ret_val = (((((('\t' * offset) + '<node name="') + taxid) + ':') + tax_profile.tree.data[taxid].name) + '">\n') offset += 1 if tax_profile.tree.data[taxid].attributes: if (metric != 'readcount'): ret_val += (('\t' * offset) + '<readcount>') for datapoint in dataseries: if (datapoint in tax_profile.tree.data[taxid].attributes): ret_val += (('<val>' + format(tax_profile.tree.data[taxid].attributes[datapoint]['count'], '0.0f')) + '</val>') else: ret_val += '<val>0</val>' ret_val += '</readcount>\n' ret_val += (((('\t' * offset) + '<') + metric) + '>') for datapoint in dataseries: if (datapoint in tax_profile.tree.data[taxid].attributes): ret_val += (('<val>' + format(tax_profile.tree.data[taxid].attributes[datapoint][metric], '0.7f')) + '</val>') else: ret_val += '<val>0.0</val>' ret_val += (((('</' + metric) + '>\n') + ('\t' * offset)) + '<identity>') for datapoint in dataseries: if ((datapoint in tax_profile.tree.data[taxid].attributes) and (tax_profile.tree.data[taxid].attributes[datapoint]['hit_count'] > 0)): ret_val += (('<val>' + format((tax_profile.tree.data[taxid].attributes[datapoint]['identity'] / tax_profile.tree.data[taxid].attributes[datapoint]['hit_count']), '0.1f')) + '</val>') else: ret_val += '<val>0.0</val>' ret_val += '</identity>\n' gene_ids = set() for datapoint in tax_profile.tree.data[taxid].attributes: if ('genes' in tax_profile.tree.data[taxid].attributes[datapoint]): for gene_id in tax_profile.tree.data[taxid].attributes[datapoint]['genes'].split(' '): gene_ids.add(gene_id) ret_val += get_genes_xml(genes, sorted(gene_ids), dataseries, offset, metric) else: if (metric != 'readcount'): ret_val += (('\t' * offset) + '<readcount>') ret_val += ('<val>0</val>' * len(dataseries)) ret_val += '</readcount>\n' ret_val += (((('\t' * offset) + '<') + metric) + '>') ret_val += ('<val>0.0</val>' * len(dataseries)) ret_val += (((('</' + metric) + '>\n') + ('\t' * offset)) + '<identity>') ret_val += ('<val>0.0%</val>' * len(dataseries)) ret_val += '</identity>\n' if tax_profile.tree.data[taxid].children: for child_taxid in tax_profile.tree.data[taxid].children: ret_val += get_assembly_tax_xml(tax_profile, genes, dataseries, child_taxid, offset, metric) offset -= 1 ret_val += (('\t' * offset) + '</node>\n') return ret_val
def get_assembly_tax_xml(tax_profile, genes, dataseries, taxid, offset, metric='efpkg'): "Returns XML node for assembly phylogenetic tree node and all its children.\n\n Args:\n tax_profile (:obj:TaxonomyProfile): taxonomy profile\n genes (defaultdict[str,defaultdict[str,dict[str,float]]]): outer key is\n gene identifier, middle key is function identifier, inner key is in\n [metric, 'count', 'identity', 'coverage', 'Length', 'Completeness'],\n value is float (genes[gene_id][function_id][parameter_name] = parameter_value).\n dataseries (list of str): function identifiers\n taxid (str): taxonomy identifier of a node of interest\n offset (int): number of starting tabs\n metric (str): scoring metric (default value 'efpkg')\n\n Returns:\n ret_val (str): XML node\n " if (taxid not in tax_profile.tree.data): raise KeyError(taxid, 'not found in the tree!!!') ret_val = (((((('\t' * offset) + '<node name="') + taxid) + ':') + tax_profile.tree.data[taxid].name) + '">\n') offset += 1 if tax_profile.tree.data[taxid].attributes: if (metric != 'readcount'): ret_val += (('\t' * offset) + '<readcount>') for datapoint in dataseries: if (datapoint in tax_profile.tree.data[taxid].attributes): ret_val += (('<val>' + format(tax_profile.tree.data[taxid].attributes[datapoint]['count'], '0.0f')) + '</val>') else: ret_val += '<val>0</val>' ret_val += '</readcount>\n' ret_val += (((('\t' * offset) + '<') + metric) + '>') for datapoint in dataseries: if (datapoint in tax_profile.tree.data[taxid].attributes): ret_val += (('<val>' + format(tax_profile.tree.data[taxid].attributes[datapoint][metric], '0.7f')) + '</val>') else: ret_val += '<val>0.0</val>' ret_val += (((('</' + metric) + '>\n') + ('\t' * offset)) + '<identity>') for datapoint in dataseries: if ((datapoint in tax_profile.tree.data[taxid].attributes) and (tax_profile.tree.data[taxid].attributes[datapoint]['hit_count'] > 0)): ret_val += (('<val>' + format((tax_profile.tree.data[taxid].attributes[datapoint]['identity'] / tax_profile.tree.data[taxid].attributes[datapoint]['hit_count']), '0.1f')) + '</val>') else: ret_val += '<val>0.0</val>' ret_val += '</identity>\n' gene_ids = set() for datapoint in tax_profile.tree.data[taxid].attributes: if ('genes' in tax_profile.tree.data[taxid].attributes[datapoint]): for gene_id in tax_profile.tree.data[taxid].attributes[datapoint]['genes'].split(' '): gene_ids.add(gene_id) ret_val += get_genes_xml(genes, sorted(gene_ids), dataseries, offset, metric) else: if (metric != 'readcount'): ret_val += (('\t' * offset) + '<readcount>') ret_val += ('<val>0</val>' * len(dataseries)) ret_val += '</readcount>\n' ret_val += (((('\t' * offset) + '<') + metric) + '>') ret_val += ('<val>0.0</val>' * len(dataseries)) ret_val += (((('</' + metric) + '>\n') + ('\t' * offset)) + '<identity>') ret_val += ('<val>0.0%</val>' * len(dataseries)) ret_val += '</identity>\n' if tax_profile.tree.data[taxid].children: for child_taxid in tax_profile.tree.data[taxid].children: ret_val += get_assembly_tax_xml(tax_profile, genes, dataseries, child_taxid, offset, metric) offset -= 1 ret_val += (('\t' * offset) + '</node>\n') return ret_val<|docstring|>Returns XML node for assembly phylogenetic tree node and all its children. 
Args: tax_profile (:obj:TaxonomyProfile): taxonomy profile genes (defaultdict[str,defaultdict[str,dict[str,float]]]): outer key is gene identifier, middle key is function identifier, inner key is in [metric, 'count', 'identity', 'coverage', 'Length', 'Completeness'], value is float (genes[gene_id][function_id][parameter_name] = parameter_value). dataseries (list of str): function identifiers taxid (str): taxonomy identifier of a node of interest offset (int): number of starting tabs metric (str): scoring metric (default value 'efpkg') Returns: ret_val (str): XML node<|endoftext|>
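
A hedged sketch of how the get_assembly_tax_xml entry above might be exercised. The real TaxonomyProfile class lives elsewhere in Fama and is not shown in this entry, so a minimal stand-in exposing the same tree.data layout (name, attributes, children per node) is used; the taxa, the 'Nitrogenase' function identifier and all scores are invented. Node-level attribute values are numeric because the function passes them through format(), while the per-gene values handed on to get_genes_xml are strings, as in the previous sketch.

from collections import defaultdict
from types import SimpleNamespace

# Minimal stand-in for Fama's TaxonomyProfile: only the pieces read by
# get_assembly_tax_xml are provided (tree.data[taxid].name/.attributes/.children).
root = SimpleNamespace(name='root', children=['2'], attributes={
    'Nitrogenase': {'count': 15.0, 'efpkg': 0.004, 'identity': 1305.0, 'hit_count': 15.0}})
bacteria = SimpleNamespace(name='Bacteria', children=[], attributes={
    'Nitrogenase': {'count': 15.0, 'efpkg': 0.004, 'identity': 1305.0, 'hit_count': 15.0,
                    'genes': 'gene_0001 gene_0002'}})
tax_profile = SimpleNamespace(tree=SimpleNamespace(data={'1': root, '2': bacteria}))

# Per-gene values are strings because get_genes_xml concatenates them directly.
genes = defaultdict(lambda: defaultdict(dict))
genes['gene_0001']['Nitrogenase'] = {'count': '10', 'efpkg': '0.0025', 'coverage': '12.0',
                                     'identity': '88.0', 'Length': '310', 'Completeness': '100.0'}
genes['gene_0002']['Nitrogenase'] = {'count': '5', 'efpkg': '0.0015', 'coverage': '6.1',
                                     'identity': '85.5', 'Length': '275', 'Completeness': '90.0'}

# Assumes both functions from the entries above are defined in the current session.
xml = get_assembly_tax_xml(tax_profile, genes, dataseries=['Nitrogenase'],
                           taxid='1', offset=1, metric='efpkg')
print(xml)

Walking the tree from taxid '1' recurses into '2' (Bacteria), where the 'genes' attribute triggers the per-gene nodes, so the printed string contains the taxonomy nodes with the two gene nodes nested under Bacteria.
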