Dataset columns: question (string, length 11 to 28.2k), answer (string, length 26 to 27.7k), tag (string, 130 classes), question_id (int64, 935 to 78.4M), score (int64, 10 to 5.49k).
This official document shows how to run a command via a YAML config file: https://kubernetes.io/docs/tasks/configure-pod-container/ apiVersion: v1 kind: Pod metadata: name: hello-world spec: # specification of the pod’s contents restartPolicy: Never containers: - name: hello image: "ubuntu:14.04" env: - name: MESSAGE value: "hello world" command: ["/bin/sh","-c"] args: ["/bin/echo \"${MESSAGE}\""] If I want to run more than one command, how do I do that?
command: ["/bin/sh","-c"] args: ["command one; command two && command three"] Explanation: The command ["/bin/sh", "-c"] says "run a shell, and execute the following instructions". The args are then passed as commands to the shell. In shell scripting a semicolon separates commands, and && conditionally runs the following command if the first succeeds. In the above example, it always runs command one followed by command two, and only runs command three if command two succeeded. Alternative: In many cases, some of the commands you want to run are probably setting up the final command to run. In this case, building your own Dockerfile is the way to go. Look at the RUN directive in particular.
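For completeness, here is a minimal, hedged Pod sketch that chains several commands through the shell; the pod name and the chained commands are purely illustrative:

apiVersion: v1
kind: Pod
metadata:
  name: multi-command-demo   # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: demo
    image: ubuntu:14.04
    command: ["/bin/sh", "-c"]
    # The whole args string is one shell script: echo runs first,
    # date runs next, and ls only runs if date succeeded.
    args: ["echo hello; date && ls /"]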
Kubernetes
33,887,194
222
I am trying to build and deploy microservices images to a single-node Kubernetes cluster running on my development machine using minikube. I am using the cloud-native microservices demo application Online Boutique by Google to understand the use of technologies like Kubernetes, Istio etc. Link to github repo: microservices-demo While following the installation process, and on running command skaffold run to build and deploy my application, I get some errors: Step 10/11 : RUN apt-get -qq update && apt-get install -y --no-install-recommends curl ---> Running in 43d61232617c W: GPG error: http://deb.debian.org/debian buster InRelease: At least one invalid signature was encountered. E: The repository 'http://deb.debian.org/debian buster InRelease' is not signed. W: GPG error: http://deb.debian.org/debian buster-updates InRelease: At least one invalid signature was encountered. E: The repository 'http://deb.debian.org/debian buster-updates InRelease' is not signed. W: GPG error: http://security.debian.org/debian-security buster/updates InRelease: At least one invalid signature was encountered. E: The repository 'http://security.debian.org/debian-security buster/updates InRelease' is not signed. failed to build: couldn't build "loadgenerator": unable to stream build output: The command '/bin/sh -c apt-get -qq update && apt-get install -y --no-install-recommends curl' returned a non-zero code: 100 I receive these errors when trying to build loadgenerator. How can I resolve this issue?
There are a few reasons why you encounter these errors: There might be an issue with the existing cache and/or disk space. In order to fix it you need to clear the APT cache by executing: sudo apt-get clean and sudo apt-get update. The same goes for existing Docker images. Execute: docker image prune -f and docker container prune -f in order to remove unused data and free disk space. Executing docker image prune -f will delete all the unused images. To delete some selective images of large size, run docker images and identify the images you want to remove, and then run docker rmi -f <IMAGE-ID1> <IMAGE-ID2> <IMAGE-ID3>. If you don't care about the security risks, you can try to run the apt-get command with the --allow-unauthenticated or --allow-insecure-repositories flag. According to the docs: Ignore if packages can't be authenticated and don't prompt about it. This can be useful while working with local repositories, but is a huge security risk if data authenticity isn't ensured in another way by the user itself. Please let me know if that helped.
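If you do accept that security trade-off, a hedged sketch of how the failing RUN step in the loadgenerator Dockerfile could be adapted (both flags are standard apt-get options; treat this as a last resort after cleaning the cache and freeing disk space):

# Sketch only: bypasses signature verification, which is a security risk.
RUN apt-get -qq update --allow-insecure-repositories && \
    apt-get install -y --no-install-recommends --allow-unauthenticated curl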
Kubernetes
62,473,932
221
I've read a couple of passages from some books written on Kubernetes as well as the page on headless services in the docs. But I'm still unsure what it actually does and why someone would use it. Does anyone have a good understanding of it, what it accomplishes, and why someone would use it?
Well, I think you need some theory. There are many explanations (including the official docs) across the whole internet, but I think Marco Luksa did it the best: Each connection to the service is forwarded to one randomly selected backing pod. But what if the client needs to connect to all of those pods? What if the backing pods themselves need to each connect to all the other backing pods. Connecting through the service clearly isn’t the way to do this. What is? For a client to connect to all pods, it needs to figure out the IP of each individual pod. One option is to have the client call the Kubernetes API server and get the list of pods and their IP addresses through an API call, but because you should always strive to keep your apps Kubernetes-agnostic, using the API server isn’t ideal Luckily, Kubernetes allows clients to discover pod IPs through DNS lookups. Usually, when you perform a DNS lookup for a service, the DNS server returns a single IP — the service’s cluster IP. But if you tell Kubernetes you don’t need a cluster IP for your service (you do this by setting the clusterIP field to None in the service specification ), the DNS server will return the pod IPs instead of the single service IP. Instead of returning a single DNS A record, the DNS server will return multiple A records for the service, each pointing to the IP of an individual pod backing the service at that moment. Clients can therefore do a simple DNS A record lookup and get the IPs of all the pods that are part of the service. The client can then use that information to connect to one, many, or all of them. Setting the clusterIP field in a service spec to None makes the service headless, as Kubernetes won’t assign it a cluster IP through which clients could connect to the pods backing it. "Kubernetes in Action" by Marco Luksa
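To make that concrete, a minimal headless Service sketch; the service name, selector label and port are placeholders:

apiVersion: v1
kind: Service
metadata:
  name: my-headless-service   # hypothetical name
spec:
  clusterIP: None             # this is what makes the service headless
  selector:
    app: my-app               # illustrative label matching the backing pods
  ports:
  - port: 80

A DNS lookup for my-headless-service within the cluster then returns one A record per ready pod instead of a single cluster IP.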
Kubernetes
52,707,840
220
This is what I keep getting: [root@centos-master ~]# kubectl get pods NAME READY STATUS RESTARTS AGE nfs-server-h6nw8 1/1 Running 0 1h nfs-web-07rxz 0/1 CrashLoopBackOff 8 16m nfs-web-fdr9h 0/1 CrashLoopBackOff 8 16m Below is output from describe pods kubectl describe pods Events: FirstSeen LastSeen Count From SubobjectPath Type Reason Message --------- -------- ----- ---- ------------- -------- ------ ------- 16m 16m 1 {default-scheduler } Normal Scheduled Successfully assigned nfs-web-fdr9h to centos-minion-2 16m 16m 1 {kubelet centos-minion-2} spec.containers{web} Normal Created Created container with docker id 495fcbb06836 16m 16m 1 {kubelet centos-minion-2} spec.containers{web} Normal Started Started container with docker id 495fcbb06836 16m 16m 1 {kubelet centos-minion-2} spec.containers{web} Normal Started Started container with docker id d56f34ae4e8f 16m 16m 1 {kubelet centos-minion-2} spec.containers{web} Normal Created Created container with docker id d56f34ae4e8f 16m 16m 2 {kubelet centos-minion-2} Warning FailedSync Error syncing pod, skipping: failed to "StartContainer" for "web" with CrashLoopBackOff: "Back-off 10s restarting failed container=web pod=nfs-web-fdr9h_default(461c937d-d870-11e6-98de-005056040cc2)" I have two pods: nfs-web-07rxz, nfs-web-fdr9h, but if I do kubectl logs nfs-web-07rxz or with -p option I don't see any log in both pods. [root@centos-master ~]# kubectl logs nfs-web-07rxz -p [root@centos-master ~]# kubectl logs nfs-web-07rxz This is my replicationController yaml file: replicationController yaml file apiVersion: v1 kind: ReplicationController metadata: name: nfs-web spec: replicas: 2 selector: role: web-frontend template: metadata: labels: role: web-frontend spec: containers: - name: web image: eso-cmbu-docker.artifactory.eng.vmware.com/demo-container:demo-version3.0 ports: - name: web containerPort: 80 securityContext: privileged: true My Docker image was made from this simple docker file: FROM ubuntu RUN apt-get update RUN apt-get install -y nginx RUN apt-get install -y nfs-common I am running my kubernetes cluster on CentOs-1611, kube version: [root@centos-master ~]# kubectl version Client Version: version.Info{Major:"1", Minor:"3", GitVersion:"v1.3.0", GitCommit:"86dc49aa137175378ac7fba7751c3d3e7f18e5fc", GitTreeState:"clean", BuildDate:"2016-12-15T16:57:18Z", GoVersion:"go1.6.3", Compiler:"gc", Platform:"linux/amd64"} Server Version: version.Info{Major:"1", Minor:"3", GitVersion:"v1.3.0", GitCommit:"86dc49aa137175378ac7fba7751c3d3e7f18e5fc", GitTreeState:"clean", BuildDate:"2016-12-15T16:57:18Z", GoVersion:"go1.6.3", Compiler:"gc", Platform:"linux/amd64"} If I run the docker image by docker run I was able to run the image without any issue, only through kubernetes I got the crash. Can someone help me out, how can I debug without seeing any log?
As @Sukumar commented, your Dockerfile needs to define a command to run, or your ReplicationController must specify one. The pod is crashing because it starts up and then immediately exits, so Kubernetes restarts it and the cycle continues.
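As a hedged sketch, assuming the image is meant to serve nginx (as the question's Dockerfile suggests), adding a long-running foreground process keeps the container alive:

# Dockerfile sketch: keep nginx in the foreground so the container doesn't exit
FROM ubuntu
RUN apt-get update && apt-get install -y nginx nfs-common
CMD ["nginx", "-g", "daemon off;"]

Alternatively, the same effect can be achieved by adding a command field to the container spec in the ReplicationController.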
Kubernetes
41,604,499
217
I have a pod test-1495806908-xn5jn with 2 containers. I'd like to restart one of them called container-test. Is it possible to restart a single container within a pod and how? If not, how do I restart the pod? The pod was created using a deployment.yaml with: kubectl create -f deployment.yaml
Is it possible to restart a single container? Not through kubectl, although depending on the setup of your cluster you can "cheat" and run docker kill the-sha-goes-here, which will cause kubelet to restart the "failed" container (assuming, of course, the restart policy for the Pod says that is what it should do). How do I restart the pod? That depends on how the Pod was created, but based on the Pod name you provided, it appears to be under the oversight of a ReplicaSet, so you can just kubectl delete pod test-1495806908-xn5jn and Kubernetes will create a new one in its place (the new Pod will have a different name, so do not expect kubectl get pods to return test-1495806908-xn5jn ever again).
Kubernetes
46,123,457
210
I inherited a Kubernetes/Docker setup, and I accidentally crashed the pod by changing something relating to the DB password. I am trying to troubleshoot this. I don't have much Kubernetes or Docker experience, so I'm still learning how to do things. The value is contained inside the db-user-pass credential I believe, which is an Opaque type secret. I'm describing it: kubectl describe secrets/db-user-pass Name: db-user-pass Namespace: default Labels: <none> Annotations: <none> Type: Opaque Data ==== password: 16 bytes username: 13 bytes but I have no clue how to get any data from this secret. The example on the Kubernetes site seems to assume I'll have a base64 encoded string, but I can't even seem to get that. How do I get the value for this?
You can use kubectl get secrets/db-user-pass -o yaml or -o json where you'll see the base64-encoded username and password. You can then copy the value and decode it with something like echo <ENCODED_VALUE> | base64 -d. A more compact one-liner for this: kubectl get secrets/db-user-pass --template={{.data.password}} | base64 -d and likewise for the username: kubectl get secrets/db-user-pass --template={{.data.username}} | base64 -d
Kubernetes
56,909,180
207
I've been doing a lot of digging on Kubernetes, and I'm liking what I see a lot! One thing I've been unable to get a clear idea about is what the exact distinctions are between the Deployment and StatefulSet resources and in which scenarios you would use each (or whether one is generally preferred over the other).
Deployments and ReplicationControllers are meant for stateless usage and are rather lightweight. StatefulSets are used when state has to be persisted. Therefore the latter use volumeClaimTemplates / claims on persistent volumes to ensure they can keep the state across component restarts. So if your application is stateful, or if you want to deploy stateful storage on top of Kubernetes, use a StatefulSet. If your application is stateless, or if state can be built up from backend systems during startup, then use a Deployment. Further details about running stateful applications can be found in the 2016 Kubernetes blog entry about stateful applications.
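For illustration, a minimal StatefulSet sketch showing the volumeClaimTemplates mentioned above; the names, image and storage size are placeholders:

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: example-db              # hypothetical name
spec:
  serviceName: example-db       # headless service governing the set
  replicas: 3
  selector:
    matchLabels:
      app: example-db
  template:
    metadata:
      labels:
        app: example-db
    spec:
      containers:
      - name: db
        image: postgres:13      # illustrative image
        volumeMounts:
        - name: data
          mountPath: /var/lib/postgresql/data
  volumeClaimTemplates:         # one PersistentVolumeClaim is created per replica
  - metadata:
      name: data
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 1Gi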
Kubernetes
41,583,672
206
I just upgraded kubeadm and kubelet to v1.8.0 and installed the dashboard following the official document. $ kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml After that, I started the dashboard by running $ kubectl proxy --address="192.168.0.101" -p 8001 --accept-hosts='^*$' Then, fortunately, I was able to access the dashboard through http://192.168.0.101:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/ I was redirected to a login page which I had never encountered before. It looks like there are two ways of authenticating. I tried to upload /etc/kubernetes/admin.conf as the kubeconfig, but it failed. Then I tried to use the token I got from kubeadm token list to sign in, but that failed again. The question is how I can sign in to the dashboard. It looks like they added a lot more security mechanisms than before. Thanks.
As of release 1.7 Dashboard supports user authentication based on: Authorization: Bearer <token> header passed in every request to Dashboard. Supported from release 1.6. Has the highest priority. If present, login view will not be shown. Bearer Token that can be used on Dashboard login view. Username/password that can be used on Dashboard login view. Kubeconfig file that can be used on Dashboard login view. — Dashboard on Github Token Here Token can be Static Token, Service Account Token, OpenID Connect Token from Kubernetes Authenticating, but not the kubeadm Bootstrap Token. With kubectl, we can get an service account (eg. deployment controller) created in kubernetes by default. $ kubectl -n kube-system get secret # All secrets with type 'kubernetes.io/service-account-token' will allow to log in. # Note that they have different privileges. NAME TYPE DATA AGE deployment-controller-token-frsqj kubernetes.io/service-account-token 3 22h $ kubectl -n kube-system describe secret deployment-controller-token-frsqj Name: deployment-controller-token-frsqj Namespace: kube-system Labels: <none> Annotations: kubernetes.io/service-account.name=deployment-controller kubernetes.io/service-account.uid=64735958-ae9f-11e7-90d5-02420ac00002 Type: kubernetes.io/service-account-token Data ==== ca.crt: 1025 bytes namespace: 11 bytes token: eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJkZXBsb3ltZW50LWNvbnRyb2xsZXItdG9rZW4tZnJzcWoiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoiZGVwbG95bWVudC1jb250cm9sbGVyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiNjQ3MzU5NTgtYWU5Zi0xMWU3LTkwZDUtMDI0MjBhYzAwMDAyIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Omt1YmUtc3lzdGVtOmRlcGxveW1lbnQtY29udHJvbGxlciJ9.OqFc4CE1Kh6T3BTCR4XxDZR8gaF1MvH4M3ZHZeCGfO-sw-D0gp826vGPHr_0M66SkGaOmlsVHmP7zmTi-SJ3NCdVO5viHaVUwPJ62hx88_JPmSfD0KJJh6G5QokKfiO0WlGN7L1GgiZj18zgXVYaJShlBSz5qGRuGf0s1jy9KOBt9slAN5xQ9_b88amym2GIXoFyBsqymt5H-iMQaGP35tbRpewKKtly9LzIdrO23bDiZ1voc5QZeAZIWrizzjPY5HPM1qOqacaY9DcGc7akh98eBJG_4vZqH2gKy76fMf0yInFTeNKr45_6fWt8gRM77DQmPwb3hbrjWXe1VvXX_g Kubeconfig The dashboard needs the user in the kubeconfig file to have either username & password or token, but admin.conf only has client-certificate. You can edit the config file to add the token that was extracted using the method above. $ kubectl config set-credentials cluster-admin --token=bearer_token Alternative (Not recommended for Production) Here are two ways to bypass the authentication, but use for caution. Deploy dashboard with HTTP $ kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/alternative/kubernetes-dashboard.yaml Dashboard can be loaded at http://localhost:8001/ui with kubectl proxy. Granting admin privileges to Dashboard's Service Account $ cat <<EOF | kubectl create -f - apiVersion: rbac.authorization.k8s.io/v1beta1 kind: ClusterRoleBinding metadata: name: kubernetes-dashboard labels: k8s-app: kubernetes-dashboard roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: cluster-admin subjects: - kind: ServiceAccount name: kubernetes-dashboard namespace: kube-system EOF Afterwards you can use Skip option on login page to access Dashboard. If you are using dashboard version v1.10.1 or later, you must also add --enable-skip-login to the deployment's command line arguments. 
You can do so by adding it to the args in kubectl edit deployment/kubernetes-dashboard --namespace=kube-system. Example: containers: - args: - --auto-generate-certificates - --enable-skip-login # <-- add this line image: k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.1
Kubernetes
46,664,104
198
I would like to see all resources in a namespace. Doing kubectl get all will, despite the name, not list things like services and ingresses. If I know the type I can explicitly ask for that particular type, but it seems there is also no command for listing all possible types. (In particular, kubectl get does not list custom types, for example.) How can I show all resources before, for example, deleting that namespace?
Based on this comment, the supported way to list all resources is to iterate through all the API versions listed by kubectl api-resources: kubectl api-resources enumerates the resource types available in your cluster. This means you can combine it with kubectl get to actually list every instance of every resource type in a namespace: kubectl api-resources --verbs=list --namespaced -o name \ | xargs -n 1 kubectl get --show-kind --ignore-not-found -l <label>=<value> -n <namespace>
Kubernetes
47,691,479
198
I am looking to list all the containers in a pod in a script that gathers logs after running a test. kubectl describe pods -l k8s-app=kube-dns returns a lot of info, but I am just looking for a return like: etcd kube2sky skydns I don't see a simple way to format the describe output. Is there another command? (and I guess worst case there is always parsing the output of describe).
Answer kubectl get pods POD_NAME_HERE -o jsonpath='{.spec.containers[*].name}' Explanation This gets the JSON object representing the pod. It then uses kubectl's JSONpath to extract the name of each container from the pod.
Kubernetes
33,924,198
192
I've created a secret using kubectl create secret generic production-tls \ --from-file=./tls.key \ --from-file=./tls.crt If I'd like to update the values - how can I do this?
This should work: kubectl create secret generic production-tls \ --save-config \ --dry-run=client \ --from-file=./tls.key --from-file=./tls.crt \ -o yaml | \ kubectl apply -f -
Kubernetes
45,879,498
187
I have 3 nodes, running all kinds of pods. I would like to have a list of nodes and pods, for example: NODE1 POD1 NODE1 POD2 NODE2 POD3 NODE3 POD4 How can this be achieved?
You can do that with custom columns: kubectl get pod -o=custom-columns=NAME:.metadata.name,STATUS:.status.phase,NODE:.spec.nodeName --all-namespaces or just: kubectl get pod -o=custom-columns=NODE:.spec.nodeName,NAME:.metadata.name --all-namespaces
Kubernetes
48,983,354
184
When I run kubectl -n abc-namespace describe pod my-pod-zl6m6, I get a lot of information about the pod along with the Events in the end. Is there a way to output just the Events of the pod either using kubectl describe or kubectl get commands? Edit: This can now (kubernetes 1.29) be achieved via the following command - kubectl -n abc-namespace events --for pod/my-pod-zl6m6 All the answers below can be ignored as they refer to older versions of kubernetes
You can use the event command of kubectl. To filter for a specific pod you can use a field-selector: kubectl get event --namespace abc-namespace --field-selector involvedObject.name=my-pod-zl6m6 To see what fields are possible you can use kubectl describe on any event.
Kubernetes
51,931,113
184
Every time a deployment gets updated, a new replica set is added to a long list. Should the old rs be cleaned?
Removing old replicasets is part of the Deployment object, but it is optional. You can set .spec.revisionHistoryLimit to tell the Deployment how many old replicasets to keep around. Here is a YAML example: apiVersion: apps/v1 kind: Deployment # ... spec: # ... revisionHistoryLimit: 0 # Defaults to 10 if not specified # ...
Kubernetes
37,255,731
182
I have a MySQL pod running in my cluster. I need to temporarily pause the pod from working without deleting it, something similar to docker where the docker stop container-id cmd will stop the container not delete the container. Are there any commands available in kubernetes to pause/stop a pod?
So, as others have pointed out, Kubernetes doesn't support stopping/pausing the current state of a pod and resuming it when needed. However, you can still achieve the same effect by scaling the deployment down to zero replicas: kubectl scale --replicas=0 deployment/<your-deployment> See the help: # Set a new size for a Deployment, ReplicaSet, Replication Controller, or StatefulSet. kubectl scale --help Scale also allows users to specify one or more preconditions for the scale action. If --current-replicas or --resource-version is specified, it is validated before the scale is attempted, and it is guaranteed that the precondition holds true when the scale is sent to the server. Examples: # Scale a replicaset named 'foo' to 3. kubectl scale --replicas=3 rs/foo # Scale a resource identified by type and name specified in "foo.yaml" to 3. kubectl scale --replicas=3 -f foo.yaml # If the deployment named mysql's current size is 2, scale mysql to 3. kubectl scale --current-replicas=2 --replicas=3 deployment/mysql # Scale multiple replication controllers. kubectl scale --replicas=5 rc/foo rc/bar rc/baz # Scale statefulset named 'web' to 3. kubectl scale --replicas=3 statefulset/web
Kubernetes
54,821,044
176
Is there a way to automatically remove completed Jobs besides making a CronJob to clean up completed Jobs? The K8s Job documentation states that the intended behavior of completed Jobs is for them to remain in a completed state until manually deleted. I am running thousands of Jobs a day via CronJobs and I don't want to keep completed Jobs around.
You can now set history limits, or disable history altogether, so that failed or successful CronJobs are not kept around indefinitely. See my answer here. Documentation is here. To set the history limits: The .spec.successfulJobsHistoryLimit and .spec.failedJobsHistoryLimit fields are optional. These fields specify how many completed and failed jobs should be kept. By default, they are set to 3 and 1 respectively. Setting a limit to 0 corresponds to keeping none of the corresponding kind of jobs after they finish. The config with 0 limits would look like: apiVersion: batch/v1beta1 kind: CronJob metadata: name: hello spec: schedule: "*/1 * * * *" successfulJobsHistoryLimit: 0 failedJobsHistoryLimit: 0 jobTemplate: spec: template: spec: containers: - name: hello image: busybox args: - /bin/sh - -c - date; echo Hello from the Kubernetes cluster restartPolicy: OnFailure
Kubernetes
41,385,403
174
I just saw some of my pods get evicted by Kubernetes. What will happen to them? Will they just hang around like that, or do I have to delete them manually?
A quick workaround I use, is to delete all evicted pods manually after an incident. You can use this command: kubectl get pods --all-namespaces -o json | jq '.items[] | select(.status.reason!=null) | select(.status.reason | contains("Evicted")) | "kubectl delete pods \(.metadata.name) -n \(.metadata.namespace)"' | xargs -n 1 bash -c
Kubernetes
46,419,163
174
When I push my deployments, for some reason, I'm getting the error on my pods: pod has unbound PersistentVolumeClaims Here are my YAML below: This is running locally, not on any cloud solution. apiVersion: extensions/v1beta1 kind: Deployment metadata: annotations: kompose.cmd: kompose convert kompose.version: 1.16.0 () creationTimestamp: null labels: io.kompose.service: ckan name: ckan spec: replicas: 1 strategy: {} template: metadata: creationTimestamp: null labels: io.kompose.service: ckan spec: containers: image: slckan/docker_ckan name: ckan ports: - containerPort: 5000 resources: {} volumeMounts: - name: ckan-home mountPath: /usr/lib/ckan/ subPath: ckan volumes: - name: ckan-home persistentVolumeClaim: claimName: ckan-pv-home-claim restartPolicy: Always status: {} kind: PersistentVolumeClaim apiVersion: v1 metadata: name: ckan-pv-home-claim labels: io.kompose.service: ckan spec: storageClassName: ckan-home-sc accessModes: - ReadWriteOnce resources: requests: storage: 100Mi volumeMode: Filesystem --- kind: StorageClass apiVersion: storage.k8s.io/v1 metadata: name: ckan-home-sc provisioner: kubernetes.io/no-provisioner mountOptions: - dir_mode=0755 - file_mode=0755 - uid=1000 - gid=1000
You have to define a PersistentVolume providing disc space to be consumed by the PersistentVolumeClaim. When using storageClass Kubernetes is going to enable "Dynamic Volume Provisioning" which is not working with the local file system. To solve your issue: Provide a PersistentVolume fulfilling the constraints of the claim (a size >= 100Mi) Remove the storageClass from the PersistentVolumeClaim or provide it with an empty value ("") Remove the StorageClass from your cluster How do these pieces play together? At creation of the deployment state-description it is usually known which kind (amount, speed, ...) of storage that application will need. To make a deployment versatile you'd like to avoid a hard dependency on storage. Kubernetes' volume-abstraction allows you to provide and consume storage in a standardized way. The PersistentVolumeClaim is used to provide a storage-constraint alongside the deployment of an application. The PersistentVolume offers cluster-wide volume-instances ready to be consumed ("bound"). One PersistentVolume will be bound to one claim. But since multiple instances of that claim may be run on multiple nodes, that volume may be accessed by multiple nodes. A PersistentVolume without StorageClass is considered to be static. "Dynamic Volume Provisioning" alongside with a StorageClass allows the cluster to provision PersistentVolumes on demand. In order to make that work, the given storage provider must support provisioning - this allows the cluster to request the provisioning of a "new" PersistentVolume when an unsatisfied PersistentVolumeClaim pops up. Example PersistentVolume In order to find how to specify things you're best advised to take a look at the API for your Kubernetes version, so the following example is build from the API-Reference of K8S 1.17: apiVersion: v1 kind: PersistentVolume metadata: name: ckan-pv-home labels: type: local spec: capacity: storage: 100Mi hostPath: path: "/mnt/data/ckan" The PersistentVolumeSpec allows us to define multiple attributes. I chose a hostPath volume which maps a local directory as content for the volume. The capacity allows the resource scheduler to recognize this volume as applicable in terms of resource needs. Additional Resources: Configure PersistentVolume Guide
Kubernetes
52,668,938
172
I have been trying to follow the getting started guide to EKS. When I tried to call kubectl get service I got the message: error: You must be logged in to the server (Unauthorized) Here is what I did: 1. Created the EKS cluster. 2. Created the config file as follows: apiVersion: v1 clusters: - cluster: server: https://*********.yl4.us-west-2.eks.amazonaws.com certificate-authority-data: ********* name: ********* contexts: - context: cluster: ********* user: aws name: aws current-context: aws kind: Config preferences: {} users: - name: aws user: exec: apiVersion: client.authentication.k8s.io/v1alpha1 command: heptio-authenticator-aws args: - "token" - "-i" - "*********" - "-r" - "arn:aws:iam::*****:role/******" Downloaded and installed latest aws cli Ran aws configure and set the credentials for my IAM user and the region as us-west-2 Added a policy to the IAM user for sts:AssumeRole for the EKS role and set it up as a trusted relationship Setup kubectl to use the config file I can get a token when I run heptio-authenticator-aws token -r arn:aws:iam::**********:role/********* -i my-cluster-ame However when I try to access the cluster I keep receiving error: You must be logged in to the server (Unauthorized) Any idea how to fix this issue?
When an Amazon EKS cluster is created, the IAM entity (user or role) that creates the cluster is added to the Kubernetes RBAC authorization table as the administrator. Initially, only that IAM user can make calls to the Kubernetes API server using kubectl. eks-docs So to add access to other aws users, first you must edit ConfigMap to add an IAM user or role to an Amazon EKS cluster. You can edit the ConfigMap file by executing: kubectl edit -n kube-system configmap/aws-auth, after which you will be granted with editor with which you map new users. apiVersion: v1 data: mapRoles: | - rolearn: arn:aws:iam::555555555555:role/devel-worker-nodes-NodeInstanceRole-74RF4UBDUKL6 username: system:node:{{EC2PrivateDNSName}} groups: - system:bootstrappers - system:nodes mapUsers: | - userarn: arn:aws:iam::111122223333:user/ops-user username: ops-user groups: - system:masters mapAccounts: | - "111122223333" Pay close attention to the mapUsers where you're adding ops-user together with mapAccounts label which maps the AWS user account with a username on Kubernetes cluster. However, no permissions are provided in RBAC by this action alone; you must still create role bindings in your cluster to provide these entities permissions. As the amazon documentation(iam-docs) states you need to create a role binding on the kubernetes cluster for the user specified in the ConfigMap. You can do that by executing following command (kub-docs): kubectl create clusterrolebinding ops-user-cluster-admin-binding --clusterrole=cluster-admin --user=ops-user which grants the cluster-admin ClusterRole to a user named ops-user across the entire cluster.
Kubernetes
50,791,303
171
What is the username/password/keys to ssh into the Minikube VM?
You can use the Minikube binary for this, minikube ssh.
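For example (assuming a standard minikube installation, no extra credentials needed):

$ minikube ssh     # drops you into a shell inside the minikube VM
$ docker ps        # run whatever commands you need inside the VM...
$ exit             # ...and leave when done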
Kubernetes
38,870,277
167
I can specify a specific version of a chart by doing: helm install --version <some_version> stable/<some_chart> But, how do I know which versions are available?
Short Answer You can list all available versions of a chart using the search repo functionality together with the --versions flag: helm search repo <reponame>/<chartname> --versions This requires that the repo was added previously and is up to date. If your repo was added some time ago, please make sure to keep the local cache updated using helm repo update to also see recently released versions. The behaviour of managing charts in a repository changed slightly between Helm v2 and Helm v3. So please refer to the corresponding section for details. Helm v3 Helm v3 changed to a more decentralized management of charts, so you might have added a certain repository upfront compared to obtaining many of them directly from the preconfigured stable repository. Listing the versions of a certain chart can be accomplished running the command helm search repo and specifying the full path of the chart (specifying repo and chart name) in combination with the --versions flag (or shorthand -l) like so: helm search repo <reponame>/<chartname> --versions If you are interested in pre-release builds like 1.1.0-rc.1 or 3.0.0-alpha.2, you have to add the --devel flag to also include those. helm search repo <reponame>/<chartname> --versions --devel You can limit the amount of results by specifying a version constraint using SEMVER notation with the --version flag in addition to --versions. This allows for example limiting the results to e.g. only v1 charts: helm search repo <reponame>/<chartname> --versions --version ^v1.0 Depending on your shell, it can be required to put the version string in single quotes (') due to special characters like ^. Example One concrete example using jetstack's charts for cert-manager: $ helm repo add jetstack https://charts.jetstack.io "jetstack" has been added to your repositories Regular search for results that contain jetstack $ helm search repo jetstack NAME CHART VERSION APP VERSION DESCRIPTION jetstack/cert-manager v1.0.4 v1.0.4 A Helm chart for cert-manager jetstack/tor-proxy 0.1.1 A Helm chart for Kubernetes Regular search for a specific chart $ helm search repo jetstack/cert-manager NAME CHART VERSION APP VERSION DESCRIPTION jetstack/cert-manager v1.0.4 v1.0.4 A Helm chart for cert-manager Listing all the versions for one specific chart $ helm search repo jetstack/cert-manager --versions NAME CHART VERSION APP VERSION DESCRIPTION jetstack/cert-manager v1.0.4 v1.0.4 A Helm chart for cert-manager jetstack/cert-manager v1.0.3 v1.0.3 A Helm chart for cert-manager jetstack/cert-manager v1.0.2 v1.0.2 A Helm chart for cert-manager jetstack/cert-manager v1.0.1 v1.0.1 A Helm chart for cert-manager ... Listing unstable/pre-release builds will also include the alpha versions. $ helm search repo jetstack/cert-manager --versions --devel NAME CHART VERSION APP VERSION DESCRIPTION jetstack/cert-manager v1.1.0-alpha.1 v1.1.0-alpha.1 A Helm chart for cert-manager jetstack/cert-manager v1.1.0-alpha.0 v1.1.0-alpha.0 A Helm chart for cert-manager jetstack/cert-manager v1.0.4 v1.0.4 A Helm chart for cert-manager jetstack/cert-manager v1.0.3 v1.0.3 A Helm chart for cert-manager ... As listing the versions is integrated into the search, using --versions is not limited to a single chart. Specifying this flag will list all available versions for all charts that match the query string. 
For additional information, please check the helm docs at https://helm.sh/docs/helm/helm_search_repo/ Helm v2 For Helm v2, many artifacts were accessible through the stable repo which came preconfigured with the Helm CLI. Listing all versions was done in a similar way but with a different command. To list the available versions of the chart with Helm v2 use the following command: helm search -l stable/<some_chart> The -l or --versions flag is used to display all and not only the latest version per chart. With Helm v2 you were able to keep your repos updated using the helm update command. Reference: https://v2.helm.sh/docs/helm/#helm-search
Kubernetes
51,031,294
162
I'm looking for a way to list all pod names. How can I do it without awk (or cut)? Currently I'm using this command: kubectl get --no-headers=true pods -o name | awk -F "/" '{print $2}'
Personally I prefer this method because it relies only on kubectl, is not very verbose and we don't get the pod/ prefix in the output: kubectl get pods --no-headers -o custom-columns=":metadata.name"
Kubernetes
35,797,906
161
I can sort my Kubernetes pods by name using: kubectl get pods --sort-by=.metadata.name How can I sort them (or other resoures) by age using kubectl?
Pods have a status field, which includes startTime, so something like kubectl get po --sort-by=.status.startTime should work. You could also try kubectl get po --sort-by='{.firstTimestamp}' or kubectl get pods --sort-by=.metadata.creationTimestamp (thanks @chris). Also, apparently in the Kubernetes 1.7 release sort-by is broken: https://github.com/kubernetes/kubectl/issues/43 Here's the bug report: https://github.com/kubernetes/kubernetes/issues/48602 Here's the PR: https://github.com/kubernetes/kubernetes/pull/48659/files
Kubernetes
45,310,287
161
Kubernetes is billed as a container cluster "scheduler/orchestrator", but I have no idea what this means. After reading the Kubernetes site and (vague) GitHub wiki, the best I can tell is that it somehow figures out what VMs are available/capable of running your Docker container, and then deploys it there. But that is just my guess, and I haven't seen any concrete verbiage in their documentation to support that. So what is Kubernetes, exactly, and what are some specific problems that it solves?
The purpose of Kubernetes is to make it easier to organize and schedule your application across a fleet of machines. At a high level it is an operating system for your cluster. Basically, it allows you to not worry about what specific machine in your datacenter each application runs on. Additionally it provides generic primitives for health checking and replicating your application across these machines, as well as services for wiring your application into micro-services so that each layer in your application is decoupled from the others and you can scale/update/maintain them independently. While it is possible to do many of these things at the application layer, such solutions tend to be one-off and brittle; it's much better to have a separation of concerns, where an orchestration system worries about how to run your application, and you worry about the code that makes up your application.
Kubernetes
28,086,732
160
I am relatively new to all of these technologies, and I'm having trouble getting a clear picture of how they relate. They try to solve different problems, but they have things in common too. I would like to understand what is common and what is different. It is likely that a combination of a few would be a great fit; if so, which ones? I am listing a few of them along with questions, but it would be great if someone could cover all of them in detail and answer the questions. Kubernetes vs Mesos: This link What's the difference between Apache's Mesos and Google's Kubernetes provides good insight into the differences, but I'm unable to understand why Kubernetes should run on top of Mesos. Is it more to do with two open-source solutions coming together? Kubernetes vs CoreOS Fleet: If I use Kubernetes, is Fleet required? How does Docker Swarm fit into all the above?
Disclosure: I'm a lead engineer on Kubernetes I think that Mesos and Kubernetes are largely aimed at solving similar problems of running clustered applications, they have different histories and different approaches to solving the problem. Mesos focuses its energy on very generic scheduling, and plugging in multiple different schedulers. This means that it enables systems like Hadoop and Marathon to co-exist in the same scheduling environment. Mesos is less focused on running containers. Mesos existed prior to widespread interest in containers and has been re-factored in parts to support containers. In contrast, Kubernetes was designed from the ground up to be an environment for building distributed applications from containers. It includes primitives for replication and service discovery as core primitives, where-as such things are added via frameworks in Mesos. The primary goal of Kubernetes is a system for building, running and managing distributed systems. Fleet is a lower-level task distributor. It is useful for bootstrapping a cluster system, for example CoreOS uses it to distribute the kubernetes agents and binaries out to the machines in a cluster in order to turn-up a kubernetes cluster. It is not really intended to solve the same distributed application development problems, think of it more like systemd/init.d/upstart for your cluster. It's not required if you run kubernetes, you can use other tools (e.g. Salt, Puppet, Ansible, Chef, ...) to accomplish the same binary distribution. Swarm is an effort by Docker to extend the existing Docker API to make a cluster of machines look like a single Docker API. Fundamentally, our experience at Google and elsewhere indicates that the node API is insufficient for a cluster API. You can see a bunch of discussion on this here: https://github.com/docker/docker/pull/8859 and here: https://github.com/docker/docker/issues/8781 Join us on IRC @ #google-containers if you want to talk more.
Kubernetes
27,640,633
159
I'm running a Kubernetes cluster on AWS using kops. I've mounted an EBS volume onto a container and it is visible from my application but it's read only because my application does not run as root. How can I mount a PersistentVolumeClaim as a user other than root? The VolumeMount does not seem to have any options to control the user, group or file permissions of the mounted path. Here is my Deployment yaml file: apiVersion: extensions/v1beta1 kind: Deployment metadata: name: notebook-1 spec: replicas: 1 template: metadata: labels: app: notebook-1 spec: volumes: - name: notebook-1 persistentVolumeClaim: claimName: notebook-1 containers: - name: notebook-1 image: jupyter/base-notebook ports: - containerPort: 8888 volumeMounts: - mountPath: "/home/jovyan/work" name: notebook-1
The Pod Security Context supports setting an fsGroup, which allows you to set the group ID that owns the volume, and thus who can write to it. The example in the docs: apiVersion: v1 kind: Pod metadata: name: hello-world spec: containers: # specification of the pod's containers # ... securityContext: fsGroup: 1234 More info on this is here
Kubernetes
43,544,370
153
A Dockerfile has a parameter for ENTRYPOINT, and when writing a Kubernetes deployment YAML file, there is a parameter in the Container spec for COMMAND. I am not able to figure out what the difference is and how each is used.
Kubernetes provides us with multiple options on how to use these commands: When you override the default Entrypoint and Cmd in Kubernetes .yaml file, these rules apply: If you do not supply command or args for a Container, the defaults defined in the Docker image are used. If you supply only args for a Container, the default Entrypoint defined in the Docker image is run with the args that you supplied. If you supply a command for a Container, only the supplied command is used. The default EntryPoint and the default Cmd defined in the Docker image are ignored. Your command is run with the args supplied (or no args if none supplied). Here is an example: Dockerfile: FROM alpine:latest COPY "executable_file" / ENTRYPOINT [ "./executable_file" ] Kubernetes yaml file: spec: containers: - name: container_name image: image_name args: ["arg1", "arg2", "arg3"] https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/
Kubernetes
44,316,361
152
What is the difference between a persistent volume (PV) and a persistent volume claim (PVC) in Kubernetes/OpenShift, referring to the documentation? What is the difference between the two in simple terms?
From the docs PVs are resources in the cluster. PVCs are requests for those resources and also act as claim checks to the resource. So a persistent volume (PV) is the "physical" volume on the host machine that stores your persistent data. A persistent volume claim (PVC) is a request for the platform to create a PV for you, and you attach PVs to your pods via a PVC. Something akin to Pod -> PVC -> PV -> Host machine
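As a small illustration of that chain (names, image and size are placeholders), a PVC requesting storage and a Pod consuming it through the claim:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-claim                # hypothetical name
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 1Gi              # "I need at least 1Gi of storage"
---
apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  containers:
  - name: app
    image: nginx                # illustrative image
    volumeMounts:
    - name: data
      mountPath: /data
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: my-claim       # the pod references the claim, not the PV directly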
Kubernetes
48,956,049
148
I've been using K8S ConfigMap and Secret to manage our properties. My design is pretty simple: it keeps properties files in a git repo and uses a build server such as ThoughtWorks GO to automatically deploy them as ConfigMaps or Secrets (depending on the case) to my k8s cluster. Currently, I find it's not really efficient that I always have to delete the existing ConfigMap and Secret and create a new one to update it, as below: kubectl delete configmap foo kubectl create configmap foo --from-file foo.properties Is there a nice and simple way to make this one step and more efficient than deleting the current one? Potentially, what I'm doing now may compromise a container that uses these ConfigMaps if it tries to mount one while the old ConfigMap has been deleted and the new one hasn't been created yet.
You can get YAML from the kubectl create configmap command and pipe it to kubectl apply, like this: kubectl create configmap foo --from-file foo.properties -o yaml --dry-run=client | kubectl apply -f -
Kubernetes
38,216,278
147
I am new to Kubernetes and started reading through the documentation. There, the term 'endpoint' is often used, but the documentation lacks an explicit definition. What is an 'endpoint' in terms of Kubernetes? Where is it located? I could imagine the 'endpoint' is some kind of access point for an individual 'node', but that's just a guess.
Pods expose themselves to a service through endpoints. An endpoint is, if you will, part of a pod. Source: Services and Endpoints
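You can inspect the Endpoints object that Kubernetes maintains for a Service directly; the service name below is illustrative:

# list the Endpoints object for a Service
kubectl get endpoints my-service
# show the individual pod IP:port pairs backing it
kubectl describe endpoints my-service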
Kubernetes
52,857,825
146
I had the below YAML for my Ingress and it worked (and continues to work): apiVersion: networking.k8s.io/v1beta1 kind: Ingress metadata: name: test-ingress namespace: test-layer annotations: nginx.ingress.kubernetes.io/rewrite-target: /$1 spec: rules: - host: mylocalhost.com http: paths: - path: / backend: serviceName: test-app servicePort: 5000 However, it tells me that it's deprecated and I should change to using networking.k8s.io/v1. When I do that (see below) it throws an error. apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: test-ingress namespace: test-layer annotations: nginx.ingress.kubernetes.io/rewrite-target: /$1 spec: rules: - host: mylocalhost.com http: paths: - path: / backend: serviceName: test-app servicePort: 5000 ERROR error: error validating "test-ingress.yaml": error validating data: [ValidationError(Ingress.spec.rules[0].http.paths[0].backend): unknown field "serviceName" in io.k8s.api.networking.v1.IngressBackend, ValidationError(Ingress.spec.rules[0].http.paths[0].backend): unknown field "servicePort" in io.k8s.api.networking.v1.IngressBackend]; if you choose to ignore these errors, turn validation off with --validate=false Other than changing the API version, I made no other changes. kubectl version returns: Client Version: version.Info{Major:"1", Minor:"19", GitVersion:"v1.19.0", GitCommit:"e19964183377d0ec2052d1f1fa930c4d7575bd50", GitTreeState:"clean", BuildDate:"2020-08-26T14:30:33Z", GoVersion:"go1.15", Compiler:"gc", Platform:"windows/amd64"} Server Version: version.Info{Major:"1", Minor:"19", GitVersion:"v1.19.0", GitCommit:"e19964183377d0ec2052d1f1fa930c4d7575bd50", GitTreeState:"clean", BuildDate:"2020-08-26T14:23:04Z", GoVersion:"go1.15", Compiler:"gc", Platform:"linux/amd64"}
I think that this PR contains the change you're asking about. `Ingress` and `IngressClass` resources have graduated to `networking.k8s.io/v1`. Ingress and IngressClass types in the `extensions/v1beta1` and `networking.k8s.io/v1beta1` API versions are deprecated and will no longer be served in 1.22+. Persisted objects can be accessed via the `networking.k8s.io/v1` API. Notable changes in v1 Ingress objects (v1beta1 field names are unchanged): * `spec.backend` -> `spec.defaultBackend` * `serviceName` -> `service.name` * `servicePort` -> `service.port.name` (for string values) * `servicePort` -> `service.port.number` (for numeric values) * `pathType` no longer has a default value in v1; "Exact", "Prefix", or "ImplementationSpecific" must be specified Other Ingress API updates: * backends can now be resource or service backends * `path` is no longer required to be a valid regular expression If you look in the 1.19 Ingress doc, it looks like the new syntax would be: apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: minimal-ingress annotations: nginx.ingress.kubernetes.io/rewrite-target: / spec: rules: - http: paths: - path: /testpath pathType: Prefix backend: service: name: test port: number: 80 I unfortunately don't have a 1.19 cluster to test myself, but I think this is what you're running into.
Kubernetes
64,125,048
146
As I understand it, the purpose of a Kubernetes Controller is to make sure that the current state is equal to the desired state. Nevertheless, a Kubernetes Operator does the same job. The list of controllers in the control plane: Deployment ReplicaSet StatefulSet DaemonSet etc. From a Google search, I found out that there are K8s Operators such as the etcd Operator, Prometheus Operator, and Kong Operator. However, I was not able to understand why this cannot be done using a Controller. Is an Operator complementing the Controllers? What's the difference between these two designs in purpose and functionality? What should one keep in mind when choosing between a Controller and an Operator?
I believe the term "kubernetes operator" was introduced by the CoreOS people here An Operator is an application-specific controller that extends the Kubernetes API to create, configure and manage instances of complex stateful applications on behalf of a Kubernetes user. It builds upon the basic Kubernetes resource and controller concepts, but also includes domain or application-specific knowledge to automate common tasks better managed by computers. So basically, a kubernetes operator is the name of a pattern that consists of a kubernetes controller that adds new objects to the Kubernetes API, in order to configure and manage an application, such as Prometheus or etcd. In one sentence: An operator is a domain specific controller. Update There is a new discussion on Github about this very same topic, linking to the same blog post. Relevant bits of the discussion are: All Operators use the controller pattern, but not all controllers are Operators. It's only an Operator if it's got: controller pattern + API extension + single-app focus. Operator is a customized controller implemented with CRD. It follows the same pattern as built-in controllers (i.e. watch, diff, action). Update 2 I found a new blog post that tries to explain the difference as well.
Kubernetes
47,848,258
145
I have my deployment.yaml file within the templates directory of my Helm chart, with several environment variables for the container I will be running using Helm. Now I want to be able to pull the environment variables locally from whatever machine helm is run on, so I can hide the secrets that way. How do I pass this in and have helm grab the environment variables locally when I use Helm to run the application? Here is part of my deployment.yaml file ... ... spec: restartPolicy: Always containers: - name: sample-app image: "sample-app:latest" imagePullPolicy: Always env: - name: "USERNAME" value: "app-username" - name: "PASSWORD" value: "28sin47dsk9ik" ... ... How can I pull the values of USERNAME and PASSWORD from local environment variables when I run helm? Is this possible? If yes, then how do I do this?
You can export the variable and use it while running helm install. Before that, you have to modify your chart so that the value can be set while installation. Skip this part, if you already know, how to setup template fields. As you don't want to expose the data, so it's better to have it saved as secret in kubernetes. First of all, add this two lines in your Values file, so that these two values can be set from outside. username: root password: password Now, add a secret.yaml file inside your template folder. and, copy this code snippet into that file. apiVersion: v1 kind: Secret metadata: name: {{ .Release.Name }}-auth data: password: {{ .Values.password | b64enc }} username: {{ .Values.username | b64enc }} Now tweak your deployment yaml template and make changes in env section, like this ... ... spec: restartPolicy: Always containers: - name: sample-app image: "sample-app:latest" imagePullPolicy: Always env: - name: "USERNAME" valueFrom: secretKeyRef: key: username name: {{ .Release.Name }}-auth - name: "PASSWORD" valueFrom: secretKeyRef: key: password name: {{ .Release.Name }}-auth ... ... If you have modified your template correctly for --set flag, you can set this using environment variable. $ export USERNAME=root-user Now use this variable while running helm install, $ helm install --set username=$USERNAME ./mychart If you run this helm install in dry-run mode, you can verify the changes, $ helm install --dry-run --set username=$USERNAME --debug ./mychart [debug] Created tunnel using local port: '44937' [debug] SERVER: "127.0.0.1:44937" [debug] Original chart version: "" [debug] CHART PATH: /home/maruf/go/src/github.com/the-redback/kubernetes-yaml-drafts/helm-charts/mychart NAME: irreverant-meerkat REVISION: 1 RELEASED: Fri Apr 20 03:29:11 2018 CHART: mychart-0.1.0 USER-SUPPLIED VALUES: username: root-user COMPUTED VALUES: password: password username: root-user HOOKS: MANIFEST: --- # Source: mychart/templates/secret.yaml apiVersion: v1 kind: Secret metadata: name: irreverant-meerkat-auth data: password: password username: root-user --- # Source: mychart/templates/deployment.yaml apiVersion: apps/v1 kind: Deployment metadata: name: irreverant-meerkat labels: app: irreverant-meerkat spec: replicas: 1 template: metadata: name: irreverant-meerkat labels: app: irreverant-meerkat spec: containers: - name: irreverant-meerkat image: alpine env: - name: "USERNAME" valueFrom: secretKeyRef: key: username name: irreverant-meerkat-auth - name: "PASSWORD" valueFrom: secretKeyRef: key: password name: irreverant-meerkat-auth imagePullPolicy: IfNotPresent restartPolicy: Always selector: matchLabels: app: irreverant-meerkat You can see that the data of username in secret has changed to root-user. I have added this example into github repo. There is also some discussion in kubernetes/helm repo regarding this. You can see this issue to know about all other ways to use environment variables.
Kubernetes
49,928,819
145
I used to be able to curl https://$KUBERNETES_SERVICE_HOST:$KUBERNETES_PORT_443_TCP_PORT/api/v1beta3/namespaces/default/ as my base URL, but in kubernetes 0.18.0 it gives me "unauthorized". The strange thing is that if I used the external IP address of the API machine (http://172.17.8.101:8080/api/v1beta3/namespaces/default/), it works just fine.
In the official documentation I found this: https://kubernetes.io/docs/tasks/access-application-cluster/access-cluster/#accessing-the-api-from-a-pod Apparently I was missing a security token that I didn't need in a previous version of Kubernetes. From that, I devised what I think is a simpler solution than running a proxy or installing golang on my container. See this example that gets the information for the current pod from the API: KUBE_TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token) curl -sSk -H "Authorization: Bearer $KUBE_TOKEN" \ https://$KUBERNETES_SERVICE_HOST:$KUBERNETES_PORT_443_TCP_PORT/api/v1/namespaces/default/pods/$HOSTNAME I also include a simple binary, jq (http://stedolan.github.io/jq/download/), to parse the JSON for use in bash scripts.
Kubernetes
30,690,186
141
While deploying mojaloop, Kubernetes responds with the following errors: Error: validation failed: [unable to recognize "": no matches for kind "Deployment" in version "apps/v1beta2", unable to recognize "": no matches for kind "Deployment" in version "extensions/v1beta1", unable to recognize "": no matches for kind "StatefulSet" in version "apps/v1beta2", unable to recognize "": no matches for kind "StatefulSet" in version "apps/v1beta1"] My Kubernetes version is 1.16. How can I fix the problem with the API version? From investigating, I have found that Kubernetes 1.16 doesn't support apps/v1beta2 or apps/v1beta1. How can I make Kubernetes use a non-deprecated or otherwise supported version? I am new to Kubernetes, so any support is appreciated.
In Kubernetes 1.16 some APIs have been changed. You can check which API groups support a given Kubernetes object using: $ kubectl api-resources | grep deployment deployments deploy apps true Deployment This means that only an apiVersion in the apps group is correct for Deployments (extensions no longer supports Deployment). The same situation applies to StatefulSet. You need to change the Deployment and StatefulSet apiVersion to apiVersion: apps/v1. If this does not help, please add your YAML to the question. EDIT: As the issue is caused by Helm templates that include old apiVersions in Deployments which are not supported in version 1.16, there are 2 possible solutions: 1. git clone the whole repo and replace the apiVersion with apps/v1 in all templates/deployment.yaml files using a script (as sketched below). 2. Use an older version of Kubernetes (1.15) whose validator accepts extensions as an apiVersion for Deployment and StatefulSet.
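A hedged sketch of option 1, replacing the deprecated apiVersions across a cloned chart repository (assumes GNU sed; paths are illustrative, and you should review the resulting diff):

# run from the root of the cloned charts repository
find . -name 'deployment.yaml' -o -name 'statefulset.yaml' \
  | xargs sed -i \
      -e 's|apiVersion: apps/v1beta[12]|apiVersion: apps/v1|' \
      -e 's|apiVersion: extensions/v1beta1|apiVersion: apps/v1|'

Note that apps/v1 Deployments and StatefulSets also require a spec.selector, so some templates may additionally need a selector added by hand.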
Kubernetes
58,481,850
141
Kubernetes assigns an IP address to each container, but how can I acquire the IP address from a container in the Pod? I couldn't find a way in the documentation. Edit: I'm going to run an Aerospike cluster in Kubernetes, and the config files need their own IP addresses. I'm attempting to use confd to set the hostname. I would use the environment variable if it were set.
The simplest answer is to ensure that your pod or replication controller yaml/json files add the pod IP as an environment variable by adding the config block defined below. (the block below additionally makes the name and namespace available to the pod) env: - name: MY_POD_NAME valueFrom: fieldRef: fieldPath: metadata.name - name: MY_POD_NAMESPACE valueFrom: fieldRef: fieldPath: metadata.namespace - name: MY_POD_IP valueFrom: fieldRef: fieldPath: status.podIP Recreate the pod/rc and then try echo $MY_POD_IP also run env to see what else kubernetes provides you with.
Kubernetes
30,746,888
140
Background: Currently we're using Docker and Docker Compose for our services. We have externalized the configuration for different environments into files that define environment variables read by the application. For example a prod.env file:

ENV_VAR_ONE=Something Prod
ENV_VAR_TWO=Something else Prod

and a test.env file:

ENV_VAR_ONE=Something Test
ENV_VAR_TWO=Something else Test

Thus we can simply use the prod.env or test.env file when starting the container:

docker run --env-file prod.env <image>

Our application then picks up its configuration based on the environment variables defined in prod.env. Questions: Is there a way to provide environment variables from a file in Kubernetes (for example when defining a pod) instead of hardcoding them like this:

apiVersion: v1
kind: Pod
metadata:
  labels:
    context: docker-k8s-lab
    name: mysql-pod
  name: mysql-pod
spec:
  containers:
    - env:
        - name: MYSQL_USER
          value: mysql
        - name: MYSQL_PASSWORD
          value: mysql
        - name: MYSQL_DATABASE
          value: sample
        - name: MYSQL_ROOT_PASSWORD
          value: supersecret
      image: "mysql:latest"
      name: mysql
      ports:
        - containerPort: 3306

If this is not possible, what is the suggested approach?
You can populate a container's environment variables through the use of Secrets or ConfigMaps. Use Secrets when the data you are working with is sensitive (e.g. passwords), and ConfigMaps when it is not. In your Pod definition specify that the container should pull values from a Secret:

apiVersion: v1
kind: Pod
metadata:
  labels:
    context: docker-k8s-lab
    name: mysql-pod
  name: mysql-pod
spec:
  containers:
    - image: "mysql:latest"
      name: mysql
      ports:
        - containerPort: 3306
      envFrom:
        - secretRef:
            name: mysql-secret

Note that this syntax is only available in Kubernetes 1.6 or later. On an earlier version of Kubernetes you will have to specify each value manually, e.g.:

env:
- name: MYSQL_USER
  valueFrom:
    secretKeyRef:
      name: mysql-secret
      key: MYSQL_USER

(Note that env takes an array as its value.) And repeat this for every value. Whichever approach you use, you can now define two different Secrets, one for production and one for dev. dev-secret.yaml:

apiVersion: v1
kind: Secret
metadata:
  name: mysql-secret
type: Opaque
data:
  MYSQL_USER: bXlzcWwK
  MYSQL_PASSWORD: bXlzcWwK
  MYSQL_DATABASE: c2FtcGxlCg==
  MYSQL_ROOT_PASSWORD: c3VwZXJzZWNyZXQK

prod-secret.yaml:

apiVersion: v1
kind: Secret
metadata:
  name: mysql-secret
type: Opaque
data:
  MYSQL_USER: am9obgo=
  MYSQL_PASSWORD: c2VjdXJlCg==
  MYSQL_DATABASE: cHJvZC1kYgo=
  MYSQL_ROOT_PASSWORD: cm9vdHkK

And deploy the correct secret to the correct Kubernetes cluster:

kubectl config use-context dev
kubectl create -f dev-secret.yaml

kubectl config use-context prod
kubectl create -f prod-secret.yaml

Now whenever a Pod starts it will populate its environment variables from the values specified in the Secret.
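As a side note, kubectl can also build the Secret or ConfigMap directly from an existing .env file instead of hand-writing base64 values — a small sketch (verify the flag against your kubectl version):

kubectl create secret generic mysql-secret --from-env-file=prod.env
# or, for non-sensitive settings
kubectl create configmap app-config --from-env-file=prod.env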
Kubernetes
33,478,555
136
What is detached mode in the docker world? I read this article Link, but it does not explain exactly what detached mode mean.
You can start a docker container in detached mode with the -d option, so the container starts up and runs in the background. That means you start the container and can use the console for other commands after it starts. The opposite of detached mode is foreground mode. That is the default mode, used when the -d option is not given. In this mode, the console you are using to execute docker run will be attached to standard input, output and error. That means your console is attached to the container's process. In detached mode, you can follow the standard output of your docker container with docker logs -f <container_ID>. Just try both options. I always use detached mode to run my containers. I hope this makes it a little bit clearer.
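For example (the nginx image is just an illustration):

# foreground: your console stays attached to the container's output
docker run nginx

# detached: the container ID is printed and you get your prompt back
docker run -d nginx

# follow the detached container's output later
docker logs -f <container_ID>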
Kubernetes
34,029,680
136
I have followed the helloword tutorial on http://kubernetes.io/docs/hellonode/. When I run: kubectl run hello-node --image=gcr.io/PROJECT_ID/hello-node:v1 --port=8080 I get: The connection to the server localhost:8080 was refused - did you specify the right host or port? Why does the command line try to connect to the localhost?
The issue is that your kubeconfig is not right. To auto-generate it run: gcloud container clusters get-credentials "CLUSTER NAME" This worked for me.
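If you run multiple clusters or projects, it helps to be explicit (the zone and project values here are placeholders):

gcloud container clusters get-credentials CLUSTER_NAME --zone us-central1-a --project my-project-id
kubectl get nodes   # should now talk to the GKE cluster instead of localhost:8080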
Kubernetes
36,650,642
136
I was playing around in minikube and installed the wrong version of istio. I ran: kubectl apply -f install/kubernetes/istio-demo-auth.yaml instead of: kubectl apply -f install/kubernetes/istio-demo.yaml I figured I would just undo it and install the right one. But I cannot seem to find an unapply command. How do I undo a "kubectl apply" command?
One way would be kubectl delete -f <filename>, but it implies a few things: The resources were first created. It simply removes all of those; if you really want to "revert to the previous state", I'm not sure there are built-in tools in Kubernetes to do that (so you really would restore from a backup, if you have one). The containers did not modify the host machines: containers may mount the root filesystem and change it, or kernel subsystems (iptables, etc.). The delete command would not revert that either, and in that case you really need to check the documentation for the product to see if it offers any official way to guarantee a proper cleanup.
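In this particular case that would look roughly like the following (assuming the manifest you applied is still on disk):

kubectl delete -f install/kubernetes/istio-demo-auth.yaml
kubectl apply -f install/kubernetes/istio-demo.yaml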
Kubernetes
57,683,206
136
For example, a deployment yaml file: apiVersion: extensions/v1beta1 kind: Deployment metadata: name: guestbook spec: replicas: 2 template: metadata: labels: app: guestbook spec: container: - name: guestbook image: {{Here want to read value from config file outside}} There is a ConfigMap feature with Kubernetes, but that's also write the key/value to the yaml file. Is there a way to set the key to environment variables?
You can also use envsubst when deploying. e.g. cat app/deployment.yaml | envsubst | kubectl apply ... It will replace all variables in the file with their values. We are successfully using this approach on our CI when deploying to multiple environments, also to inject the CI_TAG etc into the deployments.
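A small end-to-end sketch (the variable name and image are made up):

# deployment.yaml contains a line such as:
#   image: registry.example.com/myapp:${CI_TAG}
export CI_TAG=v1.2.3
envsubst < app/deployment.yaml | kubectl apply -f -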
Kubernetes
48,296,082
133
I am using kubectl scale --replicas=0 -f deployment.yaml to stop all my running pods. Please let me know if there are better ways to bring down all running pods to Zero keeping configuration, deployments etc.. intact, so that I can scale up later as required.
You are doing the correct action; traditionally the scale verb is applied just to the resource name, as in kubectl scale deploy my-awesome-deployment --replicas=0, which removes the need to always point at the specific file that describes that deployment, but there's nothing wrong (that I know of) with using the file if that is more convenient for you.
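Scaling back up later is the same command with a different replica count, e.g. (deployment name assumed):

kubectl scale deploy my-awesome-deployment --replicas=0   # stop everything
kubectl scale deploy my-awesome-deployment --replicas=3   # resume later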
Kubernetes
47,572,597
130
Kubernetes seems to be all about deploying containers to a cloud of clusters. What it doesn't seem to touch is development and staging environments (or such). During development you want to be as close as possible to production environment with some important changes: Deployed locally (or at least somewhere where you and only you can access) Use latest source code on page refresh (supposing its a website; ideally page auto-refresh on local file save which can be done if you mount source code and use some stuff like Yeoman). Similarly one may want a non-public environment to do continuous integration. Does Kubernetes support such kind of development environment or is it something one has to build, hoping that during production it'll still work?
Update (2016-07-15) With the release of Kubernetes 1.3, Minikube is now the recommended way to run Kubernetes on your local machine for development. You can run Kubernetes locally via Docker. Once you have a node running you can launch a pod that has a simple web server and mounts a volume from your host machine. When you hit the web server it will read from the volume and if you've changed the file on your local disk it can serve the latest version.
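A minimal sketch of that idea on Minikube (the image and paths are assumptions; note that hostPath refers to the Minikube VM's filesystem, so you first expose your working copy into the VM, e.g. with minikube mount ./public:/hosthome/project/public):

apiVersion: v1
kind: Pod
metadata:
  name: dev-web
spec:
  containers:
    - name: web
      image: nginx
      volumeMounts:
        - name: src
          mountPath: /usr/share/nginx/html
  volumes:
    - name: src
      hostPath:
        path: /hosthome/project/public   # path inside the Minikube VM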
Kubernetes
29,746,926
126
I'm looking for a way to tell (from within a script) when a Kubernetes Job has completed. I want to then get the logs out of the containers and perform cleanup. What would be a good way to do this? Would the best way be to run kubectl describe job <job_name> and grep for 1 Succeeded or something of the sort?
Since version 1.11, you can do: kubectl wait --for=condition=complete job/myjob and you can also set a timeout: kubectl wait --for=condition=complete --timeout=30s job/myjob
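In a script that then collects logs and cleans up (which is what the question is after), that might look like this (job name assumed; note a failed job never reaches the complete condition, so you may want a timeout or a second wait on condition=failed):

kubectl wait --for=condition=complete --timeout=10m job/myjob
kubectl logs job/myjob
kubectl delete job/myjob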
Kubernetes
44,686,568
125
kubectl logs -f pod shows all logs from the beginning and it becomes a problem when the log is huge and we have to wait for a few minutes to get the last log. Its become more worst when connecting remotely. Is there a way that we can tail the logs for the last 100 lines of logs and follow them?
In a cluster, best practice is to gather all logs at a single point through an aggregator and analyze them with a dedicated tool. For that reason the log command in K8S is quite basic. Anyway kubectl logs -h shows some options useful for you:

# Display only the most recent 20 lines of output in pod nginx
kubectl logs --tail=20 nginx

# Show all logs from pod nginx written in the last hour
kubectl logs --since=1h nginx

Some tools with your requirements (and more) are available on github, some of which are:
https://github.com/boz/kail
https://github.com/stern/stern
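Combining those flags with -f answers the question directly:

kubectl logs --tail=100 -f <pod-name>   # start from the last 100 lines, then follow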
Kubernetes
51,835,066
125
I'm able to connect to an ElastiCache Redis instance in a VPC from EC2 instances. But I would like to know if there is a way to connect to an ElastiCache Redis node outside of Amazon EC2 instances, such as from my local dev setup or VPS instances provided by other vendors. Currently when trying from my local set up: redis-cli -h my-node-endpoint -p 6379 I only get a timeout after some time.
SSH port forwarding should do the trick. Try running this from your client:

ssh -f -N -L 6379:<your redis node endpoint>:6379 <your EC2 node that you use to connect to redis>

Then from your client:

redis-cli -h 127.0.0.1 -p 6379

Please note that the default port for redis is 6379, not 6739. And also make sure you allow the security group of the EC2 node that you are using to connect to your redis instance into your Cache security group. Also, AWS now supports accessing your cluster; more info here. Update 04/13/2024: Many folks are running Kubernetes today. It's a very typical scenario for folks to have services running in Kubernetes accessing ElastiCache Redis. So there is a way to do this (test your redis connection locally through Kubernetes) using the kubectl ssh jump plugin. Follow the installation instructions. Then see case 2 here. For example:

kubectl ssh-jump sshjump \
  -i ~/.ssh/id_rsa_k8s -p ~/.ssh/id_rsa_k8s.pub \
  -a "-L 6379:<your redis node endpoint>:6379"

and then from your client:

redis-cli -h 127.0.0.1 -p 6379
Kubernetes
21,917,661
121
I am trying to get the namespace of the currently used Kubernetes context using kubectl. I know there is a command kubectl config get-contexts but I see that it cannot output in json/yaml. The only script I've come with is this: kubectl config get-contexts --no-headers | grep '*' | grep -Eo '\S+$'
This works if you have a namespace selected in your context: kubectl config view --minify -o jsonpath='{..namespace}' Also, kube-ps1 can be used to display your current context and namespace in your shell prompt.
Kubernetes
55,853,977
121
While I explored yaml definitions of Kubernetes templates, I stumbled across different definitions of sizes. First I thought it's about the apiVersions but they are the same. So what is the difference there? Which are right when both are the same? storage: 5G and storage: 5Gi volumeClaimTemplates: - metadata: name: mongo-persistent-storage spec: resources: requests: storage: 2Gi see here in detail: https://github.com/cvallance/mongo-k8s-sidecar/blob/master/example/StatefulSet/mongo-statefulset.yaml and this one: volumeClaimTemplates: - metadata: name: mongo-persistent-storage spec: resources: requests: storage: 5G here in detail: https://github.com/openebs/openebs/blob/master/k8s/demo/mongodb/mongo-statefulset.yml
From Kubernetes source: Limits and requests for memory are measured in bytes. You can express memory as a plain integer or as a fixed-point integer using one of these suffixes: E, P, T, G, M, K. You can also use the power-of-two equivalents: Ei, Pi, Ti, Gi, Mi, Ki. For example, the following represent roughly the same value: 128974848, 129e6, 129M, 123Mi So those are the "bibyte" counterparts, like user2864740 commented. A little info on those orders of magnitude: The kibibyte was designed to replace the kilobyte in those computer science contexts in which the term kilobyte is used to mean 1024 bytes. The interpretation of kilobyte to denote 1024 bytes, conflicting with the SI definition of the prefix kilo (1000), used to be common. So, as you can see, 5G means 5 Gigabytes while 5Gi means 5 Gibibytes. They amount to: 5 G = 5000000 KB / 5000 MB 5 Gi = 5368709.12 KB / 5368.70 MB Therefore, in terms of size, they are not the same. But don't worry if you don't understand the differences at first read. Even Windows gets it wrong!
Kubernetes
50,804,915
117
When I try to install a chart with helm: helm install stable/nginx-ingress --name my-nginx I get the error: Error: unknown flag: --name But I see the above command format in many documentations. Version: version.BuildInfo{Version:"v3.0.0-beta.3", GitCommit:"5cb923eecbe80d1ad76399aee234717c11931d9a", GitTreeState:"clean", GoVersion:"go1.12.9"} Platform: Windows 10 64 What could be the reason?
In Helm v3, the release name is now mandatory as part of the commmand, see helm install --help: Usage: helm install [NAME] [CHART] [flags] Your command should be: helm install my-nginx stable/nginx-ingress Furthermore, Helm will not auto-generate names for releases anymore. If you want the "old behavior", you can use the --generate-name flag. e.g: helm install --generate-name stable/nginx-ingress The v3 docs are available at https://v3.helm.sh/docs/, but as it is a beta version, the docs will not be accurate for a while. It's better to rely on the CLI --help, that is auto-generated by Go/Cobra.
Kubernetes
57,961,162
117
Is there a simple kubectl command to take a kubeconfig file (that contains a cluster+context+user) and merge it into the ~/.kube/config file as an additional context?
Do this: export KUBECONFIG=~/.kube/config:~/someotherconfig kubectl config view --flatten You can then pipe that out to a new file if needed.
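To persist the merged result, write it to a temporary file first (so you don't truncate ~/.kube/config while it is being read) and then move it into place:

export KUBECONFIG=~/.kube/config:~/someotherconfig
kubectl config view --flatten > /tmp/merged_kubeconfig && mv /tmp/merged_kubeconfig ~/.kube/config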
Kubernetes
46,184,125
116
I have installed helm 2.6.2 on the kubernetes 8 cluster. helm init worked fine. but when I run helm list it giving this error. helm list Error: configmaps is forbidden: User "system:serviceaccount:kube-system:default" cannot list configmaps in the namespace "kube-system" How to fix this RABC error message?
Once these commands:

kubectl create serviceaccount --namespace kube-system tiller
kubectl create clusterrolebinding tiller-cluster-rule --clusterrole=cluster-admin --serviceaccount=kube-system:tiller
kubectl patch deploy --namespace kube-system tiller-deploy -p '{"spec":{"template":{"spec":{"serviceAccount":"tiller"}}}}'
helm init --service-account tiller --upgrade

were run, the issue has been solved.
Kubernetes
46,672,523
116
I created the following persistent volume by calling kubectl create -f nameOfTheFileContainingTheFollowingContent.yaml apiVersion: v1 kind: PersistentVolume metadata: name: pv-monitoring-static-content spec: capacity: storage: 100Mi accessModes: - ReadWriteOnce hostPath: path: "/some/path" --- apiVersion: v1 kind: PersistentVolumeClaim metadata: name: pv-monitoring-static-content-claim spec: accessModes: - ReadWriteOnce storageClassName: "" resources: requests: storage: 100Mi After this I tried to delete the pvc. But this command stuck. when calling kubectl describe pvc pv-monitoring-static-content-claim I get the following result Name: pv-monitoring-static-content-claim Namespace: default StorageClass: Status: Terminating (lasts 5m) Volume: pv-monitoring-static-content Labels: <none> Annotations: pv.kubernetes.io/bind-completed=yes pv.kubernetes.io/bound-by-controller=yes Finalizers: [foregroundDeletion] Capacity: 100Mi Access Modes: RWO Events: <none> And for kubectl describe pv pv-monitoring-static-content Name: pv-monitoring-static-content Labels: <none> Annotations: pv.kubernetes.io/bound-by-controller=yes Finalizers: [kubernetes.io/pv-protection foregroundDeletion] StorageClass: Status: Terminating (lasts 16m) Claim: default/pv-monitoring-static-content-claim Reclaim Policy: Retain Access Modes: RWO Capacity: 100Mi Node Affinity: <none> Message: Source: Type: HostPath (bare host directory volume) Path: /some/path HostPathType: Events: <none> There is no pod running that uses the persistent volume. Could anybody give me a hint why the pvc and the pv are not deleted?
This happens when the persistent volume is protected. You should be able to cross-verify this:

Command:
kubectl describe pvc PVC_NAME | grep Finalizers

Output:
Finalizers: [kubernetes.io/pvc-protection]

You can fix this by setting finalizers to null using kubectl patch:

kubectl patch pvc PVC_NAME -p '{"metadata":{"finalizers": []}}' --type=merge

Ref: Storage Object in Use Protection
Kubernetes
51,358,856
116
I am trying to understand Stateful Sets. How does their use differ from the use of "stateless" Pods with Persistent Volumes? That is, assuming that a "normal" Pod may lay claim to persistent storage, what obvious thing am I missing that requires this new construct (with ordered start/stop and so on)?
Yes, a regular pod can use a persistent volume. However, sometimes you have multiple pods that logically form a "group". Examples of this would be database replicas, ZooKeeper hosts, Kafka nodes, etc. In all of these cases there's a bunch of servers and they work together and talk to each other. What's special about them is that each individual in the group has an identity. For example, for a database cluster one is the master and two are followers and each of the followers communicates with the master letting it know what it has and has not synced. So the followers know that "db-x-0" is the master and the master knows that "db-x-2" is a follower and has all the data up to a certain point but still needs data beyond that. In such situations you need a few things you can't easily get from a regular pod: A predictable name: you want to start your pods telling them where to find each other so they can form a cluster, elect a leader, etc. but you need to know their names in advance to do that. Normal pod names are random so you can't know them in advance. A stable address/DNS name: you want whatever names were available in step (1) to stay the same. If a normal pod restarts (you redeploy, the host where it was running dies, etc.) on another host it'll get a new name and a new IP address. A persistent link between an individual in the group and their persistent volume: if the host where one of your database master was running dies it'll get moved to a new host but should connect to the same persistent volume as there's one and only 1 volume that contains the right data for that "individual". So, for example, if you redeploy your group of 3 database hosts you want the same individual (by DNS name and IP address) to get the same persistent volume so the master is still the master and still has the same data, replica1 gets it's data, etc. StatefulSets solve these issues because they provide (quoting from https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/): Stable, unique network identifiers. Stable, persistent storage. Ordered, graceful deployment and scaling. Ordered, graceful deletion and termination. I didn't really talk about (3) and (4) but that can also help with clusters as you can tell the first one to deploy to become the master and the next one find the first and treat it as master, etc. As some have noted, you can indeed can some of the same benefits by using regular pods and services, but its much more work. For example, if you wanted 3 database instances you could manually create 3 deployments and 3 services. Note that you must manually create 3 deployments as you can't have a service point to a single pod in a deployment. Then, to scale up you'd manually create another deployment and another service. This does work and was somewhat common practice before PetSet/PersistentSet came along. Note that it is missing some of the benefits listed above (persistent volume mapping & fixed start order for example).
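A minimal sketch of the construct (names, image and sizes are arbitrary): a headless Service plus a StatefulSet, which gives each pod a stable name (db-x-0, db-x-1, ...) and its own volume via volumeClaimTemplates:

apiVersion: v1
kind: Service
metadata:
  name: db-x
spec:
  clusterIP: None          # headless: gives each pod a stable DNS name
  selector:
    app: db-x
  ports:
    - port: 5432
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: db-x
spec:
  serviceName: db-x
  replicas: 3
  selector:
    matchLabels:
      app: db-x
  template:
    metadata:
      labels:
        app: db-x
    spec:
      containers:
        - name: db
          image: postgres:13
          volumeMounts:
            - name: data
              mountPath: /var/lib/postgresql/data
  volumeClaimTemplates:
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 1Gi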
Kubernetes
41,732,819
115
The docs are great about explaining how to set a taint on a node, or remove one. And I can use kubectl describe node to get a verbose description of one node, including its taints. But what if I've forgotten the name of the taint I created, or which nodes I set it on? Can I list all of my nodes, with any taints that exist on them?
kubectl get nodes -o json | jq '.items[].spec' which will give the complete spec with node name, or: kubectl get nodes -o json | jq '.items[].spec.taints' will produce the list of the taints per each node
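If you prefer plain kubectl without jq, a custom-columns view gives a quick node/taints listing as well:

kubectl get nodes -o custom-columns=NAME:.metadata.name,TAINTS:.spec.taints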
Kubernetes
43,379,415
114
I have a local kubernetes cluster on my local docker desktop. This is how my kubernetes service looks like when I do a kubectl describe service Name: helloworldsvc Namespace: test Labels: app=helloworldsvc Annotations: kubectl.kubernetes.io/last-applied-configuration: {"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"app":"helloworldsvc"},"name":"helloworldsvc","namespace":"test... Selector: app=helloworldapp Type: ClusterIP IP: 10.108.182.240 Port: http 9111/TCP TargetPort: 80/TCP Endpoints: 10.1.0.28:80 Session Affinity: None Events: <none> This service is pointing to a deployment with a web app. My question how to I find the url for this service? I already tried http://localhost:9111/ and that did not work. I verified that the pod that this service points to is up and running.
URL of service is in the below format: <service-name>.<namespace>.svc.cluster.local:<service-port> In your case it is: helloworldsvc.test.svc.cluster.local:9111
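That DNS name only resolves inside the cluster. To reach a ClusterIP service from your local machine (as attempted in the question), one quick option is port-forwarding:

kubectl port-forward -n test svc/helloworldsvc 9111:9111
# now http://localhost:9111 reaches the service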
Kubernetes
59,558,303
113
As stated in the title, is it possible to find out a K8s cluster name from the API? I looked around the API and could not find it.
kubectl config current-context does the trick (it outputs little bit more, like project name, region, etc., but it should give you the answer you need).
Kubernetes
38,242,062
111
I am evaluating Kubernetes as a platform for our new application. For now, it looks all very exciting! However, I’m running into a problem: I’m hosting my cluster on GCE and I need some mechanism to share storage between two pods - the continous integration server and my application server. What’s the best way for doing this with kubernetes? None of the volume types seems to fit my needs, since GCE disks can’t be shared if one pod needs to write to the disk. NFS would be perfect, but seems to require special build options for the kubernetes cluster? EDIT: Sharing storage seems to be a problem that I have encountered multiple times now using Kubernetes. There are multiple use cases where I'd just like to have one volume and hook it up to multiple pods (with write access). I can only assume that this would be a common use case, no? EDIT2: For example, this page describes how to set up an Elasticsearch cluster, but wiring it up with persistent storage is impossible (as described here), which kind of renders it pointless :(
Firstly, do you really need multiple readers / writers? From my experience of Kubernetes / micro-service architecture (MSA), the issue is often more related to your design pattern. One of the fundamental design patterns with MSA is the proper encapsulation of services, and this includes the data owned by each service. In much the same way as OOP, your service should look after the data that is related to its area of concern and should allow access to this data to other services via an interface. This interface could be an API, messages handled directly or via a brokage service, or using protocol buffers and gRPC. Generally, multi-service access to data is an anti-pattern akin to global variables in OOP and most programming languages. As an example, if you where looking to write logs, you should have a log service which each service can call with the relevant data it needs to log. Writing directly to a shared disk means that you'd need to update every container if you change your log directory structure, or decided to add extra functionality like sending emails on certain types of errors. In the major percentage of cases, you should be using some form of minimal interface before resorting to using a file system, avoiding the unintended side-effects of Hyrum's law that you are exposed to when using a file system. Without proper interfaces / contracts between your services, you heavily reduce your ability to build maintainable and resilient services. Ok, your situation is best solved using a file system. There are a number of options... There are obviously times when a file system that can handle multiple concurrent writers provides a superior solution over a more 'traditional' MSA forms of communication. Kubernetes supports a large number of volume types which can be found here. While this list is quite long, many of these volume types don't support multiple writers (also known as ReadWriteMany in Kubernetes). Those volume types that do support ReadWriteMany can be found in this table and at the time of writing this is AzureFile, CephFS, Glusterfs, Quobyte, NFS and PortworxVolume. There are also operators such as the popular rook.io which are powerful and provide some great features, but the learning curve for such systems can be a difficult climb when you just want a simple solution and keep moving forward. The simplest approach. In my experience, the best initial option is NFS. This is a great way to learn the basic ideas around ReadWriteMany Kubernetes storage, will serve most use cases and is the easiest to implement. After you've built a working knowledge of multi-service persistence, you can then make more informed decisions to use more feature rich offerings which will often require more work to implement. The specifics for setting up NFS differ based on how and where your cluster is running and the specifics of your NFS service and I've previously written two articles on how to set up NFS for on-prem clusters and using AWS NFS equivalent EFS on EKS clusters. These two articles give a good contrast for just how different implementations can be given your particular situation. For a bare minimum example, you will firstly need an NFS service. If you're looking to do a quick test or you have low SLO requirements, following this DO article is a great quick primer for setting up NFS on Ubuntu. If you have an existing NAS which provides NFS and is accessible from your cluster, this will also work as well. 
Once you have an NFS service, you can create a persistent volume similar to the following:

---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-name
spec:
  capacity:
    storage: 1Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteMany
  nfs:
    server: 255.0.255.0    # IP address of your NFS service
    path: "/desired/path/in/nfs"

A caveat here is that your nodes will need binaries installed to use NFS, and I've discussed this more in my on-prem cluster article. This is also the reason you need to use EFS when running on EKS as your nodes don't have the ability to connect to NFS. Once you have the persistent volume set up, it is a simple case of using it like you would any other volume.

---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-name
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
---
apiVersion: apps/v1
kind: Deployment
spec:
  template:
    spec:
      containers:
        - name: p-name
          volumeMounts:
            - mountPath: /data
              name: v-name
      volumes:
        - name: v-name
          persistentVolumeClaim:
            claimName: pvc-name
Kubernetes
31,693,529
110
I'm looking for some pros and cons of whether to go with Marathon and Chronos, Docker Swarm or Kubernetes when running Docker containers on DC/OS. For example, when is it better to use Marathon/Chronos than Kubernetes and vice versa? Right now I'm mostly into experimenting but hopefully we'll start using one of these services in production after the summer. This may disqualify Docker Swarm since I'm not sure if it'll be production ready by then. What I like about Docker Swarm is that it's essentially just "Docker commands" and you don't have to learn something completely new. We're already using docker-compose and that will work out of the box with Docker Swarm (at least in theory) so that would be a big plus. My main concern with Docker Swarm is if it'll cover all use cases required to run a system in production.
I'll try to break down the unique aspects of each container orchestration framework on Mesos. Use Docker Swarm if: You want to use the familiar Docker API to launch Docker containers on Mesos. Swarm may eventually provide an API to talk to Kubernetes (even K8s-Mesos) too. See: http://www.techrepublic.com/article/docker-and-mesos-like-peanut-butter-and-jelly/ Use Kubernetes-Mesos if: You want to launch K8s Pods, which are groups of containers co-scheduled and co-located together, sharing resources. You want to launch a service alongside one or more sidekick containers (e.g. log archiver, metrics monitor) that live next to the parent container. You want to use the K8s label-based service-discovery, load-balancing, and replication control. See http://kubernetesio.blogspot.com/2015/04/kubernetes-and-mesosphere-dcos.html Use Marathon if: You want to launch Docker or non-Docker long-running apps/services. You want to use Mesos attributes for constraint-based scheduling. You want to use Application Groups and Dependencies to launch, scale, or upgrade related services. You want to use health checks to automatically restart unhealthy services or rollback unhealthy deployments/upgrades. You want to integrate HAProxy or Consul for service discovery. You want to launch and monitor apps through a web UI or REST API. You want to use a framework built from the start with Mesos in mind. Use Chronos if: You want to launch Docker or non-Docker tasks that are expected to exit. You want to schedule a task to run at a specific time/schedule (a la cron). You want to schedule a DAG workflow of dependent tasks. You want to launch and monitor jobs through a web UI or REST API. You want to use a framework built from the start with Mesos in mind.
Kubernetes
29,198,840
109
My question is simple. How to execute a bash command in the pod? I want to do everything with a single bash command. [root@master ~]# kubectl exec -it --namespace="tools" mongo-pod --bash -c "mongo" Error: unknown flag: --bash So, the command is simply ignored. [root@master ~]# kubectl exec -it --namespace="tools" mongo-pod bash -c "mongo" root@mongo-deployment-78c87cb84-jkgxx:/# Or so. [root@master ~]# kubectl exec -it --namespace="tools" mongo-pod bash mongo Defaulting container name to mongo. Use 'kubectl describe pod/mongo-deployment-78c87cb84-jkgxx -n tools' to see all of the containers in this pod. /usr/bin/mongo: /usr/bin/mongo: cannot execute binary file command terminated with exit code 126 If it's just a bash, it certainly works. But I want to jump into the mongo shell immediatelly. I found a solution, but it does not work. Tell me if this is possible now? Executing multiple commands( or from a shell script) in a kubernetes pod Thanks.
The double dash symbol "--" is used to separate the command you want to run inside the container from the kubectl arguments. So the correct way is: kubectl exec -it --namespace=tools mongo-pod -- bash -c "mongo" You forgot a space between "--" and "bash". To execute multiple commands you may want: to create a script and mount it as a volume in your pod and execute it to launch a side container with the script and run it
Kubernetes
51,247,619
109
I need to configure Ingress Nginx on azure k8s, and my question is if is possible to have ingress configured in one namespace et. ingress-nginx and some serivces in other namespace eg. resources? My files looks like so: # ingress-nginx.yaml apiVersion: extensions/v1beta1 kind: Deployment metadata: name: nginx-ingress-controller namespace: ingress-nginx spec: replicas: 3 selector: matchLabels: app: ingress-nginx template: metadata: labels: app: ingress-nginx annotations: prometheus.io/port: '10254' prometheus.io/scrape: 'true' spec: containers: - name: nginx-ingress-controller image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.12.0 args: - /nginx-ingress-controller - --default-backend-service=$(POD_NAMESPACE)/default-http-backend - --configmap=$(POD_NAMESPACE)/nginx-configuration - --tcp-services-configmap=$(POD_NAMESPACE)/tcp-services - --udp-services-configmap=$(POD_NAMESPACE)/udp-services - --annotations-prefix=nginx.ingress.kubernetes.io - --publish-service=$(POD_NAMESPACE)/ingress-nginx env: - name: POD_NAME valueFrom: fieldRef: fieldPath: metadata.name - name: POD_NAMESPACE valueFrom: fieldRef: fieldPath: metadata.namespace ports: - name: http containerPort: 80 - name: https containerPort: 443 livenessProbe: failureThreshold: 3 httpGet: path: /healthz port: 10254 scheme: HTTP initialDelaySeconds: 10 periodSeconds: 10 successThreshold: 1 timeoutSeconds: 1 readinessProbe: failureThreshold: 3 httpGet: path: /healthz port: 10254 scheme: HTTP periodSeconds: 10 successThreshold: 1 timeoutSeconds: 1 # configmap.yaml kind: ConfigMap apiVersion: v1 metadata: name: nginx-configuration namespace: ingress-nginx labels: app: ingress-nginx --- kind: ConfigMap apiVersion: v1 metadata: name: tcp-services namespace: ingress-nginx --- kind: ConfigMap apiVersion: v1 metadata: name: udp-services namespace: ingress-nginx --- # default-backend.yaml apiVersion: extensions/v1beta1 kind: Deployment metadata: name: default-http-backend labels: app: default-http-backend namespace: ingress-nginx spec: replicas: 1 selector: matchLabels: app: default-http-backend template: metadata: labels: app: default-http-backend spec: terminationGracePeriodSeconds: 60 containers: - name: default-http-backend # Any image is permissible as long as: # 1. It serves a 404 page at / # 2. 
It serves 200 on a /healthz endpoint image: gcr.io/google_containers/defaultbackend:1.4 livenessProbe: httpGet: path: /healthz port: 8080 scheme: HTTP initialDelaySeconds: 30 timeoutSeconds: 5 ports: - containerPort: 8080 resources: limits: cpu: 10m memory: 20Mi requests: cpu: 10m memory: 20Mi --- apiVersion: v1 kind: Service metadata: name: default-http-backend namespace: ingress-nginx labels: app: default-http-backend spec: ports: - port: 80 targetPort: 8080 selector: app: default-http-backend kind: Service apiVersion: v1 metadata: name: ingress-nginx namespace: ingress-nginx labels: app: ingress-nginx spec: externalTrafficPolicy: Local type: LoadBalancer selector: app: ingress-nginx ports: - name: http port: 80 targetPort: http - name: https port: 443 targetPort: https # app-ingress.yaml apiVersion: extensions/v1beta1 kind: Ingress metadata: name: app-ingress namespace: ingress-nginx annotations: kubernetes.io/ingress.class: nginx nginx.ingress.kubernetes.io/rewrite-target: / spec: tls: - hosts: - api-sand.fake.com rules: - host: api-sand.fake.com http: paths: - backend: serviceName: api-sand servicePort: 80 path: / And then I have some app running in the resources namespace, and problem is that I am getting the following error error obtaining service endpoints: error getting service resources/api-sand from the cache: service resources/api-sand was not found If I deploy api-sand in the same namespace where ingress is then this service works fine.
I would like to simplify the answer a bit for those who are relatively new to Kubernetes and its ingress options. There are 2 separate things that need to be present for ingress(es) to work: Ingress Controller: a separate DaemonSet (a controller which runs on all nodes, including any future ones) along with a Service that can be used to utilize routing and proxying. It's based for example on NGINX which acts as the old-school reverse proxy receiving incoming traffic and forwarding it to HTTP(S) routes defined in the Ingress resources in point 2 below (distinguished by their different routes/URLs); Ingress rules: separate Kubernetes resources with kind: Ingress. Will only take effect if Ingress Controller is already deployed on that node. While Ingress Controller can be deployed in any namespace it is usually deployed in a namespace separate from your app services (e.g. ingress or kube-system). It can see Ingress rules in all other namespaces and pick them up. However, each of the Ingress rules must reside in the namespace where the app that they configure reside. There are some workarounds for that, but this is the most common approach.
Kubernetes
59,844,622
109
I am new to kubernetes. I have an issue in the pods. When I run the command kubectl get pods Result: NAME READY STATUS RESTARTS AGE mysql-apim-db-1viwg 1/1 Running 1 20h mysql-govdb-qioee 1/1 Running 1 20h mysql-userdb-l8q8c 1/1 Running 0 20h wso2am-default-813fy 0/1 ImagePullBackOff 0 20h Due to an issue of "wso2am-default-813fy" node, I need to restart it. Any suggestion?
In case of not having the yaml file: kubectl get pod PODNAME -n NAMESPACE -o yaml | kubectl replace --force -f -
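If the pod is managed by a controller (as these replication-controller pods are), simply deleting it also works, because the controller recreates it:

kubectl delete pod wso2am-default-813fy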
Kubernetes
40,259,178
108
Let say I want to find the kubelet and apiserver version of my k8s master(s), what's the best way to do it? I am aware of the following commands: kubectl cluster-info which only shows the endpoints. kubectl get nodes; kubectl describe node <node>; which shows very detail information but only the nodes and not master. There's also kubectl version but that only shows the kubectl version and not the kubelet or apiserver version. What other commands can I use to identify the properties of my cluster?
kubectl version also shows the apiserver version. For example, this is the output when I run it: $ kubectl version Client Version: version.Info{Major:"1", Minor:"2", GitVersion:"v1.2.4", GitCommit:"3eed1e3be6848b877ff80a93da3785d9034d0a4f", GitTreeState:"clean"} Server Version: version.Info{Major:"1", Minor:"2", GitVersion:"v1.2.4", GitCommit:"3eed1e3be6848b877ff80a93da3785d9034d0a4f", GitTreeState:"clean"} The second line ("Server Version") contains the apiserver version. There isn't a way to get the master's kubelet version if it isn't registered as one of the nodes (which it isn't if it isn't showing up in kubectl get nodes), but in most deployments it'll be the same version as the apiserver.
Kubernetes
38,230,452
107
Can I have multiple values.yaml files in a Helm chart? Something like mychart/templates/internalValues.yaml, mychart/templates/customSettings.yaml, etc? Accessing properties in a values.yaml file can be done by {{ .Values.property1 }}. How would I reference the properties in these custom values.yaml files?
Yes, it's possible to have multiple values files with Helm. Just use the --values flag (or -f). Example: helm install ./path --values ./internalValues.yaml --values ./customSettings.yaml You can also pass in a single value using --set. Example: helm install ./path --set username=ADMIN --set password=${PASSWORD} From the official documentation: You can specify the '--values'/'-f' flag multiple times. The priority will be given to the last (right-most) file specified. You can specify the '--set' flag multiple times. The priority will be given to the last (right-most) set specified. (Thanks to Seth for the updated docs link)
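As for referencing the properties: all supplied files are merged into the single .Values object (with the right-most file winning on conflicts), so templates reference a property the same way regardless of which file defined it, e.g.:

{{ .Values.property1 }}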
Kubernetes
51,097,553
105
I am building the Dockerfile for python script which will run in minikube windows 10 system below is my Dockerfile Building the docker using the below command docker build -t python-helloworld . and loading that in minikube docker demon docker save python-helloworld | (eval $(minikube docker-env) && docker load) Docker File FROM python:3.7-alpine #add user group and ass user to that group RUN addgroup -S appgroup && adduser -S appuser -G appgroup #creates work dir WORKDIR /app #copy python script to the container folder app COPY helloworld.py /app/helloworld.py #user is appuser USER appuser ENTRYPOINT ["python", "/app/helloworld.py"] pythoncronjob.yml file (cron job file) apiVersion: batch/v1beta1 kind: CronJob metadata: name: python-helloworld spec: schedule: "*/1 * * * *" jobTemplate: spec: backoffLimit: 5 template: spec: containers: - name: python-helloworld image: python-helloworld imagePullPolicy: IfNotPresent command: [/app/helloworld.py] restartPolicy: OnFailure Below is the command to run this Kubernetes job kubectl create -f pythoncronjob.yml But getting the below error job is not running scuessfully but when u ran the Dockerfile alone its work fine standard_init_linux.go:211: exec user process caused "exec format error"
This can also happen when your host machine has a different architecture from your guest container image. E.g. running an arm container on a host with x86-64 architecture
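A quick way to check for that mismatch (format string per docker image inspect):

docker image inspect python-helloworld --format '{{.Architecture}}'   # e.g. arm64
uname -m                                                              # host architecture, e.g. x86_64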
Kubernetes
58,298,774
101
I have an admin.conf file containing info about a cluster, so that the following command works fine: kubectl --kubeconfig ./admin.conf get nodes How can I config kubectl to use the cluster, user and authentication from this file as default in one command? I only see separate set-cluster, set-credentials, set-context, use-context etc. I want to get the same output when I simply run: kubectl get nodes
The best way I've found was to use an environment variable: export KUBECONFIG=/path/to/admin.conf
Kubernetes
40,447,295
100
In kubernetes there is a rolling update (automatically without downtime), but there is not a rolling restart, at least I could not find one. We have to change the deployment yaml. Is there a way to do a rolling "restart", preferably without changing the deployment yaml?
Before kubernetes 1.15 the answer is no. But there is a workaround of patching deployment spec with a dummy annotation: kubectl patch deployment web -p \ "{\"spec\":{\"template\":{\"metadata\":{\"annotations\":{\"date\":\"`date +'%s'`\"}}}}}" As of kubernetes 1.15 you can use: kubectl rollout restart deployment your_deployment_name CLI Improvements Created a new kubectl rollout restart command that does a rolling restart of a deployment. kubectl rollout restart now works for DaemonSets and StatefulSets
Kubernetes
57,559,357
99
In minikube, how to expose a service using nodeport ? For example, I start a kubernetes cluster using the following command and create and expose a port like this: $ minikube start $ kubectl run hello-minikube --image=gcr.io/google_containers/echoserver:1.4 --port=8080 $ kubectl expose deployment hello-minikube --type=NodePort $ curl $(minikube service hello-minikube --url) CLIENT VALUES: client_address=192.168.99.1 command=GET real path=/ .... Now how to access the exposed service from the host? I guess the minikube node needs to be configured to expose this port as well.
I am not exactly sure what you are asking as it seems you already know about the minikube service <SERVICE_NAME> --url command which will give you a url where you can access the service. In order to open the exposed service, the minikube service <SERVICE_NAME> command can be used: $ kubectl run hello-minikube --image=gcr.io/google_containers/echoserver:1.4 --port=8080 deployment "hello-minikube" created $ kubectl expose deployment hello-minikube --type=NodePort service "hello-minikube" exposed $ kubectl get svc NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE hello-minikube 10.0.0.102 <nodes> 8080/TCP 7s kubernetes 10.0.0.1 <none> 443/TCP 13m $ minikube service hello-minikube Opening kubernetes service default/hello-minikube in default browser... This command will open the specified service in your default browser. There is also a --url option for printing the url of the service which is what gets opened in the browser: $ minikube service hello-minikube --url http://192.168.99.100:31167
Kubernetes
40,767,164
98
Any idea to view the log files of a crashed pod in kubernetes? My pod is listing it's state as "CrashLoopBackOff" after started the replicationController. I search the available docs and couldn't find any.
Assuming that your pod still exists: kubectl logs <podname> --previous $ kubectl logs -h -p, --previous[=false]: If true, print the logs for the previous instance of the container in a pod if it exists.
Kubernetes
34,084,689
97
In the Kubernetes minikube tutorial there is this command to use Minikube Docker daemon : $ eval $(minikube docker-env) What exactly does this command do, that is, what exactly does minikube docker-env mean?
The command minikube docker-env returns a set of Bash environment variable exports to configure your local environment to re-use the Docker daemon inside the Minikube instance. Passing this output through eval causes bash to evaluate these exports and put them into effect. You can review the specific commands which will be executed in your shell by omitting the evaluation step and running minikube docker-env directly. However, this will not perform the configuration – the output needs to be evaluated for that. This is a workflow optimization intended to improve your experience with building and running Docker images which you can run inside the minikube environment. It is not mandatory that you re-use minikube's Docker daemon to use minikube effectively, but doing so will significantly improve the speed of your code-build-test cycle. In a normal workflow, you would have a separate Docker registry on your host machine to that in minikube, which necessitates the following process to build and run a Docker image inside minikube: Build the Docker image on the host machine. Re-tag the built image in your local machine's image registry with a remote registry or that of the minikube instance. Push the image to the remote registry or minikube. (If using a remote registry) Configure minikube with the appropriate permissions to pull images from the registry. Set up your deployment in minikube to use the image. By re-using the Docker registry inside Minikube, this becomes: Build the Docker image using Minikube's Docker instance. This pushes the image to Minikube's Docker registry. Set up your deployment in minikube to use the image. More details of the purpose can be found in the minikube docs.
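For illustration, the output that gets evaluated looks roughly like this (values are examples and vary by Minikube version and instance):

$ minikube docker-env
export DOCKER_TLS_VERIFY="1"
export DOCKER_HOST="tcp://192.168.99.100:2376"
export DOCKER_CERT_PATH="/home/user/.minikube/certs"
# To point your shell to minikube's docker-daemon, run:
# eval $(minikube docker-env)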
Kubernetes
52,310,599
96
I initialized the master node and add 2 worker nodes, but only master and one of the worker node show up when I run the following command: kubectl get nodes also, both these nodes are in 'Not Ready' state. What are the steps should I take to understand what the problem could be? I can ping all the nodes from each of the other nodes. The version of Kubernetes is 1.8. OS is Cent OS 7 I used the following repo to install Kubernetes: cat <<EOF > /etc/yum.repos.d/kubernetes.repo [kubernetes] name=Kubernetes baseurl=http://yum.kubernetes.io/repos/kubernetes-el7-x86_64 enabled=1 gpgcheck=0 repo_gpgcheck=0 EOF yum install kubelet kubeadm kubectl kubernetes-cni
First, describe nodes and see if it reports anything:

$ kubectl describe nodes

Look for conditions, capacity and allocatable:

Conditions:
  Type            Status
  ----            ------
  OutOfDisk       False
  MemoryPressure  False
  DiskPressure    False
  Ready           True
Capacity:
  cpu:     2
  memory:  2052588Ki
  pods:    110
Allocatable:
  cpu:     2
  memory:  1950188Ki
  pods:    110

If everything is alright here, SSH into the node and observe kubelet logs to see if it reports anything, like certificate errors, authentication errors etc. If kubelet is running as a systemd service, you can use

$ journalctl -u kubelet
Kubernetes
47,107,117
95
I've lost the original 'kubeadm join' command when I previously ran kubeadm init. How can I retrieve this value again?
kubeadm token create --print-join-command
Kubernetes
51,126,164
95
Some characteristics of Apache Parquet are: Self-describing Columnar format Language-independent In comparison to Apache Avro, Sequence Files, RC File etc. I want an overview of the formats. I have already read : How Impala Works with Hadoop File Formats. It gives some insights on the formats but I would like to know how the access to data & storage of data is done in each of these formats. How does Parquet have an advantage over the others?
I think the main difference I can describe relates to record oriented vs. column oriented formats. Record oriented formats are what we're all used to -- text files, delimited formats like CSV, TSV. AVRO is slightly cooler than those because it can change schema over time, e.g. adding or removing columns from a record. Other tricks of various formats (especially including compression) involve whether a format can be split -- that is, can you read a block of records from anywhere in the dataset and still know it's schema? But here's more detail on columnar formats like Parquet. Parquet, and other columnar formats handle a common Hadoop situation very efficiently. It is common to have tables (datasets) having many more columns than you would expect in a well-designed relational database -- a hundred or two hundred columns is not unusual. This is so because we often use Hadoop as a place to denormalize data from relational formats -- yes, you get lots of repeated values and many tables all flattened into a single one. But it becomes much easier to query since all the joins are worked out. There are other advantages such as retaining state-in-time data. So anyway it's common to have a boatload of columns in a table. Let's say there are 132 columns, and some of them are really long text fields, each different column one following the other and use up maybe 10K per record. While querying these tables is easy with SQL standpoint, it's common that you'll want to get some range of records based on only a few of those hundred-plus columns. For example, you might want all of the records in February and March for customers with sales > $500. To do this in a row format the query would need to scan every record of the dataset. Read the first row, parse the record into fields (columns) and get the date and sales columns, include it in your result if it satisfies the condition. Repeat. If you have 10 years (120 months) of history, you're reading every single record just to find 2 of those months. Of course this is a great opportunity to use a partition on year and month, but even so, you're reading and parsing 10K of each record/row for those two months just to find whether the customer's sales are > $500. In a columnar format, each column (field) of a record is stored with others of its kind, spread all over many different blocks on the disk -- columns for year together, columns for month together, columns for customer employee handbook (or other long text), and all the others that make those records so huge all in their own separate place on the disk, and of course columns for sales together. Well heck, date and months are numbers, and so are sales -- they are just a few bytes. Wouldn't it be great if we only had to read a few bytes for each record to determine which records matched our query? Columnar storage to the rescue! Even without partitions, scanning the small fields needed to satisfy our query is super-fast -- they are all in order by record, and all the same size, so the disk seeks over much less data checking for included records. No need to read through that employee handbook and other long text fields -- just ignore them. So, by grouping columns with each other, instead of rows, you can almost always scan less data. Win! But wait, it gets better. 
If your query only needed to know those values and a few more (let's say 10 of the 132 columns) and didn't care about that employee handbook column, once it had picked the right records to return, it would now only have to go back to the 10 columns it needed to render the results, ignoring the other 122 of the 132 in our dataset. Again, we skip a lot of reading. (Note: for this reason, columnar formats are a lousy choice when doing straight transformations, for example, if you're joining all of two tables into one big(ger) result set that you're saving as a new table, the sources are going to get scanned completely anyway, so there's not a lot of benefit in read performance, and because columnar formats need to remember more about the where stuff is, they use more memory than a similar row format). One more benefit of columnar: data is spread around. To get a single record, you can have 132 workers each read (and write) data from/to 132 different places on 132 blocks of data. Yay for parallelization! And now for the clincher: compression algorithms work much better when it can find repeating patterns. You could compress AABBBBBBCCCCCCCCCCCCCCCC as 2A6B16C but ABCABCBCBCBCCCCCCCCCCCCCC wouldn't get as small (well, actually, in this case it would, but trust me :-) ). So once again, less reading. And writing too. So we read a lot less data to answer common queries, it's potentially faster to read and write in parallel, and compression tends to work much better. Columnar is great when your input side is large, and your output is a filtered subset: from big to little is great. Not as beneficial when the input and outputs are about the same. But in our case, Impala took our old Hive queries that ran in 5, 10, 20 or 30 minutes, and finished most in a few seconds or a minute.
Avro
36,822,224
209
All of these provide binary serialization, RPC frameworks and IDL. I'm interested in key differences between them and characteristics (performance, ease of use, programming languages support). If you know any other similar technologies, please mention it in an answer.
ASN.1 is an ISO/ISE standard. It has a very readable source language and a variety of back-ends, both binary and human-readable. Being an international standard (and an old one at that!) the source language is a bit kitchen-sinkish (in about the same way that the Atlantic Ocean is a bit wet) but it is extremely well-specified and has decent amount of support. (You can probably find an ASN.1 library for any language you name if you dig hard enough, and if not there are good C language libraries available that you can use in FFIs.) It is, being a standardized language, obsessively documented and has a few good tutorials available as well. Thrift is not a standard. It is originally from Facebook and was later open-sourced and is currently a top level Apache project. It is not well-documented -- especially tutorial levels -- and to my (admittedly brief) glance doesn't appear to add anything that other, previous efforts don't already do (and in some cases better). To be fair to it, it has a rather impressive number of languages it supports out of the box including a few of the higher-profile non-mainstream ones. The IDL is also vaguely C-like. Protocol Buffers is not a standard. It is a Google product that is being released to the wider community. It is a bit limited in terms of languages supported out of the box (it only supports C++, Python and Java) but it does have a lot of third-party support for other languages (of highly variable quality). Google does pretty much all of their work using Protocol Buffers, so it is a battle-tested, battle-hardened protocol (albeit not as battle-hardened as ASN.1 is. It has much better documentation than does Thrift, but, being a Google product, it is highly likely to be unstable (in the sense of ever-changing, not in the sense of unreliable). The IDL is also C-like. All of the above systems use a schema defined in some kind of IDL to generate code for a target language that is then used in encoding and decoding. Avro does not. Avro's typing is dynamic and its schema data is used at runtime directly both to encode and decode (which has some obvious costs in processing, but also some obvious benefits vis a vis dynamic languages and a lack of a need for tagging types, etc.). Its schema uses JSON which makes supporting Avro in a new language a bit easier to manage if there's already a JSON library. Again, as with most wheel-reinventing protocol description systems, Avro is also not standardized. Personally, despite my love/hate relationship with it, I'd probably use ASN.1 for most RPC and message transmission purposes, although it doesn't really have an RPC stack (you'd have to make one, but IOCs make that simple enough).
Avro
4,633,611
133
I am running into some issues setting up default values for Avro fields. I have a simple schema as given below: data.avsc: { "namespace":"test", "type":"record", "name":"Data", "fields":[ { "name": "id", "type": [ "long", "null" ] }, { "name": "value", "type": [ "string", "null" ] }, { "name": "raw", "type": [ "bytes", "null" ] } ] } I am using the avro-maven-plugin v1.7.6 to generate the Java model. When I create an instance of the model using: Data data = Data.newBuilder().build();, it fails with an exception: org.apache.avro.AvroRuntimeException: org.apache.avro.AvroRuntimeException: Field id type:UNION pos:0 not set and has no default value. But if I specify the "default" property, { "name": "id", "type": [ "long", "null" ], "default": "null" }, I do not get this error. I read in the documentation that first schema in the union becomes the default schema. So my question is, why do I still need to specify the "default" property? How else do I make a field optional? And if I do need to specify the default values, how does that work for a union; do I need to specify default values for each schema in the union and how does that work in terms of order/syntax? Thanks.
The default value of a union corresponds to the first schema of the union (Source). Your union is defined as ["long", "null"], therefore the default value must be a long number. null is not a long number, which is why you are getting an error. If you still want to define null as a default value then put the null schema first, i.e. change the union to ["null", "long"] instead.
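As a concrete illustration (adapted from the schema in the question, not part of the original answer), the field could be declared with the null branch first so that an unset id genuinely defaults to null:

{
  "name": "id",
  "type": ["null", "long"],
  "default": null
}

Note that the default here is the JSON literal null, not the string "null" that the question used.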
Avro
22,938,124
75
Does anyone know how to create an Avro schema which contains a list of objects of some class? I want my generated classes to look like below : class Child { String name; } class Parent { list<Child> children; } For this, I have written part of the schema file but do not know how to tell Avro to create a list of objects of type Child. My schema file looks like below : { "name": "Parent", "type":"record", "fields":[ { "name":"children", "type":{ "name":"Child", "type":"record", "fields":[ {"name":"name", "type":"string"} ] } } ] } Now the problem is that I can mark the field children as either Child type or array but do not know how to mark it as an array of objects of the Child class. Can anyone please help?
You need to use array type for creating the list. Following is the updated schema that handles your usecase. { "name": "Parent", "type":"record", "fields":[ { "name":"children", "type":{ "type": "array", "items":{ "name":"Child", "type":"record", "fields":[ {"name":"name", "type":"string"} ] } } } ] }
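For completeness, here is a rough Java sketch (my own, not part of the answer) of populating such a Parent record through the generic API; the file name parent.avsc is a placeholder for wherever the schema above is stored.

import java.io.File;
import java.util.Collections;
import org.apache.avro.Schema;
import org.apache.avro.generic.GenericArray;
import org.apache.avro.generic.GenericData;
import org.apache.avro.generic.GenericRecord;

public class ParentChildSketch {
    public static void main(String[] args) throws Exception {
        // "parent.avsc" is a hypothetical file holding the schema from the answer above.
        Schema parentSchema = new Schema.Parser().parse(new File("parent.avsc"));
        Schema arraySchema = parentSchema.getField("children").schema();
        Schema childSchema = arraySchema.getElementType();

        GenericRecord child = new GenericData.Record(childSchema);
        child.put("name", "Alice");

        GenericArray<GenericRecord> children =
                new GenericData.Array<>(arraySchema, Collections.singletonList(child));

        GenericRecord parent = new GenericData.Record(parentSchema);
        parent.put("children", children);
        System.out.println(parent);
    }
}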
Avro
25,076,786
54
Working on a pet project (cassandra, spark, hadoop, kafka) I need a data serialization framework. Checking out the common three frameworks - namely Thrift, Avro and Protocol Buffers - I noticed most of them seem to be dead-alive, having two minor releases a year at most. This leaves me with two assumptions: They are as complete as such a framework should be and just rest in maintenance mode as long as no new features are needed. There is no reason for such a framework to exist - though it is not obvious to me why. If so, what alternatives are out there? If anyone could give me a hint about my assumptions, any input is welcome.
Protocol Buffers is a very mature framework, having been first introduced nearly 15 years ago at Google. It's certainly not dead: Nearly every service inside Google uses it. But after so much usage, there probably isn't much that needs to change at this point. In fact, they did a major release (3.0) this year, but the release was as much about removing features as adding them. Protobuf's associated RPC system, gRPC, is relatively new and has had much more activity recently. (However, it is based on Google's internal RPC system which has seen some 12 years of development.) I don't know as much about Thrift or Avro but they have been around a while too.
Avro
40,968,303
53
I need to use the Confluent kafka-avro-serializer Maven artifact. From the official guide I should add this repository to my Maven pom <repository> <id>confluent</id> <url>http://packages.confluent.io/maven/</url> </repository> The problem is that the URL http://packages.confluent.io/maven/ seems to not work at the moment as I get the response below <Error> <Code>NoSuchKey</Code> <Message>The specified key does not exist.</Message> <Key>maven/</Key> <RequestId>15E287D11E5D4DFA</RequestId> <HostId> QVr9lCF0y3SrQoa1Z0jDWtmxD3eJz1gAEdivauojVJ+Bexb2gB6JsMpnXc+JjF95i082hgSLJSM= </HostId> </Error> In fact Maven does not find the artifact <dependency> <groupId>io.confluent</groupId> <artifactId>kafka-avro-serializer</artifactId> <version>3.1.1</version> </dependency> Do you know what the problem could be? Thank you
You need to add the Confluent repository to your pom.xml. Please add the lines below: <repositories> <repository> <id>confluent</id> <url>https://packages.confluent.io/maven/</url> </repository> </repositories>
Avro
43,488,853
52
I'm trying to get Python to parse Avro schemas such as the following... from avro import schema mySchema = """ { "name": "person", "type": "record", "fields": [ {"name": "firstname", "type": "string"}, {"name": "lastname", "type": "string"}, { "name": "address", "type": "record", "fields": [ {"name": "streetaddress", "type": "string"}, {"name": "city", "type": "string"} ] } ] }""" parsedSchema = schema.parse(mySchema) ...and I get the following exception: avro.schema.SchemaParseException: Type property "record" not a valid Avro schema: Could not make an Avro Schema object from record. What am I doing wrong?
According to other sources on the web I would rewrite your second address definition: mySchema = """ { "name": "person", "type": "record", "fields": [ {"name": "firstname", "type": "string"}, {"name": "lastname", "type": "string"}, { "name": "address", "type": { "type" : "record", "name" : "AddressUSRecord", "fields" : [ {"name": "streetaddress", "type": "string"}, {"name": "city", "type": "string"} ] } } ] }"""
Avro
11,764,287
45
I wrote an Avro schema in which some of the fields need to be of type String but Avro has generated those fields of type CharSequence. I am not able to find any way to tell Avro to make those fields of type String. I tried to use "fields": [ { "name":"startTime", "type":"string", "avro.java.stringImpl":"String" }, { "name":"endTime", "type":"string", "avro.java.string":"String" } ] but for both the fields Avro is generating fields of type CharSequence. Is there any other way to make those fields of type String?
If you want all your string fields to be instances of java.lang.String then you only have to configure the compiler: java -jar /path/to/avro-tools-1.7.7.jar compile -string schema or if you are using the Maven plugin <plugin> <groupId>org.apache.avro</groupId> <artifactId>avro-maven-plugin</artifactId> <version>1.7.7</version> <configuration> <stringType>String</stringType> </configuration> [...] </plugin> If you want one specific field to be of type java.lang.String then... you can't. It is not supported by the compiler. You can use "java-class" with the reflect API but the compiler does not care. If you want to learn more, you can set a breakpoint in SpecificCompiler line 372, Avro 1.7.7. You can see that before the call to addStringType() the schema has the required information in the props field. If you pass this schema to SpecificCompiler.javaType() then it will do what you want. But then addStringType replaces your schema with a static one. I will most likely ask the question on the mailing list since I don't see the point.
Avro
25,118,727
36
Apache Avro provides a compact, fast, binary data format and rich data structures for serialization. However, it requires the user to define a schema (in JSON) for the object which needs to be serialized. In some cases, this may not be possible (e.g. the class of that Java object has some members whose types are classes from external libraries). Hence, I wonder whether there is a tool that can get the information from an object's .class file and generate the Avro schema for that object (like Gson uses an object's .class information to convert a given object to a JSON string).
Take a look at the Java reflection API. Getting a schema looks like: Schema schema = ReflectData.get().getSchema(T); See the example from Doug on another question for a working example. Credits of this answer belong to Sean Busby.
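Expanding that one-liner into a runnable sketch (my own illustration; the Packet class here is made up):

import org.apache.avro.Schema;
import org.apache.avro.reflect.ReflectData;

public class ReflectSchemaSketch {
    static class Packet {
        int cost;
        String label;
    }

    public static void main(String[] args) {
        // Derive the Avro schema from the class via reflection.
        Schema schema = ReflectData.get().getSchema(Packet.class);
        System.out.println(schema.toString(true)); // pretty-printed JSON schema
    }
}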
Avro
22,954,315
35
I'm trying to validate a JSON file using an Avro schema and write the corresponding Avro file. First, I've defined the following Avro schema named user.avsc: {"namespace": "example.avro", "type": "record", "name": "user", "fields": [ {"name": "name", "type": "string"}, {"name": "favorite_number", "type": ["int", "null"]}, {"name": "favorite_color", "type": ["string", "null"]} ] } Then created a user.json file: {"name": "Alyssa", "favorite_number": 256, "favorite_color": null} And then tried to run: java -jar ~/bin/avro-tools-1.7.7.jar fromjson --schema-file user.avsc user.json > user.avro But I get the following exception: Exception in thread "main" org.apache.avro.AvroTypeException: Expected start-union. Got VALUE_NUMBER_INT at org.apache.avro.io.JsonDecoder.error(JsonDecoder.java:697) at org.apache.avro.io.JsonDecoder.readIndex(JsonDecoder.java:441) at org.apache.avro.io.ResolvingDecoder.doAction(ResolvingDecoder.java:290) at org.apache.avro.io.parsing.Parser.advance(Parser.java:88) at org.apache.avro.io.ResolvingDecoder.readIndex(ResolvingDecoder.java:267) at org.apache.avro.generic.GenericDatumReader.read(GenericDatumReader.java:155) at org.apache.avro.generic.GenericDatumReader.readField(GenericDatumReader.java:193) at org.apache.avro.generic.GenericDatumReader.readRecord(GenericDatumReader.java:183) at org.apache.avro.generic.GenericDatumReader.read(GenericDatumReader.java:151) at org.apache.avro.generic.GenericDatumReader.read(GenericDatumReader.java:142) at org.apache.avro.tool.DataFileWriteTool.run(DataFileWriteTool.java:99) at org.apache.avro.tool.Main.run(Main.java:84) at org.apache.avro.tool.Main.main(Main.java:73) Am I missing something? Why do I get "Expected start-union. Got VALUE_NUMBER_INT".
According to the explanation by Doug Cutting, Avro's JSON encoding requires that non-null union values be tagged with their intended type. This is because unions like ["bytes","string"] and ["int","long"] are ambiguous in JSON: the first are both encoded as JSON strings, while the second are both encoded as JSON numbers. http://avro.apache.org/docs/current/spec.html#json_encoding Thus your record must be encoded as: {"name": "Alyssa", "favorite_number": {"int": 256}, "favorite_color": null}
Avro
27,485,580
35
I'm using a Kafka Source in Spark Structured Streaming to receive Confluent encoded Avro records. I intend to use Confluent Schema Registry, but the integration with spark structured streaming seems to be impossible. I have seen this question, but unable to get it working with the Confluent Schema Registry. Reading Avro messages from Kafka with Spark 2.0.2 (structured streaming)
It took me a couple months of reading source code and testing things out. In a nutshell, Spark can only handle String and Binary serialization. You must manually deserialize the data. In spark, create the confluent rest service object to get the schema. Convert the schema string in the response object into an Avro schema using the Avro parser. Next, read the Kafka topic as normal. Then map over the binary typed "value" column with the Confluent KafkaAvroDeSerializer. I strongly suggest getting into the source code for these classes because there is a lot going on here, so for brevity I'll leave out many details. //Used Confluent version 3.2.2 to write this. import io.confluent.kafka.schemaregistry.client.rest.RestService import io.confluent.kafka.serializers.KafkaAvroDeserializer import org.apache.avro.Schema case class DeserializedFromKafkaRecord(key: String, value: String) val schemaRegistryURL = "http://127.0.0.1:8081" val topicName = "Schema-Registry-Example-topic1" val subjectValueName = topicName + "-value" //create RestService object val restService = new RestService(schemaRegistryURL) //.getLatestVersion returns io.confluent.kafka.schemaregistry.client.rest.entities.Schema object. val valueRestResponseSchema = restService.getLatestVersion(subjectValueName) //Use Avro parsing classes to get Avro Schema val parser = new Schema.Parser val topicValueAvroSchema: Schema = parser.parse(valueRestResponseSchema.getSchema) //key schema is typically just string but you can do the same process for the key as the value val keySchemaString = "\"string\"" val keySchema = parser.parse(keySchemaString) //Create a map with the Schema registry url. //This is the only Required configuration for Confluent's KafkaAvroDeserializer. val props = Map("schema.registry.url" -> schemaRegistryURL) //Declare SerDe vars before using Spark structured streaming map. Avoids non serializable class exception. var keyDeserializer: KafkaAvroDeserializer = null var valueDeserializer: KafkaAvroDeserializer = null //Create structured streaming DF to read from the topic. val rawTopicMessageDF = sql.readStream .format("kafka") .option("kafka.bootstrap.servers", "127.0.0.1:9092") .option("subscribe", topicName) .option("startingOffsets", "earliest") .option("maxOffsetsPerTrigger", 20) //remove for prod .load() //instantiate the SerDe classes if not already, then deserialize! val deserializedTopicMessageDS = rawTopicMessageDF.map{ row => if (keyDeserializer == null) { keyDeserializer = new KafkaAvroDeserializer keyDeserializer.configure(props.asJava, true) //isKey = true } if (valueDeserializer == null) { valueDeserializer = new KafkaAvroDeserializer valueDeserializer.configure(props.asJava, false) //isKey = false } //Pass the Avro schema. val deserializedKeyString = keyDeserializer.deserialize(topicName, row.key, keySchema).toString //topic name is actually unused in the source code, just required by the signature. Weird right? val deserializedValueString = valueDeserializer.deserialize(topicName, row.value, topicValueAvroSchema).toString DeserializedFromKafkaRecord(deserializedKeyString, deserializedValueString) } val deserializedDSOutputStream = deserializedTopicMessageDS.writeStream .outputMode("append") .format("console") .option("truncate", false) .start()
Avro
48,882,723
33
Is it possible to write an Avro schema/IDL that will generate a Java class that either extends a base class or implements an interface? It seems like the generated Java class extends the org.apache.avro.specific.SpecificRecordBase. So, the implements might be the way to go. But, I don't know if this is possible. I have seen examples with suggestions to define an explicit "type" field in each specific schema, with more of an association than inheritance semantics. I use my base class heavily in my factory classes and other parts of the code with generics like <T extends BaseObject>. Currently, I had it code generated from the JSON Schema, which supports inheritance. Another side question: can you use IDL to define just records without the protocol definition? I think the answer is no because the compiler complains about the missing protocol keyword. Help appreciated! Thanks.
I found a better way to solve this problem. Looking at the schema generation source in Avro, I figured out that internally the class generation logic uses Velocity templates to generate the classes. I modified the record.vm template to also implement my specific interface. There is a way to specify the location of the Velocity template directory using the templateDirectory configuration in the Maven build plugin. I also switched to using SpecificDatumWriter instead of ReflectDatumWriter. <plugin> <groupId>org.apache.avro</groupId> <artifactId>avro-maven-plugin</artifactId> <version>${avro.version}</version> <executions> <execution> <phase>generate-sources</phase> <goals> <goal>schema</goal> </goals> <configuration> <sourceDirectory>${basedir}/src/main/resources/avro/schema</sourceDirectory> <outputDirectory>${basedir}/target/java-gen</outputDirectory> <fieldVisibility>private</fieldVisibility> <stringType>String</stringType> <templateDirectory>${basedir}/src/main/resources/avro/velocity-templates/</templateDirectory> </configuration> </execution> </executions> </plugin>
Avro
20,864,470
32
I'm trying to use Avro for messages being read from/written to Kafka. Does anyone have an example of using the Avro binary encoder to encode/decode data that will be put on a message queue? I need the Avro part more than the Kafka part. Or, perhaps I should look at a different solution? Basically, I'm trying to find a more efficient solution to JSON with regards to space. Avro was just mentioned since it can be more compact than JSON.
This is a basic example. I have not tried it with multiple partitions/topics. //Sample producer code import org.apache.avro.Schema; import org.apache.avro.generic.GenericData; import org.apache.avro.generic.GenericRecord; import org.apache.avro.io.*; import org.apache.avro.specific.SpecificDatumReader; import org.apache.avro.specific.SpecificDatumWriter; import org.apache.commons.codec.DecoderException; import org.apache.commons.codec.binary.Hex; import kafka.javaapi.producer.Producer; import kafka.producer.KeyedMessage; import kafka.producer.ProducerConfig; import java.io.ByteArrayOutputStream; import java.io.File; import java.io.IOException; import java.nio.charset.Charset; import java.util.Properties; public class ProducerTest { void producer(Schema schema) throws IOException { Properties props = new Properties(); props.put("metadata.broker.list", "0:9092"); props.put("serializer.class", "kafka.serializer.DefaultEncoder"); props.put("request.required.acks", "1"); ProducerConfig config = new ProducerConfig(props); Producer<String, byte[]> producer = new Producer<String, byte[]>(config); GenericRecord payload1 = new GenericData.Record(schema); //Step2 : Put data in that genericrecord object payload1.put("desc", "'testdata'"); //payload1.put("name", "अasa"); payload1.put("name", "dbevent1"); payload1.put("id", 111); System.out.println("Original Message : "+ payload1); //Step3 : Serialize the object to a bytearray DatumWriter<GenericRecord>writer = new SpecificDatumWriter<GenericRecord>(schema); ByteArrayOutputStream out = new ByteArrayOutputStream(); BinaryEncoder encoder = EncoderFactory.get().binaryEncoder(out, null); writer.write(payload1, encoder); encoder.flush(); out.close(); byte[] serializedBytes = out.toByteArray(); System.out.println("Sending message in bytes : " + serializedBytes); //String serializedHex = Hex.encodeHexString(serializedBytes); //System.out.println("Serialized Hex String : " + serializedHex); KeyedMessage<String, byte[]> message = new KeyedMessage<String, byte[]>("page_views", serializedBytes); producer.send(message); producer.close(); } public static void main(String[] args) throws IOException, DecoderException { ProducerTest test = new ProducerTest(); Schema schema = new Schema.Parser().parse(new File("src/test_schema.avsc")); test.producer(schema); } } //Sample consumer code Part 1 : Consumer group code : as you can have more than multiple consumers for multiple partitions/ topics. import kafka.consumer.ConsumerConfig; import kafka.consumer.KafkaStream; import kafka.javaapi.consumer.ConsumerConnector; import java.util.HashMap; import java.util.List; import java.util.Map; import java.util.Properties; import java.util.concurrent.Executor; import java.util.concurrent.ExecutorService; import java.util.concurrent.Executors; import java.util.concurrent.TimeUnit; /** * Created by on 9/1/15. 
*/ public class ConsumerGroupExample { private final ConsumerConnector consumer; private final String topic; private ExecutorService executor; public ConsumerGroupExample(String a_zookeeper, String a_groupId, String a_topic){ consumer = kafka.consumer.Consumer.createJavaConsumerConnector( createConsumerConfig(a_zookeeper, a_groupId)); this.topic = a_topic; } private static ConsumerConfig createConsumerConfig(String a_zookeeper, String a_groupId){ Properties props = new Properties(); props.put("zookeeper.connect", a_zookeeper); props.put("group.id", a_groupId); props.put("zookeeper.session.timeout.ms", "400"); props.put("zookeeper.sync.time.ms", "200"); props.put("auto.commit.interval.ms", "1000"); return new ConsumerConfig(props); } public void shutdown(){ if (consumer!=null) consumer.shutdown(); if (executor!=null) executor.shutdown(); System.out.println("Timed out waiting for consumer threads to shut down, exiting uncleanly"); try{ if(!executor.awaitTermination(5000, TimeUnit.MILLISECONDS)){ } }catch(InterruptedException e){ System.out.println("Interrupted"); } } public void run(int a_numThreads){ //Make a map of topic as key and no. of threads for that topic Map<String, Integer> topicCountMap = new HashMap<String, Integer>(); topicCountMap.put(topic, new Integer(a_numThreads)); //Create message streams for each topic Map<String, List<KafkaStream<byte[], byte[]>>> consumerMap = consumer.createMessageStreams(topicCountMap); List<KafkaStream<byte[], byte[]>> streams = consumerMap.get(topic); //initialize thread pool executor = Executors.newFixedThreadPool(a_numThreads); //start consuming from thread int threadNumber = 0; for (final KafkaStream stream : streams) { executor.submit(new ConsumerTest(stream, threadNumber)); threadNumber++; } } public static void main(String[] args) { String zooKeeper = args[0]; String groupId = args[1]; String topic = args[2]; int threads = Integer.parseInt(args[3]); ConsumerGroupExample example = new ConsumerGroupExample(zooKeeper, groupId, topic); example.run(threads); try { Thread.sleep(10000); } catch (InterruptedException ie) { } example.shutdown(); } } Part 2 : Indiviual consumer that actually consumes the messages. 
import kafka.consumer.ConsumerIterator; import kafka.consumer.KafkaStream; import kafka.message.MessageAndMetadata; import org.apache.avro.Schema; import org.apache.avro.generic.GenericRecord; import org.apache.avro.generic.IndexedRecord; import org.apache.avro.io.DatumReader; import org.apache.avro.io.Decoder; import org.apache.avro.io.DecoderFactory; import org.apache.avro.specific.SpecificDatumReader; import org.apache.commons.codec.binary.Hex; import java.io.File; import java.io.IOException; public class ConsumerTest implements Runnable{ private KafkaStream m_stream; private int m_threadNumber; public ConsumerTest(KafkaStream a_stream, int a_threadNumber) { m_threadNumber = a_threadNumber; m_stream = a_stream; } public void run(){ ConsumerIterator<byte[], byte[]>it = m_stream.iterator(); while(it.hasNext()) { try { //System.out.println("Encoded Message received : " + message_received); //byte[] input = Hex.decodeHex(it.next().message().toString().toCharArray()); //System.out.println("Deserializied Byte array : " + input); byte[] received_message = it.next().message(); System.out.println(received_message); Schema schema = null; schema = new Schema.Parser().parse(new File("src/test_schema.avsc")); DatumReader<GenericRecord> reader = new SpecificDatumReader<GenericRecord>(schema); Decoder decoder = DecoderFactory.get().binaryDecoder(received_message, null); GenericRecord payload2 = null; payload2 = reader.read(null, decoder); System.out.println("Message received : " + payload2); }catch (Exception e) { e.printStackTrace(); System.out.println(e); } } } } Test AVRO schema : { "namespace": "xyz.test", "type": "record", "name": "payload", "fields":[ { "name": "name", "type": "string" }, { "name": "id", "type": ["int", "null"] }, { "name": "desc", "type": ["string", "null"] } ] } Important things to note are : Youll need the standard kafka and avro jars to run this code out of the box. Is very important props.put("serializer.class", "kafka.serializer.DefaultEncoder"); Dont use stringEncoder as that wont work if you are sending a byte array as message. You can convert the byte[] to a hex string and send that and on the consumer reconvert hex string to byte[] and then to the original message. Run the zookeeper and the broker as mentioned here :- http://kafka.apache.org/documentation.html#quickstart and create a topic called "page_views" or whatever you want. Run the ProducerTest.java and then the ConsumerGroupExample.java and see the avro data being produced and consumed.
Avro
8,298,308
30
I'm a noob to Kafka and Avro. So i have been trying to get the Producer/Consumer running. So far i have been able to produce and consume simple Bytes and Strings, using the following : Configuration for the Producer : Properties props = new Properties(); props.put("bootstrap.servers", "localhost:9092"); props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer"); props.put("value.serializer", "org.apache.kafka.common.serialization.ByteArraySerializer"); Schema.Parser parser = new Schema.Parser(); Schema schema = parser.parse(USER_SCHEMA); Injection<GenericRecord, byte[]> recordInjection = GenericAvroCodecs.toBinary(schema); KafkaProducer<String, byte[]> producer = new KafkaProducer<>(props); for (int i = 0; i < 1000; i++) { GenericData.Record avroRecord = new GenericData.Record(schema); avroRecord.put("str1", "Str 1-" + i); avroRecord.put("str2", "Str 2-" + i); avroRecord.put("int1", i); byte[] bytes = recordInjection.apply(avroRecord); ProducerRecord<String, byte[]> record = new ProducerRecord<>("mytopic", bytes); producer.send(record); Thread.sleep(250); } producer.close(); } Now this is all well and good, the problem comes when i'm trying to serialize a POJO. So , i was able to get the AvroSchema from the POJO using the utility provided with Avro. Hardcoded the schema, and then tried to create a Generic Record to send through the KafkaProducer the producer is now set up as : Properties props = new Properties(); props.put("bootstrap.servers", "localhost:9092"); props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer"); props.put("value.serializer", "org.apache.kafka.common.serialization.KafkaAvroSerializer"); Schema.Parser parser = new Schema.Parser(); Schema schema = parser.parse(USER_SCHEMA); // this is the Generated AvroSchema KafkaProducer<String, byte[]> producer = new KafkaProducer<>(props); this is where the problem is : the moment i use KafkaAvroSerializer, the producer doesn't come up due to : missing mandatory parameter : schema.registry.url I read up on why this is required, so that my consumer is able to decipher whatever the producer is sending to me. But isn't the schema already embedded in the AvroMessage? Would be really great if someone can share a working example of using KafkaProducer with the KafkaAvroSerializer without having to specify schema.registry.url would also really appreciate any insights/resources on the utility of the schema registry. thanks!
Note first: KafkaAvroSerializer is not provided in vanilla Apache Kafka - it is provided by the Confluent Platform (https://www.confluent.io/), as part of its open source components (http://docs.confluent.io/current/platform.html#confluent-schema-registry) Short answer: no, if you use KafkaAvroSerializer, you will need a schema registry. See some samples here: http://docs.confluent.io/current/schema-registry/docs/serializer-formatter.html The basic idea with the schema registry is that each topic will refer to an avro schema (i.e., you will only be able to send data coherent with it. But a schema can have multiple versions, so you still need to identify the schema for each record). We don't want to write the schema with every piece of data as you imply - often, the schema is bigger than your data! That would be a waste of time parsing it every time when reading, and a waste of resources (network, disk, cpu). Instead, a schema registry instance will maintain a binding avro schema <-> int schemaId and the serializer will then write only this id before the data, after getting it from the registry (and caching it for later use). So inside kafka, your record will be [<id> <bytesavro>] (plus a magic byte for technical reasons), which is an overhead of only 5 bytes (compared to the size of your schema). And when reading, your consumer will find the schema corresponding to the id, and deserialize the avro bytes against it. You can find way more in the Confluent docs. If you really have a use case where you want to write the schema with every record, you will need another serializer (I think you would have to write your own, but it would be easy: just reuse https://github.com/confluentinc/schema-registry/blob/master/avro-serializer/src/main/java/io/confluent/kafka/serializers/AbstractKafkaAvroSerializer.java and remove the schema registry part, replacing it with the schema; same for reading). But if you use avro, I would really discourage this - sooner or later, you will need to implement something like an avro registry to manage versioning.
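To make the byte layout concrete, here is a small Java sketch (my own, not from the answer) that picks such a record apart by hand; the Confluent deserializer does the equivalent internally before looking the id up in the registry.

import java.nio.ByteBuffer;
import java.util.Arrays;

// Rough sketch of reading the Confluent wire format:
// [magic byte 0x0][4-byte big-endian schema id][raw Avro-encoded payload].
public class WireFormatSketch {
    static final byte MAGIC_BYTE = 0x0;

    static int schemaIdOf(byte[] kafkaValue) {
        ByteBuffer buffer = ByteBuffer.wrap(kafkaValue);
        if (buffer.get() != MAGIC_BYTE) {
            throw new IllegalArgumentException("Not in Confluent wire format");
        }
        return buffer.getInt(); // the id registered in the schema registry
    }

    static byte[] avroPayloadOf(byte[] kafkaValue) {
        // Everything after the 5-byte header is the plain Avro binary encoding.
        return Arrays.copyOfRange(kafkaValue, 5, kafkaValue.length);
    }
}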
Avro
45,635,726
30
How do you extract first the schema and then the data from an avro file in Java? Identical to this question except in java. I've seen examples of how to get the schema from an avsc file but not an avro file. What direction should I be looking in? Schema schema = new Schema.Parser().parse( new File("/home/Hadoop/Avro/schema/emp.avsc") );
If you want to know the schema of an Avro file without having to generate the corresponding classes or care about which class the file belongs to, you can use the GenericDatumReader: DatumReader<GenericRecord> datumReader = new GenericDatumReader<>(); DataFileReader<GenericRecord> dataFileReader = new DataFileReader<>(new File("file.avro"), datumReader); Schema schema = dataFileReader.getSchema(); System.out.println(schema); And then you can read the data inside the file: GenericRecord record = null; while (dataFileReader.hasNext()) { record = dataFileReader.next(record); System.out.println(record); }
Avro
45,496,786
28
Avro serialization is popular with Hadoop users but examples are so hard to find. Can anyone help me with this sample code? I'm mostly interested in using the Reflect API to read/write into files and to use the Union and Null annotations. public class Reflect { public class Packet { int cost; @Nullable TimeStamp stamp; public Packet(int cost, TimeStamp stamp){ this.cost = cost; this.stamp = stamp; } } public class TimeStamp { int hour = 0; int second = 0; public TimeStamp(int hour, int second){ this.hour = hour; this.second = second; } } public static void main(String[] args) throws IOException { TimeStamp stamp; Packet packet; stamp = new TimeStamp(12, 34); packet = new Packet(9, stamp); write(file, packet); packet = new Packet(8, null); write(file, packet); file.close(); // open file to read. packet = read(file); packet = read(file); } }
Here's a version of the above program that works. This also uses compression on the file. import java.io.File; import org.apache.avro.Schema; import org.apache.avro.file.DataFileWriter; import org.apache.avro.file.DataFileReader; import org.apache.avro.file.CodecFactory; import org.apache.avro.io.DatumWriter; import org.apache.avro.io.DatumReader; import org.apache.avro.reflect.ReflectData; import org.apache.avro.reflect.ReflectDatumWriter; import org.apache.avro.reflect.ReflectDatumReader; import org.apache.avro.reflect.Nullable; public class Reflect { public static class Packet { int cost; @Nullable TimeStamp stamp; public Packet() {} // required to read public Packet(int cost, TimeStamp stamp){ this.cost = cost; this.stamp = stamp; } } public static class TimeStamp { int hour = 0; int second = 0; public TimeStamp() {} // required to read public TimeStamp(int hour, int second){ this.hour = hour; this.second = second; } } public static void main(String[] args) throws Exception { // one argument: a file name File file = new File(args[0]); // get the reflected schema for packets Schema schema = ReflectData.get().getSchema(Packet.class); // create a file of packets DatumWriter<Packet> writer = new ReflectDatumWriter<Packet>(Packet.class); DataFileWriter<Packet> out = new DataFileWriter<Packet>(writer) .setCodec(CodecFactory.deflateCodec(9)) .create(schema, file); // write 100 packets to the file, odds with null timestamp for (int i = 0; i < 100; i++) { out.append(new Packet(i, (i%2==0) ? new TimeStamp(12, i) : null)); } // close the output file out.close(); // open a file of packets DatumReader<Packet> reader = new ReflectDatumReader<Packet>(Packet.class); DataFileReader<Packet> in = new DataFileReader<Packet>(file, reader); // read 100 packets from the file & print them as JSON for (Packet packet : in) { System.out.println(ReflectData.get().toString(packet)); } // close the input file in.close(); } }
Avro
11,866,466
27
In one of our projects we are using Kafka with AVRO to transfer data across applications. Data is added to an AVRO object and object is binary encoded to write to Kafka. We use binary encoding as it is generally mentioned as a minimal representation compared to other formats. The data is usually a JSON string and when it is saved in a file, it uses up to 10 Mb of disk. However, when the file is compressed (.zip), it uses only few KBs. We are concerned storing such data in Kafka, so trying to compress before writing to a Kafka topic. When length of binary encoded message (i.e. length of byte array) is measured, it is proportional to the length of the data string. So I assume binary encoding is not reducing any size. Could someone tell me if binary encoding compresses data? If not, how can I apply compression? Thanks!
Does binary encoding compress data? Yes and no, it depends on your data. According to the avro binary encoding, yes, because it only stores the schema once for each .avro file, regardless of how many records are in that file, hence saving some space by not storing the JSON key names many times. Avro serialization also does a bit of compression by storing int and long values with variable-length zig-zag coding (a win only for small values). For the rest, avro doesn't "compress" data. And no, because in some extreme cases avro-serialized data could be bigger than the raw data, e.g. an .avro file with a single record that has only one string field. The schema overhead can defeat the saving from not needing to store the key name. If not, how can I apply compression? According to avro codecs, avro has built-in compression codecs and optional ones. Just add one line while writing object container files: dataFileWriter.setCodec(CodecFactory.deflateCodec(6)); // using deflate or dataFileWriter.setCodec(CodecFactory.snappyCodec()); // using snappy codec To use snappy you need to include the snappy-java library in your dependencies.
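As a fuller sketch of the codec suggestion (my own illustration; the "Msg" schema and output file name are invented), the codec is set on the writer before the container file is created:

import org.apache.avro.Schema;
import org.apache.avro.file.CodecFactory;
import org.apache.avro.file.DataFileWriter;
import org.apache.avro.generic.GenericData;
import org.apache.avro.generic.GenericDatumWriter;
import org.apache.avro.generic.GenericRecord;

import java.io.File;

public class CompressedContainerSketch {
    public static void main(String[] args) throws Exception {
        Schema schema = new Schema.Parser().parse(
                "{\"type\":\"record\",\"name\":\"Msg\",\"fields\":[{\"name\":\"body\",\"type\":\"string\"}]}");

        try (DataFileWriter<GenericRecord> writer =
                     new DataFileWriter<>(new GenericDatumWriter<GenericRecord>(schema))) {
            writer.setCodec(CodecFactory.deflateCodec(6)); // must be set before create()
            writer.create(schema, new File("messages.avro"));

            GenericRecord record = new GenericData.Record(schema);
            record.put("body", "hello avro");
            writer.append(record);
        }
    }
}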
Avro
26,711,256
25
I want to use Avro to serialize the data for my Kafka messages and would like to use it with an Avro schema repository so I don't have to include the schema with every message. Using Avro with Kafka seems like a popular thing to do, and lots of blogs / Stack Overflow questions / user groups etc. reference sending the Schema Id with the message but I cannot find an actual example of where it should go. I think it should go in the Kafka message header somewhere but I cannot find an obvious place. If it was in the Avro message you would have to decode it against a schema to get the message contents and reveal the schema you need to decode against, which has obvious problems. I am using the C# client but an example in any language would be great. The message class has these fields: public MessageMetadata Meta { get; set; } public byte MagicNumber { get; set; } public byte Attribute { get; set; } public byte[] Key { get; set; } public byte[] Value { get; set; } but none of these seems correct. The MessageMetaData only has Offset and PartitionId. So, where should the Avro Schema Id go?
The schema id is actually encoded in the avro message itself. Take a look at this to see how encoders/decoders are implemented. In general what's happening when you send an Avro message to Kafka: The encoder gets the schema from the object to be encoded. Encoder asks the schema registry for an id for this schema. If the schema is already registered you'll get an existing id, if not - the registry will register the schema and return the new id. The object gets encoded as follows: [magic byte][schema id][actual message] where magic byte is just a 0x0 byte which is used to distinguish that kind of messages, schema id is a 4 byte integer value the rest is the actual encoded message. When you decode the message back here's what happens: The decoder reads the first byte and makes sure it is 0x0. The decoder reads the next 4 bytes and converts them to an integer value. This is how schema id is decoded. Now when the decoder has a schema id it may ask the schema registry for the actual schema for this id. Voila! If your key is Avro encoded then your key will be of the format described above. The same applies for value. This way your key and value may be both Avro values and use different schemas. Edit to answer the question in comment: The actual schema is stored in the schema repository (that is the whole point of schema repository actually - to store schemas :)). The Avro Object Container Files format has nothing to do with the format described above. KafkaAvroEncoder/Decoder use slightly different message format (but the actual messages are encoded exactly the same way sure). The main difference between these formats is that Object Container Files carry the actual schema and may contain multiple messages corresponding to that schema, whereas the format described above carries only the schema id and exactly one message corresponding to that schema. Passing object-container-file-encoded messages around would probably be not obvious to follow/maintain because one Kafka message would then contain multiple Avro messages. Or you could ensure that one Kafka message contains only one Avro message but that would result in carrying schema with each message. Avro schemas can be quite large (I've seen schemas like 600 KB and more) and carrying the schema with each message would be really costly and wasteful so that is where schema repository kicks in - the schema is fetched only once and gets cached locally and all other lookups are just map lookups that are fast.
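For illustration only (not part of the answer), the framing described in step 3 could be written out by hand like this in Java; in practice the Confluent serializer does this for you:

import java.nio.ByteBuffer;

// A minimal sketch of framing a message the way the encoder described above does:
// [0x0 magic byte][4-byte schema id][avro-encoded bytes].
public class FramedMessageSketch {
    static byte[] frame(int schemaId, byte[] avroBytes) {
        return ByteBuffer.allocate(1 + 4 + avroBytes.length)
                .put((byte) 0x0)      // magic byte
                .putInt(schemaId)     // id obtained from the schema registry
                .put(avroBytes)       // the actual Avro binary encoding
                .array();
    }
}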
Avro
31,204,201
25
The latest Avro compiler (1.8.2) generates Java sources for date logical types with Joda-Time based implementations. How can I configure the Avro compiler to produce sources that use the Java 8 date-time API?
Currently (avro 1.8.2) this is not possible. It's hardcoded to generate Joda date/time classes. The current master branch has switched to Java 8 and there is an open issue (with Pull Request) to add the ability to generate classes with java.time.* types. I have no idea on any kind of release schedule for whatever is currently in master unfortunately. If you feel adventurous you can apply the patch to 1.8.2, since in theory it should all be compatible. The underlying base types when serializing / deserializing are still integers and longs.
Avro
45,712,231
25
I am trying to convert a Json string into a generic Java Object, with an Avro Schema. Below is my code. String json = "{\"foo\": 30.1, \"bar\": 60.2}"; String schemaLines = "{\"type\":\"record\",\"name\":\"FooBar\",\"namespace\":\"com.foo.bar\",\"fields\":[{\"name\":\"foo\",\"type\":[\"null\",\"double\"],\"default\":null},{\"name\":\"bar\",\"type\":[\"null\",\"double\"],\"default\":null}]}"; InputStream input = new ByteArrayInputStream(json.getBytes()); DataInputStream din = new DataInputStream(input); Schema schema = Schema.parse(schemaLines); Decoder decoder = DecoderFactory.get().jsonDecoder(schema, din); DatumReader<Object> reader = new GenericDatumReader<Object>(schema); Object datum = reader.read(null, decoder); I get "org.apache.avro.AvroTypeException: Expected start-union. Got VALUE_NUMBER_FLOAT" Exception. The same code works, if I don't have unions in the schema. Can someone please explain and give me a solution.
For anyone who uses Avro - 1.8.2, JsonDecoder is not directly instantiable outside the package org.apache.avro.io now. You can use DecoderFactory for it as shown in the following code: String schemaStr = "<some json schema>"; String genericRecordStr = "<some json record>"; Schema.Parser schemaParser = new Schema.Parser(); Schema schema = schemaParser.parse(schemaStr); DecoderFactory decoderFactory = new DecoderFactory(); Decoder decoder = decoderFactory.jsonDecoder(schema, genericRecordStr); DatumReader<GenericData.Record> reader = new GenericDatumReader<>(schema); GenericRecord genericRecord = reader.read(null, decoder);
Avro
27,559,543
24
ZigZag requires a lot of overhead to write/read numbers. Actually I was stunned to see that it doesn't just write int/long values as they are, but does a lot of additional scrambling. There's even a loop involved: https://github.com/mardambey/mypipe/blob/master/avro/lang/java/avro/src/main/java/org/apache/avro/io/DirectBinaryEncoder.java#L90 I don't seem to be able to find in Protocol Buffers docs or in Avro docs, or reason myself, what's the advantage of scrambling numbers like that? Why is it better to have positive and negative numbers alternated after encoding? Why they're not just written in little-endian, big-endian, network order which would only require reading them into memory and possibly reverse bit endianness? What do we buy paying with performance?
It is a variable-length 7-bit encoding. Every byte except the last has its high bit set to 1; the last byte of the encoded value has it at 0. That is the way the decoder can tell how many bytes were used to encode the value. Byte order is always little-endian, regardless of the machine architecture. The zig-zag step maps negative numbers of small magnitude onto small unsigned values, so they also stay short. It is an encoding trick that permits writing as few bytes as needed to encode the value. So an 8-byte long with a value between -64 and 63 takes only one byte. Which is common, the full range provided by long is very rarely used in practice. Packing the data tightly without the overhead of a gzip-style compression method was the design goal. The same scheme is also used in the .NET Framework. The processor overhead needed to en/decode the value is inconsequential. Already much lower than a compression scheme, it is a very small fraction of the I/O cost.
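To see the scheme in code, here is a compact Java sketch (my own) of the zig-zag plus variable-length step, mirroring what Avro's binary encoder does for longs:

public class ZigZagSketch {
    static int writeLong(long n, byte[] buf, int pos) {
        long z = (n << 1) ^ (n >> 63);               // zig-zag: small magnitudes -> small unsigned values
        while ((z & ~0x7FL) != 0) {                  // more than 7 bits left?
            buf[pos++] = (byte) ((z & 0x7F) | 0x80); // set the high "continuation" bit
            z >>>= 7;
        }
        buf[pos++] = (byte) z;                       // last byte: high bit clear
        return pos;                                  // new position in the buffer
    }

    public static void main(String[] args) {
        byte[] buf = new byte[10];
        int len = writeLong(-64, buf, 0);
        System.out.println("bytes used: " + len);    // -64 zig-zags to 127, so a single byte
    }
}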
Avro
33,935,266
22
I am trying to use Confluent kafka-avro-console-consumer, but how to pass parameters for Schema Registry to it?
Just a guess at what you are looking for... kafka-avro-console-consumer --topic topicX --bootstrap-server kafka:9092 \ --property schema.registry.url="http://schema-registry:8081" No, you cannot specify a schema version. The ID is consumed directly from the Avro data in the topic. The subject name is mapped to the topic name. Use --property print.key=true to see the Kafka message key. This is a general property of the regular console consumer. These are the only extra options in the avro-console-consumer script, meaning other than what's already defined in kafka-console-consumer, you can only provide --formatter or --property schema.registry.url, and no other Schema Registry specific parameters (whatever those may be): for OPTION in "$@" do case $OPTION in --formatter) DEFAULT_AVRO_FORMATTER="" ;; --*) ;; *) PROPERTY=$OPTION case $PROPERTY in schema.registry.url*) DEFAULT_SCHEMA_REGISTRY_URL="" ;; esac ;; esac done
Avro
49,927,747
22
I like to use the same record type in an Avro schema multiple times. Consider this schema definition { "type": "record", "name": "OrderBook", "namespace": "my.types", "doc": "Test order update", "fields": [ { "name": "bids", "type": { "type": "array", "items": { "type": "record", "name": "OrderBookVolume", "namespace": "my.types", "fields": [ { "name": "price", "type": "double" }, { "name": "volume", "type": "double" } ] } } }, { "name": "asks", "type": { "type": "array", "items": { "type": "record", "name": "OrderBookVolume", "namespace": "my.types", "fields": [ { "name": "price", "type": "double" }, { "name": "volume", "type": "double" } ] } } } ] } This is not a valid Avro schema and the Avro schema parser fails with org.apache.avro.SchemaParseException: Can't redefine: my.types.OrderBookVolume I can fix this by making the type unique by moving the OrderBookVolume into two different namespaces: { "type": "record", "name": "OrderBook", "namespace": "my.types", "doc": "Test order update", "fields": [ { "name": "bids", "type": { "type": "array", "items": { "type": "record", "name": "OrderBookVolume", "namespace": "my.types.bid", "fields": [ { "name": "price", "type": "double" }, { "name": "volume", "type": "double" } ] } } }, { "name": "asks", "type": { "type": "array", "items": { "type": "record", "name": "OrderBookVolume", "namespace": "my.types.ask", "fields": [ { "name": "price", "type": "double" }, { "name": "volume", "type": "double" } ] } } } ] } This is not a valid solution as the Avro code generation would generate two different classes, which is very annoying if I like to use the type also for other things and not just for deser and ser. This problem is related to this issue here: Avro Spark issue #73 Which added differentiation of nested records with the same name by prepending the namespace with the outer record names. Their use case may be purely storage related so it may work for them but not for us. Does anybody know a better solution? Is this a hard limitation of Avro?
It's not well documented, but Avro allows you to reference previously defined names by using the full namespace for the name that is being referenced. In your case, the following code would result in only one class being generated, referenced by each array. It also DRYs up the schema nicely. { "type": "record", "name": "OrderBook", "namespace": "my.types", "doc": "Test order update", "fields": [ { "name": "bids", "type": { "type": "array", "items": { "type": "record", "name": "OrderBookVolume", "namespace": "my.types.bid", "fields": [ { "name": "price", "type": "double" }, { "name": "volume", "type": "double" } ] } } }, { "name": "asks", "type": { "type": "array", "items": "my.types.bid.OrderBookVolume" } } ] }
Avro
48,100,575
21
I'm dealing with server logs which are JSON format, and I want to store my logs on AWS S3 in Parquet format(and Parquet requires an Avro schema). First, all logs have a common set of fields, second, all logs have a lot of optional fields which are not in the common set. For example, the follwoing are three logs: { "ip": "172.18.80.109", "timestamp": "2015-09-17T23:00:18.313Z", "message":"blahblahblah"} { "ip": "172.18.80.112", "timestamp": "2015-09-17T23:00:08.297Z", "message":"blahblahblah", "microseconds": 223} { "ip": "172.18.80.113", "timestamp": "2015-09-17T23:00:08.299Z", "message":"blahblahblah", "thread":"http-apr-8080-exec-1147"} All of the three logs have 3 shared fields: ip, timestamp and message, some of the logs have additional fields, such as microseconds and thread. If I use the following schema then I will lose all additional fields.: {"namespace": "example.avro", "type": "record", "name": "Log", "fields": [ {"name": "ip", "type": "string"}, {"name": "timestamp", "type": "String"}, {"name": "message", "type": "string"} ] } And the following schema works fine: {"namespace": "example.avro", "type": "record", "name": "Log", "fields": [ {"name": "ip", "type": "string"}, {"name": "timestamp", "type": "String"}, {"name": "message", "type": "string"}, {"name": "microseconds", "type": [null,long]}, {"name": "thread", "type": [null,string]} ] } But the only problem is that I don't know all the names of optional fields unless I scan all the logs, besides, there will new additional fields in future. Then I think out an idea that combines record and map: {"namespace": "example.avro", "type": "record", "name": "Log", "fields": [ {"name": "ip", "type": "string"}, {"name": "timestamp", "type": "String"}, {"name": "message", "type": "string"}, {"type": "map", "values": "string"} // error ] } Unfortunately this won't compile: java -jar avro-tools-1.7.7.jar compile schema example.avro . It will throw out an error: Exception in thread "main" org.apache.avro.SchemaParseException: No field name: {"type":"map","values":"long"} at org.apache.avro.Schema.getRequiredText(Schema.java:1305) at org.apache.avro.Schema.parse(Schema.java:1192) at org.apache.avro.Schema$Parser.parse(Schema.java:965) at org.apache.avro.Schema$Parser.parse(Schema.java:932) at org.apache.avro.tool.SpecificCompilerTool.run(SpecificCompilerTool.java:73) at org.apache.avro.tool.Main.run(Main.java:84) at org.apache.avro.tool.Main.main(Main.java:73) Is there a way to store JSON strings in Avro format which are flexible to deal with unknown optional fields? Basically this is a schema evolution problem, Spark can deal with this problem by Schema Merging. I'm seeking a solution with Hadoop.
The map type is a "complex" type in avro terminology. The below snippet works: { "namespace": "example.avro", "type": "record", "name": "Log", "fields": [ {"name": "ip", "type": "string"}, {"name": "timestamp", "type": "string"}, {"name": "message", "type": "string"}, {"name": "additional", "type": {"type": "map", "values": "string"}} ] }
Avro
32,642,154
20
I get an error when running kafka-mongodb-source-connect I was trying to run connect-standalone with connect-avro-standalone.properties and MongoSourceConnector.properties so that Connect write data which is written in MongoDB to Kafka topic. This is what I wanted to do bin/connect-standalone etc/schema-registry/connect-avro-standalone.properties share/confluent-hub-components/mongodb-kafka-connect-mongodb/etc/MongoSourceConnector.properties connect-avro-standalone.properties # Sample configuration for a standalone Kafka Connect worker that uses Avro serialization and # integrates the the Schema Registry. This sample configuration assumes a local installation of # Confluent Platform with all services running on their default ports. # Bootstrap Kafka servers. If multiple servers are specified, they should be comma-separated. bootstrap.servers=localhost:9092 # The converters specify the format of data in Kafka and how to translate it into Connect data. # Every Connect user will need to configure these based on the format they want their data in # when loaded from or stored into Kafka key.converter=io.confluent.connect.avro.AvroConverter key.converter.schema.registry.url=http://localhost:8081 value.converter=io.confluent.connect.avro.AvroConverter value.converter.schema.registry.url=http://localhost:8081 # The internal converter used for offsets and config data is configurable and must be specified, # but most users will always want to use the built-in default. Offset and config data is never # visible outside of Connect in this format. internal.key.converter=org.apache.kafka.connect.json.JsonConverter internal.value.converter=org.apache.kafka.connect.json.JsonConverter internal.key.converter.schemas.enable=false internal.value.converter.schemas.enable=false # Local storage file for offset data offset.storage.file.filename=/tmp/connect.offsets # Confluent Control Center Integration -- uncomment these lines to enable Kafka client interceptors # that will report audit data that can be displayed and analyzed in Confluent Control Center # producer.interceptor.classes=io.confluent.monitoring.clients.interceptor.MonitoringProducerInterceptor # consumer.interceptor.classes=io.confluent.monitoring.clients.interceptor.MonitoringConsumerInterceptor # These are provided to inform the user about the presence of the REST host and port configs # Hostname & Port for the REST API to listen on. If this is set, it will bind to the interface used to listen to requests. #rest.host.name= #rest.port=8083 # The Hostname & Port that will be given out to other workers to connect to i.e. URLs that are routable from other servers. #rest.advertised.host.name= #rest.advertised.port= # Set to a list of filesystem paths separated by commas (,) to enable class loading isolation for plugins # (connectors, converters, transformations). The list should consist of top level directories that include # any combination of: # a) directories immediately containing jars with plugins and their dependencies # b) uber-jars with plugins and their dependencies # c) directories immediately containing the package directory structure of classes of plugins and their dependencies # Examples: # plugin.path=/usr/local/share/java,/usr/local/share/kafka/plugins,/opt/connectors, # Replace the relative path below with an absolute path if you are planning to start Kafka Connect from within a # directory other than the home directory of Confluent Platform. 
plugin.path=share/java,/Users/anton/Downloads/confluent-5.3.2/share/confluent-hub-components MongoSourceConnecor.properties name=mongo-source connector.class=com.mongodb.kafka.connect.MongoSourceConnector tasks.max=1 # Connection and source configuration connection.uri=mongodb://localhost:27017 database=test collection=test This is the error: [2020-01-02 18:55:11,546] ERROR WorkerSourceTask{id=mongo-source-0} Task threw an uncaught and unrecoverable exception (org.apache.kafka.connect.runtime.WorkerTask:179) com.mongodb.MongoCommandException: Command failed with error 40573 (Location40573): 'The $changeStream stage is only supported on replica sets' on server localhost:27017. The full response is {"ok": 0.0, "errmsg": "The $changeStream stage is only supported on replica sets", "code": 40573, "codeName": "Location40573"}
MongoDB change streams option is available only in replica sets setup. However, you can update your standalone installation to a single node replica set by following the below steps. Locate the mongodb.conf file and add the replica set details Add the following replica set details to mongodb.conf file replication: replSetName: "<replica-set name>" Example replication: replSetName: "rs0" Note: Location in brew installed MongoDB /usr/local/etc/mongod.conf Initiate the replica set using rs.initiate() Login to the MongoDB shell and run command rs.initiate() this will start your replica set. Logs look like the following on successful start > rs.initiate() { "info2" : "no configuration specified. Using a default configuration for the set", "me" : "127.0.0.1:27017", "ok" : 1, "$clusterTime" : { "clusterTime" : Timestamp(1577545731, 1), "signature" : { "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="), "keyId" : NumberLong(0) } }, "operationTime" : Timestamp(1577545731, 1) } That's all with these two simple steps you are running a MongoDB replica set with one node only. Reference: https://onecompiler.com/posts/3vchuyxuh/enabling-replica-set-in-mongodb-with-just-one-node
Avro
59,571,945
20
I can't find a way to deserialize an Apache Avro file with C#. The Avro file is a file generated by the Archive feature in Microsoft Azure Event Hubs. With Java I can use Avro Tools from Apache to convert the file to JSON: java -jar avro-tools-1.8.1.jar tojson --pretty inputfile > output.json Using NuGet package Microsoft.Hadoop.Avro I am able to extract SequenceNumber, Offset and EnqueuedTimeUtc, but since I don't know what type to use for Body an exception is thrown. I've tried with Dictionary<string, object> and other types. static void Main(string[] args) { var fileName = "..."; using (Stream stream = new FileStream(fileName, FileMode.Open, FileAccess.Read, FileShare.Read)) { using (var reader = AvroContainer.CreateReader<EventData>(stream)) { using (var streamReader = new SequentialReader<EventData>(reader)) { var record = streamReader.Objects.FirstOrDefault(); } } } } [DataContract(Namespace = "Microsoft.ServiceBus.Messaging")] public class EventData { [DataMember(Name = "SequenceNumber")] public long SequenceNumber { get; set; } [DataMember(Name = "Offset")] public string Offset { get; set; } [DataMember(Name = "EnqueuedTimeUtc")] public string EnqueuedTimeUtc { get; set; } [DataMember(Name = "Body")] public foo Body { get; set; } // More properties... } The schema looks like this: { "type": "record", "name": "EventData", "namespace": "Microsoft.ServiceBus.Messaging", "fields": [ { "name": "SequenceNumber", "type": "long" }, { "name": "Offset", "type": "string" }, { "name": "EnqueuedTimeUtc", "type": "string" }, { "name": "SystemProperties", "type": { "type": "map", "values": [ "long", "double", "string", "bytes" ] } }, { "name": "Properties", "type": { "type": "map", "values": [ "long", "double", "string", "bytes" ] } }, { "name": "Body", "type": [ "null", "bytes" ] } ] }
I was able to get full data access working using dynamic. Here's the code for accessing the raw body data, which is stored as an array of bytes. In my case, those bytes contain UTF8-encoded JSON, but of course it depends on how you initially created your EventData instances that you published to the Event Hub: using (var reader = AvroContainer.CreateGenericReader(stream)) { while (reader.MoveNext()) { foreach (dynamic record in reader.Current.Objects) { var sequenceNumber = record.SequenceNumber; var bodyText = Encoding.UTF8.GetString(record.Body); Console.WriteLine($"{sequenceNumber}: {bodyText}"); } } } If someone can post a statically-typed solution, I'll upvote it, but given that the bigger latency in any system will almost certainly be the connection to the Event Hub Archive blobs, I wouldn't worry about parsing performance. :)
Avro
39,846,833
19
I am receiving from a remote server Kafka Avro messages in Python (using the consumer of Confluent Kafka Python library), that represent clickstream data with json dictionaries with fields like user agent, location, url, etc. Here is what a message looks like: b'\x01\x00\x00\xde\x9e\xa8\xd5\x8fW\xec\x9a\xa8\xd5\x8fW\x1axxx.xxx.xxx.xxx\x02:https://website.in/rooms/\x02Hhttps://website.in/wellness-spa/\x02\xaa\x14\x02\x9c\n\x02\xaa\x14\x02\xd0\x0b\x02V0:j3lcu1if:rTftGozmxSPo96dz1kGH2hvd0CREXmf2\x02V0:j3lj1xt7:YD4daqNRv_Vsea4wuFErpDaWeHu4tW7e\x02\x08null\x02\nnull0\x10pageview\x00\x00\x00\x00\x00\x00\x00\x00\x00\x02\x10Thailand\x02\xa6\x80\xc4\x01\x02\x0eBangkok\x02\x8c\xba\xc4\x01\x020*\xa9\x13\xd0\x84+@\x02\xec\xc09#J\x1fY@\x02\x8a\x02Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Ubuntu Chromium/58.0.3029.96 Chrome/58.0.3029.96 Safari/537.36\x02\x10Chromium\x02\x10Chromium\x028Google Inc. and contributors\x02\x0eBrowser\x02\x1858.0.3029.96\x02"Personal computer\x02\nLinux\x02\x00\x02\x1cCanonical Ltd.' How to decode it? I tried bson decode but the string was not recognized as UTF-8 as it's a specific Avro encoding I guess. I found https://github.com/verisign/python-confluent-schemaregistry but it only supports Python 2.7. Ideally I would like to work with Python 3.5+ and MongoDB to process the data and store it as it's my current infrastructure.
If you use Confluent Schema Registry and want to deserialize Avro messages, just add message_bytes.seek(5) to the decode function, since Confluent adds 5 extra bytes before the typical Avro-formatted data:

def decode(msg_value):
    message_bytes = io.BytesIO(msg_value)
    # skip Confluent's 5-byte wire-format header (magic byte + 4-byte schema id)
    message_bytes.seek(5)
    decoder = BinaryDecoder(message_bytes)
    event_dict = reader.read(decoder)
    return event_dict
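For completeness, here is a rough sketch of how the decode function above might be wired into a consumer loop with the avro and confluent-kafka packages. The broker address, topic, group id and the schema file name ("clickstream.avsc") are placeholders for your own setup, and depending on your avro package version the parser may be avro.schema.Parse rather than avro.schema.parse:

import io
import avro.schema
from avro.io import BinaryDecoder, DatumReader
from confluent_kafka import Consumer

# placeholder schema file -- use the writer schema the producer registered
schema = avro.schema.parse(open("clickstream.avsc").read())
reader = DatumReader(schema)

def decode(msg_value):
    message_bytes = io.BytesIO(msg_value)
    message_bytes.seek(5)  # skip Confluent's magic byte + 4-byte schema id
    decoder = BinaryDecoder(message_bytes)
    return reader.read(decoder)

# placeholder connection settings
consumer = Consumer({
    "bootstrap.servers": "broker:9092",
    "group.id": "clickstream-consumer",
    "auto.offset.reset": "earliest",
})
consumer.subscribe(["clickstream"])

while True:
    msg = consumer.poll(1.0)
    if msg is None or msg.error():
        continue
    event = decode(msg.value())  # plain Python dict, ready to insert into MongoDB
    print(event)

If you would rather not manage the schema file yourself, newer versions of confluent-kafka also ship an AvroConsumer that fetches the schema from the Schema Registry and performs this decoding for you.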
Avro
44,407,780
19
I'm actually trying to serialize objects containing dates with Avro, and the deserialized date doesn't match the expected value (tested with avro 1.7.2 and 1.7.1). Here's the class I'm serializing:

import java.text.SimpleDateFormat;
import java.util.Date;

public class Dummy {
    private Date date;
    private SimpleDateFormat df = new SimpleDateFormat("dd/MM/yyyy hh:mm:ss.SSS");

    public Dummy() {
    }

    public void setDate(Date date) {
        this.date = date;
    }

    public Date getDate() {
        return date;
    }

    @Override
    public String toString() {
        return df.format(date);
    }
}

The code used to serialize/deserialize:

import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.util.Date;

import org.apache.avro.Schema;
import org.apache.avro.io.DatumReader;
import org.apache.avro.io.DatumWriter;
import org.apache.avro.io.Decoder;
import org.apache.avro.io.DecoderFactory;
import org.apache.avro.io.Encoder;
import org.apache.avro.io.EncoderFactory;
import org.apache.avro.reflect.ReflectData;
import org.apache.avro.reflect.ReflectDatumReader;
import org.apache.avro.reflect.ReflectDatumWriter;

public class AvroSerialization {

    public static void main(String[] args) {
        Dummy expected = new Dummy();
        expected.setDate(new Date());
        System.out.println("EXPECTED: " + expected);

        Schema schema = ReflectData.get().getSchema(Dummy.class);
        ByteArrayOutputStream baos = new ByteArrayOutputStream();
        Encoder encoder = EncoderFactory.get().binaryEncoder(baos, null);
        DatumWriter<Dummy> writer = new ReflectDatumWriter<Dummy>(schema);

        try {
            writer.write(expected, encoder);
            encoder.flush();

            Decoder decoder = DecoderFactory.get().binaryDecoder(baos.toByteArray(), null);
            DatumReader<Dummy> reader = new ReflectDatumReader<Dummy>(schema);
            Dummy actual = reader.read(null, decoder);
            System.out.println("ACTUAL: " + actual);
        } catch (IOException e) {
            System.err.println("IOException: " + e.getMessage());
        }
    }
}

And the output:

EXPECTED: 06/11/2012 05:43:29.188
ACTUAL: 06/11/2012 05:43:29.387

Is it related to a known bug, or is it related to the way I'm serializing the object?
Avro 1.8 now has a date "logicalType", which annotates an int. Note that the logicalType attribute belongs on the type itself, not on the field. For example:

{"name": "date", "type": {"type": "int", "logicalType": "date"}}

Quoting the spec:

A date logical type annotates an Avro int, where the int stores the number of days from the unix epoch, 1 January 1970 (ISO calendar).
Avro
13,255,589
18
When attempting to write Avro, I get the following error:

org.apache.spark.SparkException: Job aborted due to stage failure: Task 7 in stage 35.0 failed 1 times, most recent failure: Lost task 7.0 in stage 35.0 (TID 110, localhost): java.lang.ClassCastException: java.util.HashMap cannot be cast to org.apache.avro.mapred.AvroWrapper

I had read in an Avro file with 3 records using:

avro_rdd = sc.newAPIHadoopFile(
    "threerecords.avro",
    "org.apache.avro.mapreduce.AvroKeyInputFormat",
    "org.apache.avro.mapred.AvroKey",
    "org.apache.hadoop.io.NullWritable",
    keyConverter="org.apache.spark.examples.pythonconverters.AvroWrapperToJavaConverter",
    conf=None)

output = avro_rdd.map(lambda x: x[0]).collect()

Then I tried to write out a single record (output kept in Avro) with:

conf = {"avro.schema.input.key": reduce(lambda x, y: x + y, sc.textFile("myschema.avsc", 1).collect())}

sc.parallelize([output[0]]).map(lambda x: (x, None)).saveAsNewAPIHadoopFile(
    "output.avro",
    "org.apache.avro.mapreduce.AvroKeyOutputFormat",
    "org.apache.avro.mapred.AvroKey",
    "org.apache.hadoop.io.NullWritable",
    keyConverter="org.apache.spark.examples.pythonconverters.AvroWrapperToJavaConverter",
    conf=conf)

How do I get around that error/write out an individual Avro record successfully? I know my schema is correct because it comes from the Avro file itself.
It looks like this isn't supported at the moment. You are trying to use the Java map as an Avro record and convert it back to a Java map again; that's why you get the error about the Java HashMap.

There is a pull request from staslos to add the Avro output format; see the link for the pull request and the example. A converter, which is currently missing in AvroConverters.scala, is required to convert from the Java map back to the Avro format.
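If patching AvroConverters.scala is not an option, one workaround that is often suggested is to avoid saveAsNewAPIHadoopFile entirely and write the records through a DataFrame with the spark-avro package instead. This is only a rough sketch: it assumes the com.databricks:spark-avro package matching your Spark version is on the classpath (for example via spark-submit --packages), the field names below are made up, and the exact write syntax varies slightly between Spark releases:

from pyspark.sql import SQLContext, Row

# assumes an existing SparkContext `sc` and the spark-avro package on the classpath
sqlContext = SQLContext(sc)

# hypothetical record layout -- replace with the fields from your own schema
rows = [Row(sequence_number=1, offset="100", body="hello")]
df = sqlContext.createDataFrame(rows)

# writes the DataFrame out as Avro files under output_dir/
df.write.format("com.databricks.spark.avro").save("output_dir")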
Avro
29,619,081
18