Dataset columns: question (string, length 11 to 28.2k), answer (string, length 26 to 27.7k), tag (string, 130 classes), question_id (int64, 935 to 78.4M), score (int64, 10 to 5.49k)
Google Labs Browser Size I've always preferred fixed-width layouts over fluid-width layouts; one of the main reasons is that I'm able to better understand how the whole picture will look without having to worry about the screen resolution. But now the "picture" has changed: there is a high discrepancy between the lowest and highest resolutions used by most users nowadays, and they seem to be here to stay. I have a netbook that only supports 800 or 1024 pixels wide; I also have a 22" monitor that supports 1650 pixels, and 24" monitors that support 1920 pixels and more are becoming pretty common. I've pretty much "ignored" the 800-pixel users for some time and I've been developing with a fixed 950/960 pixel width; I also notice that popular sites (SO for one) use either this approach or the fluid one. For text-only (or almost text-only) websites (like Wikipedia) I don't see a problem using the fluid system, but what about all the other websites that depend on images / video to create interesting content? Social networks, classifieds, and so on... What is (will be) their approach to address this issue? Seam carving seems like a good option for the near future, but it hasn't matured enough (neither browsers nor jQuery natively support it at this point in time); I also feel like users wouldn't understand it, would get confused by it and, as a consequence, abandon the website. The de facto standard on the web is still 1024 pixels wide, and leaving 980 pixels unused on a 24" monitor just seems plain odd, not to say wrong. So what are our options? I would love to hear what you think about this and your experiences with both fluid and fixed systems. PS: Popular websites using either one of these systems are also welcome; I'm especially interested in seeing non-text websites that use the fluid system. EDIT: I just saw this answer and I got kinda confused about the difference between fluid and liquid layouts: aren't they supposed to be exactly the same?
leaving 980 pixels unused in a 24" monitor just seems plain odd not to say wrong I’d disagree here. If you’ve got a monitor with that high a resolution, you’re probably not running with a maximised browser window. And even if you are, are you really fussed that the content’s all in a fixed area in the middle? Really? As long as your site has a decent, usable layout, I don’t see the problem with space on either side on high-res monitors.
Fluid
1,963,041
10
I have a Fluid template, from which I call an often-used snippet (a "partial"): Template: <f:render partial="fbLikeBox" arguments="{settings}"/> Partial fbLikeBox.html: <div id="fb-root"></div><script src="http://connect.facebook.net/xxxxxxxx"></script> <fb:like href="{settings.baseURL}/details/?guide_uid={audioguide.uid}"> </fb:like> As you can see, I need values from both the {settings} and the {audioguide} arrays passed to the partial. How can I achieve that?
Starting with TYPO3 4.6, you could just use <f:render partial="fbLikeBox" arguments="{_all}" /> The {_all} will simply make sure that all variables currently available in your template are also available in the partial.
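If you prefer to pass only what the partial actually needs (or are on a version without {_all}), a minimal sketch using Fluid's inline object syntax, with the variable names taken from the question, would be:

<f:render partial="fbLikeBox" arguments="{settings: settings, audioguide: audioguide}" />

Inside the partial you can then access {settings.baseURL} and {audioguide.uid} as before.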
Fluid
7,668,620
10
I want to compare two dates. I want to write a condition that checks whether the date in my database is older than the date two days from now. Here are my two variables: {<f:format.date format="d.m.Y">+2 days</f:format.date>} {<f:format.date format="d.m.Y">{day.valid}</f:format.date>} I want to solve this in the template, not in the model or the controller.
Assign your date to the variable »yourdate«. <f:if condition="{f:format.date(date: '+2 days', format: 'Y-m-d')} < {f:format.date(date: yourdate, format: 'Y-m-d')}"> <f:then> yourdate is later than now + 2 days. </f:then> <f:else> yourdate is earlier than or equal to now + 2 days. </f:else> </f:if>
Fluid
15,929,181
10
I opted to use the Raphaël javascript library for its extensive browser support, but I'm having trouble getting the SVG to display properly in any browser, except for Chrome and Firefox. I've been scratching my head over this for a while now and would love to hear how I could make SVG work in a responsive layout. Chrome and Firefox display the SVG exactly as I'd like. It scales uniformly, maintains a correct aspect ratio and its parent's percentage given width. Internet Explorer maintains a correct aspect ratio, but does not properly scale with its parent. Safari scales properly with its parent in width, but not in height. The height, in relation to the parent container, is somehow set to a 100%. Javascript var menu = Raphael('menu', '100%', '100%'); menu.setViewBox('0', '0', '50', '50', true); var menu_bg = menu.rect(0,0, 50, 50); menu_bg.attr({ id : 'menu_bg', 'stroke-width' : '0', 'fill' : '#000' }); CSS * { margin: 0; padding: 0; -moz-box-sizing: border-box; box-sizing: border-box; } html, body { height: 100%; } #menu { width: 50%; background: #60F; padding: 2.5%; } #menu svg { display: block; width: 100%; height: 100%; max-height: 100%; } #text { width: 50%; background: #309; padding: 2.5%; color: #FFF; } HTML <div id="menu"></div> <div id="text"> Filler text </div> Live example can be viewed at http://jsfiddle.net/R8Qv3/
You can add these properties to your SVG tag: <svg viewBox="0 0 300 329" preserveAspectRatio="xMidYMid meet"> to preserve the aspect ratio. Taken from the article I read that in... To preserve the aspect ratio of the containing element and ensure that it scales uniformly, we include the viewbox and preserveAspectRatio attributes. The value of the viewbox attribute is a list of four space- or comma-separated numbers: min-x, min-y, width and height. By defining the width and height of our viewbox, we define the aspect ratio of the SVG image. The value we set for the preserveAspectRatio attribute, xMidYMid meet, preserves the aspect ratio defined in the viewbox (300 × 329). From this article.
Fluid
16,983,749
10
plugin.tx_xxx { setting { storagePid = 23 } } I want to read these TYPO3 TypoScript settings in a utility file. Please help me.
The above method works only in a controller or service class. Try the code below; it will work in any PHP file in the extension. $objectManager = \TYPO3\CMS\Core\Utility\GeneralUtility::makeInstance('TYPO3\\CMS\\Extbase\\Object\\ObjectManager'); $configurationManager = $objectManager->get('TYPO3\\CMS\\Extbase\\Configuration\\ConfigurationManager'); $extbaseFrameworkConfiguration = $configurationManager->getConfiguration(\TYPO3\CMS\Extbase\Configuration\ConfigurationManagerInterface::CONFIGURATION_TYPE_FULL_TYPOSCRIPT); $storagePid = $extbaseFrameworkConfiguration['plugin.']['tx_guesthouse_guesthouse.']['settings.']['storagePid'];
Fluid
30,839,907
10
I am trying to find the best way to convert map[string]string to type string. I tried converting to JSON with marshalling to keep the format and then converting back to a string, but this was not successful. More specifically, I am trying to convert a map containing keys and values to a string to accommodate Environment Variables and structs.go. For example, the final string should be like LOG_LEVEL="x" API_KEY="y" The map m := map[string]string{ "LOG_LEVEL": "x", "API_KEY": "y", }
You need some key=value pair on each line representing one map entry, and you need quotes around the values: package main import ( "bytes" "fmt" ) func createKeyValuePairs(m map[string]string) string { b := new(bytes.Buffer) for key, value := range m { fmt.Fprintf(b, "%s=\"%s\"\n", key, value) } return b.String() } func main() { m := map[string]string{ "LOG_LEVEL": "DEBUG", "API_KEY": "12345678-1234-1234-1234-1234-123456789abc", } println(createKeyValuePairs(m)) } Here is a working example on Go Playground.
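Note that Go randomizes map iteration order, so the lines can come out in a different order on every run. If you need deterministic output (for example to diff generated env files), a small variation, sketched here with a hypothetical helper name, is to sort the keys first:

package main

import (
	"fmt"
	"sort"
	"strings"
)

// createKeyValuePairsSorted renders the map as KEY="value" lines in sorted key order.
func createKeyValuePairsSorted(m map[string]string) string {
	keys := make([]string, 0, len(m))
	for k := range m {
		keys = append(keys, k)
	}
	sort.Strings(keys) // map iteration order is randomized, so sort for stable output
	var b strings.Builder
	for _, k := range keys {
		fmt.Fprintf(&b, "%s=%q\n", k, m[k]) // %q wraps the value in double quotes
	}
	return b.String()
}

func main() {
	m := map[string]string{"LOG_LEVEL": "x", "API_KEY": "y"}
	fmt.Print(createKeyValuePairsSorted(m))
}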
Nomad
48,149,969
24
By default, the Nomad docker driver fetches images from Docker Hub, but I want to use some local images. How can I use them? (I don't want to use a private repo.) For example, I want to use the local image test: > docker images REPOSITORY TAG IMAGE ID CREATED SIZE test latest da795ca8a32f 36 minutes ago 567MB job "test" { datacenters = ["dc1"] group "example" { task "test" { driver = "docker" config { image = "test" } resources { cpu = 500 memory = 256 } } } } It doesn't work!
I'm not sure if this should be treated as an answer or a "hack", but if you want Nomad to use a Docker image that is already present on a node, the image MUST NOT be tagged latest. For testing I tag my images as IMAGE:local. This way Nomad uses the local image if present, and pulls it from the remote registry if not.
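As a sketch of what that looks like for the job above (the test:local tag is just a convention; any non-latest tag works): retag the image on the node with docker tag test:latest test:local, then reference that tag in the task config:

job "test" {
  datacenters = ["dc1"]
  group "example" {
    task "test" {
      driver = "docker"
      config {
        # non-latest tag, so Nomad will not try to pull it from Docker Hub
        image = "test:local"
      }
      resources {
        cpu    = 500
        memory = 256
      }
    }
  }
}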
Nomad
56,342,572
10
Nomad has three different ways to map ports: Network stanza under group level Network stanza under config -> resources level port_map stanza under config level What is the difference and when I should use which?
First of all port_map is deprecated, so you shouldn't be using that as part of task driver configuration. Up until Nomad 0.12, ports could be specified in a task's resource stanza and set using the docker port_map field. As more features have been added to the group network resource allocation, task based network resources are deprecated. With it the port_map field is also deprecated and can only be used with task network resources. Users should migrate their jobs to define ports in the group network stanza and specified which ports a task maps with the ports field. port in the group network stanza defines labels that can be used to identify the port in service discovery. This label is also used as apart of environment variable name that indicates which port your application should bind to. ports at the task level specifies which port from network stanza should be available inside task allocation/container. From official docs A Docker container typically specifies which port a service will listen on by specifying the EXPOSE directive in the Dockerfile. Because dynamic ports will not match the ports exposed in your Dockerfile, Nomad will automatically expose any ports specified in the ports field. TLDR; So there is only one correct definition: job "example" { group "example-group" { network { # Dynamic ports port "foo" {} port "bar" {} # Mapped ports port "http" { to = 80 } port "https" { to = 443 } # Static ports port "lb" { static = 8080 } } task "task-1" { driver = "docker" config { ... ports = [ "foo", "http", ] } } task "task-2" { driver = "docker" config { ... ports = [ "bar", "https", ] } } task "task-3" { driver = "docker" config { ... ports = [ "lb", ] } } } } Consider running this type of job file (with whatever images). Then you will get the following port mapping between a backend and containers: for port in $(docker ps --format "{{.Ports}}"); do echo $port; done | grep tcp | cut -d':' -f 2 # Dynamic ports 'foo' and 'bar' # 25968->25968/tcp, # 29080->29080/tcp, # Mapped ports 'http' and 'https' # 29936->80/tcp, # 20987->443/tcp, # Static port 'lb' # 8080->8080/tcp, Now, if you get inside task-1 allocation/container and check env variables, then you would be able to get values for allocated ports if your tasks need to communicate with one another. env | grep NOMAD | grep PORT # NOMAD_PORT_bar=29080 # NOMAD_HOST_PORT_bar=29080 # NOMAD_PORT_foo=25968 # NOMAD_HOST_PORT_foo=25968 # NOMAD_PORT_http=80 # NOMAD_HOST_PORT_http=29936 # NOMAD_PORT_https=443 # NOMAD_HOST_PORT_https=20987 # NOMAD_PORT_lb=8080 # NOMAD_HOST_PORT_lb=8080 In order to make communication between services easier, it is better to use service discovery, e.g. Consul (also from HashiCorp) and to make you life even easier consider some sort of load balancer, e.g. Fabio or Traefik. Here is a nice blog post from HashiCorp's Engineer about it.
Nomad
63,601,913
10
According to Prefect's Hybrid Execution model, Agents "watch for any scheduled flow runs and execute them accordingly on your infrastructure," while Executors "are responsible for actually running tasks [...] users can submit functions and wait for their results." While this makes some sense from a high-level design perspective, in practice how are these parts actually composed? For instance, if I specify that a Flow Run should make use of Docker Agent and a Dask Executor, what interactions are concretely happening between the Agent and the Executor? What if I use a Docker Agent and a Local Executor? Or a Local Agent and a Dask Executor? In short, what exactly is happening at each step of the process within each component — that is, on the Server, the Agent, and the Executor?
Agents represent the local infrastructure that a Flow can and should execute on, as specified by that Flow's RunConfig. If a Flow should only run on Docker (or Kubernetes, or ECS, or whatever else) then the Flow Run is served by that Agent only. Agents can serve multiple Flows, so long as those Flows are all supported by that particular infrastructure. If a Flow Run is not tied to any particular infrastructure, then a UniversalRun is appropriate, and can be handled by any Agent. Most importantly, the Agent guarantees that the code and data associated with the Flows are never seen by the Prefect Server, by submitting requests to the server for Flows to run, along with updates on Flows in progress. Executors, on the other hand, are responsible for the actual computation: that is, actually running the individual Tasks that make up a Flow. The Agent manages execution at a high level by calling submit on Tasks in the appropriate order, and by handling the results that the Executor returns. Because of this, an Executor has no knowledge of the Flow as a whole, rather only the Tasks that it received from the Agent. All Tasks in a single Flow are required to use the same Executor, but an Agent may communicate with different Executors between separate flows. Similarly, Executors can serve multiple Flow Runs, but at the Task level only. In specific terms: For a Docker Agent and a Dask Executor, there would be a Docker container that would manage resolution of the DAG and status reports back to the server. The actual computation of each Task's results would take place outside of that container though, on a Dask Distributed cluster. For a Docker Agent and a Local Executor, the container would perform the same roles as above. However, the computation of the Tasks' results would also occur within that container ("local" to that Agent). For a Local Agent and a Dask Executor, the machine that registered as the agent would manage DAG resolution and communication to the Server as a standalone process on that machine, instead of within a container. The computation for each Task though would still take place externally, on a Dask Distributed cluster. In short, the Agent sits between the Server and the Executor, acting as a custodian for the lifetime of a Flow Run and delineating the separation of concerns for each of the other components.
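To make the division of labour concrete, here is a minimal sketch using the Prefect 0.14+/1.x API (the image name and Dask scheduler address are placeholders): the RunConfig determines which kind of Agent picks up the flow run, while the Executor determines where the tasks are actually computed.

from prefect import Flow, task
from prefect.executors import DaskExecutor
from prefect.run_configs import DockerRun

@task
def say_hello():
    print("hello")

with Flow("example") as flow:
    say_hello()

# Served by a Docker Agent: the flow run starts inside a container...
flow.run_config = DockerRun(image="prefecthq/prefect:latest")
# ...but each task is submitted to a Dask cluster for the actual computation.
flow.executor = DaskExecutor(address="tcp://dask-scheduler:8786")

flow.register(project_name="examples")

Swapping DaskExecutor for LocalExecutor (the default) keeps the computation inside the container, which matches the Docker Agent + Local Executor case described above.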
Prefect
66,959,695
13
I'm following the Prefect tutorial available at: https://docs.prefect.io/core/tutorial/01-etl-before-prefect.html. The code can be downloaded from the git: https://github.com/PrefectHQ/prefect/tree/master/examples/tutorial The tutorials have a dependency to aircraftlib which is a directory under tutorials. I can execute the Flows through the terminal with: python 02_etl_... and it executes perfectly! I've created a project, and added the Flow to that project. Through the Prefect Server UI I can run the Flow, but it fails with the error message: State Message: Failed to load and execute Flow's environment: ModuleNotFoundError("No module named 'aircraftlib'") How should I handle the dependency when executing the Flows through the Prefect Server UI?
This depends partially on the type of Flow Storage and Agent you are using. Since you are running with Prefect Server, I assume you are using Local Storage + a Local Agent; in this case, you need to make sure the aircraftlib directory is on your local importable Python PATH. There are a few ways of doing this: (1) run your Prefect Agent in the tutorial directory, so your Local Agent's path will be inherited by the flows it submits; (2) manually add the tutorial/ directory to your global Python path (I don't recommend this); (3) add the tutorial/ directory to your Agent's path with the -p CLI flag, for example: prefect agent start -p ~/Developer/prefect/examples/tutorial (this is the approach I recommend)
Prefect
63,881,231
10
I'm running Kubernetes 1.11, and trying to configure the Kubernetes cluster to check a local name server first. I read the instructions on the Kubernetes site for customizing CoreDNS, and used the Dashboard to edit the system ConfigMap for CoreDNS. The resulting corefile value is: .:53 { errors health kubernetes cluster.local in-addr.arpa ip6.arpa { pods insecure upstream 192.168.1.3 209.18.47.61 fallthrough in-addr.arpa ip6.arpa } prometheus :9153 proxy . /etc/resolv.conf cache 30 reload } You can see the local address as the first upstream name server. My problem is that this doesn't seem to have made any impact. I have a container running with ping & nslookup, and neither will resolve names from the local name server. I've worked around the problem for the moment by specifying the name server configuration in a few pod specifications that need it, but I don't like the workaround. How do I force CoreDNS to update based on the changed ConfigMap? I can see that it is a Deployment in kube-system namespace, but I haven't found any docs on how to get it to reload or otherwise respond to a changed configuration.
One way to apply Configmap changes would be to redeploy CoreDNS pods: kubectl rollout restart -n kube-system deployment/coredns
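Note that kubectl rollout restart is only available in kubectl 1.15 and newer. On older clients (the question mentions Kubernetes 1.11) an equivalent, sketched here, is to delete the CoreDNS pods and let the Deployment recreate them with the updated ConfigMap:

kubectl -n kube-system delete pod -l k8s-app=kube-dns

(The CoreDNS pods keep the k8s-app=kube-dns label for compatibility with the old kube-dns service.)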
CoreDNS
53,498,438
15
What happened Resolving an external domain from within a pod fails with SERVFAIL message. In the logs, i/o timeout error is mentioned. What I expected to happen External domains should be successfully resolved from the pods. How to reproduce it apiVersion: v1 kind: Pod metadata: name: dnsutils namespace: default spec: containers: - name: dnsutils image: gcr.io/kubernetes-e2e-test-images/dnsutils:1.3 command: - sleep - "3600" imagePullPolicy: IfNotPresent restartPolicy: Always Create the pod above (from Debugging DNS Resolution help page). Run kubectl exec dnsutils -it -- nslookup google.com pig@pig202:~$ kubectl exec dnsutils -it -- nslookup google.com Server: 10.152.183.10 Address: 10.152.183.10#53 ** server can't find google.com.mshome.net: SERVFAIL command terminated with exit code 1 Also run kubectl exec dnsutils -it -- nslookup google.com. pig@pig202:~$ kubectl exec dnsutils -it -- nslookup google.com. Server: 10.152.183.10 Address: 10.152.183.10#53 ** server can't find google.com: SERVFAIL command terminated with exit code 1 Additional information I am using microk8s environment in a Hyper-V virtual machine. Resolving DNS from the virtual machine works, and Kubernetes is able to pull container images. It's only from within the pods that the resolution is failing meaning I cannot communicate with the Internet from within the pods. This is OK: pig@pig202:~$ kubectl exec dnsutils -it -- nslookup kubernetes.default Server: 10.152.183.10 Address: 10.152.183.10#53 Name: kubernetes.default.svc.cluster.local Address: 10.152.183.1 Environment The version of CoreDNS image: 'coredns/coredns:1.6.6' Corefile (taken from ConfigMap) Corefile: | .:53 { errors health { lameduck 5s } ready log . { class error } kubernetes cluster.local in-addr.arpa ip6.arpa { pods insecure fallthrough in-addr.arpa ip6.arpa } prometheus :9153 forward . 8.8.8.8 8.8.4.4 cache 30 loop reload loadbalance } Logs pig@pig202:~$ kubectl logs --namespace=kube-system -l k8s-app=kube-dns -f [INFO] 10.1.99.26:47204 - 29832 "AAAA IN grafana.com. udp 29 false 512" NOERROR - 0 2.0002558s [ERROR] plugin/errors: 2 grafana.com. AAAA: read udp 10.1.99.19:52008->8.8.8.8:53: i/o timeout [INFO] 10.1.99.26:59350 - 50446 "A IN grafana.com. udp 29 false 512" NOERROR - 0 2.0002028s [ERROR] plugin/errors: 2 grafana.com. A: read udp 10.1.99.19:60405->8.8.8.8:53: i/o timeout [INFO] 10.1.99.26:43050 - 13676 "AAAA IN grafana.com. udp 29 false 512" NOERROR - 0 2.0002151s [ERROR] plugin/errors: 2 grafana.com. AAAA: read udp 10.1.99.19:45624->8.8.8.8:53: i/o timeout [INFO] 10.1.99.26:36997 - 30359 "A IN grafana.com. udp 29 false 512" NOERROR - 0 2.0002791s [ERROR] plugin/errors: 2 grafana.com. A: read udp 10.1.99.19:37554->8.8.4.4:53: i/o timeout [INFO] 10.1.99.32:57927 - 53858 "A IN google.com.mshome.net. udp 39 false 512" NOERROR - 0 2.0001987s [ERROR] plugin/errors: 2 google.com.mshome.net. A: read udp 10.1.99.19:34079->8.8.4.4:53: i/o timeout [INFO] 10.1.99.32:38403 - 36398 "A IN google.com.mshome.net. udp 39 false 512" NOERROR - 0 2.000224s [ERROR] plugin/errors: 2 google.com.mshome.net. A: read udp 10.1.99.19:59835->8.8.8.8:53: i/o timeout [INFO] 10.1.99.26:57447 - 20295 "AAAA IN grafana.com.mshome.net. udp 40 false 512" NOERROR - 0 2.0001892s [ERROR] plugin/errors: 2 grafana.com.mshome.net. AAAA: read udp 10.1.99.19:51534->8.8.8.8:53: i/o timeout [INFO] 10.1.99.26:41052 - 56059 "A IN grafana.com.mshome.net. udp 40 false 512" NOERROR - 0 2.0001879s [ERROR] plugin/errors: 2 grafana.com.mshome.net. 
A: read udp 10.1.99.19:47378->8.8.8.8:53: i/o timeout [INFO] 10.1.99.26:56748 - 51804 "AAAA IN grafana.com.mshome.net. udp 40 false 512" NOERROR - 0 2.0003226s [INFO] 10.1.99.26:45442 - 61916 "A IN grafana.com.mshome.net. udp 40 false 512" NOERROR - 0 2.0001922s [ERROR] plugin/errors: 2 grafana.com.mshome.net. AAAA: read udp 10.1.99.19:35528->8.8.8.8:53: i/o timeout [ERROR] plugin/errors: 2 grafana.com.mshome.net. A: read udp 10.1.99.19:53568->8.8.8.8:53: i/o timeout OS pig@pig202:~$ cat /etc/os-release NAME="Ubuntu" VERSION="20.04 LTS (Focal Fossa)" ID=ubuntu ID_LIKE=debian PRETTY_NAME="Ubuntu 20.04 LTS" VERSION_ID="20.04" Tried on Ubuntu 18.04.3 LTS, same issue. Other mshome.net search domain comes from Hyper-V network, I assume. Perhaps this will be of help: pig@pig202:~$ nmcli device show eth0 GENERAL.DEVICE: eth0 GENERAL.TYPE: ethernet GENERAL.HWADDR: 00:15:5D:88:26:02 GENERAL.MTU: 1500 GENERAL.STATE: 100 (connected) GENERAL.CONNECTION: Wired connection 1 GENERAL.CON-PATH: /org/freedesktop/NetworkManager/ActiveConnection/1 WIRED-PROPERTIES.CARRIER: on IP4.ADDRESS[1]: 172.19.120.188/28 IP4.GATEWAY: 172.19.120.177 IP4.ROUTE[1]: dst = 0.0.0.0/0, nh = 172.19.120.177, mt = 100 IP4.ROUTE[2]: dst = 172.19.120.176/28, nh = 0.0.0.0, mt = 100 IP4.ROUTE[3]: dst = 169.254.0.0/16, nh = 0.0.0.0, mt = 1000 IP4.DNS[1]: 172.19.120.177 IP4.DOMAIN[1]: mshome.net IP6.ADDRESS[1]: fe80::6b4a:57e2:5f1b:f739/64 IP6.GATEWAY: -- IP6.ROUTE[1]: dst = fe80::/64, nh = ::, mt = 100 IP6.ROUTE[2]: dst = ff00::/8, nh = ::, mt = 256, table=255
Finally found the solution which was the combination of two changes. After applying both changes, my pods could finally resolve addresses properly. Kubelet configuration Based on known issues, change resolv-conf path for Kubelet to use. # Add resolv-conf flag to Kubelet configuration echo "--resolv-conf=/run/systemd/resolve/resolv.conf" >> /var/snap/microk8s/current/args/kubelet # Restart Kubelet sudo service snap.microk8s.daemon-kubelet restart CoreDNS forward Change forward address in CoreDNS config map from default (8.8.8.8 8.8.4.4) to DNS on eth0 device. # Dump definition of CoreDNS microk8s.kubectl get configmap -n kube-system coredns -o yaml > coredns.yaml Partial content of coredns.yaml: Corefile: | .:53 { errors health { lameduck 5s } ready log . { class error } kubernetes cluster.local in-addr.arpa ip6.arpa { pods insecure fallthrough in-addr.arpa ip6.arpa } prometheus :9153 forward . 8.8.8.8 8.8.4.4 cache 30 loop reload loadbalance } Fetch DNS: # Fetch eth0 DNS address (this will print 172.19.120.177 in my case) nmcli dev show 2>/dev/null | grep DNS | sed 's/^.*:\s*//' Change the following line and save: forward . 8.8.8.8 8.8.4.4 # From this forward . 172.19.120.177 # To this (your DNS will probably be different) Finally apply to change CoreDNS forwarding: microk8s.kubectl apply -f coredns.yaml
CoreDNS
62,664,701
13
I have a self-made Kubernetes cluster consisting of VMs. My problem is that the coredns pods always go into CrashLoopBackOff state, and after a while they go back to Running as if nothing had happened. One solution that I found but could not try yet is changing the default memory limit from 170Mi to something higher. As I'm not an expert in this, I thought this would not be a hard thing, but I don't know how to change a running pod's configuration. It may be impossible, but there must be a way to recreate them with a new configuration. I tried kubectl patch, and looked up rolling-update too, but I just can't figure it out. How can I change the limit? Here is the relevant part of the pod's data: apiVersion: v1 kind: Pod metadata: annotations: cni.projectcalico.org/podIP: 176.16.0.12/32 creationTimestamp: 2018-11-18T10:29:53Z generateName: coredns-78fcdf6894- labels: k8s-app: kube-dns pod-template-hash: "3497892450" name: coredns-78fcdf6894-gnlqw namespace: kube-system ownerReferences: - apiVersion: apps/v1 blockOwnerDeletion: true controller: true kind: ReplicaSet name: coredns-78fcdf6894 uid: e3349719-eb1c-11e8-9000-080027bbdf83 resourceVersion: "73564" selfLink: /api/v1/namespaces/kube-system/pods/coredns-78fcdf6894-gnlqw uid: e34930db-eb1c-11e8-9000-080027bbdf83 spec: containers: - args: - -conf - /etc/coredns/Corefile image: k8s.gcr.io/coredns:1.1.3 imagePullPolicy: IfNotPresent livenessProbe: failureThreshold: 5 httpGet: path: /health port: 8080 scheme: HTTP initialDelaySeconds: 60 periodSeconds: 10 successThreshold: 1 timeoutSeconds: 5 name: coredns ports: - containerPort: 53 name: dns protocol: UDP - containerPort: 53 name: dns-tcp protocol: TCP - containerPort: 9153 name: metrics protocol: TCP resources: limits: memory: 170Mi requests: cpu: 100m memory: 70Mi EDIT: It turned out that on Ubuntu the Network Manager's dnsmasq drives the CoreDNS pods crazy, so in /etc/NetworkManager/NetworkManager.conf I commented out the dnsmasq line, rebooted, and everything is okay.
You must edit the coredns pod template in the coredns Deployment definition: kubectl edit deployment -n kube-system coredns Once your default editor is opened with the coredns Deployment, in the pod template spec you will find the part that is responsible for setting the memory and CPU limits.
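For orientation, this is roughly the fragment to adjust inside spec.template.spec.containers[0] of that Deployment (the 256Mi value is only an example; pick whatever fits your nodes):

resources:
  limits:
    memory: 256Mi   # raised from the default 170Mi
  requests:
    cpu: 100m
    memory: 70Mi

Saving the editor triggers a rolling restart of the CoreDNS pods with the new limit.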
CoreDNS
53,448,665
13
I have been trying to set up k8s on a single node; everything was installed fine. But when I check the status of my kube-system pods, the CNI -> flannel pod has crashed, reason -> Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: x.x.x.x x.x.x.x x.x.x.x The CoreDNS pods' status is ContainerCreating. In my office, the current server has been configured to have a static IP, and when I checked /etc/resolv.conf this is the output: # Generated by NetworkManager search ORGDOMAIN.BIZ nameserver 192.168.1.12 nameserver 192.168.2.137 nameserver 192.168.2.136 # NOTE: the libc resolver may not support more than 3 nameservers. # The nameservers listed below may not be recognized. nameserver 192.168.1.10 nameserver 192.168.1.11 I'm unable to find the root cause. What should I be looking at?
In short, you have too many entries in /etc/resolv.conf. This is a known issue: Some Linux distributions (e.g. Ubuntu), use a local DNS resolver by default (systemd-resolved). Systemd-resolved moves and replaces /etc/resolv.conf with a stub file that can cause a fatal forwarding loop when resolving names in upstream servers. This can be fixed manually by using kubelet’s --resolv-conf flag to point to the correct resolv.conf (With systemd-resolved, this is /run/systemd/resolve/resolv.conf). kubeadm (>= 1.11) automatically detects systemd-resolved, and adjusts the kubelet flags accordingly. Also Linux’s libc is impossibly stuck (see this bug from 2005) with limits of just 3 DNS nameserver records and 6 DNS search records. Kubernetes needs to consume 1 nameserver record and 3 search records. This means that if a local installation already uses 3 nameservers or uses more than 3 searches, some of those settings will be lost. As a partial workaround, the node can run dnsmasq which will provide more nameserver entries, but not more search entries. You can also use kubelet’s --resolv-conf flag. If you are using Alpine version 3.3 or earlier as your base image, DNS may not work properly owing to a known issue with Alpine. Check here for more information. You possibly could change that in the Kubernetes code, but I'm not sure about the functionality. As it's set to that value for purpose. Code can be located here const ( // Limits on various DNS parameters. These are derived from // restrictions in Linux libc name resolution handling. // Max number of DNS name servers. MaxDNSNameservers = 3 // Max number of domains in search path. MaxDNSSearchPaths = 6 // Max number of characters in search path. MaxDNSSearchListChars = 256 )
CoreDNS
59,890,834
11
I have a running k8s cluster with two replicas of CoreDNS. But when I try to enter the shell prompt of the pod, it throws the error below: # kubectl exec -it coredns-5644d7b6d9-285bj -n kube-system sh error: Internal error occurred: error executing command in container: failed to exec in container: failed to start exec "94f45da89fa5493a8283888464623788ef5e832dc31e0d89e427e71d86391fd6": OCI runtime exec failed: exec failed: container_linux.go:345: starting container process caused "exec: \"sh\": executable file not found in $PATH": unknown I am able to log in to other pods without any issues. I tried nsenter with the kernel process ID and it works, but only for network-related operations like: # nsenter -t 24931 -n ip a 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000 link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 inet 127.0.0.1/8 scope host lo valid_lft forever preferred_lft forever inet6 ::1/128 scope host valid_lft forever preferred_lft forever 3: eth0@if5: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1400 qdisc noqueue state UP group default link/ether 7a:70:99:aa:53:6c brd ff:ff:ff:ff:ff:ff link-netnsid 0 inet 192.168.0.2/32 scope global eth0 valid_lft forever preferred_lft forever inet6 fe80::7870:99ff:feaa:536c/64 scope link valid_lft forever preferred_lft forever How can I enter this pod using kubectl and get rid of that error?
You can use the sidecar pattern following the instructions here: https://support.rancher.com/hc/en-us/articles/360041568712-How-to-troubleshoot-using-the-namespace-of-a-container#sidecar-container-0-2 In short, do this to find a node where a coredns pod is running: kubectl -n kube-system get po -o wide | grep coredns ssh to one of those nodes, then: docker ps -a | grep coredns Copy the Container ID to clipboard and run: ID=<paste ID here> docker run -it --net=container:$ID --pid=container:$ID --volumes-from=$ID alpine sh You will now be inside the "sidecar" container and can poke around. I.e. cat /etc/coredns/Corefile
CoreDNS
60,666,170
11
Question How to get the Kubernetes related keys from etcd? Tried to list keys in etcd but could not see related keys. Also where is etcdctl installed? $ etcdctl bash: etcdctl: command not found.. $ sudo netstat -tnlp Active Internet connections (only servers) Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name tcp 0 0 127.0.0.1:2379 0.0.0.0:* LISTEN 386/etcd tcp 0 0 127.0.0.1:2380 0.0.0.0:* LISTEN 386/etcd $ curl -s http://localhost:2379/v2/keys | python -m json.tool { "action": "get", "node": { "dir": true } } Background Installed Kubernetes 1.8.5 by following Using kubeadm to Create a Cluster on CentOS 7. When I looked at Getting started with etcd, v2/keys looks to be the end point.
Usually you need to get etcdctl by yourself. Just download the latest etcdctl archive from etcd releases page. Also, starting from Kubernetes version 1.6 it uses etcd version 3, so to get a list of all keys is: ETCDCTL_API=3 etcdctl --endpoints=<etcd_ip>:2379 get / --prefix --keys-only You can find all etcdctl v3 actions using: ETCDCTL_API=3 etcdctl --endpoints=<etcd_ip>:2379 --help EDIT (thanks to @leodotcloud): In case ETCD is configured with TLS certificates support: ETCDCTL_API=3 etcdctl --endpoints <etcd_ip>:2379 --cacert <ca_cert_path> --cert <cert_path> --key <cert_key_path> get / --prefix --keys-only
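If what you are after are specifically the Kubernetes objects, they are normally stored under the /registry prefix, so you can narrow the listing down (a sketch; the exact path layout may differ slightly between versions):

ETCDCTL_API=3 etcdctl --endpoints=<etcd_ip>:2379 get /registry --prefix --keys-only
ETCDCTL_API=3 etcdctl --endpoints=<etcd_ip>:2379 get /registry/pods/default --prefix --keys-only

Keep in mind that recent Kubernetes versions store the values as protobuf, so the key names are readable but the values themselves may look partly binary.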
etcd
47,807,892
20
We are now building a realtime analytics system and it should be highly distributed. We plan to use distributed locks and counters to ensure data consistency, and we need some kind of distributed map to know which client is connected to which server. I have no prior experience with distributed systems, but I think we have two options: Java+Hazelcast Golang+ETCD What are the pros/cons of each in this context?
Hazelcast and etcd are two very different systems. The reason is the CAP theorem. The CAP theorem states that no distributed system can have Consistency, Availability, and Partition-tolerance. Distributed systems normally fall closer to AP or CP. Hazelcast is an AP system, and etcd (being a Raft implementation) is CP. So, your choice is between consistency and availability/performance. In general, Hazelcast will be much more performant and be able to handle more failures than Raft and etcd, but at the cost of potential data loss or consistency issues. The way Hazelcast works is it partitions data and stores pieces of the data on different nodes. So, in a 5 node cluster, the key "foo" may be stored on nodes 1 and 2, and "bar" may be stored on nodes 3 and 4. You can control the number of nodes to which Hazelcast replicates data via the Hazelcast and map configuration. However, during a network or other failure, there is some risk that you'll see old data or even lose data in Hazelcast. Alternatively, Raft and etcd form a single-leader, highly consistent system that stores data on all nodes. This means it's not ideal for storing large amounts of state. But even during a network failure, etcd can guarantee that your data will remain consistent. In other words, you'll never see old/stale data. But this comes at a cost. CP systems require that a majority of the cluster be alive to operate normally. The consistency issue may or may not be relevant for basic key-value storage, but it can be extremely relevant to locks. If you're expecting your locks to be consistent across the entire cluster - meaning only one node can hold a lock even during a network or other failure - do not use Hazelcast. Because Hazelcast sacrifices consistency in favor of availability (again, see the CAP theorem), it's entirely possible that a network failure can lead two nodes to believe a lock is free to be acquired. Alternatively, Raft guarantees that during a network failure only one node will remain the leader of the etcd cluster, and therefore all decisions are made through that one node. This means that etcd can guarantee it has a consistent view of the cluster state at all times and can ensure that something like a lock can only be obtained by a single process. Really, you need to consider what you are looking for in your database and go seek it out. The use cases for CP and AP data stores are vastly different. If you want consistency for storing small amounts of state, consistent locks, leader elections, and other coordination tools, use a CP system like ZooKeeper or Consul. If you want high availability and performance at the potential cost of consistency, use Hazelcast or Cassandra or Riak. Source: I am the author of a Raft implementation
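If you do go the etcd route for the locking part, a minimal sketch of a consistent, cluster-wide lock with the official Go client's concurrency helpers looks like this (the endpoint and lock key are placeholders; the import paths moved to go.etcd.io/etcd/client/v3/... in etcd 3.5, so adjust for your client version):

package main

import (
	"context"
	"log"
	"time"

	"go.etcd.io/etcd/clientv3"
	"go.etcd.io/etcd/clientv3/concurrency"
)

func main() {
	cli, err := clientv3.New(clientv3.Config{
		Endpoints:   []string{"localhost:2379"},
		DialTimeout: 5 * time.Second,
	})
	if err != nil {
		log.Fatal(err)
	}
	defer cli.Close()

	// A session binds the lock to a lease, so the lock is released if this client dies.
	sess, err := concurrency.NewSession(cli)
	if err != nil {
		log.Fatal(err)
	}
	defer sess.Close()

	mutex := concurrency.NewMutex(sess, "/locks/analytics")
	if err := mutex.Lock(context.TODO()); err != nil {
		log.Fatal(err)
	}
	// Critical section: at most one client in the cluster holds the lock here.
	if err := mutex.Unlock(context.TODO()); err != nil {
		log.Fatal(err)
	}
}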
etcd
31,011,105
17
Is it safe to use etcd across multiple data centers, given that it exposes the etcd port to the public internet? Do I have to use client certificates in this case, or does etcd have some sort of authentication?
Yes, but there are two big issues you need to tackle: Security. This all depends on what type of info you are storing in etcd. Using a point to point VPN is probably preferred over exposing the entire cluster to the internet. Client certificates can also be used. Tuning. etcd relies on replication between machines for two things, aliveness and consensus. Since a successful write must be committed to a majority of the cluster before it returns as successful, your write performance will degrade as the distance between the machines increases. Aliveness is measured with periodic heartbeats between the machines. By default, etcd has a fairly aggressive 50ms heartbeat timeout, which is optimized for bare metal servers running on a local network. Without tuning this timeout value, your cluster will constantly think that members have disappeared and trigger frequent master elections. This gets worse if both of your environments are on cloud providers that have variable networks plus disk writes that traverse the network, a double whammy. More info on etcd tuning: https://etcd.io/docs/latest/tuning/
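As a sketch of what that tuning looks like in practice (the exact numbers are illustrative and should be derived from the measured round-trip time between your data centers, with the election timeout roughly 10x the heartbeat interval), you raise the corresponding flags on every member:

etcd --heartbeat-interval=300 --election-timeout=3000 ...

or equivalently set the ETCD_HEARTBEAT_INTERVAL and ETCD_ELECTION_TIMEOUT environment variables.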
etcd
26,075,680
15
I have been playing with docker-compose and have cobbled together a project from the docker hub website. One thing that eludes me is how I can scale individual services up (by adding more instances) AND have existing instances somehow made aware of those new instances. For example, the canonical docker-compose example comprises a cluster of: redis node python (flask) node haproxy load balancer I create the cluster and everything works fine, however I attempt to add another node to the cluster: $ docker-compose scale web=2 Creating and starting 2 ... done $ docker ps CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES e83f6ed94546 packetops/web:latest "/bin/sh -c 'python /" 6 minutes ago Up About a minute 80/tcp swarm-slave/1_web_2 40e01a615a2f tutum/haproxy "python /haproxy/main" 7 minutes ago Up About a minute 443/tcp, 1936/tcp, 172.16.186.165:80->80/tcp swarm-slave/1_lb_1 f16357a28ac4 packetops/web:latest "/bin/sh -c 'python /" 7 minutes ago Up About a minute 80/tcp swarm-slave/1_lb_1/1_web_1,swarm-slave/1_lb_1/web,swarm-slave/1_lb_1/web_1,swarm-slave/1_web_1 8dd59686e7be redis "/entrypoint.sh redis" 8 minutes ago Up About a minute 6379/tcp swarm-slave/1_redis_1,swarm-slave/1_web_1/1_redis_1,swarm-slave/1_web_1/redis,swarm-slave/1_web_1/redis_1,swarm-slave/1_web_2/1_redis_1,swarm-slave/1_web_2/redis,swarm-slave/1_web_2/redis_1 That worked... But lets see what the haproxy node sees of the cluster (docker-machine modifies the '/etc/hosts' file) # docker exec -i -t swarm-slave/1_lb_1 /bin/bash -c 'cat /etc/hosts' 172.17.0.4 40e01a615a2f 127.0.0.1 localhost ::1 localhost ip6-localhost ip6-loopback fe00::0 ip6-localnet ff00::0 ip6-mcastprefix ff02::1 ip6-allnodes ff02::2 ip6-allrouters 172.17.0.3 1_web_1 f16357a28ac4 172.17.0.3 web f16357a28ac4 1_web_1 172.17.0.3 web_1 f16357a28ac4 1_web_1 If I were to restart the entire cluster using docker-compose that node should have it's /etc/hosts populated but it now seems to have broken even further: $ docker-compose up --force-recreate -d Recreating 1_redis_1 Recreating 1_web_2 Recreating 1_web_1 Recreating 1_lb_1 ERROR: Unable to find a node fulfilling all dependencies: --link=1_web_1:1_web_1 --link=1_web_1:web --link=1_web_1:web_1 --link=1_web_2:1_web_2 --link=1_web_2:web --link=1_web_2:web_2 $ docker-compose up -d 1_redis_1 is up-to-date 1_web_1 is up-to-date 1_web_2 is up-to-date Starting 40e01a615a_1_lb_1 $ docker exec -i -t swarm-slave/40e01a615a_1_lb_1 /bin/bash -c 'cat /etc/hosts' 172.17.0.4 40e01a615a2f 127.0.0.1 localhost ::1 localhost ip6-localhost ip6-loopback fe00::0 ip6-localnet ff00::0 ip6-mcastprefix ff02::1 ip6-allnodes ff02::2 ip6-allrouters So in conclusion is there a smarter way to do this (resolution and discovery)? Is there another smarter way rather than just updating the hosts files ? What's the best practice here?
Docker just released a new version with built-in orchestration: https://blog.docker.com/2016/06/docker-1-12-built-in-orchestration/ You can start a new Swarm cluster with: docker swarm init And create services: docker service create --name frontend --replicas 5 -p 80:80/tcp nginx:latest The created services will be load balanced and you can scale them up and down: docker service scale frontend=X
etcd
35,002,493
15
We are choosing the best option for implementing a leader election for our service (written in Java) comprised of multiple (e.g., 3) instances for high availability. Our goal is to have only a single instance active at any given time. Would be great to hear your opinion about the following options: 1) Hazelcast. Using "quorum" and a lock we can implement a leader election. However, we can run into a split-brain problem where for some time two leaders may be present. Also, it seems that Hazelcast does not support SSL. 2) Zookeeper. We can implement leader election on top of a Zookeeper ensemble (where a ZK node is run on each instance of our service). Does Zookeeper provide better consistency guarantees than Hazelcast? Does it also suffer from the split-brain problem? 3) Etcd. We can use the Jetcd library which seems like the most modern and robust technology. Is it really better in terms of consistency than Zookeeper? Thank you.
1) Hazelcast, as of version 3.12, provides a CP Subsystem which is a CP system in terms of CAP and is built using the Raft consensus algorithm inside the Hazelcast cluster. The CP Subsystem has a distributed lock implementation called FencedLock which can be used to implement a leader election. For more information about the CP Subsystem and FencedLock see: CP Subsystem Reference manual Riding the CP Subsystem Distributed Locks are Dead; Long Live Distributed Locks! Hazelcast versions before 3.12 are not suitable for leader election. As you already mentioned, they can choose availability during network splits, which can lead to the election of multiple leaders. 2) Zookeeper doesn't suffer from the split-brain problem mentioned above; you will not observe multiple leaders when a network split happens. Zookeeper is built on the ZAB atomic broadcast protocol. 3) Etcd uses the Raft consensus protocol. Raft and ZAB have similar consistency guarantees, and both can be used to implement a leader election process. Disclaimer: I work at Hazelcast.
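For reference, a minimal sketch of leader election with Hazelcast's FencedLock (3.12+ API; the CP Subsystem must be enabled and needs at least three members, and the lock name here is arbitrary):

import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.cp.lock.FencedLock;

public class LeaderElection {
    public static void main(String[] args) {
        HazelcastInstance hz = Hazelcast.newHazelcastInstance();
        FencedLock lock = hz.getCPSubsystem().getLock("service-leader");
        lock.lock();            // blocks until this instance becomes the leader
        try {
            // leader-only work goes here
        } finally {
            lock.unlock();
        }
    }
}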
etcd
53,605,172
13
I created a ConfigMap using kubectl and I can also see it using: kubectl get cm I am just curious where Kubernetes stores this data/information within the cluster. Does it store it in etcd? How do I view it, if it is stored in etcd? Does it store it in any file/folder location or anywhere else? I mean, where does Kubernetes store it internally?
Yes etcd is used for storing ConfigMaps and other resources you deploy to the cluster. See https://matthewpalmer.net/kubernetes-app-developer/articles/how-does-kubernetes-use-etcd.html and note https://github.com/kubernetes/kubernetes/issues/19781#issuecomment-172553264 You view the content of the configmap with 'kubectl get cm -oyaml' i.e. through the k8s API directly as illustrated in https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/ You don't need to look inside etcd to see the content of a configmap.
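If you really do want to see the raw record in etcd, each ConfigMap typically lives under a key of the form /registry/configmaps/<namespace>/<name> (a sketch; this requires direct access to etcd and its client certificates, and the value may be stored as protobuf rather than plain text):

ETCDCTL_API=3 etcdctl get /registry/configmaps/default/my-config --prefix

where my-config is a hypothetical ConfigMap name.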
etcd
53,935,597
13
I am dealing with CRDs and creating custom resources. I need to keep a lot of information about my application in the custom resource. As per the official docs, etcd works with requests up to 1.5MB. I am hitting errors like "error": "Request entity too large: limit is 3145728" I believe the limit specified in the error is 3MB. Any thoughts on this? Any way out of this problem?
The "error": "Request entity too large: limit is 3145728" is probably the default response from kubernetes handler for objects larger than 3MB, as you can see here at L305 of the source code: expectedMsgFor1MB := `etcdserver: request is too large` expectedMsgFor2MB := `rpc error: code = ResourceExhausted desc = trying to send message larger than max` expectedMsgFor3MB := `Request entity too large: limit is 3145728` expectedMsgForLargeAnnotation := `metadata.annotations: Too long: must have at most 262144 bytes` The ETCD has indeed a 1.5MB limit for processing a file and you will find on ETCD Documentation a suggestion to try the--max-request-bytes flag but it would have no effect on a GKE cluster because you don't have such permission on master node. But even if you did, it would not be ideal because usually this error means that you are consuming the objects instead of referencing them which would degrade your performance. I highly recommend that you consider instead these options: Determine whether your object includes references that aren't used; Break up your resource; Consider a volume mount instead; There's a request for a new API Resource: File (orBinaryData) that could apply to your case. It's very fresh but it's good to keep an eye on. If you still need help let me know.
etcd
60,468,110
10
I am trying to understand what the difference is between SLURM's srun and sbatch commands. I will be happy with a general explanation, rather than specific answers to the following questions, but here are some specific points of confusion that can be a starting point and give an idea of what I'm looking for. According to the documentation, srun is for submitting jobs, and sbatch is for submitting jobs for later execution, but the practical difference is unclear to me, and their behavior seems to be the same. For example, I have a cluster with 2 nodes, each with 2 CPUs. If I execute srun testjob.sh & 5x in a row, it will nicely queue up the fifth job until a CPU becomes available, as will executing sbatch testjob.sh. To make the question more concrete, I think a good place to start might be: What are some things that I can do with one that I cannot do with the other, and why? Many of the arguments to both commands are the same. The ones that seem the most relevant are --ntasks, --nodes, --cpus-per-task, --ntasks-per-node. How are these related to each other, and how do they differ for srun vs sbatch? One particular difference is that srun will cause an error if testjob.sh does not have executable permission i.e. chmod +x testjob.sh whereas sbatch will happily run it. What is happening "under the hood" that causes this to be the case? The documentation also mentions that srun is commonly used inside of sbatch scripts. This leads to the question: How do they interact with each other, and what is the "canonical" usecase for each them? Specifically, would I ever use srun by itself?
The documentation says srun is used to submit a job for execution in real time while sbatch is used to submit a job script for later execution. They both accept practically the same set of parameters. The main difference is that srun is interactive and blocking (you get the result in your terminal and you cannot write other commands until it is finished), while sbatch is batch processing and non-blocking (results are written to a file and you can submit other commands right away). If you use srun in the background with the & sign, then you remove the 'blocking' feature of srun, which becomes interactive but non-blocking. It is still interactive though, meaning that the output will clutter your terminal, and the srun processes are linked to your terminal. If you disconnect, you will loose control over them, or they might be killed (depending on whether they use stdout or not basically). And they will be killed if the machine to which you connect to submit jobs is rebooted. If you use sbatch, you submit your job and it is handled by Slurm ; you can disconnect, kill your terminal, etc. with no consequence. Your job is no longer linked to a running process. What are some things that I can do with one that I cannot do with the other, and why? A feature that is available to sbatch and not to srun is job arrays. As srun can be used within an sbatch script, there is nothing that you cannot do with sbatch. How are these related to each other, and how do they differ for srun vs sbatch? All the parameters --ntasks, --nodes, --cpus-per-task, --ntasks-per-node have the same meaning in both commands. That is true for nearly all parameters, with the notable exception of --exclusive. What is happening "under the hood" that causes this to be the case? srun immediately executes the script on the remote host, while sbatch copies the script in an internal storage and then uploads it on the compute node when the job starts. You can check this by modifying your submission script after it has been submitted; changes will not be taken into account (see this). How do they interact with each other, and what is the "canonical" use-case for each of them? You typically use sbatch to submit a job and srun in the submission script to create job steps as Slurm calls them. srun is used to launch the processes. If your program is a parallel MPI program, srun takes care of creating all the MPI processes. If not, srun will run your program as many times as specified by the --ntasks option. There are many use cases depending on whether your program is paralleled or not, has a long-running time or not, is composed of a single executable or not, etc. Unless otherwise specified, srun inherits by default the pertinent options of the sbatch or salloc which it runs under (from here). Specifically, would I ever use srun by itself? Other than for small tests, no. A common use is srun --pty bash to get a shell on a compute job.
Slurm
43,767,866
178
I have a job running a linux machine managed by slurm. Now that the job is running for a few hours I realize that I underestimated the time required for it to finish and thus the value of the --time argument I specified is not enough. Is there a way to add time to an existing running job through slurm?
Use the scontrol command to modify a job scontrol update jobid=<job_id> TimeLimit=<new_timelimit> Use the SLURM time format, eg. for 8 days 15 hours: TimeLimit=8-15:00:00 Requires admin privileges, on some machines. Will be allowed to users only if the job is not running yet, on most machines.
Slurm
28,413,418
115
I suppose it's a pretty trivial question but nevertheless, I'm looking for the (sacct I guess) command that will display the CPU time and memory used by a slurm job ID.
If your job is finished, then the sacct command is what you're looking for. Otherwise, look into sstat. For sacct the --format switch is the other key element. If you run this command: sacct -e you'll get a printout of the different fields that can be used for the --format switch. The details of each field are described in the Job Account Fields section of the man page. For CPU time and memory, CPUTime and MaxRSS are probably what you're looking for. cputimeraw can also be used if you want the number in seconds, as opposed to the usual Slurm time format. sacct --format="CPUTime,MaxRSS"
Slurm
24,020,420
100
Suppose that I have the following simple bash script which I want to submit to a batch server through SLURM: #!/bin/bash #SBATCH -o "outFile"$1".txt" #SBATCH -e "errFile"$1".txt" hostname exit 0 In this script, I simply want to write the output of hostname on a textfile whose full name I control via the command-line, like so: login-2:jobs$ sbatch -D `pwd` exampleJob.sh 1 Submitted batch job 203775 Unfortunately, it seems that my last command-line argument (1) is not parsed through sbatch, since the files created do not have the suffix I'm looking for and the string "$1" is interpreted literally: login-2:jobs$ ls errFile$1.txt exampleJob.sh outFile$1.txt I've looked around places in SO and elsewhere, but I haven't had any luck. Essentially what I'm looking for is the equivalent of the -v switch of the qsub utility in Torque-enabled clusters. Edit: As mentioned in the underlying comment thread, I solved my problem the hard way: instead of having one single script that would be submitted several times to the batch server, each with different command line arguments, I created a "master script" that simply echoed and redirected the same content onto different scripts, the content of each being changed by the command line parameter passed. Then I submitted all of those to my batch server through sbatch. However, this does not answer the original question, so I hesitate to add it as an answer to my question or mark this question solved.
I thought I'd offer some insight because I was also looking for the replacement to the -v option in qsub, which for sbatch can be accomplished using the --export option. I found a nice site here that shows a list of conversions from Torque to Slurm, and it made the transition much smoother. Note that sbatch options must come before the script name (anything after the script is passed to the script as arguments). You can export the environment variable ahead of time in your shell: $ export var_name='1' $ sbatch --export=var_name -D `pwd` exampleJob.sh Or define it directly within the sbatch command just like qsub allowed: $ sbatch --export=var_name='1' -D `pwd` exampleJob.sh Whether this works in the # preprocessors of exampleJob.sh is also another question, but I assume that it should give the same functionality found in Torque.
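A small sketch of how the exported variable is then consumed inside the job script (note that #SBATCH directives are parsed by sbatch before the job runs and do not expand shell variables, so the variable is only usable in the script body):

#!/bin/bash
#SBATCH -o outFile.txt
#SBATCH -e errFile.txt
echo "var_name is: ${var_name}"
hostname

submitted with something like sbatch --export=ALL,var_name='1' exampleJob.sh (the ALL keeps the rest of your environment available to the job as well).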
Slurm
27,708,656
77
Is it possible to expand the number of characters used in the JobName column of the command sacct in SLURM? For example, I currently have: JobID JobName Elapsed NCPUS NTasks State ------------ ---------- ---------- ---------- -------- ---------- 12345 lengthy_na+ 00:00:01 4 1 FAILED and I would like: JobID JobName Elapsed NCPUS NTasks State ------------ ---------- ---------- ---------- -------- ---------- 12345 lengthy_name 00:00:01 4 1 FAILED
You should use the format option, with: sacct --helpformat you'll see the parameters to show, for instance: sacct --format="JobID,JobName%30" will print the job id and the name up to 30 characters: JobID JobName ------------ ------------------------------ 19009 bash 19010 launch.sh 19010.0 hydra_pmi_proxy 19010.1 hydra_pmi_proxy Now, you can customize your own output.
Slurm
42,217,102
69
When I use sinfo I see the following: $ sinfo PARTITION AVAIL TIMELIMIT NODES STATE NODELIST [...] RG3 up 28-00:00:0 1 drain rg3hpc4 [...] What does the state 'drain' mean?
It means no further job will be scheduled on that node, but the currently running jobs will keep running (by contrast with setting the node down which kills all jobs running on the node). Nodes are often set to that state so that some maintenance operation can take place once all running jobs are finished. From the manpage of the scontrol command: If you want to remove a node from service, you typically want to set it's state to "DRAIN" Note that the system administrator most probably gave a reason why the node is drained, and you can see that reason with sinfo -R
Slurm
22,480,627
54
I have searched google and read the documentation. My local cluster is using SLURM. I want to check the following things: How many cores does each node have? How many cores has each job in the queue reserved? Any advice would be much appreciated!
In order to see the details of all the nodes you can use: scontrol show node For a specific node: scontrol show node "nodename" And for the cores of a job you can use the format specifier %C, for instance: squeue -o"%.7i %.9P %.8j %.8u %.2t %.10M %.6D %C" More info about format.
Slurm
41,299,911
51
How do the terms "job", "task", and "step" as used in the SLURM docs relate to each other? AFAICT, a job may consist of multiple tasks, and also it make consist of multiple steps, but, assuming this is true, it's still not clear to me how tasks and steps relate. It would be helpful to see an example showing the full complexity of jobs/tasks/steps.
A job consists of one or more steps, each consisting of one or more tasks each using one or more CPU. Jobs are typically created with the sbatch command, steps are created with the srun command, tasks are requested, at the job level with --ntasks or --ntasks-per-node, or at the step level with --ntasks. CPUs are requested per task with --cpus-per-task. Note that jobs submitted with sbatch have one implicit step; the Bash script itself. Assume the hypothetical job: #SBATCH --nodes 8 #SBATCH --tasks-per-node 8 # The job requests 64 CPUs, on 8 nodes. # First step, with a sub-allocation of 8 tasks (one per node) to create a tmp dir. # No need for more than one task per node, but it has to run on every node srun --nodes 8 --ntasks 8 mkdir -p /tmp/$USER/$SLURM_JOBID # Second step with the full allocation (64 tasks) to run an MPI # program on some data to produce some output. srun process.mpi <input.dat >output.txt # Third step with a sub allocation of 48 tasks (because for instance # that program does not scale as well) to post-process the output and # extract meaningful information srun --ntasks 48 --nodes 6 --exclusive postprocess.mpi <output.txt >result.txt & # Fourth step with a sub-allocation on a single node # to compress the raw output. This step runs at the same time as # the previous one thanks to the ampersand `&` srun --ntasks 12 --nodes 1 --exclusive compress.mpi output.txt & wait Four steps were created and so the accounting information for that job will have 5 lines; one per step plus one for the Bash script itself.
Slurm
46,506,784
50
The terminology used in the sbatch man page might be a bit confusing. Thus, I want to be sure I am getting the options set right. Suppose I have a task to run on a single node with N threads. Am I correct to assume that I would use --nodes=1 and --ntasks=N? I am used to thinking about using, for example, pthreads to create N threads within a single process. Is the result of that what they refer to as "cores" or "cpus per task"? CPUs and threads are not the same things in my mind.
Depending on the parallelism you are using: distributed or shared memory
--ntasks=# : Number of "tasks" (use with distributed parallelism).
--ntasks-per-node=# : Number of "tasks" per node (use with distributed parallelism).
--cpus-per-task=# : Number of CPUs allocated to each task (use with shared memory parallelism).
From this question: if every node has 24 cores, is there any difference between these commands?
sbatch --ntasks 24 [...]
sbatch --ntasks 1 --cpus-per-task 24 [...]
Answer: (by Matthew Mjelde) Yes there is a difference between those two submissions. You are correct that usually ntasks is for mpi and cpus-per-task is for multithreading, but let's look at your commands:
For your first example, the sbatch --ntasks 24 [...] will allocate a job with 24 tasks. These tasks in this case get only 1 CPU each, but may be split across multiple nodes. So you get a total of 24 CPUs across multiple nodes.
For your second example, the sbatch --ntasks 1 --cpus-per-task 24 [...] will allocate a job with 1 task and 24 CPUs for that task. Thus you will get a total of 24 CPUs on a single node.
In other words, a task cannot be split across multiple nodes. Therefore, using --cpus-per-task will ensure it gets allocated to the same node, while using --ntasks can and may allocate it to multiple nodes.
Another good Q&A from CÉCI's support website: Suppose you need 16 cores. Here are some use cases:
you use mpi and do not care about where those cores are distributed: --ntasks=16
you want to launch 16 independent processes (no communication): --ntasks=16
you want those cores to spread across distinct nodes: --ntasks=16 and --ntasks-per-node=1 or --ntasks=16 and --nodes=16
you want those cores to spread across distinct nodes and no interference from other jobs: --ntasks=16 --nodes=16 --exclusive
you want 16 processes to spread across 8 nodes to have two processes per node: --ntasks=16 --ntasks-per-node=2
you want 16 processes to stay on the same node: --ntasks=16 --ntasks-per-node=16
you want one process that can use 16 cores for multithreading: --ntasks=1 --cpus-per-task=16
you want 4 processes that can use 4 cores each for multithreading: --ntasks=4 --cpus-per-task=4
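To make the last case concrete, here is a minimal hybrid-style sketch (the program name my_hybrid_app is a placeholder): it requests 4 tasks with 4 CPUs each and passes the per-task CPU count to OpenMP via the SLURM_CPUS_PER_TASK environment variable.
#!/bin/bash
#SBATCH --ntasks=4
#SBATCH --cpus-per-task=4
# Each of the 4 tasks may use 4 threads
export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK
srun ./my_hybrid_app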
Slurm
51,139,711
49
Using sinfo it shows 3 nodes are in drain state, PARTITION AVAIL TIMELIMIT NODES STATE NODELIST all* up infinite 3 drain node[10,11,12] Which command line should I use to undrain such nodes?
Found an approach: enter the scontrol interpreter (in the command line type scontrol) and then
scontrol: update NodeName=node10 State=DOWN Reason="undraining"
scontrol: update NodeName=node10 State=RESUME
Then
scontrol: show node node10
displays, amongst other info, State=IDLE
Update: some of these nodes got the DRAIN state back; their root partition was full, as shown by e.g. show node a10 reporting Reason=SlurmdSpoolDir is full. In Ubuntu, sudo apt-get clean removed the /var/cache/apt contents, and gzipping some /var/log files also helped free space.
Slurm
29,535,118
42
On a SLURM cluster one can use squeue to get information about jobs on the system. I know that "R" means running; and "PD" meaning pending, but what is "CG"? I understand it to be "canceling" or "failing" from experience, but does "CG" apply when a job successfully closes? What is the G?
"CG" stands for "completing" and it happens to a job that cannot be terminated, probably because of an I/O operation. More detailed info in the Slurm Troubleshooting Guide
Slurm
42,032,634
41
When I launch a computation on the cluster, I usually have a separate program doing the post-processing at the end : sbatch simulation sbatch --dependency=afterok:JOBIDHERE postprocessing I want to avoid mistyping and automatically have the good job id inserted. Any idea? Thanks
You can do something like this: RES=$(sbatch simulation) && sbatch --dependency=afterok:${RES##* } postprocessing The RES variable will hold the result of the sbatch command, something like Submitted batch job 102045. The construct ${RES##* } isolates the last word (see more info here), in the current case the job id. The && part ensures you do not try to submit the second job in the case the first submission fails.
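If your sbatch version supports the --parsable option, a slightly simpler variant is possible, since sbatch then prints only the job id (possibly followed by a semicolon and a cluster name on multi-cluster setups):
jid=$(sbatch --parsable simulation)
# Strip a possible ";clustername" suffix before using the id
sbatch --dependency=afterok:${jid%%;*} postprocessing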
Slurm
19,960,332
40
I have a couple of thousand jobs to run on a SLURM cluster with 16 nodes. These jobs should run only on a subset of the available nodes of size 7. Some of the tasks are parallelized, hence use all the CPU power of a single node while others are single threaded. Therefore, multiple jobs should run at the same time on a single node. None of the tasks should spawn over multiple nodes. Currently I submit each of the jobs as follow: sbatch --nodelist=myCluster[10-16] myScript.sh However this parameter makes slurm to wait till the submitted job terminates, and hence leaves 3 nodes completely unused and, depending on the task (multi- or single-threaded), also the currently active node might be under low load in terms of CPU capability. What are the best parameters of sbatch that force slurm to run multiple jobs at the same time on the specified nodes?
You can work the other way around; rather than specifying which nodes to use, with the effect that each job is allocated all the 7 nodes, specify which nodes not to use:
sbatch --exclude=myCluster[01-09] myScript.sh
and Slurm will never allocate more than 7 nodes to your jobs. Make sure though that the cluster configuration allows node sharing, and that your myScript.sh contains
#SBATCH --ntasks=1 --cpus-per-task=n
with n the number of threads of each job.
Update: since version 23.02, the --nodelist may contain more nodes than specified by --nodes. From the changelog:
-- Allow for --nodelist to contain more nodes than required by --nodes.
Slurm
26,216,727
40
In a sbatch script, you can directly launch programs or scripts (for example an executable file myapp) but in many tutorials people use srun myapp instead. Despite reading some documentation on the topic, I do not understand the difference and when to use each of those syntaxes. I hope this question is precise enough (1st question on SO), thanks in advance for your answers.
The srun command is used to create job 'steps'. First, it will bring better reporting of the resource usage ; the sstat command will provide real-time resource usage for processes that are started with srun, and each step (each call to srun) will be reported individually in the accounting. Second, it can be used to setup many instances of a serial program (program that only use one CPU) into a single job, and micro-schedule those programs inside the job allocation. Finally, for parallel jobs, srun will also play the important role of starting the parallel program and setup the parallel environment. It will start as many instances of the program as were requested with the --ntasks option on the CPUs that were allocated for the job. In the case of a MPI program, it will also handle the communication between the MPI library and Slurm.
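As an illustrative sketch (program names are placeholders), a job script whose work is wrapped in srun calls produces one accounted step per call, which you can inspect with sstat while it runs or sacct afterwards:
#!/bin/bash
#SBATCH --ntasks=4
srun ./preprocess   # becomes step <jobid>.0
srun ./solve        # becomes step <jobid>.1

# While the job runs, from a login node:
#   sstat -j <jobid> --format=JobID,MaxRSS,AveCPU
# After it finishes:
#   sacct -j <jobid> --format=JobID,JobName,Elapsed,MaxRSS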
Slurm
53,636,752
40
I am running a Python code that contains print statements via SLURM. Normally when I run the Python code directly via "python program.py" the print statements appear in the terminal. When I run my program via SLURM, as expected the print statements do not appear in the terminal. How can I save the print statements to a file so I can check them as the program is running? Below is my submission script that I submit via "sbatch submit.sh". Notice that I've already tried two methods to write the output either to test1.out or test2.out. Please let me know where I'm going wrong! #!/bin/bash #SBATCH -J mysubmission #SBATCH -p New #SBATCH -n 1 #SBATCH -t 23:59:00 #SBATCH -o test1.out module load gnu python python program.py > test2.out
By default, print in Python is buffered, meaning that it does not write to files or stdout immediately, and needs to be 'flushed' to force the writing to stdout immediately. See this question for available options. The simplest option is to start the Python interpreter with the -u option. From the python man page: -u Force stdin, stdout and stderr to be totally unbuffered. On systems where it matters, also put stdin, stdout and stderr in binary mode. Note that there is internal buffering in xreadlines(), readlines() and file-object iterators ("for line in sys.stdin") which is not influenced by this option. To work around this, you will want to use "sys.stdin.readline()" inside a "while 1:" loop.
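For the submission script from the question, a minimal (trimmed) sketch of the unbuffered variant is shown below; setting PYTHONUNBUFFERED=1 is an alternative to the -u flag.
#!/bin/bash
#SBATCH -J mysubmission
#SBATCH -n 1
#SBATCH -o test1.out
module load gnu python
# Either pass -u so print statements appear in test2.out as they happen ...
python -u program.py > test2.out
# ... or export PYTHONUNBUFFERED=1 before calling python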
Slurm
33,178,514
38
I'm working in a SLURM cluster and I was running several processes at the same time (on several input files), and using the same bash script. At the end of the job, the process was killed and this is the error I obtained. slurmstepd: error: Detected 1 oom-kill event(s) in step 1090990.batch cgroup. My guess is that there is some issue with memory. But how can I know more about? Did I not provide enough memory? or as user I was requesting more than what I have access to? Any suggestion?
The approved answer is correct but, to be more precise, the error
slurmstepd: error: Detected 1 oom-kill event(s) in step 1090990.batch cgroup.
indicates that you are low on Linux's CPU RAM memory.
If you were, for instance, running some computation on GPU, requesting more GPU memory than what is available would result in an error like this (example for PyTorch):
RuntimeError: CUDA out of memory. Tried to allocate 8.94 GiB (GPU 0; 15.90 GiB total capacity; 8.94 GiB already allocated; 6.34 GiB free; 0 bytes cached)
Check out the explanation in this article for more details.
Solution: increase or add the parameter --mem-per-cpu in your script.
If you are using sbatch (sbatch your_script.sh) to run your script, add the following line to it:
#SBATCH --mem-per-cpu=<value bigger than you've requested before>
If you are using srun, add the parameter like this:
srun --mem-per-cpu=<value bigger than you've requested before> python3 your_script.py
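To see how much memory the job actually used before it was killed, and how much was requested, you can query the accounting database; MaxRSS and ReqMem are standard sacct fields (replace <jobid> with your job id):
sacct -j <jobid> --format=JobID,JobName,State,ExitCode,ReqMem,MaxRSS,Elapsed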
Slurm
52,421,068
36
I am using slurm on a cluster to run jobs and submit a script that looks like below with sbatch: #!/usr/bin/env bash #SBATCH -o slurm.sh.out #SBATCH -p defq #SBATCH --mail-type=ALL #SBATCH [email protected] echo "hello" Can I somehow comment out a #SBATCH line, e.g. the #SBATCH [email protected] in this script? Since the slurm instructions are bash comments themselves I would not know how to achieve this.
just add another # at the beginning. ##SBATCH --mail-user... This will not be processed by Slurm
Slurm
40,346,871
34
I wanted to run a python script main.py multiple times with different arguments through a sbatch_run.sh script as in: #!/bin/bash #SBATCH --job-name=sbatch_run #SBATCH --array=1-1000 #SBATCH --exclude=node047 arg1=10 #arg to be change during runs arg2=12 #arg to be change during runs python main.py $arg1 $arg2 The arguments are encoded in the bash file ran by sbatch. I was worried that if I ran sbatch_run.sh multiple times one after the other but changing the value of arg1 and arg2 during each run, that it might cause errors in my runs. For example if I do: sbatch sbatch_run.sh # with arg1=10 and arg2=12 and then immediately after I change sbatch_run.sh but run the file again as in: sbatch sbatch_run.sh # with arg1=69 and arg2=666 would case my runs to all run with the last one (i.e. arg1=69 and arg2=666) instead of each run with its own arguments. I know for sure that if I hard code the arguments in main.py and then run the same sbatch script but change the main.py it will run the last one. I was wondering if that is the case too if I change the sbatch_run.sh script. Just so you know, I did try this experiment, by running 1000 scripts, then some get queued and put a sleep command and then change the sbatch_run.sh. It seems to not change what my run is, however, if I am wrong this is way too important to be wrong by accident and wanted to make sure I asked too. For the record I ran: #!/bin/bash #SBATCH --job-name=ECHO #SBATCH --array=1-1000 #SBATCH --exclude=node047 sleep 15 echo helloworld echo 5 and then change the echo to echo 10 or echo byebyeworld.
When sbatch is run, Slurm copies the submission script to its internal database ; you can convince yourself with the following experiment: $ cat submit.sh #!/bin/bash #SBATCH --hold echo helloworld The --hold is there to make sure the job does not start. Submit it : $ sbatch submit.sh Then modify the submission script: $ sed -i 's/hello/bye/' submit.sh $ cat submit.sh #!/bin/bash #SBATCH --hold echo byeworld and now use control show job to see the script Slurm is planning to run: $ scontrol show -ddd job YOURJOBID JobId=******* JobName=submit.sh [...] BatchScript= #!/bin/bash #SBATCH --hold echo helloworld [...] It hasn't changed although the original script has. [EDIT] Recent versions of Slurm use scontrol write batch_script <job_id> [<optional_filename>] rather than scontrol show -dd job to write the submission script to a file named <optional_filename>. The optional filename can be - to display the script to the screen rather than save it to a file.
Slurm
38,778,844
24
Would someone be able to clarify what each of these things actually are? From what I gathered, nodes are computing points within the cluster, essentially a single computer. Tasks are processes that can be executed either on a single node or on multiple nodes. And cores are basically how much of a CPU on a single node do you want to be allocated to executing the task assigned to that CPU. Is this correct? Am I confusing something?
The terms can have different meanings in different contexts, but if we stick to a Slurm context:
A (compute) node is a computer part of a larger set of nodes (a cluster). Besides compute nodes, a cluster comprises one or more login nodes, file server nodes, management nodes, etc. A compute node offers resources such as processors, volatile memory (RAM), permanent disk space (e.g. SSD), accelerators (e.g. GPU), etc.
A core is the part of a processor that does the computations. A processor comprises multiple cores, as well as a memory controller, a bus controller, and possibly many other components. A processor in the Slurm context is referred to as a socket, which actually is the name of the slot on the motherboard that hosts the processor.
A single core can have one or two hardware threads. This is a technology that allows virtually doubling the number of cores the operating system perceives while only doubling part of the core components -- typically the components related to memory and I/O and not the computation components. Hardware multi-threading is very often disabled in HPC.
A CPU in a general context refers to a processor, but in the Slurm context, a CPU is a consumable resource offered by a node. It can refer to a socket, a core, or a hardware thread, based on the Slurm configuration.
The role of Slurm is to match those resources to jobs. A job comprises one or more (sequential) steps, and each step has one or more (parallel) tasks. A task is an instance of a running program, i.e. a process, possibly along with subprocesses or software threads. Multiple tasks are dispatched on possibly multiple nodes depending on how many cores each task needs. The number of cores a task needs depends on the number of subprocesses or software threads in the instance of the running program. The idea is to map each hardware thread to one core, and make sure that each task has all of its cores assigned on the same node.
Slurm
65,603,381
24
I submitted several jobs via SLURM to our school's HPC cluster. Because the shell scripts all have the same name, so the job names appear exactly the same. It looks like [myUserName@rclogin06 ~]$ sacct -u myUserName JobID JobName Partition Account AllocCPUS State ExitCode ------------ ---------- ---------- ---------- ---------- ---------- -------- 12577766 run.sh general ourQueue_+ 4 RUNNING 0:0 12659777 run.sh general ourQueue_+ 8 RUNNING 0:0 12675983 run.sh general ourQueue_+ 16 RUNNING 0:0 How can I know from which directory a job is submitted so that I can differentiate the jobs?
You can use the scontrol command to see the job details.
$ scontrol show job <jobid>
For example, for a running job on our SLURM cluster:
$ scontrol show job 1665191
JobId=1665191 Name=tasktest
......
Shared=OK Contiguous=0 Licenses=(null) Network=(null)
Command=/lustre/work/.../slurm_test/task.submit
WorkDir=/lustre/work/.../slurm_test
You are looking for the last line, WorkDir.
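For jobs that are still in the queue or running, the working directory can also be listed directly; this sketch relies on the %Z format specifier of squeue, which prints the job's working directory:
squeue -u $USER -o "%.10i %.9P %.8j %Z"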
Slurm
24,586,699
22
I have submitted a job to a SLURM queue, the job has run and completed. I then check the completed jobs using the sacct command. But looking at the results of the sacct command I notice additional results that I did not expect: JobID JobName State NCPUS Timelimit 5297048 test COMPLETED 1 00:10:00 5297048.bat+ batch COMPLETED 1 5297048.ext+ extern COMPLETED 1 Can anyone explain what the 'batch' and 'extern' jobs are and what their purpose is. Why does the extern job always complete even when the primary job fails. I have attempted to search the documentation but have not found a satisfactory and complete answer. EDIT: Here's the script I am submitting to produce the above sacct output: #!/bin/bash echo test_script > done.txt With the following sbatch command: sbatch -A BRIDGE-CORE-SL2-CPU --nodes=1 --ntasks=1 -p skylake --cpus-per-task 1 -J jobname -t 00:10:00 --output=./output.out --error=./error.err < test.sh
A Slurm job contains multiple jobsteps, which are all accounted for (in terms of resource usage) separately by Slurm. Usually, these steps are created using srun/mpirun and enumerated starting from 0. But in addition to that, there are sometimes two special steps. For example, take the following job:
sbatch -n 4 --wrap="srun hostname; srun echo Hello World"
This resulted in the following sacct output:
JobID        JobName    Partition  Account    AllocCPUS  State      ExitCode
------------ ---------- ---------- ---------- ---------- ---------- --------
5163571      wrap       medium     admin      4          COMPLETED  0:0
5163571.bat+ batch                 admin      4          COMPLETED  0:0
5163571.ext+ extern                admin      4          COMPLETED  0:0
5163571.0    hostname              admin      4          COMPLETED  0:0
5163571.1    echo                  admin      4          COMPLETED  0:0
The two srun calls created the steps 5163571.0 and 5163571.1. The step 5163571.bat+ accounts for the resources needed by the batch script (which in this case is just srun hostname; srun echo Hello World; --wrap just puts that into a file and adds #!/bin/sh). Many non-MPI programs do a lot of calculations in the batch step, so the resource usage is accounted there.
And now for 5163571.ext+: this step accounts for all resource usage by that job outside of Slurm. It only shows up if the PrologFlags parameter includes contain. Examples of processes belonging to a Slurm job but not directly controlled by Slurm are ssh sessions. If you ssh into a node where one of your jobs runs, your session will be placed into the context of the job (and you will be limited to your available resources by cgroups, if that is set up). And all calculations you do in that ssh session will be accounted for in the .extern job step.
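If you only care about the job allocation itself and want to hide the batch/extern/numbered steps, the -X (--allocations) flag of sacct does exactly that; a small sketch:
# One line per job, steps suppressed
sacct -X -j <jobid> --format=JobID,JobName,State,Elapsed,AllocCPUS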
Slurm
52,447,602
22
I'm trying to obtain a value from another file and use this within a SLURM submission script. However, I get an error that the value is non-numerical, in other words, it is not being dereferenced. Here is the script: #!/bin/bash # This reads out the number of procs based on the decomposeParDict numProcs=`awk '/numberOfSubdomains/ {print $2}' ./meshModel/decomposeParDict` echo "NumProcs = $numProcs" #SBATCH --job-name=SnappyHexMesh #SBATCH --output=./logs/SnappyHexMesh.log # #SBATCH --ntasks=`$numProcs` #SBATCH --time=240:00 #SBATCH --mem-per-cpu=4000 #First run blockMesh blockMesh #Now decompose the mesh decomposePar #Now run snappy in parallel mpirun -np $numProcs snappyHexMesh -parallel -overwrite When I run this as a normal Bash shell script, it prints out the number of procs correctly and makes the correct mpirun call. Thus the awk command parses out the number of procs correctly and the variable is dereferenced as expected. However, when I submit this to SLURM using: sbatch myScript.sh I get the error: sbatch: error: Invalid numeric value "`$numProcs`" for number of tasks. Can anyone help with this?
This won't work. What happens when you run sbatch myscript.sh is that Slurm parses the script for those special #SBATCH lines, generates a job record, and stores the batch script somewhere. The batch script is executed only later when the job runs.
So you need to structure your workflow in a slightly different way, and first calculate the number of procs you need before submitting the job. Note that you can use something like sbatch -n $numProcs myscript.sh, so you don't need to autogenerate the script (also, mpirun should be able to get the number of procs in your allocation automatically, so there is no need to use "-np").
Slurm
24,508,040
21
I could get access to a computing cluster, specifically one node with two 12-Core CPUs, which is running with Slurm Workload Manager. I would like to run TensorFlow on that system but unfortunately I were not able to find any information about how to do this or if this is even possible. I am new to this but as far as I understand it, I would have to run TensorFlow by creating a Slurm job and can not directly execute python/tensorflow via ssh. Has anyone an idea, tutorial or any kind of source on this topic?
It's relatively simple.
Under the simplifying assumption that you request one process per host, slurm will provide you with all the information you need in environment variables, specifically SLURM_PROCID, SLURM_NPROCS and SLURM_NODELIST.
For example, you can initialize your task index, the number of tasks and the nodelist as follows:
import os
from hostlist import expand_hostlist
task_index  = int( os.environ['SLURM_PROCID'] )
n_tasks     = int( os.environ['SLURM_NPROCS'] )
tf_hostlist = [ ("%s:22222" % host) for host in expand_hostlist( os.environ['SLURM_NODELIST']) ]
Note that slurm gives you a host list in its compressed format (e.g., "myhost[11-99]"), that you need to expand. I do that with module hostlist by Kent Engström, available here https://pypi.python.org/pypi/python-hostlist
At that point, you can go right ahead and create your TensorFlow cluster specification and server with the information you have available, e.g.:
cluster = tf.train.ClusterSpec( {"your_taskname" : tf_hostlist } )
server  = tf.train.Server( cluster.as_cluster_def(),
                           job_name   = "your_taskname",
                           task_index = task_index )
And you're set! You can now perform TensorFlow node placement on a specific host of your allocation with the usual syntax:
for idx in range(n_tasks):
    with tf.device("/job:your_taskname/task:%d" % idx ):
        ...
A flaw with the code reported above is that all your jobs will instruct Tensorflow to install servers listening at fixed port 22222. If multiple such jobs happen to be scheduled to the same node, the second one will fail to listen to 22222.
A better solution is to let slurm reserve ports for each job. You need to bring your slurm administrator on board and ask him to configure slurm so it allows you to ask for ports with the --resv-ports option. In practice, this requires asking them to add a line like the following in their slurm.conf:
MpiParams=ports=15000-19999
Before you bug your slurm admin, check what options are already configured, e.g., with:
scontrol show config | grep MpiParams
If your site already uses an old version of OpenMPI, there's a chance an option like this is already in place.
Then, amend my first snippet of code as follows:
import os
from hostlist import expand_hostlist
task_index  = int( os.environ['SLURM_PROCID'] )
n_tasks     = int( os.environ['SLURM_NPROCS'] )
port        = int( os.environ['SLURM_STEP_RESV_PORTS'].split('-')[0] )
tf_hostlist = [ ("%s:%s" % (host,port)) for host in expand_hostlist( os.environ['SLURM_NODELIST']) ]
Good luck!
Slurm
34,826,736
21
I have the following script to submit job with slurm: #!/bin/sh #!/bin/bash #SBATCH -J $3 #job_name #SBATCH -n 1 #Number of processors #SBATCH -p CA nwchem $1 > $2 The first argument ($1) is my input, the second ($2) is my output and I would like the third ($3) to be my jobname. If I do like this, the job name is '$3'. How can I proceed to give the jobname as an argument of the script? Thanks
The SBATCH directives are seen as comments by the shell and it does not perform variable substitution on $3. There are several courses of action: Option 1: pass the -J argument on the command line: sbatch -J thejobname submission_script.sh input.data output.res Option 2: pass the script through stdin replacing the position arguments ($1, $2, etc. by named ones) IN=input.data OUT=output.res NAME=thejobname <submission_script.sh sbatch Option 3: write a wrapper #!/bin/bash sbatch <<EOT #!/bin/sh #SBATCH -J $3 #job_name #SBATCH -n 1 #Number of processors #SBATCH -p CA nwchem $1 > $2 EOT and use it like this: submit.sh input.data output.red thejobname Also note that the second shebang (#!/bin/bash) is useless and ignored by the (parent) shell.
Slurm
36,279,200
21
You need to run, say, 30 srun jobs, but ensure each of the jobs is run on a node from the particular list of nodes (that have the same performance, to fairly compare timings). How would you do it? What I tried: srun --nodelist=machineN[0-3] <some_cmd> : runs <some_cmd> on all the nodes simultaneously (what i need: to run <some_cmd> on one of the available nodes from the list) srun -p partition seems to work, but needs a partition that contains exactly machineN[0-3], which is not always the case. Ideas?
Update: Version 23.02 has fixed this, as can be read in the Release notes: Allow for --nodelist to contain more nodes than required by --nodes. You can go the opposite direction and use the --exclude option of sbatch: srun --exclude=machineN[4-XX] <some_cmd> Then slurm will only consider nodes that are not listed in the excluded list. If the list is long and complicated, it can be saved in a file. Another option is to check whether the Slurm configuration includes ''features'' with sinfo --format "%20N %20f" If the 'features' column shows a comma-delimited list of features each node has (might be CPU family, network connection type, etc.), you can select a subset of the nodes with a specific features using srun --constraint=<some_feature> <some_cmd>
Slurm
37,480,603
20
Is there a way in python 3 to log the memory (ram) usage, while some program is running? Some background info. I run simulations on a hpc cluster using slurm, where I have to reserve some memory before submitting a job. I know that my job require a lot of memory, but I am not sure how much. So I was wondering if there is a simple solution for logging the memory over time.
You can do that with the memory_profiler package. Just by adding a decorator @profile to a function, you will get an output like this:
Line #    Mem usage    Increment   Line Contents
==============================================
     3                             @profile
     4      5.97 MB      0.00 MB   def my_func():
     5     13.61 MB      7.64 MB       a = [1] * (10 ** 6)
     6    166.20 MB    152.59 MB       b = [2] * (2 * 10 ** 7)
     7     13.61 MB   -152.59 MB       del b
     8     13.61 MB      0.00 MB       return a
Otherwise, the easiest way to do it is to ask Slurm afterwards with the sacct -l -j <JobId> command (look for the MaxRSS column) so that you can adapt for further jobs. Also, you can use the top command while running the program to get an idea of its memory consumption. Look for the RES column.
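For a job that is currently running, Slurm itself can report the memory in use; sstat works on job steps, so for a plain sbatch job you typically query the .batch step (replace <jobid> accordingly):
# Live memory of the batch step of a running job
sstat -j <jobid>.batch --format=JobID,MaxRSS,MaxVMSize
# After the job has finished
sacct -j <jobid> --format=JobID,MaxRSS,ReqMem,Elapsed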
Slurm
47,410,932
20
as administrator I need to give the maximum priority to a given job. I have found that submission options like: --priority=<value> or --nice[=adjustment] could be useful, but I do not know which values I should assign them in order to provide the job with the highest priority. Another approach could be to set a low priority by default to all the jobs and to the special ones increase it. Any idea of how I could carry it out? EDIT: I am using sched/backfill policy and the default job priority policy (FIFO). Thank you.
I found a solution that works without the need of using PriorityType=priority/multifactor (as suggested by Bub Espinja): $ scontrol update job=<job-id> Priority=<any-integer> The above command will update the priority of the job and update the queue accordingly. The minimum priority needed to become the next one in line can be found by checking the priority of the next pending job and adding one to it. You can find the priority of a job using the following: $ scontrol show job=<job-id> (scontrol update can be used to change many aspects of a job, such as time limit and others.) EDIT: I just learned one can do $ scontrol top <job-id> to put a job at the top of their queue.
Slurm
39,787,477
19
I'm starting the SLURM job with script and script must work depending on it's location which is obtained inside of script itself with SCRIPT_LOCATION=$(realpath $0). But SLURM copies script to slurmd folder and starts job from there and it screws up further actions. Are there any option to get location of script used for slurm job before it has been moved/copied? Script is located in network shared folder /storage/software_folder/software_name/scripts/this_script.sh and it must to: get it's own location return the software_name folder copy the software_name folder to a local folder /node_folder on node run another script from copied folder /node_folder/software_name/scripts/launch.sh My script is #!/bin/bash #SBATCH --nodes=1 #SBATCH --partition=my_partition_name # getting location of software_name SHARED_PATH=$(dirname $(dirname $(realpath $0))) # separating the software_name from path SOFTWARE_NAME=$(basename $SHARED_PATH) # target location to copy project LOCAL_SOFTWARE_FOLDER='/node_folder' # corrected path for target LOCAL_PATH=$LOCAL_SOFTWARE_FOLDER/$SOFTWARE_NAME # Copying software folder from network storage to local cp -r $SHARED_PATH $LOCAL_SOFTWARE_FOLDER # running the script sh $LOCAL_PATH/scripts/launch.sh It runs perfectly, when I run it on the node itself (without using SLURM) via: sh /storage/software/scripts/this_script.sh. In case of running it with SLURM as sbatch /storage/software/scripts/this_script.sh it is assigned to one of nodes, but: before run it is copied to /var/spool/slurmd/job_number/slurm_script and it screws everything up since $(dirname $(dirname $(realpath $0))) returns /var/spool/slurmd Is it possible to get original location (/storage/software_folder/software_name/) inside of script when it is started with SLURM? P.S. All machines are running Fedora 30 (x64) UPDATE 1 There was a suggestion to run as sbatch -D /storage/software_folder/software_name ./scripts/this_script.sh and use the SHARED_PATH="${SLURM_SUBMIT_DIR}" inside of script itself. But it raise the error sbatch: error: Unable to open file ./scripts/this_script.sh. Also, I tried to use absolute paths: sbatch -D /storage/software_folder/software_name /storage/software_folder/software_name/scripts/this_script.sh. It tries to run, but: in such case it uses specified folder for creating output file only software still doesn't want to run attempt to use echo "${SLURM_SUBMIT_DIR}" inside of script prints /home/username_who_started_script instead of /storage/software_folder/software_name Any other suggestions? UPDATE 2: Also tried to use #SBATCH --chdir=/storage/software_folder/software_name inside of script, but in such case echo "${SLURM_SUBMIT_DIR}" returns /home/username_who_started_scriptor / (if run as root) UPDATE 3 Approach with ${SLURM_SUBMIT_DIR} worked only if task is ran as: cd /storage/software_folder/software_name sbatch ./scripts/this_script.sh But it doesn't seem to be a proper solution. Are there any other ways? SOLUTION #!/bin/bash #SBATCH --nodes=1 #SBATCH --partition=my_partition_name # check if script is started via SLURM or bash # if with SLURM: there variable '$SLURM_JOB_ID' will exist # `if [ -n $SLURM_JOB_ID ]` checks if $SLURM_JOB_ID is not an empty string if [ -n $SLURM_JOB_ID ]; then # check the original location through scontrol and $SLURM_JOB_ID SCRIPT_PATH=$(scontrol show job $SLURM_JOBID | awk -F= '/Command=/{print $2}') else # otherwise: started with bash. Get the real location. 
SCRIPT_PATH=$(realpath $0) fi # getting location of software_name SHARED_PATH=$(dirname $(dirname $(SCRIPT_PATH))) # separating the software_name from path SOFTWARE_NAME=$(basename $SHARED_PATH) # target location to copy project LOCAL_SOFTWARE_FOLDER='/node_folder' # corrected path for target LOCAL_PATH=$LOCAL_SOFTWARE_FOLDER/$SOFTWARE_NAME # Copying software folder from network storage to local cp -r $SHARED_PATH $LOCAL_SOFTWARE_FOLDER # running the script sh $LOCAL_PATH/scripts/launch.sh
You can get the initial (i.e. at submit time) location of the submission script from scontrol like this:
scontrol show job "$SLURM_JOB_ID" | awk -F= '/Command=/{print $2}'
So you can replace the realpath $0 part with the above. This will only work within a Slurm allocation of course. So if you want the script to work in any situation, you will need some logic like:
if [ -n "${SLURM_JOB_ID:-}" ] ; then
    THEPATH=$(scontrol show job "$SLURM_JOB_ID" | awk -F= '/Command=/{print $2}')
else
    THEPATH=$(realpath "$0")
fi
and then proceed with
SHARED_PATH=$(dirname "$(dirname "${THEPATH}")")
Slurm
56,962,129
19
I am queuing multiple jobs in SLURM. Can I limit the number of parallel running jobs in slurm? Thanks in advance!
If you are not the administrator, you can hold some jobs if you do not want them all to start at the same time, with scontrol hold <JOBID>, and you can delay the submission of some jobs with sbatch --begin=YYYY-MM-DD.
Also, if it is a job array, you can limit the number of jobs in the array that are concurrently running with for instance --array=1-100%25 to have 100 jobs in the array but only 25 of them running.
Finally, you can use the --dependency=singleton option that will only allow one of a set of jobs with the same --job-name to be running at a time. If you choose three names and distribute those names to all your jobs and use that option, you are effectively restricting yourself to 3 running jobs max.
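Two small sketches of these ideas (script names are placeholders): a throttled job array, and the singleton trick with a shared job name.
# 100 array tasks, at most 25 running at once
sbatch --array=1-100%25 my_array_job.sh

# Only one job named "slot1" runs at a time; the others with that name wait
for i in $(seq 1 10); do
    sbatch --job-name=slot1 --dependency=singleton my_job.sh
done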
Slurm
42,812,425
18
I want to run a script on cluster (SBATCH file). How can activate my virtual environment (path/to/env_name/bin/activate). Does I need only to add the following code to my_script.sh file? module load python/2.7.14 source "/pathto/Python_directory/ENV2.7_new/bin/activate"
You mean to activate a specific Python environment as part of your submission to Slurm? This is what I add to my job script and it works well. Note that I use Anaconda, which by default adds the required paths to my .bashrc script after installation. Hope this helps. .... # define and create a unique scratch directory SCRATCH_DIRECTORY=/global/work/${USER}/kelp/${SLURM_JOBID} mkdir -p ${SCRATCH_DIRECTORY} cd ${SCRATCH_DIRECTORY} # Activate Anaconda work environment for OpenDrift source /home/${USER}/.bashrc source activate MyEnvironment # we execute the job and time it time mpirun python slurmscript.py
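For a plain virtualenv (not Anaconda), a minimal job script sketch using the paths from the question could look like this; my_script.py is a placeholder:
#!/bin/bash
#SBATCH --ntasks=1
module load python/2.7.14
source /pathto/Python_directory/ENV2.7_new/bin/activate
python my_script.py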
Slurm
53,545,690
18
I am using SLURM to dispatch jobs on a supercomputer. I have set the --output=log.out option to place the content from a job's stdout into a file (log.out). I'm finding that the file is updated every 30-60 minutes, making it difficult for me to check on the status of my jobs. Any idea why it takes so long to update this file? Is there a way to change settings so that this file is updated more frequently? Using SLURM 14.03.4-2
This may be related to buffering. Have you tried disabling output buffering as suggested in here? I would recommend the stdbuf option: stdbuf -o0 -e0 command But can't be sure without more information, as I've never experienced a behavior like that. Which filesystem are you using? Also if you are using srun to run your commands you can use the --unbuffered option which disables the output buffering.
Slurm
25,170,763
16
When using bsub with LSF, the -o option gave a lot of details such as when the job started and ended and how much memory and CPU time the job took. With SLURM, all I get is the same standard output that I'd get from running a script without LSF. For example, given this Perl 6 script: warn "standard error stream"; say "standard output stream"; Submitted thus: sbatch -o test.o%j -e test.e%j -J test_warn --wrap 'perl6 test.p6' Resulted in the file test.o34380: Testing standard output and the file test.e34380: Testing standard Error in block <unit> at test.p6:2 With LSF, I'd get all kinds of details in the standard output file, something like: Sender: LSF System <lsfadmin@my_node> Subject: Job 347511: <test> Done Job <test> was submitted from host <my_cluster> by user <username> in cluster <my_cluster_act>. Job was executed on host(s) <my_node>, in queue <normal>, as user <username> in cluster <my_cluster_act>. </home/username> was used as the home directory. </path/to/working/directory> was used as the working directory. Started at Mon Mar 16 13:10:23 2015 Results reported at Mon Mar 16 13:10:29 2015 Your job looked like: ------------------------------------------------------------ # LSBATCH: User input perl6 test.p6 ------------------------------------------------------------ Successfully completed. Resource usage summary: CPU time : 0.19 sec. Max Memory : 0.10 MB Max Swap : 0.10 MB Max Processes : 2 Max Threads : 3 The output (if any) follows: standard output stream PS: Read file <test.e_347511> for stderr output of this job. Update: One or more -v flags to sbatch gives more preliminary information, but doesn't change the standard output. Update 2: Use seff JOBID for the desired info (where JOBID is the actual number). Just be aware that it collects data once a minute, so it might say that your max memory usage was 2.2GB, even though your job was killed due to using more than the 4GB of memory you requested.
UPDATED ANSWER: Years after my original answer, a friend pointed out seff to me, which is by far the best way to get this info: seff JOBID Just be aware that memory consumption is not constantly monitored, so if your job gets killed due to using too much memory, then know that it really did go over what you requested even if seff reports less. ORIGINAL ANSWER: For recent jobs, try sacct -l Look under the "Job Accounting Fields" section of the documentation for descriptions of each of the three dozen or so columns in the output. For just the job ID, maximum RAM used, maximum virtual memory size, start time, end time, CPU time in seconds, and the list of nodes on which the jobs ran. By default this just gives info on jobs run the same day (see --starttime or --endtime options for getting info on jobs from other days): sacct --format=jobid,MaxRSS,MaxVMSize,start,end,CPUTimeRAW,NodeList This will give you output like: JobID MaxRSS MaxVMSize Start End CPUTimeRAW NodeList ------------ ------- ---------- ------------------- ------------------- ---------- -------- 36511 2015-04-29T11:34:37 2015-04-29T11:34:37 0 c50b-20 36511.batch 660K 181988K 2015-04-29T11:34:37 2015-04-29T11:34:37 0 c50b-20 36514 2015-04-29T12:18:46 2015-04-29T12:18:46 0 c50b-20 36514.batch 656K 181988K 2015-04-29T12:18:46 2015-04-29T12:18:46 0 c50b-20 Use --state COMPLETED for checking previously completed jobs. When checking a state other than RUNNING, you have to give a start or end time. sacct --starttime 08/01/15 --state COMPLETED --format=jobid,MaxRSS,MaxVMSize,start,end,CPUTImeRaw,NodeList,ReqCPUS,ReqMem,Elapsed,Timelimit You can also get work directory about the job using scontrol: scontrol show job 36514 Which will give you output like: JobId=36537 JobName=sbatch UserId=username(123456) GroupId=my_group(678) ...... WorkDir=/path/to/work/dir However, by default, scontrol can only access that information for about five minutes after the job finishes, after which it is purged from memory.
Slurm
29,928,925
16
In slurm, calling the command squeue -u <username> will list all the jobs that are pending or active for a given user. I am wondering if there was a quick way to tally them all so that I know how many outstanding jobs there are, including pending and actively running jobs. Thanks!
I would interpret "quick command" differently. Additionally I would add -r for cases when you are using job arrays:
squeue -u <username> -h -t pending,running -r | wc -l
The option -h removes the header, and wc -l (word count) counts the lines of the output.
Eventually I am using it with watch:
watch 'squeue -u <username> -h -t pending,running -r | wc -l'
Slurm
53,037,185
16
I am trying to launch a large number of job steps using a batch script. The different steps can be completely different programs and do need exactly one CPU each. First I tried doing this using the --multi-prog argument to srun. Unfortunately, when using all CPUs assigned to my job in this manner, performance degrades massively. The run time increases to almost its serialized value. By undersubscribing I could ameliorate this a little. I couldn't find anything online regarding this problem, so I assumed it to be a configuration problem of the cluster I am using. So I tried going a different route. I implemented the following script (launched via sbatch my_script.slurm): #!/bin/bash #SBATCH -o $HOME/slurm/slurm_out/%j.%N.out #SBATCH --error=$HOME/slurm/slurm_out/%j.%N.err_out #SBATCH --get-user-env #SBATCH -J test #SBATCH -D $HOME/slurm #SBATCH --export=NONE #SBATCH --ntasks=48 NR_PROCS=$(($SLURM_NTASKS)) for PROC in $(seq 0 $(($NR_PROCS-1))); do #My call looks like this: #srun --exclusive -n1 bash $PROJECT/call_shells/call_"$PROC".sh & srun --exclusive -n1 hostname & pids[${PROC}]=$! #Save PID of this background process done for pid in ${pids[*]}; do wait ${pid} #Wait on all PIDs, this returns 0 if ANY process fails done I am aware, that the --exclusive argument is not really needed in my case. The shell scripts called contain the different binaries and their arguments. The remaining part of my script relies on the fact that all processes have finished hence the wait. I changed the calling line to make it a minimal working example. At first this seemed to be the solution. Unfortunately when increasing the number of nodes used in my job allocation (for example by increasing --ntasks to a number larger than the number of CPUs per node in my cluster), the script does not work as expected anymore, returning srun: Warning: can't run 1 processes on 2 nodes, setting nnodes to 1 and continuing using only one node (i.e. 48 CPUs in my case, which go through the job steps as fast as before, all processes on the other node(s) are subsequently killed). This seems to be the expected behaviour, but I can't really understand it. Why is it that every job step in a given allocation needs to include a minimum number of tasks equal to the number of nodes included in the allocation. I ordinarily really do not care at all about the number of nodes used in my allocation. How can I implement my batch script, so it can be used on multiple nodes reliably?
Found it!
The nomenclature and the many command line options to slurm confused me. The solution is given by
#!/bin/bash
#SBATCH -o $HOME/slurm/slurm_out/%j.%N.out
#SBATCH --error=$HOME/slurm/slurm_out/%j.%N.err_out
#SBATCH --get-user-env
#SBATCH -J test
#SBATCH -D $HOME/slurm
#SBATCH --export=NONE
#SBATCH --ntasks=48

NR_PROCS=$(($SLURM_NTASKS))
for PROC in $(seq 0 $(($NR_PROCS-1)));
do
    #My call looks like this:
    #srun --exclusive -N1 -n1 bash $PROJECT/call_shells/call_"$PROC".sh &
    srun --exclusive -N1 -n1 hostname &
    pids[${PROC}]=$!    #Save PID of this background process
done

for pid in ${pids[*]};
do
    wait ${pid} #Wait on all PIDs, this returns 0 if ANY process fails
done
This specifies that each job step runs on exactly one node with a single task only.
Slurm
24,056,961
15
I am trying to run some parallel code on slurm, where the different processes do not need to communicate. Naively I used python's slurm package. However, it seems that I am only using the cpu's on one node. For example, if I have 4 nodes with 5 cpu's each, I will only run 5 processes at the same time. How can I tell multiprocessing to run on different nodes? The python code looks like the following import multiprocessing def hello(): print("Hello World") pool = multiprocessing.Pool() jobs = [] for j in range(len(10)): p = multiprocessing.Process(target = run_rel) jobs.append(p) p.start() The problem is similar to this one, but there it has not been solved in detail.
Your current code will run 10 times on 5 processors, on a SINGLE node where you start it. It has nothing to do with SLURM now. You will have to sbatch the script to SLURM.
If you want to run this script on 5 cores with SLURM, modify the script like this:
#!/usr/bin/python3

#SBATCH --output=wherever_you_want_to_store_the_output.log
#SBATCH --partition=whatever_the_name_of_your_SLURM_partition_is
#SBATCH -n 5 # 5 cores

import sys
import os
import multiprocessing

# Necessary to add cwd to path when script run
# by SLURM (since it executes a copy)
sys.path.append(os.getcwd())

def hello():
    print("Hello World")

pool = multiprocessing.Pool()
jobs = []
for j in range(10):
    p = multiprocessing.Process(target=hello)
    jobs.append(p)
    p.start()
And then execute the script with
sbatch my_python_script.py
on one of the nodes where SLURM is installed.
However this will allocate your job to a SINGLE node as well, so the speed will be the very same as if you would just run it on a single node. I don't know why you would want to run it on different nodes when you have just 5 processes. It will be faster just to run on one node. If you allocate more than 5 cores, in the beginning of the python script, then SLURM will allocate more nodes for you.
Slurm
39,974,874
15
I have a problem where I need to launch the same script but with different input arguments. Say I have a script myscript.py -p <par_Val> -i <num_trial>, where I need to consider N different par_values (between x0 and x1) and M trials for each value of par_values. Each trial of M is such that almost reaches the time limits of the cluster where I am working on (and I don't have priviledges to change this). So in practice I need to run NxM independent jobs. Because each batch jobs has the same node/cpu configuration, and invokes the same python script, except for changing the input parameters, in principle, in pseudo-language I should have a sbatch script that should do something like: #!/bin/bash #SBATCH --job-name=cv_01 #SBATCH --output=cv_analysis_eis-%j.out #SBATCH --error=cv_analysis_eis-%j.err #SBATCH --partition=gpu2 #SBATCH --nodes=1 #SBATCH --cpus-per-task=4 for p1 in 0.05 0.075 0.1 0.25 0.5 do for i in {0..150..5} do python myscript.py -p p1 -v i done done where every call of the script is itself a batch job. Looking at the sbatch doc, the -a --array option seems promising. But in my case I need to change the input parameters for every script of the NxM that I have. How can I do this? I would like not to write NxM batch scripts and then list them in a txt file as suggested by this post. Nor the solution proposed here seems ideal, as this is the case imho of a job array. Moreover I would like to make sure that all the NxM scripts are launched at the same time, and the invoking above script is terminated right after, so that it won't clash with the time limit and my whole job will be terminated by the system and remain incomplete (whereas, since each of the NxM jobs is within such limit, if they are run together in parallel but independent, this won't happen).
The best approach is to use job arrays.
One option is to pass the parameter p1 when submitting the job script, so you will only have one script, but will have to submit it multiple times, once for each p1 value.
The code will be like this (untested):
#!/bin/bash
#SBATCH --job-name=cv_01
#SBATCH --output=cv_analysis_eis-%j-%a.out
#SBATCH --error=cv_analysis_eis-%j-%a.err
#SBATCH --partition=gpu2
#SBATCH --nodes=1
#SBATCH --cpus-per-task=4
#SBATCH -a 0-150:5

python myscript.py -p $1 -v $SLURM_ARRAY_TASK_ID
and you will submit it with:
sbatch my_jobscript.sh 0.05
sbatch my_jobscript.sh 0.075
...
Another approach is to define all the p1 parameters in a bash array and submit NxM jobs (untested):
#!/bin/bash
#SBATCH --job-name=cv_01
#SBATCH --output=cv_analysis_eis-%j-%a.out
#SBATCH --error=cv_analysis_eis-%j-%a.err
#SBATCH --partition=gpu2
#SBATCH --nodes=1
#SBATCH --cpus-per-task=4
#Make the array NxM
#SBATCH -a 0-150

PARRAY=(0.05 0.075 0.1 0.25 0.5)

#p1 is the element of the array found with ARRAY_ID mod P_ARRAY_LENGTH
p1=${PARRAY[`expr $SLURM_ARRAY_TASK_ID % ${#PARRAY[@]}`]}
#v is the integer division of the ARRAY_ID by the length of the parameter array
v=`expr $SLURM_ARRAY_TASK_ID / ${#PARRAY[@]}`
python myscript.py -p $p1 -v $v
Slurm
41,900,600
15
I used to use a server with LSF but now I just transitioned to one with SLURM. What is the equivalent command of bpeek (for LSF) in SLURM? bpeek bpeek Displays the stdout and stderr output of an unfinished job I couldn't find the documentation anywhere. If you have some good references for SLURM, please let me know as well. Thanks!
I just learned that in SLURM there is no need for a bpeek equivalent to check the current standard output and standard error, since they are written at run time to the files specified for stdout and stderr.
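In practice, the closest workflow to bpeek is simply to follow the output file while the job runs; the default file name is slurm-<jobid>.out unless you set --output. For steps launched with srun, the sattach command can also attach to a running step's input/output.
# Follow the standard output of a running job
tail -f slurm-<jobid>.out
# Attach to the I/O of step 0 of a running job (steps created with srun)
sattach <jobid>.0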
Slurm
19,062,153
14
We have a 4 GPU nodes with 2 36-core CPUs and 200 GB of RAM available at our local cluster. When I'm trying to submit a job with the follwoing configuration: #SBATCH --nodes=1 #SBATCH --ntasks=40 #SBATCH --cpus-per-task=1 #SBATCH --mem-per-cpu=1500MB #SBATCH --gres=gpu:4 #SBATCH --time=0-10:00:00 I'm getting the following error: sbatch: error: Batch job submission failed: Requested node configuration is not available What might be the reason for this error? The nodes have exactly the kind of hardware that I need...
The CPUs are most likely 36-threads not 36-cores and Slurm is probably configured to allocate cores and not threads. Check the output of scontrol show nodes to see what the nodes really offer.
Slurm
55,290,596
14
I have set of an array job as follows: sbatch --array=1:100%5 ... which will limit the number of simultaneously running tasks to 5. The job is now running, and I would like to change this number to 10 (i.e. I wish I'd run sbatch --array=1:100%10 ...). The documentation on array jobs mentions that you can use scontrol to change options after the job has started. Unfortunately, it's not clear what this option's variable name is, and I don't think it is listed in the documentation of the sbatch command here. Any pointers well received.
You can change the array throttling limit with the following command: scontrol update ArrayTaskThrottle=<count> JobId=<jobID>
Slurm
55,430,330
14
I'm using Slurm. When I run sinfo -Nel it is common to see a server designated as idle, but sometimes there is also a little asterisk near it (Like this: idle*). What does that mean? I couldn't find any info about that. (The server is up and running).
When an * appears after the state of a node it means that the node is unreachable Quoting the sinfo manpage under the NODE STATE CODES section: * The node is presently not responding and will not be allocated any new work. If the node remains non-responsive, it will be placed in the DOWN state (except in the case of COMPLETING, DRAINED, DRAINING, FAIL, FAILING nodes).
Slurm
31,903,407
13
In SGE/PBS, I can submit binary executables to the cluster just like I would locally. For example: qsub -b y -cwd echo hello would submit a job named echo, which writes the word "hello" to its output file. How can I submit a similar job to SLURM. It expects the file to have a hash-bang interpreter on the first line. On SLURM I get $ sbatch echo hello sbatch: error: This does not look like a batch script. The first sbatch: error: line must start with #! followed by the path to an interpreter. sbatch: error: For instance: #!/bin/sh or using the pseuodo qsub: $ qsub echo hello There was an error running the SLURM sbatch command. The command was: '/cm/shared/apps/slurm/14.11.3/bin/sbatch echo hello 2>&1' and the output was: 'sbatch: error: This does not look like a batch script. The first sbatch: error: line must start with #! followed by the path to an interpreter. sbatch: error: For instance: #!/bin/sh ' I don't want to write script, put #!/bin/bash at the top and my command in the next line and then submit them to sbatch. Is there a way to avoid this extra work? There has to be a more productive way.
you can use the --wrap parameter to automatically wrap the command in a script. something like: sbatch --wrap="echo hello"
Slurm
33,400,769
13
When I submit a SLURM job with the option --gres=gpu:1 to a node with two GPUs, how can I get the ID of the GPU which is allocated for the job? Is there an environment variable for this purpose? The GPUs I'm using are all nvidia GPUs. Thanks.
You can get the GPU id with the environment variable CUDA_VISIBLE_DEVICES. This variable is a comma separated list of the GPU ids assigned to the job.
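A minimal sketch of a GPU job script that records which device was assigned (the gres name gpu:1 follows the question's setup):
#!/bin/bash
#SBATCH --gres=gpu:1
echo "Assigned GPU id(s): $CUDA_VISIBLE_DEVICES"
# nvidia-smi -L lists the GPUs visible to the job; with cgroup confinement
# this should only be the allocated device(s)
nvidia-smi -L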
Slurm
43,967,405
12
I am running a job array with SLURM, with the following job array script (that I run with sbatch job_array_script.sh [args]: #!/bin/bash #SBATCH ... other options ... #SBATCH --array=0-1000%200 srun ./job_slurm_script.py $1 $2 $3 $4 echo 'open' > status_file.txt To explain, I want job_slurm_script.py to be run as an array job 1000 times with 200 tasks maximum in parallel. And when all of those are done, I want to write 'open' to status_file.txt. This is because in reality I have more than 10,000 jobs, and this is above my cluster's MaxSubmissionLimit, so I need to split it into smaller chunks (at 1000-element job arrays) and run them one after the other (only when the previous one is finished). However, for this to work, the echo statement can only trigger once the entire job array is finished (outside of this, I have a loop which checks status_file.txt so see if the job is finished, i.e when the contents are the string 'open'). Up to now I thought that srun holds the script up until the whole job array is finished. However, sometimes srun "returns" and the script goes to the echo statement before the jobs are finished, so all the subsequent jobs bounce off the cluster since it goes above the submission limit. So how do I make srun "hold up" until the whole job array is finished?
You can add the flag --wait to sbatch. Check the manual page of sbatch for information about --wait.
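A sketch of the chunked workflow this enables, assuming the script can be parameterized by an offset (how job_array_script.sh uses that offset is up to you): each sbatch --wait call blocks until its whole array has finished, so no status file is needed.
#!/bin/bash
for offset in 0 1000 2000; do
    # Blocks until all 1000 tasks of this chunk are done
    sbatch --wait --array=0-999%200 job_array_script.sh "$offset"
done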
Slurm
46,427,148
12
Edit What I am really looking for is a way to emulate SLURM, something interactive and reasonably user-friendly that I can install. Original post I want to test drive some minimal examples with SLURM, and I am trying to install it all on a local machine with Ubuntu 16.04. I am following the most recent slurm install guide I could find, and I got as far as "start slurmd with sudo /etc/init.d/slurmd start". [....] Starting slurmd (via systemctl): slurmd.serviceJob for slurmd.service failed because the control process exited with error code. See "systemctl status slurmd.service" and "journalctl -xe" for details. failed! I do not know how to interpret the systemctl log: ● slurmd.service - Slurm node daemon Loaded: loaded (/lib/systemd/system/slurmd.service; enabled; vendor preset: enabled) Active: failed (Result: exit-code) since Thu 2017-10-26 22:49:27 EDT; 12s ago Process: 5951 ExecStart=/usr/sbin/slurmd $SLURMD_OPTIONS (code=exited, status=1/FAILURE) Oct 26 22:49:27 Haggunenon systemd[1]: Starting Slurm node daemon... Oct 26 22:49:27 Haggunenon systemd[1]: slurmd.service: Control process exited, code=exited status=1 Oct 26 22:49:27 Haggunenon systemd[1]: Failed to start Slurm node daemon. Oct 26 22:49:27 Haggunenon systemd[1]: slurmd.service: Unit entered failed state. Oct 26 22:49:27 Haggunenon systemd[1]: slurmd.service: Failed with result 'exit-code'. lsb_release -a gives the following. (Yes, I know, KDE Neon is not exactly Ubuntu, strictly speaking.) o LSB modules are available. Distributor ID: neon Description: KDE neon User Edition 5.11 Release: 16.04 Codename: xenial Unlike the guide said, I used my own user name, wlandau, and I made sure to chown /var/lib/slurm-llnl and /var/run/slurm-llnl to me. Here is my /etc/slurm-llnl/slurm.conf. # slurm.conf file generated by configurator.html. # Put this file on all nodes of your cluster. # See the slurm.conf man page for more information. 
# ControlMachine=linux0 #ControlAddr= #BackupController= #BackupAddr= # AuthType=auth/munge CacheGroups=0 #CheckpointType=checkpoint/none CryptoType=crypto/munge #DisableRootJobs=NO #EnforcePartLimits=NO #Epilog= #EpilogSlurmctld= #FirstJobId=1 #MaxJobId=999999 #GresTypes= #GroupUpdateForce=0 #GroupUpdateTime=600 #JobCheckpointDir=/var/lib/slurm-llnl/checkpoint #JobCredentialPrivateKey= #JobCredentialPublicCertificate= #JobFileAppend=0 #JobRequeue=1 #JobSubmitPlugins=1 #KillOnBadExit=0 #LaunchType=launch/slurm #Licenses=foo*4,bar #MailProg=/usr/bin/mail #MaxJobCount=5000 #MaxStepCount=40000 #MaxTasksPerNode=128 MpiDefault=none #MpiParams=ports=#-# #PluginDir= #PlugStackConfig= #PrivateData=jobs ProctrackType=proctrack/pgid #Prolog= #PrologFlags= #PrologSlurmctld= #PropagatePrioProcess=0 #PropagateResourceLimits= #PropagateResourceLimitsExcept= #RebootProgram= ReturnToService=1 #SallocDefaultCommand= SlurmctldPidFile=/var/run/slurm-llnl/slurmctld.pid SlurmctldPort=6817 SlurmdPidFile=/var/run/slurm-llnl/slurmd.pid SlurmdPort=6818 SlurmdSpoolDir=/var/lib/slurm-llnl/slurmd SlurmUser=wlandau #SlurmdUser=root #SrunEpilog= #SrunProlog= StateSaveLocation=/var/lib/slurm-llnl/slurmctld SwitchType=switch/none #TaskEpilog= TaskPlugin=task/none #TaskPluginParam= #TaskProlog= #TopologyPlugin=topology/tree #TmpFS=/tmp #TrackWCKey=no #TreeWidth= #UnkillableStepProgram= #UsePAM=0 # # # TIMERS #BatchStartTimeout=10 #CompleteWait=0 #EpilogMsgTime=2000 #GetEnvTimeout=2 #HealthCheckInterval=0 #HealthCheckProgram= InactiveLimit=0 KillWait=30 #MessageTimeout=10 #ResvOverRun=0 MinJobAge=300 #OverTimeLimit=0 SlurmctldTimeout=120 SlurmdTimeout=300 #UnkillableStepTimeout=60 #VSizeFactor=0 Waittime=0 # # # SCHEDULING #DefMemPerCPU=0 FastSchedule=1 #MaxMemPerCPU=0 #SchedulerRootFilter=1 #SchedulerTimeSlice=30 SchedulerType=sched/backfill SchedulerPort=7321 SelectType=select/linear #SelectTypeParameters= # # # JOB PRIORITY #PriorityFlags= #PriorityType=priority/basic #PriorityDecayHalfLife= #PriorityCalcPeriod= #PriorityFavorSmall= #PriorityMaxAge= #PriorityUsageResetPeriod= #PriorityWeightAge= #PriorityWeightFairshare= #PriorityWeightJobSize= #PriorityWeightPartition= #PriorityWeightQOS= # # # LOGGING AND ACCOUNTING #AccountingStorageEnforce=0 #AccountingStorageHost= #AccountingStorageLoc= #AccountingStoragePass= #AccountingStoragePort= AccountingStorageType=accounting_storage/none #AccountingStorageUser= AccountingStoreJobComment=YES ClusterName=cluster #DebugFlags= #JobCompHost= #JobCompLoc= #JobCompPass= #JobCompPort= JobCompType=jobcomp/none #JobCompUser= #JobContainerPlugin=job_container/none JobAcctGatherFrequency=30 JobAcctGatherType=jobacct_gather/none SlurmctldDebug=3 SlurmctldLogFile=/var/log/slurm-llnl/slurmctld.log SlurmdDebug=3 SlurmdLogFile=/var/log/slurm-llnl/slurmd.log #SlurmSchedLogFile= #SlurmSchedLogLevel= # # # POWER SAVE SUPPORT FOR IDLE NODES (optional) #SuspendProgram= #ResumeProgram= #SuspendTimeout= #ResumeTimeout= #ResumeRate= #SuspendExcNodes= #SuspendExcParts= #SuspendRate= #SuspendTime= # # # COMPUTE NODES NodeName=linux[1-32] CPUs=1 State=UNKNOWN PartitionName=debug Nodes=linux[1-32] Default=YES MaxTime=INFINITE State=UP Follow-up After rewriting my slurm.conf with the help of @damienfrancois, slurmd starts now. But unfortunately, sinfo hangs when I call it, and I get the same error message as before. $ sudo /etc/init.d/slurmctld stop [ ok ] Stopping slurmctld (via systemctl): slurmctld.service. 
$ sudo /etc/init.d/slurmctld start [ ok ] Starting slurmctld (via systemctl): slurmctld.service. $ sinfo slurm_load_partitions: Unable to contact slurm controller (connect failure) $ slurmd -Dvvv slurmd: fatal: Frontend not configured correctly in slurm.conf. See man slurm.conf look for frontendname. Then I tried restarting the daemons, and slurmd failed to start all over again. $ sudo /etc/init.d/slurmctld start [....] Starting slurmd (via systemctl): slurmd.serviceJob for slurmd.service failed because the control process exited with error code. See "systemctl status slurmd.service" and "journalctl -xe" for details. failed!
The value in front of ControlMachine has to match the output of hostname -s on the machine on which slurmctld starts. The same holds for NodeName ; it has to match the output of hostname -s on the machine on which slurmd starts. As you only have one machine and it appears to be called Haggunenon, the relevant lines in slurm.conf should be: ControlMachine=Haggunenon [...] NodeName=Haggunenon CPUs=1 State=UNKNOWN If you want to start several slurmd daemon to emulate a larger cluster, you will need to start slurmd with the -N option (but that requires that Slurm be built using the --enable-multiple-slurmd configure option) UPDATE. Here is a walkthrough. I setup a VM with Vagrant and VirtualBox (vagrant init ubuntu/xenial64 ; vagrant up) and then after vagrant ssh, I ran the following: ubuntu@ubuntu-xenial:~$ lsb_release -a No LSB modules are available. Distributor ID: Ubuntu Description: Ubuntu 16.04.3 LTS Release: 16.04 Codename: xenial ubuntu@ubuntu-xenial:~$ sudo apt-get update Hit:1 http://archive.ubuntu.com/ubuntu xenial InRelease Get:2 http://security.ubuntu.com/ubuntu xenial-security InRelease [102 kB] [...] Get:35 http://archive.ubuntu.com/ubuntu xenial-backports/universe Translation-en [3,060 B] Fetched 23.6 MB in 4s (4,783 kB/s) Reading package lists... Done ubuntu@ubuntu-xenial:~$ sudo apt-get install munge libmunge2 Reading package lists... Done Building dependency tree Reading state information... Done The following NEW packages will be installed: libmunge2 munge 0 upgraded, 2 newly installed, 0 to remove and 0 not upgraded. Need to get 102 kB of archives. After this operation, 351 kB of additional disk space will be used. Get:1 http://archive.ubuntu.com/ubuntu xenial-updates/universe amd64 libmunge2 amd64 0.5.11-3ubuntu0.1 [18.4 kB] Get:2 http://archive.ubuntu.com/ubuntu xenial-updates/universe amd64 munge amd64 0.5.11-3ubuntu0.1 [83.9 kB] Fetched 102 kB in 0s (290 kB/s) Selecting previously unselected package libmunge2. (Reading database ... 57914 files and directories currently installed.) Preparing to unpack .../libmunge2_0.5.11-3ubuntu0.1_amd64.deb ... Unpacking libmunge2 (0.5.11-3ubuntu0.1) ... Selecting previously unselected package munge. Preparing to unpack .../munge_0.5.11-3ubuntu0.1_amd64.deb ... Unpacking munge (0.5.11-3ubuntu0.1) ... Processing triggers for libc-bin (2.23-0ubuntu9) ... Processing triggers for man-db (2.7.5-1) ... Processing triggers for systemd (229-4ubuntu21) ... Processing triggers for ureadahead (0.100.0-19) ... Setting up libmunge2 (0.5.11-3ubuntu0.1) ... Setting up munge (0.5.11-3ubuntu0.1) ... Generating a pseudo-random key using /dev/urandom completed. Please refer to /usr/share/doc/munge/README.Debian for instructions to generate more secure key. Processing triggers for libc-bin (2.23-0ubuntu9) ... Processing triggers for systemd (229-4ubuntu21) ... Processing triggers for ureadahead (0.100.0-19) ... ubuntu@ubuntu-xenial:~$ sudo apt-get install slurm-wlm slurm-wlm-basic-plugins Reading package lists... Done Building dependency tree Reading state information... Done The following additional packages will be installed: fontconfig fontconfig-config fonts-dejavu-core freeipmi-common libcairo2 libdatrie1 libdbi1 libfontconfig1 libfreeipmi16 libgraphite2-3 [...] python-minimal python2.7 python2.7-minimal slurm-client slurm-wlm slurm-wlm-basic-plugins slurmctld slurmd 0 upgraded, 43 newly installed, 0 to remove and 0 not upgraded. Need to get 20.8 MB of archives. After this operation, 87.3 MB of additional disk space will be used. 
Do you want to continue? [Y/n] y Get:1 http://archive.ubuntu.com/ubuntu xenial/main amd64 fonts-dejavu-core all 2.35-1 [1,039 kB] [...] Get:43 http://archive.ubuntu.com/ubuntu xenial/universe amd64 slurm-wlm amd64 15.08.7-1build1 [6,482 B] Fetched 20.8 MB in 3s (5,274 kB/s) Extracting templates from packages: 100% Selecting previously unselected package fonts-dejavu-core. (Reading database ... 57952 files and directories currently installed.) [...] Processing triggers for libc-bin (2.23-0ubuntu9) ... Processing triggers for systemd (229-4ubuntu21) ... Processing triggers for ureadahead (0.100.0-19) ... ubuntu@ubuntu-xenial:~$ sudo vim /etc/slurm-llnl/slurm.conf ubuntu@ubuntu-xenial:~$ grep -v \# /etc/slurm-llnl/slurm.conf ControlMachine=ubuntu-xenial AuthType=auth/munge CacheGroups=0 CryptoType=crypto/munge MpiDefault=none ProctrackType=proctrack/pgid ReturnToService=1 SlurmctldPidFile=/var/run/slurm-llnl/slurmctld.pid SlurmctldPort=6817 SlurmdPidFile=/var/run/slurm-llnl/slurmd.pid SlurmdPort=6818 SlurmdSpoolDir=/var/lib/slurm-llnl/slurmd SlurmUser=ubuntu StateSaveLocation=/var/lib/slurm-llnl/slurmctld SwitchType=switch/none TaskPlugin=task/none InactiveLimit=0 KillWait=30 MinJobAge=300 SlurmctldTimeout=120 SlurmdTimeout=300 Waittime=0 FastSchedule=1 SchedulerType=sched/backfill SchedulerPort=7321 SelectType=select/linear AccountingStorageType=accounting_storage/none AccountingStoreJobComment=YES ClusterName=cluster JobCompType=jobcomp/none JobAcctGatherFrequency=30 JobAcctGatherType=jobacct_gather/none SlurmctldDebug=3 SlurmctldLogFile=/var/log/slurm-llnl/slurmctld.log SlurmdDebug=3 SlurmdLogFile=/var/log/slurm-llnl/slurmd.log NodeName=ubuntu-xenial CPUs=1 State=UNKNOWN PartitionName=debug Nodes=ubuntu-xenial Default=YES MaxTime=INFINITE State=UP ubuntu@ubuntu-xenial:~$ sudo chown ubuntu /var/log/slurm-llnl ubuntu@ubuntu-xenial:~$ sudo chown ubuntu /var/lib/slurm-llnl/slurmctld ubuntu@ubuntu-xenial:~$ sudo chown ubuntu /var/run/slurm-llnl ubuntu@ubuntu-xenial:~$ sudo /etc/init.d/slurmctld start [ ok ] Starting slurmctld (via systemctl): slurmctld.service. ubuntu@ubuntu-xenial:~$ sudo /etc/init.d/slurmd start [ ok ] Starting slurmd (via systemctl): slurmd.service. And in the end, it gives me the expected result: ubuntu@ubuntu-xenial:~$ sinfo PARTITION AVAIL TIMELIMIT NODES STATE NODELIST debug* up infinite 1 idle ubuntu-xenial If following the exact steps here does not help, try running: sudo slurmctld -Dvvv sudo slurmd -Dvvv The messages should be explicit enough.
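If the daemons come up cleanly and sinfo shows the node idle, a quick sanity check is to push a trivial job through the emulated cluster. This is only a sketch; the script name, job name and output pattern below are arbitrary choices, not anything Slurm requires:

#!/bin/bash
# test_job.sh -- a trivial batch script to confirm the emulated node accepts work
#SBATCH --job-name=hello
#SBATCH --output=hello-%j.out
hostname                      # should print the node name from slurm.conf
srun echo "Slurm is working"

Submit it with sbatch test_job.sh, watch it with squeue, and then inspect the hello-<jobid>.out file it leaves behind.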
Slurm
46,966,876
11
I have noticed a strange seq behavior on one of my computers (Ubuntu LTS 14.04): instead of using points as the decimal separator it is using commas: seq 0. 0.1 0.2 0,0 0,1 0,2 The same version of seq (8.21) on my other PC gives the normal points (also the same Ubuntu version). The strangest thing is that I am observing the same ill behavior on a remote machine when I ssh into it from the first machine. Even a bash script submitted from the problematic machine to a job scheduler (slurm) on the remote machine has the same problem. I am very confused. Why (and how!) is this happening?
It's likely the LANG variable or some other locale-specific variable. On a computer where seq behaves "normally" try: $ LANG=fr_FR seq 0. 0.1 0.2 0,0 0,1 0,2 $ LANG=en_US seq 0. 0.1 0.2 0.0 0.1 0.2
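A related sketch: the decimal separator is governed specifically by LC_NUMERIC (with LC_ALL overriding everything else), so forcing the C locale for a single command is usually enough to get the dot back:

# Force POSIX/C numeric conventions for this one command only
LC_NUMERIC=C seq 0 0.1 0.2    # prints 0.0, 0.1, 0.2 with dots; use LC_ALL=C if LC_ALL is what is set

# Inspect which locale settings are currently in effect
locale | grep -E 'LANG|LC_NUMERIC|LC_ALL'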
Slurm
23,884,934
10
I want to see the status of one of my older jobs submitted using Slurm. I have used sacct -j, but it does not give me information on exactly the date when the job was submitted/terminated etc. I want to check the date and time of the job submission. I tried to use scontrol, but I suppose that only works for currently running/pending jobs, not for older jobs which are already finished. It would be great if someone could suggest a Slurm command for checking the job status along with the job submission date and time for an already finished old job. Thanks in advance
As you mentioned that sacct -j is working but not providing the proper information, I'll assume that accounting is properly set and working. You can select the output of the sacct command with the -o flag, so to get exactly what you want you can use: sacct -j JOBID -o jobid,submit,start,end,state You can use sacct --helpformat to get the list of all the available fields for the output.
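One caveat worth adding: by default sacct only reports jobs that started since midnight of the current day, so for a genuinely old job you may also need to widen the time window explicitly. The date below is just an example:

# Show accounting fields for the job, looking back to 1 January 2017
sacct --starttime 2017-01-01 -j JOBID \
      --format=jobid,jobname,submit,start,end,elapsed,state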
Slurm
46,973,921
10
I have a bunch of jobs running as an array job in slurm: 123_[1-500] PD my_job 0:00 me 123_2 R my_job 9:99 me 123_3 R my_job 9:99 me 123_4 R my_job 9:99 me 123_5 R my_job 9:99 me ... As I read the man page on scancel, it seems to indicate that if I execute scancel 123 it will stop everything. Am I wrong, or is there another way to stop just the array job? I want the already running jobs to finish; I just don't want any more jobs created by 123, and I really don't want to have to figure out which jobs need to be re-run if I accidentally kill them mid-way.
You can issue scancel with the additional --state flag: scancel --state=PENDING 123 or, in short: scancel -t PD 123 That will only cancel jobs of the 123 array that are pending and will leave the already started ones running.
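If you want to double-check before (or after) cancelling, squeue can list just the pending tasks of that array. A small sketch:

# List only the pending elements of array 123
squeue -j 123 -t PENDING

# Then cancel the pending ones, leaving running tasks untouched
scancel --state=PENDING 123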
Slurm
47,318,252
10
I have a python script that should generate a bunch of inputs for an external program to be called. The calls to the external program will be through slurm. What I want is for my script to wait until all the generated calls to the external program are finished (not just the slurm submission command, but the actual execution of the external program), and then parse the outputs generated by the external program and do some stuff with the data. I tried subprocess call, but it only waits for the slurm submission command. Any suggestions?
You can run your sbatch commands asynchronously in subprocesses as you tried before, but use the -W or --wait command line options for sbatch. This will cause the subprocess to not return until the job has terminated. You can then block the execution of your main program until all of the subprocesses complete. As a bonus, this will also allow you to handle unexpected return values from your external program. See sbatch documentation for more information
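A minimal Python sketch of that idea, assuming Python 3.7+ and a batch script my_job.sh that takes one input file as its argument (both names are placeholders for your own files):

import subprocess
from concurrent.futures import ThreadPoolExecutor

def run_one(input_file):
    # --wait makes sbatch block until the job itself has finished
    return subprocess.run(["sbatch", "--wait", "my_job.sh", input_file],
                          capture_output=True, text=True)

inputs = ["case_01.in", "case_02.in", "case_03.in"]
with ThreadPoolExecutor(max_workers=len(inputs)) as pool:
    results = list(pool.map(run_one, inputs))   # blocks until every job is done

for res in results:
    if res.returncode != 0:                     # non-zero usually means the job failed
        print("job failed:", res.stderr)

# All external runs are finished here, so it is safe to parse their outputs.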
Slurm
51,838,249
10
I have a script for running my parallel program on a cluster. I run it with the usual command: sbatch -p PARTITION -t TIME -N NODES /full/path/to/my/script.sh PARAMETERS-LIST Inside that script.sh I need to source another bash script (which is located in the same directory where script.sh resides) to load some routines/variables. For my usual scripts, which are executed on a local computer, I use the following: SCRIPTDIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" >/dev/null && pwd )" source "$SCRIPTDIR/funcs.sh" print_header "Some text" and it works just fine. However, on the cluster this doesn't work and I get the following error (just for example): /var/tmp/slurmd/job1043319/slurm_script: line 9: /var/tmp/slurmd/jobID/funcs.sh: No such file or directory /var/tmp/slurmd/job1043319/slurm_script: line 13: print_header: command not found It seems like SLURM creates its own copy of the script to be submitted, and because of this I can't source any local scripts/files. What can be done in that situation? It would be great if I could avoid hard-coding absolute paths inside my scripts...
The problem is that the location of the sbatch shell script, and only this script, is different in the case you just run it from your desktop's command prompt form the case of slurmstepd running it on a node. This happens because sbatch physically copies your script to every head node of the allocation, and runs it from there, using Slurm's fast hierarchical network topology mechanism. The end effect of this is that while the current directory is propagated to the script execution environment, the path to script differs (and can be different on different nodes). Let me explain using your example. What is going on? Of course, the script that you are including must be seen as the same file at the same location in the filesystem tree (on an NFS mount, normally). In this example, I assume that your username is bob (simply because it's most certainly not), and that your home directory /home/bob is mounted from an NFS export on every node, as well as your own machine. Reading your code, I understand that the main script script.sh and the sourced file funcs.sh are located in the same directory. For simplicity, let's put them right into your home directory: $ pwd /home/bob $ ls script.sh funcs.sh Let me also modify the script.sh as follows: I'm going to add the pwd line to see where we are, and remove the rest past the failing . builtin, as that is irrelevant anyway. #!/bin/bash pwd SCRIPTDIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" >/dev/null && pwd )" The local run Whichever directory is current is irrelevant, so let's complicate our test a bit by specifying a relative path to the script, even though it is in the current directory: $ ../bob/script.sh PARAMETERS-LIST In this case, the script is evaluated by bash as follows (step-by step, with the command stdout, variable expansion result or variable assigned value shown at each other line prefixed with a =>. pwd => '/home/bob' # Evaluate: SCRIPTDIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" >/dev/null && pwd )" ${BASH_SOURCE[0]} => '../bob/script.sh' dirname '../bob/script.sh' => '../bob' cd '../bob' => Success, $? is 0 pwd => '/home/bob' SCRIPTDIR='/home/bob' # Evaluate: source "$SCRIPTDIR/funcs.sh" $SCRIPTDIR => '/home/bob' source '/home/bob/funcs.sh' => (Successfully sourced) Here, your intended behavior of sourcing funcs.sh from the same directory where script.sh lives worked just fine. The Slurm run Slurm copies your script.sh to the spool directory on a node, and then executes it from there. If you specify the -D switch to sbatch, the current directory will be set to that (or to the value of $TMPDIR if that fails; or to /tmp is that, in turn, fails). if you do not specify the -D, the current directory is used. For now, suppose that /home/bob is mounted on the node, and that you simply submit your script without the -D: $ sbatch -N1 ./script.sh PARAMETERS-LIST Slurm allocates a node machine for you, copies the contents of your script ./script.sh into a local file (it happened to be named /var/tmp/slurmd/job1043319/slurm_script in your example), sets the current directory to /home/bob and executes the script file /var/tmp/slurmd/job1043319/slurm_script. I think you already understand what is going to happen. pwd => '/home/bob' # Evaluate: SCRIPTDIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" >/dev/null && pwd )" ${BASH_SOURCE[0]} => '/var/tmp/slurmd/job1043319/slurm_script' dirname '/var/tmp/slurmd/job1043319/slurm_script' => '/var/tmp/slurmd/job1043319' cd '../bob' => Success, $? 
is 0 pwd => '/home/bob' SCRIPTDIR='/var/tmp/slurmd/job1043319' I think we should stop here. You already see that your assumed invariant of the main script and its sourced file being in the same directory is violated. Your script relies on this invariant, and therefore breaks. So how do I solve this? It depends on your requirements. You did not state any, but I can give a few suggestions that may align with your goals to a different degree each. This may have a positive side of my answer being useful to a wider SO audience. OPTION 1. Enter into a binding agreement with yourself (and, if any, other users of your script) to always launch your script while in a particular directory. In practice, this is the approach taken e. g. by a well-known speech recognition toolkit Kaldi¹: any script, any command you run, you must run from the experiment's root directory (link to example experiment). If this approach is feasible, then anything you source, you source from the current directory (and/or a well-known path under it); example 1, top-level ./run.sh in the main experiment directory² . ./cmd.sh . ./path.sh example 2, from a utility file utils/nnet/subset_data_tr_cv.sh in a directory that is itself soft-linked from the main experiment directory: . utils/parse_options.sh None of these . statements would work in any script invoked from an unconventional directory: $ pwd /home/bob/kaldi/egs/fisher_english/s5 $ utils/nnet/some_utility_script.sh # This works. $ cd utils/nnet $ ./some_utility_script.sh # This fails, by design. Pros: Readable code. When you have 3,000 bash files totaling 600,000 lines of code, as our case at point does, this is important. Pros: The code is very HPC-cluster-agnostic, and almost all scripts can run on your machine, with or without local multicore parallelization, or spreading your computation over a mini-cluster using plain ssh, or use Slurm, PBS, Sun GridEngine, you name it. Cons: Users must be aware of the requirement. To assess the bottom line of this approach, pros would outweigh the cons if you have a large number of interdependent script files, and your toolkit is complex and naturally has a moderate or high learning curve and/or numerous other conventions--which is true in the case of Kaldi, w.r.t data preparation and layout. The imposed requirement to cd to one directory and do everything from it could be just one of many in your case, comparatively non-burdensome. OPTION 2. Export a variable naming the root location of all files that your scripts source. Your script would then look like #!/bin/bash . "${ACME_TOOLKIT_COMMON_SCRIPTS:?}/funcs.sh" || exit print_header "Some text" You must ensure that this variable is defined in the environment, by hook or by crook. The :? suffix in the variable expansion makes the script end with a fatal error message if the variable is undefined or empty, and is preferred for (a) better error message and (b) a quite minor security risk of sourcing unintended code. Pros: Still pretty readable code. Cons: There should be an external mechanism to set the variable per installation, either per-user or machine-wide. Cons/Meh: Slurm must be allowed to propagate your environment to the job step. This is usually so, and is on by default, but there may be cluster setups that limit the user's environment propagation to a list of administrator-approved variables. Returning to Kaldi's example, if your workload is low, and you want to be able to parallelize to e. g. 
5–10 machines on premises using ssh instead of Slurm, you'd have to either whitelist this specific environment variable in both the sshd and ssh client configurations, or make sure it is set to the same correct value on every machine. The bottom line here in general (i. e., nothing else considered) is approximately same as that of the Option 1: one more thing to troubleshoot; possible infrastructure configuration issues, but still quite fitting for a large program with more than a dozen or two of interdependent bash scripts. However, this option becomes more lucrative if you know you won't ever have to port your code to any other workload manager than Slurm, and even more lucrative if your WLM is one or few specific clusters, so you can rely on their unchanging configuration. OPTION 3. Write a "launcher" script to give to sbatch to launch any command. The launcher would take the name of the script (or any program, to that matter) to run as its first argument, and pass the rest of the arguments to the invoked script/commnd. The script can be one and same to wrap any of your scripts, and exists solely to make your sourced script discovery logic work. The launcher script is utterly trivial: $ cat ~/launcher #!/bin/bash prog=${1:?}; shift exec "$prog" "$@" Running the following script (from an NFS mount at /xa, naturally) $ cat '/xa/var/tmp/foo bar/myscript.sh' #!/bin/bash printf 'Current dir: '; pwd printf 'My command line:'; printf ' %q' "$0" "$@"; printf '\n' echo "BASH_SOURCE[0]='${BASH_SOURCE[0]}'" # The following line is the one that gave fits in your case. my_dir="$(cd "$(dirname "${BASH_SOURCE[0]}")" >/dev/null && pwd)" echo "my_dir='$my_dir'" with the current dir being /tmp with the sbatch command below (and testing proper quoting never hurts) $ pwd /tmp $ sbatch -o /xa/var/tmp/%x-%A.out -N1 ~/launcher \ '/xa/var/tmp/foo bar/myscript.sh' "The skies are painted with unnumber'd sparks" 1 2 '' "3 4" Submitted batch job 19740 yields this output file: $ cat /xa/var/tmp/launcher-19740.out Current dir: /tmp My command line: /xa/var/tmp/foo\ bar/myscript.sh The\ skies\ are\ painted\ with\ unnumber\'d\ sparks 1 2 '' 3\ 4 BASH_SOURCE[0]='/xa/var/tmp/foo bar/myscript.sh' my_dir='/xa/var/tmp/foo bar' Pros: You can run your existing script as is. Pros: The command you give to launcher does not have to be a shell script. Cons: And that's a big one. You cannot use #SBATCH directives in your script. In the end, you'll likely end up writing either an individual top-level script to simply call sbatch invoking your script via this common launcher with a buttload of sbatch switches, or write a customized launcher script for each of your computing scripts, listing all the required #SBATCH directives. Not much win here. Bottom line: if all your batch job submissions are very similar so that you can factor the absolute majority of sbatch options into #SBATCH directives in a single launcher script, this is an option to consider. Note though that all jobs will be named "launcher" unless you name them with the sbatch's -J switch, which means you either won't to be able to factor out all sbatch switches into a single file, or cope with this quite dull, at first sight, naming scheme³ and id your jobs some other way. So, in the end, pick you poison that seems the tastiest to you, and go with it. There is no perfect solution, but there should be an acceptable way to achieve what you want. ¹ Of which I happen to be both an active user and a contributor. ² A test of the form . 
./cmd.sh || exit would have been more robust, and should always be used, but our top-level experiment scripts are usually pretty lax, compared to core scripts. ³ But as any one of nearly 10,000,001 people in the US named Smith, Johnson, Williams, Jones, Brown or Morris "Moe" Jette can confirm, it's not necessarily a big deal.
Slurm
57,696,302
10
I'm new to iOS and Objective-C and the whole MVC paradigm and I'm stuck with the following: I have a view that acts as a data entry form and I want to give the user the option to select multiple products. The products are listed on another view with a UITableViewController and I have enabled multiple selections. How do I transfer the data from one view to another? I will be holding the selections on the UITableView in an array, but how do I then pass that back to the previous data entry form view so it can be saved along with the other data to Core Data on submission of the form? I have surfed around and seen some people declare an array in the app delegate. I read something about singletons, but I don't understand what these are and I read something about creating a data model. What would be the correct way of performing this and how would I go about it?
This question seems to be very popular here on Stack Overflow so I thought I would try and give a better answer to help out people starting in the world of iOS like me. Passing Data Forward Passing data forward to a view controller from another view controller. You would use this method if you wanted to pass an object/value from one view controller to another view controller that you may be pushing on to a navigation stack. For this example, we will have ViewControllerA and ViewControllerB To pass a BOOL value from ViewControllerA to ViewControllerB we would do the following. in ViewControllerB.h create a property for the BOOL @property (nonatomic, assign) BOOL isSomethingEnabled; in ViewControllerA you need to tell it about ViewControllerB so use an #import "ViewControllerB.h" Then where you want to load the view, for example, didSelectRowAtIndex or some IBAction, you need to set the property in ViewControllerB before you push it onto the navigation stack. ViewControllerB *viewControllerB = [[ViewControllerB alloc] initWithNib:@"ViewControllerB" bundle:nil]; viewControllerB.isSomethingEnabled = YES; [self pushViewController:viewControllerB animated:YES]; This will set isSomethingEnabled in ViewControllerB to BOOL value YES. Passing Data Forward using Segues If you are using Storyboards you are most likely using segues and will need this procedure to pass data forward. This is similar to the above but instead of passing the data before you push the view controller, you use a method called -(void)prepareForSegue:(UIStoryboardSegue *)segue sender:(id)sender So to pass a BOOL from ViewControllerA to ViewControllerB we would do the following: in ViewControllerB.h create a property for the BOOL @property (nonatomic, assign) BOOL isSomethingEnabled; in ViewControllerA you need to tell it about ViewControllerB, so use an #import "ViewControllerB.h" Create the segue from ViewControllerA to ViewControllerB on the storyboard and give it an identifier. In this example we'll call it "showDetailSegue" Next, we need to add the method to ViewControllerA that is called when any segue is performed. Because of this we need to detect which segue was called and then do something. In our example, we will check for "showDetailSegue" and if that's performed, we will pass our BOOL value to ViewControllerB -(void)prepareForSegue:(UIStoryboardSegue *)segue sender:(id)sender{ if([segue.identifier isEqualToString:@"showDetailSegue"]){ ViewControllerB *controller = (ViewControllerB *)segue.destinationViewController; controller.isSomethingEnabled = YES; } } If you have your views embedded in a navigation controller, you need to change the method above slightly to the following -(void)prepareForSegue:(UIStoryboardSegue *)segue sender:(id)sender{ if([segue.identifier isEqualToString:@"showDetailSegue"]){ UINavigationController *navController = (UINavigationController *)segue.destinationViewController; ViewControllerB *controller = (ViewControllerB *)navController.topViewController; controller.isSomethingEnabled = YES; } } This will set isSomethingEnabled in ViewControllerB to BOOL value YES. Passing Data Back To pass data back from ViewControllerB to ViewControllerA you need to use Protocols and Delegates or Blocks, the latter can be used as a loosely coupled mechanism for callbacks. To do this we will make ViewControllerA a delegate of ViewControllerB. This allows ViewControllerB to send a message back to ViewControllerA enabling us to send data back. 
For ViewControllerA to be a delegate of ViewControllerB it must conform to ViewControllerB's protocol which we have to specify. This tells ViewControllerA which methods it must implement. In ViewControllerB.h, below the #import, but above @interface you specify the protocol. @class ViewControllerB; @protocol ViewControllerBDelegate <NSObject> - (void)addItemViewController:(ViewControllerB *)controller didFinishEnteringItem:(NSString *)item; @end Next still in the ViewControllerB.h, you need to set up a delegate property and synthesize in ViewControllerB.m @property (nonatomic, weak) id <ViewControllerBDelegate> delegate; In ViewControllerB we call a message on the delegate when we pop the view controller. NSString *itemToPassBack = @"Pass this value back to ViewControllerA"; [self.delegate addItemViewController:self didFinishEnteringItem:itemToPassBack]; That's it for ViewControllerB. Now in ViewControllerA.h, tell ViewControllerA to import ViewControllerB and conform to its protocol. #import "ViewControllerB.h" @interface ViewControllerA : UIViewController <ViewControllerBDelegate> In ViewControllerA.m implement the following method from our protocol - (void)addItemViewController:(ViewControllerB *)controller didFinishEnteringItem:(NSString *)item { NSLog(@"This was returned from ViewControllerB %@", item); } Before pushing viewControllerB to navigation stack we need to tell ViewControllerB that ViewControllerA is its delegate, otherwise we will get an error. ViewControllerB *viewControllerB = [[ViewControllerB alloc] initWithNib:@"ViewControllerB" bundle:nil]; viewControllerB.delegate = self [[self navigationController] pushViewController:viewControllerB animated:YES]; References Using Delegation to Communicate With Other View Controllers in the View Controller Programming Guide Delegate Pattern NSNotification center It's another way to pass data. // Add an observer in controller(s) where you want to receive data [[NSNotificationCenter defaultCenter] addObserver:self selector:@selector(handleDeepLinking:) name:@"handleDeepLinking" object:nil]; -(void) handleDeepLinking:(NSNotification *) notification { id someObject = notification.object // Some custom object that was passed with notification fire. } // Post notification id someObject; [NSNotificationCenter.defaultCenter postNotificationName:@"handleDeepLinking" object:someObject]; Passing Data back from one class to another (A class can be any controller, Network/session manager, UIView subclass or any other class) Blocks are anonymous functions. 
This example passes data from Controller B to Controller A Define a block @property void(^selectedVoucherBlock)(NSString *); // in ContollerA.h Add block handler (listener) Where you need a value (for example, you need your API response in ControllerA or you need ContorllerB data on A) // In ContollerA.m - (void)viewDidLoad { [super viewDidLoad]; __unsafe_unretained typeof(self) weakSelf = self; self.selectedVoucherBlock = ^(NSString *voucher) { weakSelf->someLabel.text = voucher; }; } Go to Controller B UIStoryboard *storyboard = [UIStoryboard storyboardWithName:@"Main" bundle:nil]; ControllerB *vc = [storyboard instantiateViewControllerWithIdentifier:@"ControllerB"]; vc.sourceVC = self; [self.navigationController pushViewController:vc animated:NO]; Fire block -(void)tableView:(UITableView *)tableView didSelectRowAtIndexPath: (NSIndexPath *)indexPath { NSString *voucher = vouchersArray[indexPath.row]; if (sourceVC.selectVoucherBlock) { sourceVC.selectVoucherBlock(voucher); } [self.navigationController popToViewController:sourceVC animated:YES]; } Another Working Example for Blocks
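Since the question is tagged Swift, here is a rough Swift translation of the delegate idea above; the class and method names are mine and purely illustrative:

import UIKit

protocol ViewControllerBDelegate: AnyObject {
    func viewControllerB(_ controller: ViewControllerB, didFinishEntering item: String)
}

class ViewControllerB: UIViewController {
    weak var delegate: ViewControllerBDelegate?      // weak to avoid a retain cycle

    func done() {
        delegate?.viewControllerB(self, didFinishEntering: "Pass this back to A")
        navigationController?.popViewController(animated: true)
    }
}

class ViewControllerA: UIViewController, ViewControllerBDelegate {
    func showB() {
        let b = ViewControllerB()
        b.delegate = self                            // A will receive B's result
        navigationController?.pushViewController(b, animated: true)
    }

    func viewControllerB(_ controller: ViewControllerB, didFinishEntering item: String) {
        print("Returned from B:", item)
    }
}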
Swift
5,210,535
1,499
In Swift, how does one call Objective-C code? Apple mentioned that they could co-exist in one application, but does this mean that one could technically re-use old classes made in Objective-C whilst building new classes in Swift?
Using Objective-C Classes in Swift If you have an existing class that you'd like to use, perform Step 2 and then skip to Step 5. (For some cases, I had to add an explicit #import <Foundation/Foundation.h to an older Objective-C File.) Step 1: Add Objective-C Implementation -- .m Add a .m file to your class, and name it CustomObject.m. Step 2: Add Bridging Header When adding your .m file, you'll likely be hit with a prompt that looks like this: Click Yes! If you did not see the prompt, or accidentally deleted your bridging header, add a new .h file to your project and name it <#YourProjectName#>-Bridging-Header.h. In some situations, particularly when working with Objective-C frameworks, you don't add an Objective-C class explicitly and Xcode can't find the linker. In this case, create your .h file named as mentioned above, then make sure you link its path in your target's project settings like so: Note: It's best practice to link your project using the $(SRCROOT) macro so that if you move your project, or work on it with others using a remote repository, it will still work. $(SRCROOT) can be thought of as the directory that contains your .xcodeproj file. It might look like this: $(SRCROOT)/Folder/Folder/<#YourProjectName#>-Bridging-Header.h Step 3: Add Objective-C Header -- .h Add another .h file and name it CustomObject.h. Step 4: Build your Objective-C Class In CustomObject.h #import <Foundation/Foundation.h> @interface CustomObject : NSObject @property (strong, nonatomic) id someProperty; - (void) someMethod; @end In CustomObject.m #import "CustomObject.h" @implementation CustomObject - (void) someMethod { NSLog(@"SomeMethod Ran"); } @end Step 5: Add Class to Bridging-Header In YourProject-Bridging-Header.h: #import "CustomObject.h" Step 6: Use your Object In SomeSwiftFile.swift: var instanceOfCustomObject = CustomObject() instanceOfCustomObject.someProperty = "Hello World" print(instanceOfCustomObject.someProperty) instanceOfCustomObject.someMethod() There is no need to import explicitly; that's what the bridging header is for. Using Swift Classes in Objective-C Step 1: Create New Swift Class Add a .swift file to your project, and name it MySwiftObject.swift. In MySwiftObject.swift: import Foundation @objc(MySwiftObject) class MySwiftObject : NSObject { @objc var someProperty: AnyObject = "Some Initializer Val" as NSString init() {} @objc func someFunction(someArg: Any) -> NSString { return "You sent me \(someArg)" } } Step 2: Import Swift Files to ObjC Class In SomeRandomClass.m: #import "<#YourProjectName#>-Swift.h" The file:<#YourProjectName#>-Swift.h should already be created automatically in your project, even if you can not see it. Step 3: Use your class MySwiftObject * myOb = [MySwiftObject new]; NSLog(@"MyOb.someProperty: %@", myOb.someProperty); myOb.someProperty = @"Hello World"; NSLog(@"MyOb.someProperty: %@", myOb.someProperty); NSString * retString = [myOb someFunctionWithSomeArg:@"Arg"]; NSLog(@"RetString: %@", retString); Notes: If Code Completion isn't behaving as you expect, try running a quick build with ⌘⇧R to help Xcode find some of the Objective-C code from a Swift context and vice versa. If you add a .swift file to an older project and get the error dyld: Library not loaded: @rpath/libswift_stdlib_core.dylib, try completely restarting Xcode. While it was originally possible to use pure Swift classes (Not descendents of NSObject) which are visible to Objective-C by using the @objc prefix, this is no longer possible. 
Now, to be visible in Objective-C, the Swift object must either be a class conforming to NSObjectProtocol (easiest way to do this is to inherit from NSObject), or to be an enum marked @objc with a raw value of some integer type like Int. You may view the edit history for an example of Swift 1.x code using @objc without these restrictions.
Swift
24,002,369
1,137
My application has a dark background, but in iOS 7 the status bar became transparent. So I can't see anything there, only the green battery indicator in the corner. How can I change the status bar text color to white like it is on the home screen?
Set the UIViewControllerBasedStatusBarAppearance to YES in the .plist file. In the viewDidLoad do a [self setNeedsStatusBarAppearanceUpdate]; Add the following method: - (UIStatusBarStyle)preferredStatusBarStyle { return UIStatusBarStyleLightContent; } Note: This does not work for controllers inside UINavigationController, please see Tyson's comment below :) Swift 3 - This will work controllers inside UINavigationController. Add this code inside your controller. // Preferred status bar style lightContent to use on dark background. // Swift 3 override var preferredStatusBarStyle: UIStatusBarStyle { return .lightContent } Swift 5 and SwiftUI For SwiftUI create a new swift file called HostingController.swift import Foundation import UIKit import SwiftUI class HostingController: UIHostingController<ContentView> { override var preferredStatusBarStyle: UIStatusBarStyle { return .lightContent } } Then change the following lines of code in the SceneDelegate.swift window.rootViewController = UIHostingController(rootView: ContentView()) to window.rootViewController = HostingController(rootView: ContentView())
Swift
17,678,881
1,088
In Objective C, I can use #pragma mark to mark sections of my code in the symbol navigator. Since this is a C preprocessor command, it's not available in Swift. Is there a stand-in for this in Swift, or do I have to use ugly comments?
You can use // MARK: There has also been discussion that liberal use of class extensions might be a better practice anyway. Since extensions can implement protocols, you can e.g. put all of your table view delegate methods in an extension and group your code at a more semantic level than #pragma mark is capable of.
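A small illustration of that suggestion (the names are made up): the // MARK: - comments appear as section dividers in Xcode's jump bar, and the extension keeps the table view plumbing grouped together:

import UIKit

class ProfileViewController: UIViewController {

    // MARK: - Lifecycle

    override func viewDidLoad() {
        super.viewDidLoad()
    }
}

// MARK: - UITableViewDataSource

extension ProfileViewController: UITableViewDataSource {
    func tableView(_ tableView: UITableView, numberOfRowsInSection section: Int) -> Int {
        return 0
    }

    func tableView(_ tableView: UITableView, cellForRowAt indexPath: IndexPath) -> UITableViewCell {
        return UITableViewCell()
    }
}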
Swift
24,017,316
1,019
I was implementing an algorithm in Swift Beta and noticed that the performance was very poor. After digging deeper I realized that one of the bottlenecks was something as simple as sorting arrays. The relevant part is here: let n = 1000000 var x = [Int](repeating: 0, count: n) for i in 0..<n { x[i] = random() } // start clock here let y = sort(x) // stop clock here In C++, a similar operation takes 0.06s on my computer. In Python, it takes 0.6s (no tricks, just y = sorted(x) for a list of integers). In Swift it takes 6s if I compile it with the following command: xcrun swift -O3 -sdk `xcrun --show-sdk-path --sdk macosx` And it takes as much as 88s if I compile it with the following command: xcrun swift -O0 -sdk `xcrun --show-sdk-path --sdk macosx` Timings in Xcode with "Release" vs. "Debug" builds are similar. What is wrong here? I could understand some performance loss in comparison with C++, but not a 10-fold slowdown in comparison with pure Python. Edit: weather noticed that changing -O3 to -Ofast makes this code run almost as fast as the C++ version! However, -Ofast changes the semantics of the language a lot — in my testing, it disabled the checks for integer overflows and array indexing overflows. For example, with -Ofast the following Swift code runs silently without crashing (and prints out some garbage): let n = 10000000 print(n*n*n*n*n) let x = [Int](repeating: 10, count: n) print(x[n]) So -Ofast is not what we want; the whole point of Swift is that we have the safety nets in place. Of course, the safety nets have some impact on the performance, but they should not make the programs 100 times slower. Remember that Java already checks for array bounds, and in typical cases, the slowdown is by a factor much less than 2. And in Clang and GCC we have got -ftrapv for checking (signed) integer overflows, and it is not that slow, either. Hence the question: how can we get reasonable performance in Swift without losing the safety nets? Edit 2: I did some more benchmarking, with very simple loops along the lines of for i in 0..<n { x[i] = x[i] ^ 12345678 } (Here the xor operation is there just so that I can more easily find the relevant loop in the assembly code. I tried to pick an operation that is easy to spot but also "harmless" in the sense that it should not require any checks related to integer overflows.) Again, there was a huge difference in the performance between -O3 and -Ofast. So I had a look at the assembly code: With -Ofast I get pretty much what I would expect. The relevant part is a loop with 5 machine language instructions. With -O3 I get something that was beyond my wildest imagination. The inner loop spans 88 lines of assembly code. I did not try to understand all of it, but the most suspicious parts are 13 invocations of "callq _swift_retain" and another 13 invocations of "callq _swift_release". That is, 26 subroutine calls in the inner loop! Edit 3: In comments, Ferruccio asked for benchmarks that are fair in the sense that they do not rely on built-in functions (e.g. sort). I think the following program is a fairly good example: let n = 10000 var x = [Int](repeating: 1, count: n) for i in 0..<n { for j in 0..<n { x[i] = x[j] } } There is no arithmetic, so we do not need to worry about integer overflows. The only thing that we do is just lots of array references. 
And the results are here—Swift -O3 loses by a factor almost 500 in comparison with -Ofast: C++ -O3: 0.05 s C++ -O0: 0.4 s Java: 0.2 s Python with PyPy: 0.5 s Python: 12 s Swift -Ofast: 0.05 s Swift -O3: 23 s Swift -O0: 443 s (If you are concerned that the compiler might optimize out the pointless loops entirely, you can change it to e.g. x[i] ^= x[j], and add a print statement that outputs x[0]. This does not change anything; the timings will be very similar.) And yes, here the Python implementation was a stupid pure Python implementation with a list of ints and nested for loops. It should be much slower than unoptimized Swift. Something seems to be seriously broken with Swift and array indexing. Edit 4: These issues (as well as some other performance issues) seems to have been fixed in Xcode 6 beta 5. For sorting, I now have the following timings: clang++ -O3: 0.06 s swiftc -Ofast: 0.1 s swiftc -O: 0.1 s swiftc: 4 s For nested loops: clang++ -O3: 0.06 s swiftc -Ofast: 0.3 s swiftc -O: 0.4 s swiftc: 540 s It seems that there is no reason anymore to use the unsafe -Ofast (a.k.a. -Ounchecked); plain -O produces equally good code.
tl;dr Swift 1.0 is now as fast as C by this benchmark using the default release optimisation level [-O]. Here is an in-place quicksort in Swift Beta: func quicksort_swift(inout a:CInt[], start:Int, end:Int) { if (end - start < 2){ return } var p = a[start + (end - start)/2] var l = start var r = end - 1 while (l <= r){ if (a[l] < p){ l += 1 continue } if (a[r] > p){ r -= 1 continue } var t = a[l] a[l] = a[r] a[r] = t l += 1 r -= 1 } quicksort_swift(&a, start, r + 1) quicksort_swift(&a, r + 1, end) } And the same in C: void quicksort_c(int *a, int n) { if (n < 2) return; int p = a[n / 2]; int *l = a; int *r = a + n - 1; while (l <= r) { if (*l < p) { l++; continue; } if (*r > p) { r--; continue; } int t = *l; *l++ = *r; *r-- = t; } quicksort_c(a, r - a + 1); quicksort_c(l, a + n - l); } Both work: var a_swift:CInt[] = [0,5,2,8,1234,-1,2] var a_c:CInt[] = [0,5,2,8,1234,-1,2] quicksort_swift(&a_swift, 0, a_swift.count) quicksort_c(&a_c, CInt(a_c.count)) // [-1, 0, 2, 2, 5, 8, 1234] // [-1, 0, 2, 2, 5, 8, 1234] Both are called in the same program as written. var x_swift = CInt[](count: n, repeatedValue: 0) var x_c = CInt[](count: n, repeatedValue: 0) for var i = 0; i < n; ++i { x_swift[i] = CInt(random()) x_c[i] = CInt(random()) } let swift_start:UInt64 = mach_absolute_time(); quicksort_swift(&x_swift, 0, x_swift.count) let swift_stop:UInt64 = mach_absolute_time(); let c_start:UInt64 = mach_absolute_time(); quicksort_c(&x_c, CInt(x_c.count)) let c_stop:UInt64 = mach_absolute_time(); This converts the absolute times to seconds: static const uint64_t NANOS_PER_USEC = 1000ULL; static const uint64_t NANOS_PER_MSEC = 1000ULL * NANOS_PER_USEC; static const uint64_t NANOS_PER_SEC = 1000ULL * NANOS_PER_MSEC; mach_timebase_info_data_t timebase_info; uint64_t abs_to_nanos(uint64_t abs) { if ( timebase_info.denom == 0 ) { (void)mach_timebase_info(&timebase_info); } return abs * timebase_info.numer / timebase_info.denom; } double abs_to_seconds(uint64_t abs) { return abs_to_nanos(abs) / (double)NANOS_PER_SEC; } Here is a summary of the compiler's optimazation levels: [-Onone] no optimizations, the default for debug. [-O] perform optimizations, the default for release. [-Ofast] perform optimizations and disable runtime overflow checks and runtime type checks. Time in seconds with [-Onone] for n=10_000: Swift: 0.895296452 C: 0.001223848 Here is Swift's builtin sort() for n=10_000: Swift_builtin: 0.77865783 Here is [-O] for n=10_000: Swift: 0.045478346 C: 0.000784666 Swift_builtin: 0.032513488 As you can see, Swift's performance improved by a factor of 20. As per mweathers' answer, setting [-Ofast] makes the real difference, resulting in these times for n=10_000: Swift: 0.000706745 C: 0.000742374 Swift_builtin: 0.000603576 And for n=1_000_000: Swift: 0.107111846 C: 0.114957179 Swift_sort: 0.092688548 For comparison, this is with [-Onone] for n=1_000_000: Swift: 142.659763258 C: 0.162065333 Swift_sort: 114.095478272 So Swift with no optimizations was almost 1000x slower than C in this benchmark, at this stage in its development. On the other hand with both compilers set to [-Ofast] Swift actually performed at least as well if not slightly better than C. It has been pointed out that [-Ofast] changes the semantics of the language, making it potentially unsafe. This is what Apple states in the Xcode 5.0 release notes: A new optimization level -Ofast, available in LLVM, enables aggressive optimizations. 
-Ofast relaxes some conservative restrictions, mostly for floating-point operations, that are safe for most code. It can yield significant high-performance wins from the compiler. They all but advocate it. Whether that's wise or not I couldn't say, but from what I can tell it seems reasonable enough to use [-Ofast] in a release if you're not doing high-precision floating point arithmetic and you're confident no integer or array overflows are possible in your program. If you do need high performance and overflow checks / precise arithmetic then choose another language for now. BETA 3 UPDATE: n=10_000 with [-O]: Swift: 0.019697268 C: 0.000718064 Swift_sort: 0.002094721 Swift in general is a bit faster and it looks like Swift's built-in sort has changed quite significantly. FINAL UPDATE: [-Onone]: Swift: 0.678056695 C: 0.000973914 [-O]: Swift: 0.001158492 C: 0.001192406 [-Ounchecked]: Swift: 0.000827764 C: 0.001078914
Swift
24,101,718
986
Is there a function that I can use to iterate over an array and have both index and element, like Python's enumerate? for index, element in enumerate(list): ...
Yes. As of Swift 3.0, if you need the index for each element along with its value, you can use the enumerated() method to iterate over the array. It returns a sequence of pairs composed of the index and the value for each item in the array. For example: for (index, element) in list.enumerated() { print("Item \(index): \(element)") } Before Swift 3.0 and after Swift 2.0, the function was called enumerate(): for (index, element) in list.enumerate() { print("Item \(index): \(element)") } Prior to Swift 2.0, enumerate was a global function. for (index, element) in enumerate(list) { println("Item \(index): \(element)") }
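One small caveat: enumerated() yields offsets that start at 0, which only coincide with real indices for a plain zero-based Array. If you are iterating a slice (or any other collection) and need the actual indices, zipping is safer. A quick sketch:

let list = ["a", "b", "c"]
for (index, element) in zip(list.indices, list) {
    print("Item \(index): \(element)")
}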
Swift
24,028,421
964
How do you get the length of a String? For example, I have a variable defined like: var test1: String = "Scott" However, I can't seem to find a length method on the string.
As of Swift 4+ It's just: test1.count for reasons. (Thanks to Martin R) As of Swift 2: With Swift 2, Apple has changed global functions to protocol extensions, extensions that match any type conforming to a protocol. Thus the new syntax is: test1.characters.count (Thanks to JohnDifool for the heads up) As of Swift 1 Use the count characters method: let unusualMenagerie = "Koala 🐨, Snail 🐌, Penguin 🐧, Dromedary 🐪" println("unusualMenagerie has \(count(unusualMenagerie)) characters") // prints "unusualMenagerie has 40 characters" right from the Apple Swift Guide (note, for versions of Swift earlier than 1.2, this would be countElements(unusualMenagerie) instead) for your variable, it would be length = count(test1) // was countElements in earlier versions of Swift Or you can use test1.utf16count
Swift
24,037,711
841
Say I have a string here: var fullName: String = "First Last" I want to split the string based on whitespace and assign the values to their respective variables var fullNameArr = // something like: fullName.explode(" ") var firstName: String = fullNameArr[0] var lastName: String? = fullnameArr[1] Also, sometimes users might not have a last name.
Just call componentsSeparatedByString method on your fullName import Foundation var fullName: String = "First Last" let fullNameArr = fullName.componentsSeparatedByString(" ") var firstName: String = fullNameArr[0] var lastName: String = fullNameArr[1] Update for Swift 3+ import Foundation let fullName = "First Last" let fullNameArr = fullName.components(separatedBy: " ") let name = fullNameArr[0] let surname = fullNameArr[1]
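To cover the "no last name" case mentioned in the question, a defensive variant (purely a sketch, Swift 4+) avoids indexing past the end of the array:

let fullName = "First"                                   // no last name this time
let parts = fullName.split(separator: " ", maxSplits: 1)
let firstName = parts.isEmpty ? "" : String(parts[0])
let lastName: String? = parts.count > 1 ? String(parts[1]) : nil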
Swift
25,678,373
833
I just created a new Swift project within Xcode. I am wondering which version of Swift it's using. How can I see, in Xcode or the terminal, what version of Swift I am using inside my project?
What I do is say in the Terminal: $ xcrun swift -version Output for Xcode 6.3.2 is: Apple Swift version 1.2 (swiftlang-602.0.53.1 clang-602.0.53) Of course that assumes that your xcrun is pointing at your copy of Xcode correctly. If, like me, you're juggling several versions of Xcode, that can be a worry! To make sure that it is, say $ xcrun --find swift and look at the path to Xcode that it shows you. For example: /Applications/Xcode.app/... If that's your Xcode, then the output from -version is accurate. If you need to repoint xcrun, use the Command Line Tools pop-up menu in Xcode's Locations preference pane.
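If you also want the code itself to branch on the Swift language version in effect (for example in a package that must build with several toolchains), conditional compilation is an option. A small sketch:

#if swift(>=5.0)
print("Built in Swift 5 (or newer) language mode")
#elseif swift(>=4.0)
print("Built in Swift 4.x language mode")
#else
print("Built in an older Swift language mode")
#endif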
Swift
30,790,188
747
I'm trying to create an NSTimer in Swift but I'm having some trouble. NSTimer(timeInterval: 1, target: self, selector: test(), userInfo: nil, repeats: true) test() is a function in the same class. I get an error in the editor: Could not find an overload for 'init' that accepts the supplied arguments When I change selector: test() to selector: nil the error disappears. I've tried: selector: test() selector: test selector: Selector(test()) But nothing works and I can't find a solution in the references.
Swift itself doesn't use selectors — several design patterns that in Objective-C make use of selectors work differently in Swift. (For example, use optional chaining on protocol types or is/as tests instead of respondsToSelector:, and use closures wherever you can instead of performSelector: for better type/memory safety.) But there are still a number of important ObjC-based APIs that use selectors, including timers and the target/action pattern. Swift provides the Selector type for working with these. (Swift automatically uses this in place of ObjC's SEL type.) In Swift 2.2 (Xcode 7.3) and later (including Swift 3 / Xcode 8 and Swift 4 / Xcode 9): You can construct a Selector from a Swift function type using the #selector expression. let timer = Timer(timeInterval: 1, target: object, selector: #selector(MyClass.test), userInfo: nil, repeats: false) button.addTarget(object, action: #selector(MyClass.buttonTapped), for: .touchUpInside) view.perform(#selector(UIView.insertSubview(_:aboveSubview:)), with: button, with: otherButton) The great thing about this approach? A function reference is checked by the Swift compiler, so you can use the #selector expression only with class/method pairs that actually exist and are eligible for use as selectors (see "Selector availability" below). You're also free to make your function reference only as specific as you need, as per the Swift 2.2+ rules for function-type naming. (This is actually an improvement over ObjC's @selector() directive, because the compiler's -Wundeclared-selector check verifies only that the named selector exists. The Swift function reference you pass to #selector checks existence, membership in a class, and type signature.) There are a couple of extra caveats for the function references you pass to the #selector expression: Multiple functions with the same base name can be differentiated by their parameter labels using the aforementioned syntax for function references (e.g. insertSubview(_:at:) vs insertSubview(_:aboveSubview:)). But if a function has no parameters, the only way to disambiguate it is to use an as cast with the function's type signature (e.g. foo as () -> () vs foo(_:)). There's a special syntax for property getter/setter pairs in Swift 3.0+. For example, given a var foo: Int, you can use #selector(getter: MyClass.foo) or #selector(setter: MyClass.foo). General notes: Cases where #selector doesn't work, and naming: Sometimes you don't have a function reference to make a selector with (for example, with methods dynamically registered in the ObjC runtime). In that case, you can construct a Selector from a string: e.g. Selector("dynamicMethod:") — though you lose the compiler's validity checking. When you do that, you need to follow ObjC naming rules, including colons (:) for each parameter. Selector availability: The method referenced by the selector must be exposed to the ObjC runtime. In Swift 4, every method exposed to ObjC must have its declaration prefaced with the @objc attribute. (In previous versions you got that attribute for free in some cases, but now you have to explicitly declare it.) Remember that private symbols aren't exposed to the runtime, too — your method needs to have at least internal visibility. Key paths: These are related to but not quite the same as selectors. There's a special syntax for these in Swift 3, too: e.g. chris.valueForKeyPath(#keyPath(Person.friends.firstName)). See SE-0062 for details. 
And even more KeyPath stuff in Swift 4, so make sure you're using the right KeyPath-based API instead of selectors if appropriate. You can read more about selectors under Interacting with Objective-C APIs in Using Swift with Cocoa and Objective-C. Note: Before Swift 2.2, Selector conformed to StringLiteralConvertible, so you might find old code where bare strings are passed to APIs that take selectors. You'll want to run "Convert to Current Swift Syntax" in Xcode to get those using #selector.
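Tying that back to the original timer example, here is a minimal sketch in current Swift (the class and method names are just illustrative):

import Foundation

class Ticker: NSObject {
    var timer: Timer?

    func start() {
        timer = Timer.scheduledTimer(timeInterval: 1,
                                     target: self,
                                     selector: #selector(test),
                                     userInfo: nil,
                                     repeats: true)
    }

    @objc func test() {      // must be exposed to the ObjC runtime to be a selector target
        print("tick")
    }
}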
Swift
24,007,650
698
I have an app where the UITableView's separator inset is set to custom values - Right 0, Left 0. This works perfectly in iOS 7.x, however in iOS 8.0 I see that the separator inset is set to the default of 15 on the right. Even though in the xib files it set to 0, it still shows up incorrectly. How do I remove the UITableViewCell separator margins?
iOS 8.0 introduces the layoutMargins property on cells AND table views. This property isn't available on iOS 7.0 so you need to make sure you check before assigning it! The easy fix is to subclass your cell and override the layout margins property as suggested by @user3570727. However you will lose any system behavior like inheriting margins from the Safe Area so I do not recommend the below solution: (ObjectiveC) -(UIEdgeInsets)layoutMargins { return UIEdgeInsetsZero // override any margins inc. safe area } (swift 4.2): override var layoutMargins: UIEdgeInsets { get { return .zero } set { } } If you don't want to override the property, or need to set it conditionally, keep reading. In addition to the layoutMargins property, Apple has added a property to your cell that will prevent it from inheriting your Table View's margin settings. When this property is set, your cells are allowed to configure their own margins independently of the table view. Think of it as an override. This property is called preservesSuperviewLayoutMargins, and setting it to NO will allow the cell's layoutMargin setting to override whatever layoutMargin is set on your TableView. It both saves time (you don't have to modify the Table View's settings), and is more concise. Please refer to Mike Abdullah's answer for a detailed explanation. NOTE: what follows is a clean implementation for a cell-level margin setting, as expressed in Mike Abdullah's answer. Setting your cell's preservesSuperviewLayoutMargins=NO will ensure that your Table View does not override the cell settings. If you actually want your entire table view to have consistent margins, please adjust your code accordingly. Setup your cell margins: -(void)tableView:(UITableView *)tableView willDisplayCell:(UITableViewCell *)cell forRowAtIndexPath:(NSIndexPath *)indexPath { // Remove seperator inset if ([cell respondsToSelector:@selector(setSeparatorInset:)]) { [cell setSeparatorInset:UIEdgeInsetsZero]; } // Prevent the cell from inheriting the Table View's margin settings if ([cell respondsToSelector:@selector(setPreservesSuperviewLayoutMargins:)]) { [cell setPreservesSuperviewLayoutMargins:NO]; } // Explictly set your cell's layout margins if ([cell respondsToSelector:@selector(setLayoutMargins:)]) { [cell setLayoutMargins:UIEdgeInsetsZero]; } } Swift 4: func tableView(_ tableView: UITableView, willDisplay cell: UITableViewCell, forRowAt indexPath: IndexPath) { // Remove seperator inset if cell.responds(to: #selector(setter: UITableViewCell.separatorInset)) { cell.separatorInset = .zero } // Prevent the cell from inheriting the Table View's margin settings if cell.responds(to: #selector(setter: UITableViewCell.preservesSuperviewLayoutMargins)) { cell.preservesSuperviewLayoutMargins = false } // Explictly set your cell's layout margins if cell.responds(to: #selector(setter: UITableViewCell.layoutMargins)) { cell.layoutMargins = .zero } } Setting the preservesSuperviewLayoutMargins property on your cell to NO should prevent your table view from overriding your cell margins. In some cases, it seems to not function properly. 
If all fails, you may brute-force your Table View margins: -(void)viewDidLayoutSubviews { [super viewDidLayoutSubviews]; // Force your tableview margins (this may be a bad idea) if ([self.tableView respondsToSelector:@selector(setSeparatorInset:)]) { [self.tableView setSeparatorInset:UIEdgeInsetsZero]; } if ([self.tableView respondsToSelector:@selector(setLayoutMargins:)]) { [self.tableView setLayoutMargins:UIEdgeInsetsZero]; } } Swift 4: func viewDidLayoutSubviews() { super.viewDidLayoutSubviews() // Force your tableview margins (this may be a bad idea) if tableView.responds(to: #selector(setter: UITableView.separatorInset)) { tableView.separatorInset = .zero } if tableView.responds(to: #selector(setter: UITableView.layoutMargins)) { tableView.layoutMargins = .zero } } ...and there you go! This should work on iOS 7 and 8. EDIT: Mohamed Saleh brought to my attention a possible change in iOS 9. You may need to set the Table View's cellLayoutMarginsFollowReadableWidth to NO if you want to customize insets or margins. Your mileage may vary, this is not documented very well. This property only exists in iOS 9 so be sure to check before setting. if([myTableView respondsToSelector:@selector(setCellLayoutMarginsFollowReadableWidth:)]) { myTableView.cellLayoutMarginsFollowReadableWidth = NO; } Swift 4: if myTableView.responds(to: #selector(setter: self.cellLayoutMarginsFollowReadableWidth)) { myTableView.cellLayoutMarginsFollowReadableWidth = false } (above code from iOS 8 UITableView separator inset 0 not working) EDIT: Here's a pure Interface Builder approach: NOTE: iOS 11 changes & simplifies much of this behavior, an update will be forthcoming...
Swift
25,770,119
686
I'm trying to work out how to cast an Int into a String in Swift. I figured out a workaround using NSNumber, but I'd love to figure out how to do it all in Swift. let x : Int = 45 let xNSNumber = x as NSNumber let xString : String = xNSNumber.stringValue
Converting Int to String: let x : Int = 42 var myString = String(x) And the other way around - converting String to Int: let myString : String = "42" let x: Int? = myString.toInt() if (x != nil) { // Successfully converted String to Int } Or if you're using Swift 2 or 3: let x: Int? = Int(myString)
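For completeness, a minimal Swift 5 sketch of a few related idioms (string interpolation, a radix argument, and a nil-coalescing fallback for the failable initializer); the variable names are just examples:

let score = 42
let viaInterpolation = "\(score)"           // "42", via string interpolation
let viaInitializer = String(score)          // "42", via the explicit initializer
let hexString = String(score, radix: 16)    // "2a", other bases are supported

let input = "42"
let parsed = Int(input) ?? 0                // fall back to 0 if parsing fails
if let value = Int("abc") {
    print(value)
} else {
    print("not a number")                   // this branch runs, because "abc" is not numeric
}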
Swift
24,161,336
672
This crash has been a blocking issue. I used the following steps to reproduce it: Create a Cocoa Touch Framework project Add a Swift file and a class Dog Build a framework for device Create a Single View application in Swift Import the framework into the app project Instantiate the Swift class from the framework in ViewController Build and run the app on the device The app immediately crashed upon launching; here is the console log: dyld: Library not loaded: @rpath/FrameworkTest03.framework/FrameworkTest03 Referenced from: /var/mobile/Applications/FA6BAAC8-1AAD-49B4-8326-F30F66458CB6/FrameworkTest03App.app/FrameworkTest03App Reason: image not found I have tried to build on iOS 7.1 and 8.0 devices; they both have the same crash. However, I can build an app and run on the simulator fine. Also, I am aware that I can change the framework from Required to Optional in Link Binary With Libraries, but it did not completely resolve the problem: the app crashed when I created an instance of Dog. The behavior is different on the device and the simulator, so I suspect that we can't distribute a framework for the device using a beta version of Xcode. Can anyone shed light on this?
In the target's General tab, there is an Embedded Binaries field. When you add the framework there, the crash is resolved. The reference is here on the Apple Developer Forums.
Swift
24,333,981
670
In Objective-C the code to check for a substring in an NSString is: NSString *string = @"hello Swift"; NSRange textRange =[string rangeOfString:@"Swift"]; if(textRange.location != NSNotFound) { NSLog(@"exists"); } But how do I do this in Swift?
You can do exactly the same call with Swift: Swift 4 & Swift 5 In Swift 4 String is a collection of Character values, it wasn't like this in Swift 2 and 3, so you can use this more concise code1: let string = "hello Swift" if string.contains("Swift") { print("exists") } Swift 3.0+ var string = "hello Swift" if string.range(of:"Swift") != nil { print("exists") } // alternative: not case sensitive if string.lowercased().range(of:"swift") != nil { print("exists") } Older Swift var string = "hello Swift" if string.rangeOfString("Swift") != nil{ println("exists") } // alternative: not case sensitive if string.lowercaseString.rangeOfString("swift") != nil { println("exists") } I hope this is a helpful solution since some people, including me, encountered some strange problems by calling containsString().1 PS. Don't forget to import Foundation Footnotes Just remember that using collection functions on Strings has some edge cases which can give you unexpected results, e. g. when dealing with emojis or other grapheme clusters like accented letters.
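If case sensitivity matters, Foundation also offers direct case-insensitive checks, which avoids lowercasing the whole string first; a small sketch, assuming you only need a yes/no answer:

import Foundation

let string = "hello Swift"

// Case-insensitive check without lowercasing the whole string
if string.localizedCaseInsensitiveContains("swift") {
    print("exists")
}

// Equivalent using range(of:options:), which also lets you combine options
if string.range(of: "SWIFT", options: .caseInsensitive) != nil {
    print("exists")
}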
Swift
24,034,043
659
Let's say we have a custom class named ImageFile and this class contains two properties: class ImageFile { var fileName = String() var fileID = Int() } Lots of them are stored in an Array: var images : Array = [] var aImage = ImageFile() aImage.fileName = "image1.png" aImage.fileID = 101 images.append(aImage) aImage = ImageFile() aImage.fileName = "image1.png" aImage.fileID = 202 images.append(aImage) How can I sort the images array by 'fileID' in ascending or descending order?
First, declare your Array as a typed array so that you can call methods when you iterate: var images : [ImageFile] = [] Then you can simply do: Swift 2 images.sorted({ $0.fileID > $1.fileID }) Swift 3 images.sorted(by: { $0.fileID > $1.fileID }) Swift 5 images.sorted { $0.fileID > $1.fileID } The example above gives the results in descending order.
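A short follow-up sketch, assuming the same ImageFile type and a typed var images: [ImageFile] array as above, showing ascending order, an in-place sort, and a secondary sort key:

// Ascending order: just flip the comparison
let ascending = images.sorted { $0.fileID < $1.fileID }

// Sort the array in place (the array must be declared with `var`)
images.sort { $0.fileID < $1.fileID }

// Sort by fileID first, then by fileName when the IDs are equal
let byIDThenName = images.sorted { (a: ImageFile, b: ImageFile) -> Bool in
    if a.fileID != b.fileID { return a.fileID < b.fileID }
    return a.fileName < b.fileName
}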
Swift
24,130,026
655
How can I detect any text changes in a textField? The delegate method shouldChangeCharactersInRange works for something, but it did not fulfill my need exactly, since until it returns YES, the textField's text is not available to other observer methods. For example, in my code calculateAndUpdateTextFields did not get the updated text the user had typed. Is there any way to get something like Java's textChanged event handler? - (BOOL)textField:(UITextField *)textField shouldChangeCharactersInRange:(NSRange)range replacementString:(NSString *)string { if (textField.tag == kTextFieldTagSubtotal || textField.tag == kTextFieldTagSubtotalDecimal || textField.tag == kTextFieldTagShipping || textField.tag == kTextFieldTagShippingDecimal) { [self calculateAndUpdateTextFields]; } return YES; }
From proper way to do uitextfield text change call back: I catch the characters sent to a UITextField control something like this: // Add a "textFieldDidChange" notification method to the text field control. In Objective-C: [textField addTarget:self action:@selector(textFieldDidChange:) forControlEvents:UIControlEventEditingChanged]; In Swift: textField.addTarget(self, action: #selector(textFieldDidChange), for: .editingChanged) Then in the textFieldDidChange method you can examine the contents of the textField, and reload your table view as needed. You could use that and put calculateAndUpdateTextFields as your selector.
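To make that concrete, here is a minimal sketch of the target method; the view controller and outlet names are made up for illustration, and the selector simply mirrors the addTarget call above:

import UIKit

class MyViewController: UIViewController {   // hypothetical view controller for illustration

    @IBOutlet weak var textField: UITextField!

    override func viewDidLoad() {
        super.viewDidLoad()
        textField.addTarget(self, action: #selector(textFieldDidChange(_:)), for: .editingChanged)
    }

    @objc func textFieldDidChange(_ textField: UITextField) {
        // Called after every keystroke, paste, and clear, with the updated text already in place
        let text = textField.text ?? ""
        print("current text: \(text)")
        // e.g. recalculate totals here, as with the question's calculateAndUpdateTextFields
    }
}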
Swift
7,010,547
642
Will Swift-based applications work on OS X 10.9 (Mavericks)/iOS 7 and lower? For example, I have a machine running OS X 10.8 (Mountain Lion), and I am wondering if an application I write in Swift will run on it. Or, what would I need in order to create a Swift application for Mac OS?
I tested it for you. Swift applications compile into standard binaries and can be run on OS X 10.9 and iOS 7. Simple Swift application used for testing: func application(application: UIApplication, didFinishLaunchingWithOptions launchOptions: NSDictionary?) -> Bool { self.window = UIWindow(frame: UIScreen.mainScreen().bounds) var controller = UIViewController() var view = UIView(frame: CGRectMake(0, 0, 320, 568)) view.backgroundColor = UIColor.redColor() controller.view = view var label = UILabel(frame: CGRectMake(0, 0, 200, 21)) label.center = CGPointMake(160, 284) label.textAlignment = NSTextAlignment.Center label.text = "I am a test label." controller.view.addSubview(label) self.window!.rootViewController = controller self.window!.makeKeyAndVisible() return true }
Swift
24,001,778
629
Fellow devs, I am having trouble with AutoLayout in Interface Builder (Xcode 5 / iOS 7). It's very basic and important so I think everyone should know how this properly works. If this is a bug in Xcode, it is a critical one! So, whenever I have a view hierarchy such as this I run into trouble: >UIViewController >> UIView >>>UIScrollView >>>>UILabel (or any other comparable UIKit Element) The UIScrollView has solid constraints, e.g., 50 px from every side (no problem). Then I add a Top Space constraint to the UILabel (no problem) (and I can even pin height / width of the label, changes nothing, but should be unneccessary due to the Label's intrinsic size) The trouble starts when I add a trailing constraint to the UILabel: E.g., Trailing Space to: Superview Equals: 25 Now two warnings occur - and I don't understand why: A) Scrollable Content Size Ambiguity (Scroll View has ambiguous scrollable content height/width) B) Misplaced Views (Label Expected: x= -67 Actual: x= 207 I did this minimal example in a freshly new project which you can download and I attached a screenshot. As you can see, Interface Builder expects the Label to sit outside of the UIScrollView's boundary (the orange dashed rectangle). Updating the Label's frame with the Resolve Issues Tool moves it right there. Please note: If you replace the UIScrollView with a UIView, the behaviour is as expected (the Label's frame is correct and according to the constraint). So there seems to either be an issue with UIScrollView or I am missing out on something important. When I run the App without updating the Label's frame as suggested by IB it is positioned just fine, exactly where it's supposed to be and the UIScrollView is scrollable. If I DO update the frame the Label is out of sight and the UIScrollView does not scroll. Help me Obi-Wan Kenobi! Why the ambiguous layout? Why the misplaced view? You can download the sample project here and try if you can figure out what's going on: https://github.com/Wirsing84/AutoLayoutProblem
Updated Nowadays, Apple realized the problem we solved many years ago (lol_face) and provides Content Layout Guide and Frame Layout Guide as part of the UIScrollView. Therefore you need to go through the following steps: Same as original response below; For this contentView, set top, bottom, left, and right margins to 0, pinning them to the Content Layout Guide of the scroll view; Now set the contentView's height equal to the Frame Layout Guide's height. Do the same for the width; Finally, set the priority of the equal height constraints to 250 (if you need the view to scroll vertically; do the same for the width constraint if you need it to scroll horizontally). Finished. Now you can add all your views in that contentView, and the contentSize of the scrollView will be automatically resized according to the contentView. Don't forget to set the constraint from the bottom of the last object in your contentView to the contentView's margin. Original [Deprecated] So I just sorted it out this way: Inside the UIScrollView add a UIView (we can call that contentView); In this contentView, set top, bottom, left and right margins to 0 (of course from the scrollView, which is the superView); Also set align center horizontally and vertically;
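If it helps, the same idea expressed in code rather than Interface Builder, as a rough sketch for iOS 11 and later (the view controller and view names are illustrative):

import UIKit

final class ScrollDemoViewController: UIViewController {
    private let scrollView = UIScrollView()
    private let contentView = UIView()

    override func viewDidLoad() {
        super.viewDidLoad()
        view.addSubview(scrollView)
        scrollView.addSubview(contentView)
        scrollView.translatesAutoresizingMaskIntoConstraints = false
        contentView.translatesAutoresizingMaskIntoConstraints = false

        // Low-priority equal-height constraint so vertical scrolling can win when content grows
        let equalHeight = contentView.heightAnchor.constraint(equalTo: scrollView.frameLayoutGuide.heightAnchor)
        equalHeight.priority = .defaultLow   // the "250" from the Interface Builder steps

        NSLayoutConstraint.activate([
            scrollView.topAnchor.constraint(equalTo: view.safeAreaLayoutGuide.topAnchor),
            scrollView.bottomAnchor.constraint(equalTo: view.safeAreaLayoutGuide.bottomAnchor),
            scrollView.leadingAnchor.constraint(equalTo: view.leadingAnchor),
            scrollView.trailingAnchor.constraint(equalTo: view.trailingAnchor),

            // Pin the content view to the scroll view's Content Layout Guide
            contentView.topAnchor.constraint(equalTo: scrollView.contentLayoutGuide.topAnchor),
            contentView.bottomAnchor.constraint(equalTo: scrollView.contentLayoutGuide.bottomAnchor),
            contentView.leadingAnchor.constraint(equalTo: scrollView.contentLayoutGuide.leadingAnchor),
            contentView.trailingAnchor.constraint(equalTo: scrollView.contentLayoutGuide.trailingAnchor),

            // Fix the width to the frame so the content only scrolls vertically
            contentView.widthAnchor.constraint(equalTo: scrollView.frameLayoutGuide.widthAnchor),
            equalHeight,
        ])
    }
}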
Swift
19,036,228
625
enum Suit: String { case spades = "♠" case hearts = "♥" case diamonds = "♦" case clubs = "♣" } For example, how can I do something like: for suit in Suit { // do something with suit print(suit.rawValue) } Resulting example: ♠ ♥ ♦ ♣
This post is relevant here https://www.swift-studies.com/blog/2014/6/10/enumerating-enums-in-swift Essentially the proposed solution is enum ProductCategory : String { case Washers = "washers", Dryers = "dryers", Toasters = "toasters" static let allValues = [Washers, Dryers, Toasters] } for category in ProductCategory.allValues{ //Do something }
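If you are on Swift 4.2 or later, the standard library now covers this directly with CaseIterable, which removes the need for a hand-maintained allValues array; a minimal sketch using the Suit enum from the question:

enum Suit: String, CaseIterable {
    case spades = "♠", hearts = "♥", diamonds = "♦", clubs = "♣"
}

for suit in Suit.allCases {    // allCases is synthesized by the compiler
    print(suit.rawValue)       // ♠ ♥ ♦ ♣
}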
Swift
24,007,461
610
weak references don't seem to work in Swift unless a protocol is declared as @objc, which I don't want in a pure Swift app. This code gives a compile error (weak cannot be applied to non-class type MyClassDelegate): class MyClass { weak var delegate: MyClassDelegate? } protocol MyClassDelegate { } I need to prefix the protocol with @objc, then it works. Question: What is the 'pure' Swift way to accomplish a weak delegate?
You need to declare the type of the protocol as AnyObject. protocol ProtocolNameDelegate: AnyObject { // Protocol stuff goes here } class SomeClass { weak var delegate: ProtocolNameDelegate? } Using AnyObject you say that only classes can conform to this protocol, whereas structs or enums can't.
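A small end-to-end sketch (the method name myClassDidFinish is invented for illustration) showing how the weak delegate gets wired up and why it avoids a retain cycle when the owner also holds the delegating object:

protocol MyClassDelegate: AnyObject {
    func myClassDidFinish(_ sender: MyClass)
}

class MyClass {
    weak var delegate: MyClassDelegate?

    func finish() {
        delegate?.myClassDidFinish(self)   // optional chaining: a no-op if the delegate is gone
    }
}

class Owner: MyClassDelegate {
    let worker = MyClass()

    init() {
        worker.delegate = self             // no retain cycle: the back-reference is weak
    }

    func myClassDidFinish(_ sender: MyClass) {
        print("finished")
    }
}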
Swift
24,066,304
602
I'm trying to work out an appropriate singleton model for usage in Swift. So far, I've been able to get a non-thread safe model working as: class var sharedInstance: TPScopeManager { get { struct Static { static var instance: TPScopeManager? = nil } if !Static.instance { Static.instance = TPScopeManager() } return Static.instance! } } Wrapping the singleton instance in the Static struct should allow a single instance that doesn't collide with singleton instances without complex naming schemes, and it should make things fairly private. Obviously though, this model isn't thread-safe. So I tried to add dispatch_once to the whole thing: class var sharedInstance: TPScopeManager { get { struct Static { static var instance: TPScopeManager? = nil static var token: dispatch_once_t = 0 } dispatch_once(Static.token) { Static.instance = TPScopeManager() } return Static.instance! } } But I get a compiler error on the dispatch_once line: Cannot convert the expression's type 'Void' to type '()' I've tried several different variants of the syntax, but they all seem to have the same results: dispatch_once(Static.token, { Static.instance = TPScopeManager() }) What is the proper usage of dispatch_once using Swift? I initially thought the problem was with the block due to the () in the error message, but the more I look at it, the more I think it may be a matter of getting the dispatch_once_t correctly defined.
tl;dr: Use the class constant approach if you are using Swift 1.2 or above and the nested struct approach if you need to support earlier versions. From my experience with Swift there are three approaches to implement the Singleton pattern that support lazy initialization and thread safety. Class constant class Singleton { static let sharedInstance = Singleton() } This approach supports lazy initialization because Swift lazily initializes class constants (and variables), and is thread safe by the definition of let. This is now the officially recommended way to instantiate a singleton. Class constants were introduced in Swift 1.2. If you need to support an earlier version of Swift, use the nested struct approach below or a global constant. Nested struct class Singleton { class var sharedInstance: Singleton { struct Static { static let instance: Singleton = Singleton() } return Static.instance } } Here we are using the static constant of a nested struct as a class constant. This is a workaround for the lack of static class constants in Swift 1.1 and earlier, and still works as a workaround for the lack of static constants and variables in functions. dispatch_once The traditional Objective-C approach ported to Swift. I'm fairly certain there's no advantage over the nested struct approach but I'm putting it here anyway as I find the differences in syntax interesting. class Singleton { class var sharedInstance: Singleton { struct Static { static var onceToken: dispatch_once_t = 0 static var instance: Singleton? = nil } dispatch_once(&Static.onceToken) { Static.instance = Singleton() } return Static.instance! } } See this GitHub project for unit tests.
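One small refinement that is commonly combined with the class constant approach, sketched here with the TPScopeManager name from the question: mark the initializer private so no second instance can ever be created, and final to prevent subclassing:

final class TPScopeManager {                  // `final` prevents subclassing the singleton
    static let sharedInstance = TPScopeManager()
    private init() {}                         // stops callers from creating extra instances
}

// Usage
let manager = TPScopeManager.sharedInstance
// let other = TPScopeManager()               // would not compile, thanks to the private init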
Swift
24,024,549
596
I am looking for a way to replace characters in a Swift String. Example: "This is my string" I would like to replace " " with "+" to get "This+is+my+string". How can I achieve this?
This answer has been updated for Swift 4 & 5. If you're still using Swift 1, 2 or 3 see the revision history. You have a couple of options. You can do as @jaumard suggested and use replacingOccurrences() let aString = "This is my string" let newString = aString.replacingOccurrences(of: " ", with: "+", options: .literal, range: nil) And as noted by @cprcrack below, the options and range parameters are optional, so if you don't want to specify string comparison options or a range to do the replacement within, you only need the following. let aString = "This is my string" let newString = aString.replacingOccurrences(of: " ", with: "+") Or, if the data is in a specific format like this, where you're just replacing separation characters, you can use components() to break the string into an array, and then you can use the join() function to put them back together with a specified separator. let toArray = aString.components(separatedBy: " ") let backToString = toArray.joined(separator: "+") Or if you're looking for a more Swifty solution that doesn't utilize API from NSString, you could use this. let aString = "Some search text" let replaced = String(aString.map { $0 == " " ? "+" : $0 })
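If the replacement is pattern-based rather than literal, the same method accepts a .regularExpression option; a small sketch (the input string is just an example):

import Foundation

let messy = "This   has   extra   spaces"
// Collapse runs of whitespace using the .regularExpression option
let collapsed = messy.replacingOccurrences(of: "\\s+", with: " ", options: .regularExpression)
print(collapsed)   // "This has extra spaces"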
Swift
24,200,888
595
I've gone through the iBook from Apple, and couldn't find any definition of it: Can someone explain the structure of dispatch_after? dispatch_after(<#when: dispatch_time_t#>, <#queue: dispatch_queue_t?#>, <#block: dispatch_block_t?#>)
I use dispatch_after so often that I wrote a top-level utility function to make the syntax simpler: func delay(delay:Double, closure:()->()) { dispatch_after( dispatch_time( DISPATCH_TIME_NOW, Int64(delay * Double(NSEC_PER_SEC)) ), dispatch_get_main_queue(), closure) } And now you can talk like this: delay(0.4) { // do stuff } Wow, a language where you can improve the language. What could be better? Update for Swift 3, Xcode 8 Seed 6 Seems almost not worth bothering with, now that they've improved the calling syntax: func delay(_ delay:Double, closure:@escaping ()->()) { let when = DispatchTime.now() + delay DispatchQueue.main.asyncAfter(deadline: when, execute: closure) }
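For completeness, a short sketch of calling the modern API directly when you do not want a helper; both deadline forms below are equivalent ways of saying 0.4 seconds from now:

// Directly, without a helper (Swift 3 and later)
DispatchQueue.main.asyncAfter(deadline: .now() + 0.4) {
    // do stuff
}

// The deadline can also be built from explicit units
DispatchQueue.main.asyncAfter(deadline: .now() + .milliseconds(400)) {
    // do stuff
}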
Swift
24,034,544
584
The character 👩‍👩‍👧‍👦 (family with two women, one girl, and one boy) is encoded as such: U+1F469 WOMAN, ‍U+200D ZWJ, U+1F469 WOMAN, U+200D ZWJ, U+1F467 GIRL, U+200D ZWJ, U+1F466 BOY So it's very interestingly-encoded; the perfect target for a unit test. However, Swift doesn't seem to know how to treat it. Here's what I mean: "👩‍👩‍👧‍👦".contains("👩‍👩‍👧‍👦") // true "👩‍👩‍👧‍👦".contains("👩") // false "👩‍👩‍👧‍👦".contains("\u{200D}") // false "👩‍👩‍👧‍👦".contains("👧") // false "👩‍👩‍👧‍👦".contains("👦") // true So, Swift says it contains itself (good) and a boy (good!). But it then says it does not contain a woman, girl, or zero-width joiner. What's happening here? Why does Swift know it contains a boy but not a woman or girl? I could understand if it treated it as a single character and only recognized it containing itself, but the fact that it got one subcomponent and no others baffles me. This does not change if I use something like "👩".characters.first!. Even more confounding is this: let manual = "\u{1F469}\u{200D}\u{1F469}\u{200D}\u{1F467}\u{200D}\u{1F466}" Array(manual.characters) // ["👩‍", "👩‍", "👧‍", "👦"] Even though I placed the ZWJs in there, they aren't reflected in the character array. What followed was a little telling: manual.contains("👩") // false manual.contains("👧") // false manual.contains("👦") // true So I get the same behavior with the character array... which is supremely annoying, since I know what the array looks like. This also does not change if I use something like "👩".characters.first!.
This has to do with how the String type works in Swift, and how the contains(_:) method works. The '👩‍👩‍👧‍👦 ' is what's known as an emoji sequence, which is rendered as one visible character in a string. The sequence is made up of Character objects, and at the same time it is made up of UnicodeScalar objects. If you check the character count of the string, you'll see that it is made up of four characters, while if you check the unicode scalar count, it will show you a different result: print("👩‍👩‍👧‍👦".characters.count) // 4 print("👩‍👩‍👧‍👦".unicodeScalars.count) // 7 Now, if you parse through the characters and print them, you'll see what seems like normal characters, but in fact the three first characters contain both an emoji as well as a zero-width joiner in their UnicodeScalarView: for char in "👩‍👩‍👧‍👦".characters { print(char) let scalars = String(char).unicodeScalars.map({ String($0.value, radix: 16) }) print(scalars) } // 👩‍ // ["1f469", "200d"] // 👩‍ // ["1f469", "200d"] // 👧‍ // ["1f467", "200d"] // 👦 // ["1f466"] As you can see, only the last character does not contain a zero-width joiner, so when using the contains(_:) method, it works as you'd expect. Since you aren't comparing against emoji containing zero-width joiners, the method won't find a match for any but the last character. To expand on this, if you create a String which is composed of an emoji character ending with a zero-width joiner, and pass it to the contains(_:) method, it will also evaluate to false. This has to do with contains(_:) being the exact same as range(of:) != nil, which tries to find an exact match to the given argument. Since characters ending with a zero-width joiner form an incomplete sequence, the method tries to find a match for the argument while combining characters ending with a zero-width joiners into a complete sequence. This means that the method won't ever find a match if: the argument ends with a zero-width joiner, and the string to parse doesn't contain an incomplete sequence (i.e. ending with a zero-width joiner and not followed by a compatible character). To demonstrate: let s = "\u{1f469}\u{200d}\u{1f469}\u{200d}\u{1f467}\u{200d}\u{1f466}" // 👩‍👩‍👧‍👦 s.range(of: "\u{1f469}\u{200d}") != nil // false s.range(of: "\u{1f469}\u{200d}\u{1f469}") != nil // false However, since the comparison only looks ahead, you can find several other complete sequences within the string by working backwards: s.range(of: "\u{1f466}") != nil // true s.range(of: "\u{1f467}\u{200d}\u{1f466}") != nil // true s.range(of: "\u{1f469}\u{200d}\u{1f467}\u{200d}\u{1f466}") != nil // true // Same as the above: s.contains("\u{1f469}\u{200d}\u{1f467}\u{200d}\u{1f466}") // true The easiest solution would be to provide a specific compare option to the range(of:options:range:locale:) method. The option String.CompareOptions.literal performs the comparison on an exact character-by-character equivalence. As a side note, what's meant by character here is not the Swift Character, but the UTF-16 representation of both the instance and comparison string – however, since String doesn't allow malformed UTF-16, this is essentially equivalent to comparing the Unicode scalar representation. 
Here I've overloaded the Foundation method, so if you need the original one, rename this one or something: extension String { func contains(_ string: String) -> Bool { return self.range(of: string, options: String.CompareOptions.literal) != nil } } Now the method works as it "should" with each character, even with incomplete sequences: s.contains("👩") // true s.contains("👩\u{200d}") // true s.contains("\u{200d}") // true
Swift
43,618,487
584
In Swift, how can I check if an element exists in an array? Xcode does not have any suggestions for contain, include, or has, and a quick search through the book turned up nothing. Any idea how to check for this? I know that there is a method find that returns the index number, but is there a method that returns a boolean like ruby's #include?? Example of what I need: var elements = [1,2,3,4,5] if elements.contains(5) { //do something }
Swift 2, 3, 4, 5: let elements = [1, 2, 3, 4, 5] if elements.contains(5) { print("yes") } contains() is a protocol extension method of SequenceType (for sequences of Equatable elements) and not a global method as in earlier releases. Remarks: This contains() method requires that the sequence elements adopt the Equatable protocol, compare e.g. Andrews's answer. If the sequence elements are instances of a NSObject subclass then you have to override isEqual:, see NSObject subclass in Swift: hash vs hashValue, isEqual vs ==. There is another – more general – contains() method which does not require the elements to be equatable and takes a predicate as an argument, see e.g. Shorthand to test if an object exists in an array for Swift?. Swift older versions: let elements = [1,2,3,4,5] if contains(elements, 5) { println("yes") }
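Two related sketches that often come up alongside this: the predicate-based overload (which drops the Equatable requirement) and firstIndex(of:) for when you also need the position:

let elements = [1, 2, 3, 4, 5]

// Predicate form: no Equatable requirement on the elements
if elements.contains(where: { $0 > 4 }) {
    print("yes")
}

// firstIndex(of:) if you also need the position (Swift 4.2+; index(of:) in older versions)
if let index = elements.firstIndex(of: 5) {
    print("found at \(index)")
}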
Swift
24,102,024
575
I have an iOS app with an Azure back-end, and would like to log certain events, like logins and which versions of the app users are running. How can I return the version and build number using Swift?
EDIT Updated for Swift 4.2 let appVersion = Bundle.main.infoDictionary?["CFBundleShortVersionString"] as? String EDIT As pointed out by @azdev, on the new version of Xcode you will get a compile error when trying my previous solution; to solve this, just edit it as suggested to unwrap the bundle dictionary using a ! let nsObject: AnyObject? = Bundle.main.infoDictionary!["CFBundleShortVersionString"] End Edit Just use the same logic as in Objective-C but with some small changes //First get the nsObject by defining as an optional anyObject let nsObject: AnyObject? = NSBundle.mainBundle().infoDictionary["CFBundleShortVersionString"] //Then just cast the object as a String, but be careful, you may want to double check for nil let version = nsObject as! String
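Since the question also asks for the build number, here is a small Swift 4+ sketch reading both keys; CFBundleVersion is the build-number counterpart of CFBundleShortVersionString:

// Marketing version, e.g. "1.2.3"
let version = Bundle.main.infoDictionary?["CFBundleShortVersionString"] as? String

// Build number, e.g. "42" (the CFBundleVersion key)
let build = Bundle.main.infoDictionary?["CFBundleVersion"] as? String

let display = "Version \(version ?? "?") (\(build ?? "?"))"
print(display)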
Swift
25,965,239
568
What's the difference between print, NSLog and println and when should I use each? For example, in Python if I wanted to print a dictionary, I'd just print myDict, but now I have 2 other options. How and when should I use each?
A few differences: print vs println: The print function prints messages in the Xcode console when debugging apps. The println is a variation of this that was removed in Swift 2 and is not used any more. If you see old code that is using println, you can now safely replace it with print. Back in Swift 1.x, print did not add newline characters at the end of the printed string, whereas println did. But nowadays, print always adds the newline character at the end of the string, and if you don't want it to do that, supply a terminator parameter of "". NSLog: NSLog adds a timestamp and identifier to the output, whereas print will not. NSLog statements appear in both the device’s console and debugger’s console whereas print only appears in the debugger console. NSLog in iOS 10-13/macOS 10.12-10.x uses printf-style format strings, e.g. NSLog("%0.4f", CGFloat.pi) that will produce: 2017-06-09 11:57:55.642328-0700 MyApp[28937:1751492] 3.1416 NSLog from iOS 14/macOS 11 can use string interpolation. (Then, again, in iOS 14 and macOS 11, we would generally favor Logger over NSLog. See next point.) Nowadays, while NSLog still works, we would generally use “unified logging” (see below) rather than NSLog. Effective iOS 14/macOS 11, we have Logger interface to the “unified logging” system. For an introduction to Logger, see WWDC 2020 Explore logging in Swift. To use Logger, you must import os: import os Like NSLog, unified logging will output messages to both the Xcode debugging console and the device console, too. Create a Logger and log a message to it: let logger = Logger(subsystem: Bundle.main.bundleIdentifier!, category: "network") logger.log("url = \(url)") When you observe the app via the external Console app, you can filter on the basis of the subsystem and category. It is very useful to differentiate your debugging messages from (a) those generated by other subsystems on behalf of your app, or (b) messages from other categories or types. You can specify different types of logging messages, either .info, .debug, .error, .fault, .critical, .notice, .trace, etc.: logger.error("web service did not respond \(error.localizedDescription)") So, if using the external Console app, you can choose to only see messages of certain categories (e.g. only show debugging messages if you choose “Include Debug Messages” on the Console “Action” menu). These settings also dictate many subtle issues details about whether things are logged to disk or not. See WWDC video for more details. By default, non-numeric data is redacted in the logs. In the example where you logged the URL, if the app were invoked from the device itself and you were watching from your macOS Console app, you would see the following in the macOS Console: url = <private> If you are confident that this message will not include user confidential data and you wanted to see the strings in your macOS console, you would have to do: logger.log("url = \(url, privacy: .public)") Note, that in Xcode 15 and later, you can now filter the log by type, subsystem, category, or whatever. Personally, in large projects, I find it useful to use a separate Logger for each source file, then I can filter a voluminous log to something more specific (e.g., just log messages of type “error” for a particular “category”, etc.). For more information, see WWDC 2023 video Debug with structured logging. 
Also, unlike print and NSLog, when you use Logger with Xcode 15 or later, you can control-click (or right-click, if you have enabled the right mouse button) on a log message in the Xcode console, and choose “Jump to source” to jump to the relevant line of code. Prior to iOS 14/macOS 11, iOS 10/macOS 10.12 introduced os_log for “unified logging”. Import os.log: import os.log You should define the subsystem and category: let log = OSLog(subsystem: Bundle.main.bundleIdentifier!, category: "network") When using os_log, you would use a printf-style pattern rather than string interpolation: os_log("url = %@", log: log, url.absoluteString) You can specify different types of logging messages, either .info, .debug, .error, .fault (or .default): os_log("web service did not respond", type: .error) You cannot use string interpolation when using os_log. For example with print and Logger you do: logger.log("url = \(url)") But with os_log, you would have to do: os_log("url = %@", url.absoluteString) The os_log enforces the same data privacy, but you specify the public visibility in the printf formatter (e.g. %{public}@ rather than %@). E.g., if you wanted to see it from an external device, you'd have to do: os_log("url = %{public}@", url.absoluteString) You can also use the “Points of Interest” log if you want to watch ranges of activities from Instruments: let pointsOfInterest = OSLog(subsystem: Bundle.main.bundleIdentifier!, category: .pointsOfInterest) And start a range with: os_signpost(.begin, log: pointsOfInterest, name: "Network request") And end it with: os_signpost(.end, log: pointsOfInterest, name: "Network request") For more information, see https://stackoverflow.com/a/39416673/1271826. Like Logger, for OSLog you can control-click ( or right-click) on the message and jump to the relevant line of code in Xcode 15 and later. Bottom line, print is sufficient for simple logging with Xcode, but unified logging (whether Logger or os_log) achieves the same thing but offers far greater capabilities. The power of unified logging comes into stark relief when debugging iOS apps that have to be tested outside of Xcode. For example, when testing background iOS app processes like background fetch, being connected to the Xcode debugger changes the app lifecycle. So, you frequently will want to test on a physical device, running the app from the device itself, not starting the app from Xcode’s debugger. Unified logging lets you still watch your iOS device log statements from the macOS Console app.
Swift
25,951,195
567
Playing around with Swift, coming from a Java background, why would you want to choose a Struct instead of a Class? Seems like they are the same thing, with a Struct offering less functionality. Why choose it then?
According to the very popular WWDC 2015 talk Protocol Oriented Programming in Swift (video, transcript), Swift provides a number of features that make structs better than classes in many circumstances. Structs are preferable if they are relatively small and copiable because copying is way safer than having multiple references to the same instance as happens with classes. This is especially important when passing around a variable to many classes and/or in a multithreaded environment. If you can always send a copy of your variable to other places, you never have to worry about that other place changing the value of your variable underneath you. With Structs, there is much less need to worry about memory leaks or multiple threads racing to access/modify a single instance of a variable. (For the more technically minded, the exception to that is when capturing a struct inside a closure because then it is actually capturing a reference to the instance unless you explicitly mark it to be copied). Classes can also become bloated because a class can only inherit from a single superclass. That encourages us to create huge superclasses that encompass many different abilities that are only loosely related. Using protocols, especially with protocol extensions where you can provide implementations to protocols, allows you to eliminate the need for classes to achieve this sort of behavior. The talk lays out these scenarios where classes are preferred: Copying or comparing instances doesn't make sense (e.g., Window) Instance lifetime is tied to external effects (e.g., TemporaryFile) Instances are just "sinks"--write-only conduits to external state (e.g., CGContext) It implies that structs should be the default and classes should be a fallback. On the other hand, The Swift Programming Language documentation is somewhat contradictory: Structure instances are always passed by value, and class instances are always passed by reference. This means that they are suited to different kinds of tasks. As you consider the data constructs and functionality that you need for a project, decide whether each data construct should be defined as a class or as a structure. As a general guideline, consider creating a structure when one or more of these conditions apply: The structure’s primary purpose is to encapsulate a few relatively simple data values. It is reasonable to expect that the encapsulated values will be copied rather than referenced when you assign or pass around an instance of that structure. Any properties stored by the structure are themselves value types, which would also be expected to be copied rather than referenced. The structure does not need to inherit properties or behavior from another existing type. Examples of good candidates for structures include: The size of a geometric shape, perhaps encapsulating a width property and a height property, both of type Double. A way to refer to ranges within a series, perhaps encapsulating a start property and a length property, both of type Int. A point in a 3D coordinate system, perhaps encapsulating x, y and z properties, each of type Double. In all other cases, define a class, and create instances of that class to be managed and passed by reference. In practice, this means that most custom data constructs should be classes, not structures. Here it is claiming that we should default to using classes and use structures only in specific circumstances. Ultimately, you need to understand the real world implication of value types vs. 
reference types and then you can make an informed decision about when to use structs or classes. Also, keep in mind that these concepts are always evolving and The Swift Programming Language documentation was written before the Protocol Oriented Programming talk was given.
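A tiny sketch (the type names are invented for illustration) of the value-versus-reference behavior that the guidance above hinges on, which is usually the deciding factor in practice:

struct PointValue { var x = 0 }      // value type
class PointRef { var x = 0 }         // reference type

var a = PointValue()
var b = a                            // b is an independent copy
b.x = 10
print(a.x)                           // 0: the original copy is untouched

let c = PointRef()
let d = c                            // d refers to the same instance as c
d.x = 10
print(c.x)                           // 10: both names see the mutation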
Swift
24,232,799
566
I have been working to create a UIAlertView in Swift, but for some reason I can't get the statement right because I'm getting this error: Could not find an overload for 'init' that accepts the supplied arguments Here is how I have it written: let button2Alert: UIAlertView = UIAlertView(title: "Title", message: "message", delegate: self, cancelButtonTitle: "OK", otherButtonTitles: nil) Then to call it I'm using: button2Alert.show() As of right now it is crashing and I just can't seem to get the syntax right.
From the UIAlertView class: // UIAlertView is deprecated. Use UIAlertController with a preferredStyle of UIAlertControllerStyleAlert instead On iOS 8, you can do this: let alert = UIAlertController(title: "Alert", message: "Message", preferredStyle: UIAlertControllerStyle.Alert) alert.addAction(UIAlertAction(title: "Click", style: UIAlertActionStyle.Default, handler: nil)) self.presentViewController(alert, animated: true, completion: nil) Now UIAlertController is a single class for creating and interacting with what we knew as UIAlertViews and UIActionSheets on iOS 8. Edit: To handle actions: alert.addAction(UIAlertAction(title: "OK", style: .Default, handler: { action in switch action.style { case .Default: print("default") case .Cancel: print("cancel") case .Destructive: print("destructive") } })) Edit for Swift 3: let alert = UIAlertController(title: "Alert", message: "Message", preferredStyle: UIAlertControllerStyle.alert) alert.addAction(UIAlertAction(title: "Click", style: UIAlertActionStyle.default, handler: nil)) self.present(alert, animated: true, completion: nil) Edit for Swift 4.x: let alert = UIAlertController(title: "Alert", message: "Message", preferredStyle: .alert) alert.addAction(UIAlertAction(title: "OK", style: .default, handler: { action in switch action.style { case .default: print("default") case .cancel: print("cancel") case .destructive: print("destructive") } })) self.present(alert, animated: true, completion: nil)
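As a further sketch of what UIAlertController can do that UIAlertView could not do as cleanly, here is a Swift 4+ example adding a text field and a cancel action (the titles and strings are just placeholders):

let alert = UIAlertController(title: "Sign in", message: "Enter your name", preferredStyle: .alert)

alert.addTextField { textField in
    textField.placeholder = "Name"
}

alert.addAction(UIAlertAction(title: "OK", style: .default) { [weak alert] _ in
    // Read the text field back out of the alert when the user confirms
    let name = alert?.textFields?.first?.text ?? ""
    print("entered: \(name)")
})
alert.addAction(UIAlertAction(title: "Cancel", style: .cancel, handler: nil))

self.present(alert, animated: true, completion: nil)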
Swift
24,022,479
564