question | answer | tag | question_id | score
---|---|---|---|---|
I'm transforming a Spring Boot application from Spring Boot 1 (with the Prometheus Simpleclient) to Spring Boot 2 (which uses Micrometer).
I'm stumped at transforming the labels we have with Spring Boot 1 and Prometheus to concepts in Micrometer. For example (with Prometheus):
private static Counter requestCounter =
Counter.build()
.name("sent_requests_total")
.labelNames("method", "path")
.help("Total number of rest requests sent")
.register();
...
requestCounter.labels(request.getMethod().name(), path).inc();
The tags of Micrometer seem to be something different than the labels of Prometheus: All values have to be predeclared, not only the keys.
Can one use Prometheus' labels with Spring (Boot) and Micrometer?
| Further digging showed that only the keys of Micrometer tags have to be predeclared. The builder does take key/value pairs, but the predeclared values don't matter; the keys just have to be specified again when using the metric.
This works:
private static final String COUNTER_BATCHMANAGER_SENT_REQUESTS = "batchmanager.sent.requests";
private static final String METHOD_TAG = "method";
private static final String PATH_TAG = "path";
private final Counter requestCounter;
...
requestCounter = Counter.builder(COUNTER_BATCHMANAGER_SENT_REQUESTS)
.description("Total number of rest requests sent")
.tags(METHOD_TAG, "", PATH_TAG, "")
.register(meterRegistry);
...
Metrics.counter(COUNTER_BATCHMANAGER_SENT_REQUESTS, METHOD_TAG, methodName, PATH_TAG, path)
.increment();
| Micrometer | 49,170,093 | 13 |
I tried to obtain these measurements from Prometheus:
increase(http_server_requests_seconds_count{uri="myURI"}[10s])
increase(http_server_requests_seconds_count{uri="myURI"}[30s])
rate(http_server_requests_seconds_count{uri="myURI"}[10s])
rate(http_server_requests_seconds_count{uri="myURI"}[30s])
Then I ran a Python script where 5 threads are created, each of them hitting this myURI endpoint.
These are the values I received (as seen on Grafana):
0
6
0
0.2
I expected to receive these (but didn't):
5 (as in the last 10 seconds this endpoint received 5 calls)
5 (as in the last 30 seconds this endpoint received 5 calls)
0.5 (the endpoint received 5 calls in 10 seconds 5/10)
0.167 (the endpoint received 5 calls in 30 seconds 5/30)
Can someone explain, using my example, the formula behind this function and a way to achieve the metrics/values I expect?
| Prometheus calculates increase(m[d]) at timestamp t in the following way:
It fetches the raw samples stored in the database for time series matching m over the time range (t-d .. t]. Note that samples at timestamp t-d aren't included in the time range, while samples at t are included. Every selected time series is expected to be a counter, since increase() works only with counters.
It calculates the difference between the last and the first raw sample value on the selected time range, individually per each time series matching m. Note that Prometheus doesn't take into account the difference between the last raw sample just before the (t-d .. t] time range and the first raw sample inside it. This may lead to lower-than-expected results in some cases.
It extrapolates the results obtained at step 2 if the first and/or the last raw samples are located too far from the time range boundaries (t-d .. t]. This may lead to unexpected results, for example fractional results for integer counters. See this issue for details.
Prometheus calculates rate(m[d]) as increase(m[d]) / d, so rate() results may also be unexpected sometimes. Prometheus developers are aware of these issues and are going to fix them eventually - see these design docs.
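To make steps 1 and 2 concrete, here is a toy sketch (plain Java, purely illustrative and not Prometheus source code) of the computation described above; it ignores step 3's extrapolation and counter resets:
public class PromMath {
    // increase over (t-d, t]: last raw sample minus first raw sample in the window
    static double increase(double[] timestamps, double[] values, double t, double d) {
        Double first = null;
        double last = 0;
        for (int i = 0; i < timestamps.length; i++) {
            if (timestamps[i] > t - d && timestamps[i] <= t) { // half-open window (t-d, t]
                if (first == null) first = values[i];
                last = values[i];
            }
        }
        // with zero or one samples in the window, the increase is 0
        return first == null ? 0 : last - first;
    }

    // rate is simply the increase averaged over the window length in seconds
    static double rate(double[] timestamps, double[] values, double t, double d) {
        return increase(timestamps, values, t, d) / d;
    }
}
With a typical 15s scrape interval, a 10s window often contains at most one sample, which reproduces the 0 values observed above.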
In the meantime you can use VictoriaMetrics - this is a Prometheus-like monitoring solution I work on. It provides increase() and rate() functions that are free from the issues mentioned above.
| Micrometer | 70,835,778 | 12 |
In Prometheus I've got 14 seconds for http_server_requests_seconds_max.
http_server_requests_seconds_max{exception="None",method="GET",outcome="SUCCESS",status="200",uri="/v1/**",} 14.3
Does this mean the total time for the request from the server to the client or does it only measure the time in the Spring container?
I'm also measuring the time inside Spring to process the data, and it only takes 2.5 seconds.
What I want to know is if it is a problem in Spring or if it is because of a slow network.
Any Ideas?
| From the copy of the Spring documentation at Archive.org (or the current Micrometer.io page): when the @Timed annotation is used on a function or a controller, it produces the http_server_requests metric,
which by default contains dimensions for the HTTP status of the
response, HTTP method, exception type if the request fails, and the
pre-variable substitution parameterized endpoint URI.
The http_server_requests_seconds_max is then computed as explained in this discussion
public static final Statistic MAX
The maximum amount recorded. When this represents a time, it is
reported in the monitoring system's base unit of time.
In your case, it means that one of your endpoints in the range /v1/** (i.e. any of them) called a @Timed function that took 14 seconds to execute.
For more information you would need percentile or histogram metrics. It may happen only once, typically at the first request, when caches need to be built or services need to warm up.
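As an illustration, here is a minimal sketch of the kind of endpoint that produces this metric (class and path names are made up; it assumes Spring Boot 2 with Micrometer on the classpath):
import io.micrometer.core.annotation.Timed;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class ExampleController {
    // Each request is recorded under http_server_requests with uri, method,
    // status and exception tags; http_server_requests_seconds_max reports the
    // longest recorded time for the matching tag combination.
    @Timed
    @GetMapping("/v1/example")
    public String example() throws InterruptedException {
        Thread.sleep(2500); // simulate 2.5s of processing inside Spring
        return "ok";
    }
}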
| Micrometer | 60,206,507 | 11 |
Recently I switched to Spring Boot 2 with Micrometer. As I got these shiny new metrics, I talked with our DevOps guys and we started exporting them to Telegraf.
To distinguish between different applications and application nodes, we decided to use tags. This works perfectly for custom metrics, but now I've started thinking about the pre-defined ones. To achieve the same for the default metrics, I need the ability to add extra tags to them as well.
Is it possible to achieve this? Am I doing this right?
EDIT: I tried next approach:
@Component
public class MyMetricsImpl implements MyMetrics {
@Autowired
protected MyProperties myProperties;
@Autowired
protected MeterRegistry meterRegistry;
@PostConstruct
public void initialize() {
this.meterRegistry.config()
.commonTags(commonTags());
}
@Override
public List<Tag> commonTags() {
List<Tag> tags = new ArrayList<>();
tags.add(Tag.of("application", myProperties.getApplicationName()));
tags.add(Tag.of("node", myProperties.getNodeName()));
return tags;
}
}
The problem is that my metrics behave correctly and even some of the Boot's metrics (at least http.server.requests) see my tags. But jvm.*, system.*, tomcat.* and many others still don't have the needed tags.
| If you are looking for common tags support, you can get it by registering a MeterFilter.
See this commit or this branch for an example.
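Since those links may rot, here is a minimal sketch of that approach (the configuration class name is illustrative; the tag values match the response shown below):
import io.micrometer.core.instrument.MeterRegistry;
import org.springframework.boot.actuate.autoconfigure.metrics.MeterRegistryCustomizer;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class MetricsConfig {
    // commonTags() installs a MeterFilter under the hood; because the
    // customizer runs before meters are registered, jvm.*, system.* and
    // tomcat.* metrics pick up the tags as well.
    @Bean
    public MeterRegistryCustomizer<MeterRegistry> commonTags() {
        return registry -> registry.config().commonTags("stack", "prod", "region", "us-east-1");
    }
}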
With the upcoming Spring Boot 2.1.0.M1, you can use the following properties:
management.metrics.tags.*= # Common tags that are applied to every meter.
See the reference for details.
UPDATED:
As the question has been updated, I checked the updated question with this MeterFilter-based approach and confirmed it's working as follows:
Request: http://localhost:8080/actuator/metrics/jvm.gc.memory.allocated
Response:
{
"name" : "jvm.gc.memory.allocated",
"measurements" : [ {
"statistic" : "COUNT",
"value" : 1.98180864E8
} ],
"availableTags" : [ {
"tag" : "stack",
"values" : [ "prod" ]
}, {
"tag" : "region",
"values" : [ "us-east-1" ]
} ]
}
I didn't check the approach provided in the updated question, but I'd just use the proven MeterFilter-based approach unless there's a reason to stick with yours.
2nd UPDATED:
I looked into the approach and was able to reproduce it with this branch.
It's too late to apply common tags in @PostConstruct, as some metrics have already been registered by then. The reason http.server.requests works is that it is only registered on the first request. If you're interested, put a breakpoint where the filters are applied.
In short, try the above approach which is similar to the upcoming Spring Boot out-of-box support.
| Micrometer | 51,552,889 | 11 |
After upgrading from spring-boot-parent version 2.5.5 to 2.6.0, I started seeing these error messages spamming the logs:
[INFO] [stdout] 2022-01-11 13:40:01.157 WARN 76859 --- [ udp-epoll-2] i.m.s.reactor.netty.channel.FluxReceive : [6d1243de, L:/127.0.0.1:58160 - R:localhost/127.0.0.1:8125] An exception has been observed post termination, use DEBUG level to see the full stack: java.net.PortUnreachableException: readAddress(..) failed: Connection refused
Using DEBUG level:
[INFO] [stdout] 2022-01-11 13:38:29.733 WARN 76479 --- [ udp-epoll-2] i.m.s.reactor.netty.channel.FluxReceive : [43aad7ce, L:/127.0.0.1:38108 - R:localhost/127.0.0.1:8125] An exception has been observed post termination
[INFO] [stdout]
[INFO] [stdout] java.net.PortUnreachableException: readAddress(..) failed: Connection refused
[INFO] [stdout] at io.micrometer.shaded.io.netty.channel.epoll.EpollDatagramChannel.translateForConnected(EpollDatagramChannel.java:575)
[INFO] [stdout] at io.micrometer.shaded.io.netty.channel.epoll.EpollDatagramChannel.access$400(EpollDatagramChannel.java:56)
[INFO] [stdout] at io.micrometer.shaded.io.netty.channel.epoll.EpollDatagramChannel$EpollDatagramChannelUnsafe.epollInReady(EpollDatagramChannel.java:503)
[INFO] [stdout] at io.micrometer.shaded.io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:480)
[INFO] [stdout] at io.micrometer.shaded.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:378)
[INFO] [stdout] at io.micrometer.shaded.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:986)
[INFO] [stdout] at io.micrometer.shaded.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
[INFO] [stdout] at io.micrometer.shaded.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
[INFO] [stdout] at java.base/java.lang.Thread.run(Thread.java:833)
[INFO] [stdout] Caused by: io.micrometer.shaded.io.netty.channel.unix.Errors$NativeIoException: readAddress(..) failed: Connection refused
I can't find much about it in the release notes, except for a dependency upgrade that seems relevant:
Upgrade to Micrometer 1.8.0 #28516
But the linked issue is not informative. Neither were Micrometer's own release notes for version 1.8.0 (except for the JVM crash notice, which we did run into - a surprising and rather unfortunate side-effect of upgrading Spring Boot, but I digress).
We don't (consciously) use Micrometer, so I tried disabling it in the application.yml file (micrometer.enabled: false and instrumentation.micrometer.enabled: false), but to no avail.
Despite lots of googling (for various permutations of elements of the error message and digging through code on GitHub), I haven't been able to find how to fix this message, let alone figure out what causes it.
Now I could of course suppress this message in the logging configuration, but I'd like to know what it's actually trying to achieve here, and whether it is useful for our application. And if not, disable it completely.
| Assuming StatsD is not actually used or configured on your side (since it's pointed at localhost), you can disable the export by setting
management.metrics.export.statsd.enabled
to false.
| Micrometer | 70,667,172 | 10 |
I use Micrometer gauges in a Spring Boot 2 application to track statuses of objects. On status change, the statusArrived() method is called. This function should update the gauge related to that object.
Here is my current implementation:
public class PrometheusStatusLogger {
private int currentStatus;
public void statusArrived(String id, int value) {
currentStatus = value;
Tags tags = Tags.of("product_id", id);
Gauge.builder("product_status",this::returnStatus)
.tags(tags)
.strongReference(true)
.register(Metrics.globalRegistry);
}
private int returnStatus(){
return currentStatus;
}
}
This works quite well, but the problem is that when this method is called, the values of all gauges are updated. I would like only the gauge with the given product_id to be updated.
Input:
statusArrived(1, 2);
statusArrived(2, 3);
Current output:
product_status{product_id=1} 3
product_status{product_id=2} 3
All gauges are updated
Desired output:
product_status{product_id=1} 2
product_status{product_id=2} 3
Only the gauge with the given product_id tag is updated.
How can I achieve that ?
| Since all your gauges reference the same currentStatus field, every gauge's source changes when a new value comes in. Instead, use a map to track the current status of each id:
public class PrometheusStatusLogger {
private Map<String, Integer> currentStatuses = new HashMap<>();
public void statusArrived(String id, int value) {
if(!currentStatuses.containsKey(id)) {
Tags tags = Tags.of("product_id", id);
Gauge.builder("product_status",currentStatuses, map -> map.get(id))
.tags(tags)
.register(Metrics.globalRegistry);
}
currentStatuses.put(id, value);
}
}
| Micrometer | 60,171,522 | 10 |
I'm trying out Spring Boot Actuator and looking at the "/actuator/metrics/jvm.memory.max" endpoint.
I am also running my Spring Boot app with the following JVM option:
-Xmx104m
I created an endpoint ("/memory") which returns the total, free, used and max memory for the app. I used the Runtime.getRuntime().getXXX() methods for this.
My question is that the value Spring Boot's "jvm.memory.max" shows me (in bytes) matches neither the -Xmx value nor what the "/memory" endpoint shows me.
Any thoughts why this mismatch?
| Spring Boot uses Micrometer for its metrics support. The jvm.memory.max metric is produced by Micrometer's JvmMemoryMetrics class using MemoryPoolMXBean.getUsage().getMax().
The MemoryPoolMXBean exposes information about both heap and non-heap memory, and Micrometer separates these using tags on the jvm.memory.max metric. In the output shown in the question, the value is the heap and non-heap memory combined, which is why it is not the same as the heap-specific value configured with -Xmx.
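To see where the number comes from, here is a small sketch (not Micrometer's actual code) that sums MemoryPoolMXBean.getUsage().getMax() per area, the same underlying source Micrometer reads:
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryPoolMXBean;
import java.lang.management.MemoryType;

public class MaxMemoryDemo {
    public static void main(String[] args) {
        long heap = 0, nonHeap = 0;
        for (MemoryPoolMXBean pool : ManagementFactory.getMemoryPoolMXBeans()) {
            long max = pool.getUsage().getMax(); // -1 means the max is undefined
            if (max < 0) continue;
            if (pool.getType() == MemoryType.HEAP) heap += max; else nonHeap += max;
        }
        // The heap total tracks -Xmx; adding non-heap explains the larger number.
        System.out.println("heap max = " + heap + ", non-heap max = " + nonHeap);
    }
}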
You can drill down into a metric using its tags and query parameters. This is described in the Actuator's documentation. For example, to get the max heap memory, you would use http://localhost:9001/actuator/metrics/jvm.memory.max?tag=area:heap.
| Micrometer | 54,591,870 | 10 |
We are using Nagios to monitor our network with great success. However, we have a syslog for critical application errors, and while I set up check_log, it doesn't seem to work as well as monitoring a device.
The issues are:
It only shows the last entry
There doesn't seem to be a way to acknowledge the critical error and
return the monitor to a good state
Is Nagios the wrong tool, or are we just not setting up the service monitoring right?
Here are my entries
# log file
define command{
command_name check_log
command_line $USER1$/check_log -F /var/log/applications/appcrit.log -O /tmp/appcrit.log -q ?
}
# Define the log monitoring service
define service{
name logfile-check ;
use generic-service ;
check_period 24x7 ;
max_check_attempts 1 ;
normal_check_interval 5 ;
retry_check_interval 1 ;
contact_groups admins ;
notification_options w,u,c,r ;
notification_period 24x7 ;
register 0 ;
}
define service{
use logfile-check
host_name localhost
service_description CritLogFile
check_command check_log
}
| For monitoring logs with Nagios, typically the log checker will return a warning only for newly discovered error messages each time it is invoked (so it must retain some state in order to know to ignore them on subsequent runs). Therefore I usually set:
max_check_attempts 1
is_volatile 1
This causes Nagios to send out the alert immediately, but only once, and then go back to normal.
My favorite log checker is logwarn, but I'm biased because I wrote it myself after not finding any existing ones that I liked. The logwarn package includes a Nagios plugin.
| Nagios | 2,373,212 | 25 |
Maybe a strange and green question, however
Is there anything that Nagios or Ganglia can do that the other can't?
In terms of monitoring, alerts in general.
I'm looking for a general solution for my school's computer club. In my mind it's like comparing Norton vs Avast: both are antivirus products, but are there any specific benefits that one has over the other? Or am I asking a very stupid question now?
thank you.
| Ganglia is aimed at monitoring compute grids, i.e. a bunch of servers working on the same task to achieve a common goal - such as a cluster of web servers.
Nagios is aimed at monitoring anything and everything - servers, services on servers, switches, network bandwidth via SNMP, etc. Nagios will send alerts based on set criteria (e.g., you can set it to email you if service x dies).
Note that they are not competing products, they are aimed at different scenarios. By the sounds of it, you need Nagios.
If you have a play around with some online demos, you should be able to get a feel for what product you need (and I think you'll agree with me that Nagios is more suited)
Nagios - https://en.wikipedia.org/wiki/Nagios (Wikipedia)
Ganglia - https://en.wikipedia.org/wiki/Ganglia_(software) (Wikipedia)
| Nagios | 14,494,948 | 25 |
My situation: I'm working on a web monitoring dashboard that assembles informations from different applications and sources and generate graphs, info graphics and reports.
The applications I'm trying to integrate are CACTI, Nagios, and other local private monitoring tools. I had no problem integrating these applications, except for Nagios (I don't have much experience with it).
What I want to know is if there is a way to use Nagios as a Web Service, or something similar, so I can expose some of the information and use it to generate my own reports on my dashboard application.
Is it possible to do that without any epic effort?
thanks for reading.
| Nagios 4.x starting with version 4.4 now includes CGIs for JSON output. Installing the newest version of Nagios might be the easiest way to go.
See the announcement here.
Review the slides from Nagios World Conference 2013 here.
| Nagios | 7,768,215 | 16 |
I'm trying to monitor actual URLs, and not only hosts, with Nagios, as I operate a shared server with several websites, and I don't think it's enough just to monitor the basic HTTP service (I'm including at the very bottom of this question a small explanation of what I'm envisioning).
(Side note: please note that I have Nagios installed and running inside a chroot on a CentOS system. I built Nagios from source, and have used yum to install all needed dependencies into this root, etc...)
I first found check_url, but after installing it into /usr/lib/nagios/libexec, I kept getting a "return code of 255 is out of bounds" error. That's when I decided to start writing this question (but wait! There's another plugin I decided to try first!)
After reviewing This Question, which had practically the same problem I'm having with check_url, I decided to open up a new question on the subject because
a) I'm not using NRPE with this check
b) I tried the suggestions made on the earlier question to which I linked, but none of them worked. For example...
./check_url some-domain.com | echo $0
returns "0" (which indicates the check was successful)
I then followed the debugging instructions on Nagios Support to create a temp file called debug_check_url, and put the following in it (to then be called by my command definition):
#!/bin/sh
echo `date` >> /tmp/debug_check_url_plugin
echo $* >> /tmp/debug_check_url_plugin
/usr/local/nagios/libexec/check_url $*
Assuming I'm not in "debugging mode", my command definition for running check_url is as follows (inside command.cfg):
# 'check_url' command definition
define command{
command_name check_url
command_line $USER1$/check_url $url$
}
(Incidentally, you can also view what I was using in my service config file at the very bottom of this question)
Before publishing this question, however, I decided to give 1 more shot at figuring out a solution. I found the check_url_status plugin, and decided to give that one a shot. To do that, here's what I did:
mkdir /usr/lib/nagios/libexec/check_url_status/
downloaded both check_url_status and utils.pm
Per the user comment / review on the check_url_status plugin page, I changed "lib" to the proper directory of /usr/lib/nagios/libexec/.
Run the following:
./check_url_status -U some-domain.com
When I ran the above command, I kept getting the following error:
bash-4.1# ./check_url_status -U mydomain.com
Can't locate utils.pm in @INC (@INC contains: /usr/lib/nagios/libexec/ /usr/local/lib/perl5 /usr/local/share/perl5 /usr/lib/perl5/vendor_perl /usr/share/perl5/vendor_perl /usr/lib/perl5 /usr/share/perl5) at ./check_url_status line 34.
BEGIN failed--compilation aborted at ./check_url_status line 34.
So at this point, I give up, and have a couple of questions:
Which of these two plugins would you recommend? check_url or check_url_status?
(After reading the description of check_url_status, I feel that this one might be the better choice. Your thoughts?)
Now, how would I fix my problem with whichever plugin you recommended?
At the beginning of this question, I mentioned I would include a small explanation of what I'm envisioning. I have a file called services.cfg which is where I have all of my service definitions located (imagine that!).
The following is a snippet of my service definition file, which I wrote to use check_url (because at that time, I thought everything worked). I'll build a service for each URL I want to monitor:
###
# Monitoring Individual URLs...
#
###
define service{
host_name {my-shared-web-server}
service_description URL: somedomain.com
check_command check_url!somedomain.com
max_check_attempts 5
check_interval 3
retry_interval 1
check_period 24x7
notification_interval 30
notification_period workhours
}
| I was making things WAY too complicated.
The built-in / installed by default plugin, check_http, can accomplish what I wanted and more. Here's how I have accomplished this:
My Service Definition:
define service{
host_name myers
service_description URL: my-url.com
check_command check_http_url!http://my-url.com
max_check_attempts 5
check_interval 3
retry_interval 1
check_period 24x7
notification_interval 30
notification_period workhours
}
My Command Definition:
define command{
command_name check_http_url
command_line $USER1$/check_http -I $HOSTADDRESS$ -u $ARG1$
}
| Nagios | 9,246,557 | 13 |
I would like to monitor Elasticsearch using Nagios.
Basically, I want to know if Elasticsearch is up.
I think I can use the Elasticsearch Cluster Health API (see here)
and use the 'status' that I get back (green, yellow or red), but I still don't know how to use Nagios for that (Nagios is on one server and Elasticsearch is on another server).
Is there another way to do that?
EDIT :
I just found this: check_http_json. I think I'll try it.
| After a while, I've managed to monitor Elasticsearch using NRPE.
I wanted to use the Elasticsearch Cluster Health API - but I couldn't use it from another machine, due to security issues...
So, on the monitoring server I created a new service whose check_command is check_nrpe!check_elastic. And on the remote server, where Elasticsearch is, I've edited the nrpe.cfg file with the following:
command[check_elastic]=/usr/local/nagios/libexec/check_http -H localhost -u /_cluster/health -p 9200 -w 2 -c 3 -s green
Which is allowed, since this command is run from the remote server - so no security issues here...
It works!!!
I'll still try the check_http_json command that I posted in my question - but for now, my solution is good enough.
| Nagios | 10,276,989 | 10 |
I have the NRPE daemon running under xinetd on an Amazon EC2 instance, and the Nagios server on my local machine.
Running check_nrpe -H [amazon public IP] gives this error:
CHECK_NRPE: Error - Could not complete SSL handshake.
Both NRPE installations are the same version. Both are compiled with this option:
./configure --with-ssl=/usr/bin/openssl --with-ssl-lib=/usr/lib/i386-linux-gnu/
"allowed host" entry contains my local IP address.
What could be the possible reason of this error now??
| If you are running nrpe as a service, make sure you have this line in your nrpe.cfg on the client side:
# example 192. IP, yours will probably differ
allowed_hosts=127.0.0.1,192.168.1.100
You say that is done, however, if you are running nrpe under xinetd, make sure to edit the only_from directive in the file /etc/xinetd.d/nrpe.
Don't forget to restart the xinetd service:
service xinetd restart
| Nagios | 20,520,334 | 10 |
I am trying to read a file and split a cell in each line by a comma, and then display only the first and the second cells, which contain information regarding the latitude and the longitude.
This is the file:
time,latitude,longitude,type
2015-03-20T10:20:35.890Z,38.8221664,-122.7649994,earthquake
2015-03-20T10:18:13.070Z,33.2073333,-116.6891667,earthquake
2015-03-20T10:15:09.000Z,62.242,-150.8769,earthquake
My program:
def getQuakeData():
filename = input("Please enter the quake file: ")
readfile = open(filename, "r")
readlines = readfile.readlines()
Type = readlines.split(",")
x = Type[1]
y = Type[2]
for points in Type:
print(x,y)
getQuakeData()
When I try to execute this program, it gives me an error
AttributeError: 'list' object has no attribute 'split'
Please help me!
| I think you've actually got a wider confusion here.
The initial error is that you're trying to call split on the whole list of lines, and you can't split a list of strings, only a string. So, you need to split each line, not the whole thing.
And then you're doing for points in Type, and expecting each such point to give you a new x and y. But that isn't going to happen. Type is just two values, x and y, so first points will be x, and then points will be y, and then you'll be done. So, again, you need to loop over each line and get the x and y values from each line, not loop over a single Type from a single line.
So, everything has to go inside a loop over every line in the file, and do the split into x and y once for each line. Like this:
def getQuakeData():
filename = input("Please enter the quake file: ")
readfile = open(filename, "r")
for line in readfile:
Type = line.split(",")
x = Type[1]
y = Type[2]
print(x,y)
getQuakeData()
As a side note, you really should close the file, ideally with a with statement, but I'll get to that at the end.
Interestingly, the problem here isn't that you're being too much of a newbie, but that you're trying to solve the problem in the same abstract way an expert would, and just don't know the details yet. This is completely doable; you just have to be explicit about mapping the functionality, rather than just doing it implicitly. Something like this:
def getQuakeData():
filename = input("Please enter the quake file: ")
readfile = open(filename, "r")
readlines = readfile.readlines()
Types = [line.split(",") for line in readlines]
xs = [Type[1] for Type in Types]
ys = [Type[2] for Type in Types]
for x, y in zip(xs, ys):
print(x,y)
getQuakeData()
Or, a better way to write that might be:
def getQuakeData():
filename = input("Please enter the quake file: ")
# Use with to make sure the file gets closed
with open(filename, "r") as readfile:
# no need for readlines; the file is already an iterable of lines
# also, using generator expressions means no extra copies
types = (line.split(",") for line in readfile)
# iterate tuples, instead of two separate iterables, so no need for zip
xys = ((type[1], type[2]) for type in types)
for x, y in xys:
print(x,y)
getQuakeData()
Finally, you may want to take a look at NumPy and Pandas, libraries which do give you a way to implicitly map functionality over a whole array or frame of data almost the same way you were trying to.
| Split | 30,042,334 | 28 |
I have a file which contains data as follows:
recv(1178884, NULL, 4294967267, 0) = 0
......
......
My source code is:
try (BufferedReader br = new BufferedReader(new FileReader("D:\\smsTrace.txt"))) {
String sCurrentLine;
while ((sCurrentLine = br.readLine()) != null) {
String sysCallName = sCurrentLine;
String[] sysCallTokens = sysCallName.split("(");
System.out.println(sCurrentLine);
}
} catch (IOException e) {
e.printStackTrace();
}
When I split with sysCallName.split(",") it works fine, but when I use it as above, it throws the following exception.
Exception in thread "main" java.util.regex.PatternSyntaxException: Unclosed group near index 1
(
^
at java.util.regex.Pattern.error(Unknown Source)
at java.util.regex.Pattern.accept(Unknown Source)
at java.util.regex.Pattern.group0(Unknown Source)
at java.util.regex.Pattern.sequence(Unknown Source)
at java.util.regex.Pattern.expr(Unknown Source)
at java.util.regex.Pattern.compile(Unknown Source)
at java.util.regex.Pattern.<init>(Unknown Source)
at java.util.regex.Pattern.compile(Unknown Source)
at java.lang.String.split(Unknown Source)
at java.lang.String.split(Unknown Source)
at fileReading.main(fileReading.java:19)
Any idea what I am doing wrong?
| You have to escape the opening bracket:
sysCallName.split("\\(");
Because split() expects a regular expression, and brackets are used to mark capturing groups in a regex. So they need to be in pairs. If you just want a bracket it needs to be escaped.
| Split | 13,948,751 | 28 |
I have a concatenated string like this:
my_str = 'str1;str2;str3;'
and I would like to apply the split function to it, then convert the resulting list to a tuple, and get rid of any empty string resulting from the split (notice the trailing ';' at the end).
tuple(filter(None, my_str.split(';')))
Is there any more efficient (in terms of speed and space) way to do it?
| How about this?
tuple(my_str.split(';')[:-1])
('str1', 'str2', 'str3')
You split the string at the ; character, and pass all off the substrings (except the last one, the empty string) to tuple to create the result tuple.
| Split | 11,001,247 | 28 |
I'm trying to split a string at the first space and only keep the 2nd half. So if the input was "1. top of steel", the output would be "top of steel". I'm working with a few different examples from here and I can't get it to work. Thoughts? Thanks.
| var myString = "1. top of steel";
var newString = myString.Remove(0, myString.IndexOf(' ') + 1);
| Split | 10,389,805 | 28 |
I have a string:
String str = "a + b - c * d / e < f > g >= h <= i == j";
I want to split the string on all of the operators, but include the operators in the array, so the resulting array looks like:
[a , +, b , -, c , *, d , /, e , <, f , >, g , >=, h , <=, i , ==, j]
I've got this currently:
public static void main(String[] args) {
String str = "a + b - c * d / e < f > g >= h <= i == j";
String reg = "((?<=[<=|>=|==|\\+|\\*|\\-|<|>|/|=])|(?=[<=|>=|==|\\+|\\*|\\-|<|>|/|=]))";
String[] res = str.split(reg);
System.out.println(Arrays.toString(res));
}
This is pretty close, it gives:
[a , +, b , -, c , *, d , /, e , <, f , >, g , >, =, h , <, =, i , =, =, j]
Is there something I can do to this to make the multiple character operators appear in the array like I want them to?
And as a secondary question that isn't nearly as important, is there a way in the regex to trim the whitespace off from around the letters?
String[] ops = str.split("\\s*[a-zA-Z]+\\s*");     // the operators (ops[0] is an empty leading match)
String[] notops = str.split("\\s*[^a-zA-Z]+\\s*"); // the operands, whitespace trimmed
String[] res = new String[ops.length + notops.length - 1];
for (int i = 0; i < res.length; i++)               // interleave operands and operators
    res[i] = i % 2 == 0 ? notops[i / 2] : ops[i / 2 + 1];
This should do it. Everything nicely stored in res.
| Split | 9,856,916 | 28 |
Emacs usually automatically splits the window horizontally.
How can I change this default behaviour so that the window is split vertically by default?
| Use (setq split-width-threshold nil) for vertical split.
Use (setq split-width-threshold 1) for horizontal split.
Use http://www.emacswiki.org/emacs/ToggleWindowSplit to toggle between them.
Use http://www.emacswiki.org/emacs/TransposeWindows to transponse them.
| Split | 7,997,590 | 28 |
In Ruby, I did:
"string1::string2".split("::")
In Scala, I can't find how to split using a string, not a single character.
| The REPL is even easier than Stack Overflow. I just pasted your example as is.
Welcome to Scala version 2.8.1.final (Java HotSpot Server VM, Java 1.6.0_22).
Type in expressions to have them evaluated.
Type :help for more information.
scala> "string1::string2".split("::")
res0: Array[java.lang.String] = Array(string1, string2)
| Split | 5,522,572 | 28 |
I have a source file where 2 features have been added. In order to allow cherry-picking, I'd like to commit that in 2 phases: one for each feature. Until now, in similar situations, using git add -p served me well, to commit one feature while leaving the local files in their final stage.
However, I now have the problem that git add -p wants to stage a hunk that includes edits for both features. Even though the edits are on separate lines, s (for "split") no longer wants to split up the hunk into smaller pieces...
In short: I can't separate the changes for the 2 features this way. Is there a way to manually edit the patch, for example using vi, without actually changing the original file?
| As Alan says, edit the patch by pressing e (instead of s) during git add -p. This will launch your editor with that hunk of the patch so that you can manually edit it. There are comments within the text that explain how to properly discard modifications and it's actually pretty easy.
When you are done, note that you can test it with only the changes you've just added by doing git stash --keep-index. The changes you did not add to the index will be stashed away and you are now free to test just the changes that you are about to commit. When done, simply git stash pop or git stash apply to get the other changes back.
| Split | 2,333,828 | 28 |
Basically, I want to go from 1) to 2)
I usually do this by splitting horizontally first and then vertically, but as I want this to do three-way diffs, it is much handier to start vim by running:
$ vimdiff file1 file2 file3
And then doing something to open the split window below.
1)
+----+----+----+
¦ ¦ ¦ ¦
¦ f1 ¦ f2 ¦ f3 ¦
¦ ¦ ¦ ¦
+----+----+----+
2)
+----+----+----+
¦ ¦ ¦ ¦
¦ f1 ¦ f2 ¦ f3 ¦
+----+----+----+
¦ f4 ¦
+--------------+
Does anyone know of a way to do this?
| use :botright split or :bo sp, it does what you want
| Split | 1,187,511 | 28 |
I've been switching some windows in VIM from vertical to horizontal splits and back using:
CTRL-W + K
CTRL-W + L
CTRL-W + J
CTRL-W + H
After doing this a few times the cursor disappeared. I can still type, and the status bar at the bottom still shows me my location, but there's no blinking cursor. Any ideas regarding:
Why does this happen?
How do I get the cursor back?
I'm using vim 7.2 on Linux
| I have the same problem and I have used a couple of work-arounds that work for me:
Maximize gvim window and then click on the maximize button again to bring it to original size. This brings back the cursor.
Run some shell command e.g., !echo > /dev/null - this seems to bring back the cursor as well.
I am experimenting whether doing the following (remove the left side scroll bar completely) fixes this problem completely or not - this seems to work in limited experiments but the jury is still out on this :)
set guioptions-=L
set guioptions=-l
Osho
| Split | 1,025,762 | 28 |
I have a small problem with something I need to do in school...
My task is to get a raw input string from a user (text = raw_input())
and I need to print the first and final words of that string.
Can someone help me with that? I have been looking for an answer all day...
| You have to firstly convert the string to list of words using str.split and then you may access it like:
>>> my_str = "Hello SO user, How are you"
>>> word_list = my_str.split() # list of words
# first word v v last word
>>> word_list[0], word_list[-1]
('Hello', 'you')
From Python 3.x, you may simply do:
>>> first, *middle, last = my_str.split()
| Split | 41,228,115 | 27 |
I want to remove digits from the end of a string, but I have no idea how.
Can the split() method work? How can I make that work?
The initial string looks like asdfg123, and I only want asdfg instead.
Thanks for your help!
| No, split would not work, because split only can work with a fixed string to split on.
You could use the str.rstrip() method:
import string
cleaned = yourstring.rstrip(string.digits)
This uses the string.digits constant as a convenient definition of what needs to be removed.
or you could use a regular expression to replace digits at the end with an empty string:
import re
cleaned = re.sub(r'\d+$', '', yourstring)
| Split | 40,691,451 | 27 |
I'm looking for something along the lines of
str_split_whole_word($longString, $x)
Where $longString is a collection of sentences, and $x is the character length for each line. It can be fairly long, and I want to basically split it into multiple lines in the form of an array.
For example:
$longString = 'I like apple. You like oranges. We like fruit. I like meat, also.';
$lines = str_split_whole_word($longString, $x);
Desired output:
$lines = Array(
[0] = 'I like apple. You'
[1] = 'like oranges. We'
[2] = and so on...
)
| The easiest solution is to use wordwrap(), and explode() on the new line, like so:
$array = explode( "\n", wordwrap( $str, $x));
Where $x is a number of characters to wrap the string on.
| Split | 11,254,787 | 27 |
In Java, I am trying to split on the ^ character, but it is failing to recognize it. Escaping it as \^ throws a compile error.
Is this a special character or do I need to do something else to get it to recognize it?
String splitChr = "^";
String[] fmgStrng = aryToSplit.split(splitChr);
| The ^ is a special character in Java regex - it means "match the beginning" of an input.
You will need to escape it with "\\^". The double slash is needed to escape the \, otherwise Java's compiler will think you're attempting to use a special \^ sequence in a string, similar to \n for newlines.
\^ is not a special escape sequence though, so you will get compiler errors.
In short, use "\\^".
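For example (input string made up for illustration):
String[] parts = "a^b^c".split("\\^");
// parts => ["a", "b", "c"]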
| Split | 10,695,104 | 27 |
I'm working on a project that involves parsing a large CSV-formatted file in Perl, and I'm looking to make things more efficient.
My approach has been to split() the file into lines first, and then split() each line by commas to get the fields. But this is suboptimal, since at least two passes over the data are required (once to split into lines, then once again for each line). This is a very large file, so cutting processing in half would be a significant improvement to the entire application.
My question is, what is the most time efficient means of parsing a large CSV file using only built in tools?
note: Each line has a varying number of tokens, so we can't just ignore lines and split by commas only. Also we can assume fields will contain only alphanumeric ascii data (no special characters or other tricks). Also, i don't want to get into parallel processing, although it might work effectively.
edit
It can only involve built-in tools that ship with Perl 5.8. For bureaucratic reasons, I cannot use any third party modules (even if hosted on cpan)
another edit
Let's assume that our solution is only allowed to deal with the file data once it is entirely loaded into memory.
yet another edit
I just grasped how stupid this question is. Sorry for wasting your time. Voting to close.
| The right way to do it -- by an order of magnitude -- is to use Text::CSV_XS. It will be much faster and much more robust than anything you're likely to do on your own. If you're determined to use only core functionality, you have a couple of options depending on speed vs robustness.
About the fastest you'll get for pure-Perl is to read the file line by line and then naively split the data:
my $file = 'somefile.csv';
my @data;
open(my $fh, '<', $file) or die "Can't read file '$file' [$!]\n";
while (my $line = <$fh>) {
chomp $line;
my @fields = split(/,/, $line);
push @data, \@fields;
}
This will fail if any fields contain embedded commas. A more robust (but slower) approach would be to use Text::ParseWords. To do that, replace the split with this:
my @fields = Text::ParseWords::parse_line(',', 0, $line);
| Split | 3,065,095 | 27 |
I have an NSData object of approximately 1000kB in size. Now I want to transfer this via Bluetooth. It would be better if I had, let's say, 10 objects of 100kB. It comes to mind that I should use the -subdataWithRange: method of NSData.
I haven't really worked with NSRange. Well, I know how it works, but I can't figure out how to read from a given location with a length of 'to the end of file'... I've no idea how to do that.
Some code on how to split this into multiple 100kB NSData objects would really help me out here. (It probably involves the -length method to see how many objects should be made...?)
Thank you in advance.
| The following piece of code does the fragmentation without copying the data:
NSData* myBlob;
NSUInteger length = [myBlob length];
NSUInteger chunkSize = 100 * 1024;
NSUInteger offset = 0;
do {
NSUInteger thisChunkSize = length - offset > chunkSize ? chunkSize : length - offset;
NSData* chunk = [NSData dataWithBytesNoCopy:(char *)[myBlob bytes] + offset
length:thisChunkSize
freeWhenDone:NO];
offset += thisChunkSize;
// do something with chunk
} while (offset < length);
Sidenote: I should add that the chunk objects cannot safely be used after myBlob has been released (or otherwise modified). The chunk fragments point into memory owned by myBlob, so don't retain them unless you retain myBlob.
| Split | 2,899,020 | 27 |
str.rsplit([sep[, maxsplit]])
Return a list of the words in the string, using sep as the delimiter string. If maxsplit is given, at most maxsplit splits are done, the rightmost ones. If sep is not specified or None, any whitespace string is a separator. Except for splitting from the right, rsplit() behaves like split() which is described in detail below.
http://docs.python.org/library/stdtypes.html#str.rsplit
| String.prototype.rsplit = function(sep, maxsplit) {
var split = this.split(sep);
return maxsplit ? [ split.slice(0, -maxsplit).join(sep) ].concat(split.slice(-maxsplit)) : split;
}
This one behaves more like the Python version:
"blah,derp,blah,beep".rsplit(",",1) // [ 'blah,derp,blah', 'beep' ]
| Split | 5,202,085 | 26 |
I have a word list like below. I want to split the list on '.'. Is there any better or more useful code for this in Python 3?
a = ['this', 'is', 'a', 'cat', '.', 'hello', '.', 'she', 'is', 'nice', '.']
result = []
tmp = []
for elm in a:
if elm is not '.':
tmp.append(elm)
else:
result.append(tmp)
tmp = []
print(result)
# result: [['this', 'is', 'a', 'cat'], ['hello'], ['she', 'is', 'nice']]
Update
Add test cases to handle it correctly.
a = ['this', 'is', 'a', 'cat', '.', 'hello', '.', 'she', 'is', 'nice', '.']
b = ['this', 'is', 'a', 'cat', '.', 'hello', '.', 'she', 'is', 'nice', '.', 'yes']
c = ['.', 'this', 'is', 'a', 'cat', '.', 'hello', '.', 'she', 'is', 'nice', '.', 'yes']
def split_list(list_data, split_word='.'):
result = []
sub_data = []
for elm in list_data:
if elm is not split_word:
sub_data.append(elm)
else:
if len(sub_data) != 0:
result.append(sub_data)
sub_data = []
if len(sub_data) != 0:
result.append(sub_data)
return result
print(split_list(a)) # [['this', 'is', 'a', 'cat'], ['hello'], ['she', 'is', 'nice']]
print(split_list(b)) # [['this', 'is', 'a', 'cat'], ['hello'], ['she', 'is', 'nice'], ['yes']]
print(split_list(c)) # [['this', 'is', 'a', 'cat'], ['hello'], ['she', 'is', 'nice'], ['yes']]
| Using itertools.groupby
from itertools import groupby
a = ['this', 'is', 'a', 'cat', '.', 'hello', '.', 'she', 'is', 'nice', '.']
result = [
list(g)
for k,g in groupby(a,lambda x:x=='.')
if not k
]
print (result)
#[['this', 'is', 'a', 'cat'], ['hello'], ['she', 'is', 'nice']]
| Split | 47,604,449 | 26 |
In my Java application I need to find indices and split strings using the same "target" for both occasions. The target is simply a dot.
Finding indices (by indexOf and lastIndexOf) does not use regex, so
String target = ".";
String someString = "123.456";
int index = someString.indexOf(target); // index == 3
gives me the index I need.
However, I also want to use this "target" to split some strings. But now the target string is interpreted as a regex string. So I can't use the same target string as before when I want to split a string...
String target = ".";
String someString = "123.456";
String[] someStringSplit = someString.split(target); // someStringSplit is an empty array
So I need either of the following:
A way to split into an array by a non-regex target
A way to "convert" a non-regex target string into a regex string
Can someone help? Would you agree that it seems a bit odd of the standard Java platform to use regex for "split" while not using regex for "indexOf"?
| You need to escape your "target" in order to use it as a regex.
Try
String[] someStringSplit = someString.split(Pattern.quote(target));
and let me know if that helps.
| Split | 39,038,102 | 26 |
I have values being returned with 255 comma-separated values. Is there an easy way to split those into columns without writing 255 substr calls?
ROW | VAL
-----------
1 | 1.25, 3.87, 2, ...
2 | 5, 4, 3.3, ....
to
ROW | VAL | VAL | VAL ...
---------------------
1 |1.25 |3.87 | 2 ...
2 | 5 | 4 | 3.3 ...
| Beware! The regexp_substr expression of the format '[^,]+' will not return the expected value if there is a null element in the list and you want that item or one after it. Consider this example where the 4th element is NULL and I want the 5th element and thus expect the '5' to be returned:
SQL> select regexp_substr('1,2,3,,5,6', '[^,]+', 1, 5) from dual;
R
-
6
Surprise! It returns the 5th NON-NULL element, not the actual 5th element! Incorrect data returned and you may not even catch it. Try this instead:
SQL> select regexp_substr('1,2,3,,5,6', '(.*?)(,|$)', 1, 5, NULL, 1) from dual;
R
-
5
So, the above corrected REGEXP_SUBSTR says to look for the 5th occurrence of 0 or more comma-delimited characters followed by a comma or the end of the line (allows for the next separator, be it a comma or the end of the line) and when found return the 1st subgroup (the data NOT including the comma or end of the line).
The search match pattern '(.*?)(,|$)' explained:
( = Start a group
. = match any character
* = 0 or more matches of the preceding character
? = Match 0 or 1 occurrences of the preceding pattern
) = End the 1st group
( = Start a new group (also used for logical OR)
, = comma
| = OR
$ = End of the line
) = End the 2nd group
EDIT: More info added and simplified the regex.
See this post for more info and a suggestion to encapsulate this in a function for easy reuse: REGEX to select nth value from a list, allowing for nulls
It's the post where I discovered the format '[^,]+' has the problem. Unfortunately it's the regex format you will most commonly see as the answer for questions regarding how to parse a list. I shudder to think of all the incorrect data being returned by '[^,]+'!
| Split | 31,464,275 | 26 |
Here is the error:
Exception in thread "main" java.util.regex.PatternSyntaxException: Unclosed character class near index 3
], [
^
at java.util.regex.Pattern.error(Pattern.java:1924)
at java.util.regex.Pattern.clazz(Pattern.java:2493)
at java.util.regex.Pattern.sequence(Pattern.java:2030)
at java.util.regex.Pattern.expr(Pattern.java:1964)
at java.util.regex.Pattern.compile(Pattern.java:1665)
at java.util.regex.Pattern.<init>(Pattern.java:1337)
at java.util.regex.Pattern.compile(Pattern.java:1022)
at java.lang.String.split(String.java:2313)
at java.lang.String.split(String.java:2355)
at testJunior2013.J2.main(J2.java:31)
This is the area of the code that is causing the issues.
String[][] split = new String[1][rows];
split[0] = (Arrays.deepToString(array2d)).split("], ["); //split at the end of an array row
What does this error mean and what needs to be done to fix the code above?
|
TL;DR
You want:
.split("\\], \\[")`
Escape each square bracket twice — once for each context in which you need to strip them from their special meaning: within a Regular Expression first, and within a Java String secondly.
Consider using Pattern#quote when you need your entire pattern to be interpreted literally.
Explanation
String#split works with a Regular Expression but [ and ] are not standard characters, regex-wise: they have a special meaning in that context.
In order to strip them from their special meaning and simply match actual square brackets, they need to be escaped, which is done by preceding each with a backslash — that is, using \[ and \].
However, in a Java String, \ is not a standard character either, and needs to be escaped as well.
Thus, just to split on [, the String used is "\\[" and you are trying to obtain:
.split("\\], \\[")
A sensible alternative
However, in this case, you're not just semantically escaping a few specific characters in a Regular Expression, but actually wishing that your entire pattern be interpreted literally: there's a method to do just that 🙂
Pattern#quote is used to signify that the:
Metacharacters [...] in your pattern will be given no special meaning.
(from the Javadoc linked above)
I recommend, in this case, that you use the following, more sensible and readable:
.split(Pattern.quote("], ["))
| Split | 21,816,788 | 26 |
I have the following string:
"hello.world.foo.bar"
and I want to split this (with "." as the delimiter, getting only two elements, counting from the end) into the following:
["hello.world.foo", "bar"]
How can I accomplish this? Does a limit counted from the end exist?
| Use str.rsplit specifying maxsplit (the second argument) as 1:
>>> "hello.world.foo.bar".rsplit('.', 1) # <-- 1: maxsplit
['hello.world.foo', 'bar']
| Split | 20,312,851 | 26 |
I have an array of 18 values:
$array = array('a', 'b', 'c', 'd', 'e', 'f', 'g', 'h', 'i', 'j', 'k', 'l', 'm', 'n', 'o', 'p', 'q', 'r');
I want to split this array into 12 different arrays so it should look like this:
array(
0 => array('a', 'b'),
1 => array('c', 'd'),
2 => array('e', 'f'),
3 => array('g', 'h'),
4 => array('i', 'j'),
5 => array('k', 'l'),
6 => array('m'),
7 => array('n'),
8 => array('o'),
9 => array('p'),
10 => array('q'),
11 => array('r')
)
My function doesn't seem to work
function array_split($array, $parts){
return array_chunk($array, ceil(count($array) / $parts));
}
$result = array_split($array, 12);
because I get 9 different arrays instead of 12. It would return
array(
0 => array('a', 'b'),
1 => array('c', 'd'),
2 => array('e', 'f'),
3 => array('g', 'h'),
4 => array('i', 'j'),
5 => array('k', 'l'),
6 => array('m', 'n'),
7 => array('o', 'p'),
8 => array('q', 'r')
)
How would I go about doing this? Thanks.
| This simple function would work for you:
Usage
$array = range("a", "r"); // same as your array
print_r(alternate_chunck($array,12));
Output
Array
(
[0] => Array
(
[0] => a
[1] => b
)
[1] => Array
(
[0] => c
[1] => d
)
[2] => Array
(
[0] => e
[1] => f
)
[3] => Array
(
[0] => g
[1] => h
)
[4] => Array
(
[0] => i
[1] => j
)
[5] => Array
(
[0] => k
[1] => l
)
[6] => Array
(
[0] => m
)
[7] => Array
(
[0] => n
)
[8] => Array
(
[0] => o
)
[9] => Array
(
[0] => p
)
[10] => Array
(
[0] => q
)
[11] => Array
(
[0] => r
)
)
Update: The above might not be useful for most cases... here is another type of chunk:
$array = range("a", "r"); // same as your array
print_r(fill_chunck($array, 5));
Output
Array
(
[0] => Array
(
[0] => a
[1] => b
[2] => c
[3] => d
)
[1] => Array
(
[0] => e
[1] => f
[2] => g
[3] => h
)
[2] => Array
(
[0] => i
[1] => j
[2] => k
[3] => l
)
[3] => Array
(
[0] => m
[1] => n
[2] => o
[3] => p
)
[4] => Array
(
[0] => q
[1] => r
)
)
This would make sure that no group at any time has more than the computed maximum number of elements, whereas the other version has no such limitation.
Function Used
function alternate_chunck($array, $parts) {
$t = 0;
$result = array();
$max = ceil(count($array) / $parts);
foreach(array_chunk($array, $max) as $v) {
if ($t < $parts) {
$result[] = $v;
} else {
foreach($v as $d) {
$result[] = array($d);
}
}
$t += count($v);
}
return $result;
}
function fill_chunck($array, $parts) {
$t = 0;
$result = array_fill(0, $parts - 1, array());
$max = ceil(count($array) / $parts);
foreach($array as $v) {
count($result[$t]) >= $max and $t ++;
$result[$t][] = $v;
}
return $result;
}
| Split | 15,579,702 | 26 |
Is there a method in Scala to get the (single) head element of a List or Seq and the (collection) tail of the list? I know there's
def splitAt(n: Int): (List[A], List[A])
and I can easily grab the single item from the first list of the tuple. But is there any built in method that is basically this?
def splitAtHead: (Option[A], List[A])
Like I said, you can easily chain splitAt to return the right signature, but I figured a built in method might be able to save an intermediate tuple.
Edit:
@om-nom-nom's answer is correct, but this is why I couldn't use his 2nd version.
List[S](s1, s2, s3, s4).sortBy { _.f (h) } match {
case hd :: tail => recurse(tail)
}
| You can use pattern matching:
val hd::tail = List(1,2,3,4,5)
//hd: Int = 1
//tail: List[Int] = List(2, 3, 4, 5)
Or just .head/.tail methods:
val hd = foo.head
// hd: Int = 1
val hdOpt = foo.headOption
// hd: Option[Int] = Some(1)
val tl = foo.tail
// tl: List[Int] = List(2, 3, 4)
| Split | 14,804,159 | 26 |
I am trying to find out if there is a way to split the value of each iteration of a list comprehension only once but use it twice in the output.
As an example of the problem I am trying to solve, I have the string:
a = "1;2;4\n3;4;5"
And I would like to perform this:
>>> [(x.split(";")[1],x.split(";")[2]) for x in a.split("\n") if x.split(",")[1] != 5]
[('2', '4'), ('4', '5')]
Without the need for running split three times. So something like this (which is obviously invalid syntax, but hopefully enough to get the message across):
[(x[1],x[2]) for x.split(";") in a.split("\n") if x[1] != 5]
In this question I am not looking for fancy ways to get the 2nd and 3rd column of the string. It is just a way of providing a concrete example. I could of course, for the example, use:
[x.split(";")[1:3] for x in a.split("\n")]
The possible solutions I have thought of:
Not use a list comprehension
Leave it as is
Use the csv.DictReader, name my columns and something like StringIO to give it the input.
This is mostly something that would be a nice pattern to be able to use, rather than a specific case, so it's hard to answer the "why do you want to do this" or "what is this for" kind of questions.
Update: After reading the solution below, I went and ran some speed tests. I found, in my very basic tests, that the provided solution was 35% faster than the naive solution above.
| You could use a list comprehension wrapped around a generator expression:
[(x[1],x[2]) for x in (x.split(";") for x in a.split("\n")) if x[1] != 5]
| Split | 10,308,939 | 26 |
The default split method in Python treats consecutive spaces as a single delimiter. But if you specify a delimiter string, consecutive delimiters are not collapsed:
>>> 'aaa'.split('a')
['', '', '', '']
What is the most straightforward way to collapse consecutive delimiters? I know I could just remove empty strings from the result list:
>>> result = 'aaa'.split('a')
>>> result
['', '', '', '']
>>> result = [item for item in result if item]
But is there a more convenient way?
| This is about as concise as you can get:
string = 'aaa'
result = [s for s in string.split('a') if s]
Or you could switch to regular expressions:
string = 'aaa'
result = re.split('a+', string)
| Split | 6,478,845 | 26 |
Short version -- How do I do Python rsplit() in ruby?
Longer version -- If I want to split a string into two parts (name, suffix) at the first '.' character, this does the job nicely:
name, suffix = name.split('.', 2)
But if I want to split at the last (rightmost) '.' character, I haven't been able to come up with anything more elegant than this:
idx = name.rindex('.')
name, suffix = name[0..idx-1], name[idx+1..-1] if idx
Note that the original name string may not have a dot at all, in which case name should be untouched and suffix should be nil; it may also have more than one dot, in which case only the bit after the final one should be the suffix.
| String#rpartition does just that:
name, match, suffix = name.rpartition('.')
It was introduced in Ruby 1.8.7, so if running an earlier version you can use require 'backports/1.8.7/string/rpartition' for that to work.
| Split | 1,844,118 | 26 |
If, at a command prompt, I run
vimdiff file1 file2
I get a vim instance that has two files open side-by-side, something like this:
╔═══════╤═══════╗
║ │ ║
║ │ ║
║ file1 │ file2 ║
║ │ ║
║ │ ║
╚═══════╧═══════╝
This is very nice, but sometimes I want to open a third file to look at. I don't want to create another vertical split, because otherwise the lines will be so short I'd be scrolling horizontally all the time just to read them. But occupying a few lines at the bottom of the screen wouldn't hurt. So, how can I go from the above to the following:
╔═══════╤═══════╗
║ │ ║
║ file1 │ file2 ║
║ │ ║
╟───────┴───────╢
║ file3 ║
╚═══════════════╝
I've tried using :sp file3, but I just end up with this (supposing I ran the command while the cursor was in file1):
╔═══════╤═══════╗
║ file3 │ ║
║ │ ║
╟───────┤ file2 ║
║ file1 │ ║
║ │ ║
╚═══════╧═══════╝
Thanks in advance for your help!
| Use
:botright split
and open a new file inside.
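For example, from either diff window:
:botright split file3
opens file3 in a new full-width window at the bottom of the tab.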
| Split | 859,383 | 26 |
I have the following process which uses group_split of dplyr:
library(tidyverse)
set.seed(1)
iris %>% sample_n(size = 5) %>%
group_by(Species) %>%
group_split()
The result is:
[[1]]
# A tibble: 2 x 5
Sepal.Length Sepal.Width Petal.Length Petal.Width Species
<dbl> <dbl> <dbl> <dbl> <fct>
1 5 3.5 1.6 0.6 setosa
2 5.1 3.8 1.5 0.3 setosa
[[2]]
# A tibble: 2 x 5
Sepal.Length Sepal.Width Petal.Length Petal.Width Species
<dbl> <dbl> <dbl> <dbl> <fct>
1 5.9 3 4.2 1.5 versicolor
2 6.2 2.2 4.5 1.5 versicolor
[[3]]
# A tibble: 1 x 5
Sepal.Length Sepal.Width Petal.Length Petal.Width Species
<dbl> <dbl> <dbl> <dbl> <fct>
1 6.2 3.4 5.4 2.3 virginica
What I want to achieve is to name this list by grouped name (i.e. Species).
Yielding this (done by hand):
$setosa
# A tibble: 2 x 5
Sepal.Length Sepal.Width Petal.Length Petal.Width Species
<dbl> <dbl> <dbl> <dbl> <fct>
1 5 3.5 1.6 0.6 setosa
2 5.1 3.8 1.5 0.3 setosa
$versicolor
# A tibble: 2 x 5
Sepal.Length Sepal.Width Petal.Length Petal.Width Species
<dbl> <dbl> <dbl> <dbl> <fct>
1 5.9 3 4.2 1.5 versicolor
2 6.2 2.2 4.5 1.5 versicolor
$virginica
# A tibble: 1 x 5
Sepal.Length Sepal.Width Petal.Length Petal.Width Species
<dbl> <dbl> <dbl> <dbl> <fct>
1 6.2 3.4 5.4 2.3 virginica
How can I achieve that?
Update
I tried this with new data, where the grouping variable is now called Cluster:
df <- structure(list(Cluster = c("Cluster9", "Cluster11", "Cluster1",
"Cluster9", "Cluster6", "Cluster12", "Cluster9", "Cluster11",
"Cluster8", "Cluster8"), gene_name = c("Tbc1d8", "Vimp", "Grhpr",
"H1f0", "Zfp398", "Pikfyve", "Ankrd13a", "Fgfr1op2", "Golga7",
"Lars2"), p_value = c(3.46629097620496e-47, 3.16837338947245e-62,
1.55108439059684e-06, 9.46078511685542e-131, 0.000354049720507017,
0.0146807415917158, 1.42799750295289e-38, 2.0697825959399e-08,
4.13777221466668e-06, 3.92889640704683e-184), morans_test_statistic = c(14.3797687352223,
16.6057085487911, 4.66393667525872, 24.301453902967, 3.38642377758137,
2.17859882998961, 12.9350063459509, 5.48479186018979, 4.4579286289179,
28.9144540271157), morans_I = c(0.0814728893885783, 0.0947505609609695,
0.0260671534007409, 0.138921824574569, 0.018764800166045, 0.0119813199210325,
0.0736554862590782, 0.0309849638728409, 0.0250591347318986, 0.165310420808725
), q_value = c(1.57917584337356e-46, 1.62106594498462e-61, 3.43312171446844e-06,
6.99503520654745e-130, 0.000683559649593623, 0.0245476826213791,
5.96116678335584e-38, 4.97603701391971e-08, 8.9649490080526e-06,
3.48152096326702e-183)), row.names = c(NA, -10L), class = c("tbl_df",
"tbl", "data.frame"))
With Ronak Shah's approach I get inconsistent results:
df %>% group_split(Cluster) %>% setNames(unique(df$Cluster))
$Cluster9
# A tibble: 1 x 6
Cluster gene_name p_value morans_test_statistic morans_I q_value
<chr> <chr> <dbl> <dbl> <dbl> <dbl>
1 Cluster1 Grhpr 0.00000155 4.66 0.0261 0.00000343
$Cluster11
# A tibble: 2 x 6
Cluster gene_name p_value morans_test_statistic morans_I q_value
<chr> <chr> <dbl> <dbl> <dbl> <dbl>
1 Cluster11 Vimp 3.17e-62 16.6 0.0948 1.62e-61
2 Cluster11 Fgfr1op2 2.07e- 8 5.48 0.0310 4.98e- 8
$Cluster1
# A tibble: 1 x 6
Cluster gene_name p_value morans_test_statistic morans_I q_value
<chr> <chr> <dbl> <dbl> <dbl> <dbl>
1 Cluster12 Pikfyve 0.0147 2.18 0.0120 0.0245
$Cluster6
# A tibble: 1 x 6
Cluster gene_name p_value morans_test_statistic morans_I q_value
<chr> <chr> <dbl> <dbl> <dbl> <dbl>
1 Cluster6 Zfp398 0.000354 3.39 0.0188 0.000684
$Cluster12
# A tibble: 2 x 6
Cluster gene_name p_value morans_test_statistic morans_I q_value
<chr> <chr> <dbl> <dbl> <dbl> <dbl>
1 Cluster8 Golga7 4.14e- 6 4.46 0.0251 8.96e- 6
2 Cluster8 Lars2 3.93e-184 28.9 0.165 3.48e-183
$Cluster8
# A tibble: 3 x 6
Cluster gene_name p_value morans_test_statistic morans_I q_value
<chr> <chr> <dbl> <dbl> <dbl> <dbl>
1 Cluster9 Tbc1d8 3.47e- 47 14.4 0.0815 1.58e- 46
2 Cluster9 H1f0 9.46e-131 24.3 0.139 7.00e-130
3 Cluster9 Ankrd13a 1.43e- 38 12.9 0.0737 5.96e- 38
Note that $Cluster9 has Cluster1 in it.
Please advise how to go about this.
| Lots of good answers. You can also just do:
iris %>% sample_n(size = 5) %>%
split(f = as.factor(.$Species))
Which will give you:
$setosa
Sepal.Length Sepal.Width Petal.Length Petal.Width Species
4 5.5 3.5 1.3 0.2 setosa
5 5.3 3.7 1.5 0.2 setosa
$versicolor
Sepal.Length Sepal.Width Petal.Length Petal.Width Species
3 5 2.3 3.3 1 versicolor
$virginica
Sepal.Length Sepal.Width Petal.Length Petal.Width Species
1 7.7 2.6 6.9 2.3 virginica
2 7.2 3.0 5.8 1.6 virginica
Also works with your dataframe above:
df %>%
split(f = as.factor(.$Cluster))
Gives you:
$Cluster1
# A tibble: 1 x 6
Cluster gene_name p_value morans_test_statistic morans_I q_value
<chr> <chr> <dbl> <dbl> <dbl> <dbl>
1 Cluster1 Grhpr 0.00000155 4.66 0.0261 0.00000343
$Cluster11
# A tibble: 2 x 6
Cluster gene_name p_value morans_test_statistic morans_I q_value
<chr> <chr> <dbl> <dbl> <dbl> <dbl>
1 Cluster11 Vimp 3.17e-62 16.6 0.0948 1.62e-61
2 Cluster11 Fgfr1op2 2.07e- 8 5.48 0.0310 4.98e- 8
$Cluster12
# A tibble: 1 x 6
Cluster gene_name p_value morans_test_statistic morans_I q_value
<chr> <chr> <dbl> <dbl> <dbl> <dbl>
1 Cluster12 Pikfyve 0.0147 2.18 0.0120 0.0245
$Cluster6
# A tibble: 1 x 6
Cluster gene_name p_value morans_test_statistic morans_I q_value
<chr> <chr> <dbl> <dbl> <dbl> <dbl>
1 Cluster6 Zfp398 0.000354 3.39 0.0188 0.000684
$Cluster8
# A tibble: 2 x 6
Cluster gene_name p_value morans_test_statistic morans_I q_value
<chr> <chr> <dbl> <dbl> <dbl> <dbl>
1 Cluster8 Golga7 4.14e- 6 4.46 0.0251 8.96e- 6
2 Cluster8 Lars2 3.93e-184 28.9 0.165 3.48e-183
$Cluster9
# A tibble: 3 x 6
Cluster gene_name p_value morans_test_statistic morans_I q_value
<chr> <chr> <dbl> <dbl> <dbl> <dbl>
1 Cluster9 Tbc1d8 3.47e- 47 14.4 0.0815 1.58e- 46
2 Cluster9 H1f0 9.46e-131 24.3 0.139 7.00e-130
3 Cluster9 Ankrd13a 1.43e- 38 12.9 0.0737 5.96e- 38
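If you would rather keep group_split(), you can get the same naming by sorting the names first, since group_split() returns the groups in factor-level (here alphabetical) order; a sketch:
df %>% group_split(Cluster) %>% setNames(sort(unique(df$Cluster)))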
| Split | 57,107,721 | 25 |
I have the following Java code:
String str = "12+20*/2-4";
String[] arr = str.split("\\p{Punct}");
//expected: arr = {12,20,2,4}
I want the equivalent Kotlin code, but .split("\\p{Punct}") doesn't work. I don't understand the documentation here: https://kotlinlang.org/api/latest/jvm/stdlib/kotlin.text/split.html
| You should use String#split(Regex) instead, for example:
val str = "12+20*/2-4";
val arr = str.split("\\p{Punct}".toRegex());
// ^--- but the result is ["12","20","","2","4"]
val arr2 = arr.filter{ !it.isBlank() };
// ^--- you can filter it as further, and result is: ["12","20","2","4"]
OR you can split more Punctuations by using \\p{Punct}+ , for example:
val arr = str.split("\\p{Punct}+".toRegex())
// ^--- result is: ["12","20","2","4"]
OR invert the regex and using Regex#findAll instead, and you can find out the negative numbers in this way. for example:
val str ="12+20*/2+(-4)";
val arr ="(?<!\\d)-?[^\\p{Punct}]+".toRegex().findAll(str).map{ it.value }.toList()
// ^--- result is ["12","20","2","-4"]
// negative number is found ---^
| Split | 45,064,788 | 25 |
I'm using ruby on rails and I want to display only first word of string.
My broken code: <%= @user.name %> displaying Barack Obama.
I would want to have it display Barack and in other place Obama.
How can I split it and display it?
| > "this is ruby".split.first
#=> "this"
| Split | 30,674,244 | 25 |
An incredibly basic question in R yet the solution isn't clear.
How to split a vector of character into its individual characters, i.e. the opposite of paste(..., sep='') or stringr::str_c() ?
Anything less clunky than this:
sapply(1:26, function(i) { substr("ABCDEFGHIJKLMNOPQRSTUVWXYZ",i,i) } )
"A" "B" "C" "D" "E" "F" "G" "H" "I" "J" "K" "L" "M" "N" "O" "P" "Q" "R" "S" "T" "U" "V" "W" "X" "Y" "Z"
Can it be done otherwise, e.g. with strsplit(), stringr::* or anything else?
| Yes, strsplit will do it. strsplit returns a list, so you can either use unlist to coerce the string to a single character vector, or use the list index [[1]] to access first element.
x <- paste(LETTERS, collapse = "")
unlist(strsplit(x, split = ""))
# [1] "A" "B" "C" "D" "E" "F" "G" "H" "I" "J" "K" "L" "M" "N" "O" "P" "Q" "R" "S"
#[20] "T" "U" "V" "W" "X" "Y" "Z"
OR (noting that it is not actually necessary to name the split argument)
strsplit(x, "")[[1]]
# [1] "A" "B" "C" "D" "E" "F" "G" "H" "I" "J" "K" "L" "M" "N" "O" "P" "Q" "R" "S"
#[20] "T" "U" "V" "W" "X" "Y" "Z"
You can also split on NULL or character(0) for the same result.
| Split | 23,028,885 | 25 |
How can I split a string such as "102:330:3133:76531:451:000:12:44412" by the ":" character, and put all of the numbers into an int array (the number sequence will always be 8 elements long)? Preferably without using an external library such as Boost.
Also, I'm wondering how I can remove unneeded characters from the string before it's processed such as "$" and "#"?
| stringstream can do all of this.
Split a string and store the values into a vector<int>:
#include <algorithm> // std::replace
#include <sstream>
#include <string>
#include <vector>
using namespace std;

string str = "102:330:3133:76531:451:000:12:44412";
std::replace(str.begin(), str.end(), ':', ' '); // replace ':' by ' '
vector<int> array;
stringstream ss(str);
int temp;
while (ss >> temp)
    array.push_back(temp); // done! now array={102,330,3133,76531,451,000,12,44412}
Remove unneeded characters from the string before it's processed, such as $ and #: handle them just the way : was handled above.
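If you would rather delete characters such as $ and # outright instead of blanking them, the erase-remove idiom is one option (a sketch, assuming <algorithm> is included):
str.erase(std::remove(str.begin(), str.end(), '$'), str.end());
str.erase(std::remove(str.begin(), str.end(), '#'), str.end());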
PS: The above solution works only for strings that don't contain spaces. To handle strings with spaces, please refer to here based on std::string::find() and std::string::substr().
| Split | 20,755,140 | 25 |
I am using the following command in SoX to split many large audio files at each place where there is silence longer than 0.3 seconds:
sox -V3 input.wav output.wav silence 1 0.50 0.1% 1 0.3 0.1% : newfile : restart
This however ends up occasionally creating files that are entirely silent and trimming the audio before each break.
I found better results with Audacity, but I need to split hundreds of WAV files and Audacity cannot even open 10 files simultaneously without freezing.
How can I use SoX or similar software to split the files at the end of the 0.3 second periods of silence, such that the silent portion is still affixed to the end of the speaking, but not before and there are no clips that are entirely silent, unless they come from the beginning of input.wav?
| if you change 0.5 to 3.0, it works fine:
sox -V3 input.wav output.wav silence 1 3.0 0.1% 1 0.3 0.1% : newfile : restart
Roughly speaking, the first trio (1 3.0 0.1%) now requires a longer run of above-threshold audio before a clip counts as started, which suppresses the all-silent output files, while the second trio (1 0.3 0.1%) still ends each clip after 0.3 seconds of silence, so the silence stays attached to the end of the speech.
| Split | 20,014,064 | 25 |
Can anyone help me a bit with regexes? I currently have this: re.split(" +", line.rstrip()), which separates by spaces.
How could I expand this to cover punctuation, too?
| The official Python documentation has a good example for this one. It will split on all non-alphanumeric characters (whitespace and punctuation); \W is literally the character class of all non-word characters. Note: the underscore "_" is considered a "word" character and will not be part of the split here.
import re
re.split(r'\W+', 'Words, words, words.')
# ['Words', 'words', 'words', ''] (a trailing '' appears when the string ends in punctuation)
See https://docs.python.org/3/library/re.html for more examples, search page for "re.split"
| Split | 19,894,478 | 25 |
I was designing a regex to split all the actual words from a given text:
Input Example:
"John's mom went there, but he wasn't there. So she said: 'Where are you'"
Expected Output:
["John's", "mom", "went", "there", "but", "he", "wasn't", "there", "So", "she", "said", "Where", "are", "you"]
I thought of a regex like that:
"(([^a-zA-Z]+')|('[^a-zA-Z]+))|([^a-zA-Z']+)"
After splitting in Python, the result contains None items and empty spaces.
How to get rid of the None items? And why didn't the spaces match?
Edit:
Splitting on spaces, will give items like: ["there."]
And splitting on non-letters, will give items like: ["John","s"]
And splitting on non-letters except ', will give items like: ["'Where","you'"]
| Instead of regex, you can use string-functions:
to_be_removed = ".,:!" # all characters to be removed
s = "John's mom went there, but he wasn't there. So she said: 'Where are you!!'"
for c in to_be_removed:
s = s.replace(c, '')
s.split()
BUT, in your example you do not want to remove apostrophe in John's but you wish to remove it in you!!'. So string operations fails in that point and you need a finely adjusted regex.
EDIT: probably a simple regex can solve your porblem:
(\w[\w']*)
It will capture all chars that starts with a letter and keep capturing while next char is an apostrophe or letter.
(\w[\w']*\w)
This second regex is for a very specific situation.... First regex can capture words like you'. This one will aviod this and only capture apostrophe if is is within the word (not in the beginning or in the end). But in that point, a situation raises like, you can not capture the apostrophe Moss' mom with the second regex. You must decide whether you will capture trailing apostrophe in names ending wit s and defining ownership.
Example:
rgx = re.compile("([\w][\w']*\w)")
s = "John's mom went there, but he wasn't there. So she said: 'Where are you!!'"
rgx.findall(s)
["John's", 'mom', 'went', 'there', 'but', 'he', "wasn't", 'there', 'So', 'she', 'said', 'Where', 'are', 'you']
UPDATE 2: I found a bug in my regex! It can not capture single letters followed by an apostrophe like A'. Fixed brand new regex is here:
(\w[\w']*\w|\w)
rgx = re.compile("(\w[\w']*\w|\w)")
s = "John's mom went there, but he wasn't there. So she said: 'Where are you!!' 'A a'"
rgx.findall(s)
["John's", 'mom', 'went', 'there', 'but', 'he', "wasn't", 'there', 'So', 'she', 'said', 'Where', 'are', 'you', 'A', 'a']
| Split | 12,705,293 | 25 |
I've been developing an Android app for the past 4 Months now and came across the following regarding the split function:
String [] arr;
SoapPrimitive result = (SoapPrimitive)envelope.getResponse();
arr = result.toString().trim().split("|");
The result variable is what I get after accessing my WebService, now this works perfectly. But, for some reason my split("|") method is not splitting at "|" but rather splitting at every single char in my result String. So my array looks like this:
arr[0] is "H",
arr[1] is "e",
etc......
I don't know why this is happening because I have used it before in the same project and it worked perfectly.
Thank you in advance
| arr = result.toString().trim().split("\\|");
The parameter of String.split is interpreted as a regular expression, and | is a regex metacharacter, so it must be escaped.
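An equivalent that avoids hand-escaping (using java.util.regex.Pattern):
arr = result.toString().trim().split(Pattern.quote("|"));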
| Split | 6,965,642 | 25 |
I have a string like:
$Order_num = "0982asdlkj";
How can I split that into the 2 variables, with the number as one element and then another variable with the letter element?
The number element can be any length from 1 to 4 say and the letter element fills the rest to make every order_num 10 characters long in total.
I have found the php explode function...but don't know how to make it in my case because the number of numbers is between 1 and 4 and the letters are random after that, so no way to split at a particular letter.
| You can use preg_split using lookahead and lookbehind:
print_r(preg_split('#(?<=\d)(?=[a-z])#i', "0982asdlkj"));
prints
Array
(
[0] => 0982
[1] => asdlkj
)
This only works if the letter part really only contains letters and no digits.
Update:
Just to clarify what is going on here:
The regular expressions looks at every position and if a digit is before that position ((?<=\d)) and a letter after it ((?=[a-z])), then it matches and the string gets split at this position. The whole thing is case-insensitive (i).
| Split | 4,537,994 | 25 |
I've been whacking away at this for a while to no avail... Any help would be greatly
appreciated.
I have:
[{'event': 0, 'voltage': 1, 'time': 0},
{'event': 0, 'voltage': 2, 'time': 1},
{'event': 1, 'voltage': 1, 'time': 2},
{'event': 1, 'voltage': 2, 'time': 3},
{'event': 2, 'voltage': 1, 'time': 4},
{'event': 2, 'voltage': 2, 'time': 5},
...]
and I want to split that list of dictionaries up per event like this (there can be arbitrarily many events):
list0 = [{'event': 0, 'voltage': 1, 'time': 0},
{'event': 0, 'voltage': 2, 'time': 1}]
list1 = [{'event': 1, 'voltage': 1, 'time': 2},
{'event': 1, 'voltage': 2, 'time': 3}]
list2 = [{'event': 2, 'voltage': 1, 'time': 4},
{'event': 2, 'voltage': 2, 'time': 5}]
listN = ...
| use defaultdict
import collections
result = collections.defaultdict(list)
for d in dict_list:
result[d['event']].append(d)
result_list = result.values() # Python 2.x
result_list = list(result.values()) # Python 3
This way, you don't have to make any assumptions about how many different events there are or if there are any events missing.
This gives you a list of lists. If you want a dict indexed by event, use dict(result) to get a plain dict, especially if you plan on doing any random access.
As far as constructing a bunch of individual lists, I think that that's a bad idea. It will necessitate creating them as globals or using eval (or getting hacky in some other way) unless you know exactly how many there are going to be which you claim not to. It's best to just keep them in a container.
| Split | 4,091,680 | 25 |
How do you split a string based on some separator?
Given a string Topic1,Topic2,Topic3, I want to split the string based on , to generate:
Topic1 Topic2 Topic3
| In XSLT 1.0 you have to build a recursive template. This stylesheet:
<xsl:stylesheet version="1.0"
xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
<xsl:template match="@*|node()">
<xsl:copy>
<xsl:apply-templates select="@*|node()"/>
</xsl:copy>
</xsl:template>
<xsl:template match="text/text()" name="tokenize">
<xsl:param name="text" select="."/>
<xsl:param name="separator" select="','"/>
<xsl:choose>
<xsl:when test="not(contains($text, $separator))">
<item>
<xsl:value-of select="normalize-space($text)"/>
</item>
</xsl:when>
<xsl:otherwise>
<item>
<xsl:value-of select="normalize-space(substring-before($text, $separator))"/>
</item>
<xsl:call-template name="tokenize">
<xsl:with-param name="text" select="substring-after($text, $separator)"/>
</xsl:call-template>
</xsl:otherwise>
</xsl:choose>
</xsl:template>
</xsl:stylesheet>
Input:
<root>
<text>Item1, Item2, Item3</text>
</root>
Output:
<root>
<text>
<item>Item1</item>
<item>Item2</item>
<item>Item3</item>
</text>
</root>
In XSLT 2.0 you have the tokenize() core function. So, this stylesheet:
<xsl:stylesheet version="2.0"
xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
<xsl:template match="@*|node()">
<xsl:copy>
<xsl:apply-templates select="@*|node()"/>
</xsl:copy>
</xsl:template>
<xsl:template match="text/text()" name="tokenize">
<xsl:param name="separator" select="','"/>
<xsl:for-each select="tokenize(.,$separator)">
<item>
<xsl:value-of select="normalize-space(.)"/>
</item>
</xsl:for-each>
</xsl:template>
</xsl:stylesheet>
Result:
<root>
<text>
<item>Item1</item>
<item>Item2</item>
<item>Item3</item>
</text>
</root>
| Split | 3,336,424 | 25 |
I looking for a function like regexp_split_to_table, but our db is version 8.2.9, so it doesn't have it. I'm really only splitting on a space, so a string like
how now brown cow
would return
+------+
|Column|
+------+
|how |
|now |
|brown |
|cow |
+------+
is there a simple function that can handle this, or something I have to write myself?
| You can expand an array into a result set by using the unnest function, and you can turn a string literal into an array by using the string_to_array function. Combine both and you get this:
alvherre=# select unnest(string_to_array('the quick lazy fox', ' '));
unnest
--------
the
quick
lazy
fox
(4 filas)
Since 8.2 does not have UNNEST, you can write it in PostgreSQL like this:
create or replace function unnest(anyarray) returns setof anyelement
language sql as $$
select $1[i] from generate_series(array_lower($1, 1),
array_upper($1, 1)) as i;
$$;
| Split | 1,986,491 | 25 |
How can you split a word into its constituent letters?
Example of code which is not working
class Test {
public static void main( String[] args) {
String[] result = "Stack Me 123 Heppa1 oeu".split("\\a");
// output should be
// S
// t
// a
// c
// k
// M
// e
// H
// e
// ...
for ( int x=0; x<result.length; x++) {
System.out.println(result[x] + "\n");
}
}
}
The problem seems to be the pattern \\a; it should presumably be something like [A-Za-z].
| You need to use split("").
That will split it by every character. (Note: on Java 7 and earlier, split("") also yields a leading empty string; Java 8 changed this.)
However I think it would be better to iterate over a String's characters like so:
for (int i = 0;i < str.length(); i++){
System.out.println(str.charAt(i));
}
It is unnecessary to create another copy of your String in a different form.
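Another common option, if you just need the characters, is toCharArray():
for (char c : str.toCharArray()) {
    System.out.println(c);
}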
| Split | 1,521,921 | 25 |
I need to be able to take a string like:
'''foo, bar, "one, two", three four'''
into:
['foo', 'bar', 'one, two', 'three four']
I have an feeling (with hints from #python) that the solution is going to involve the shlex module.
| It depends how complicated you want to get... do you want to allow more than one type of quoting. How about escaped quotes?
Your syntax looks very much like the common CSV file format, which is supported by the Python standard library:
import csv
reader = csv.reader(['''foo, bar, "one, two", three four'''], skipinitialspace=True)
for r in reader:
print r
Outputs:
['foo', 'bar', 'one, two', 'three four']
HTH!
| Split | 118,096 | 25 |
When std::views::split() gets an unnamed string literal as a pattern, it will not split the string but works just fine with an unnamed character literal.
#include <iomanip>
#include <iostream>
#include <ranges>
#include <string>
#include <string_view>
int main(void)
{
using namespace std::literals;
// returns the original string (not splitted)
auto splittedWords1 = std::views::split("one:.:two:.:three", ":.:");
for (const auto word : splittedWords1)
std::cout << std::quoted(std::string_view(word));
std::cout << std::endl;
// returns the splitted string
auto splittedWords2 = std::views::split("one:.:two:.:three", ":.:"sv);
for (const auto word : splittedWords2)
std::cout << std::quoted(std::string_view(word));
std::cout << std::endl;
// returns the splitted string
auto splittedWords3 = std::views::split("one:two:three", ':');
for (const auto word : splittedWords3)
std::cout << std::quoted(std::string_view(word));
std::cout << std::endl;
// returns the original string (not splitted)
auto splittedWords4 = std::views::split("one:two:three", ":");
for (const auto word : splittedWords4)
std::cout << std::quoted(std::string_view(word));
std::cout << std::endl;
return 0;
}
See live @ godbolt.org.
I understand that string literals are always lvalues. But even though, I am missing some important piece of information that connects everything together. Why can I pass the string that I want splitted as an unnamed string literal whereas it fails (as-in: returns a range of ranges with the original string) when I do the same with the pattern?
| String literals always end with a null terminator, so ":.:" is actually a range whose last element is '\0' and whose size is 4.
Since the original string does not contain such a pattern, it is not split.
When dealing with C++20 ranges, I strongly recommend using string_view instead of raw string literals, which works well with <ranges> and can avoid the error-prone null-terminator issue.
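So a minimal fix for the failing cases is to use string_view on both sides, e.g. (a sketch):
auto splittedWords1 = std::views::split("one:.:two:.:three"sv, ":.:"sv);
auto splittedWords4 = std::views::split("one:two:three"sv, ":"sv);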
| Split | 74,260,112 | 24 |
In my dataframe, I have a categorical variable that I'd like to convert into dummy variables. This column however has multiple values separated by commas:
0 'a'
1 'a,b,c'
2 'a,b,d'
3 'd'
4 'c,d'
Ultimately, I'd want to have binary columns for each possible discrete value; in other words, final column count equals number of unique values in the original column. I imagine I'd have to use split() to get each separate value but not sure what to do afterwards. Any hint much appreciated!
Edit: Additional twist. Column has null values. And in response to comment, the following is the desired output. Thanks!
a b c d
0 1 0 0 0
1 1 1 1 0
2 1 1 0 1
3 0 0 0 1
4 0 0 1 1
| Use str.get_dummies
df['col'].str.get_dummies(sep=',')
a b c d
0 1 0 0 0
1 1 1 1 0
2 1 1 0 1
3 0 0 0 1
4 0 0 1 1
Edit: Updating the answer to address some questions.
Qn 1: Why is it that the series method get_dummies does not accept the argument prefix=... while pandas.get_dummies() does accept it
Series.str.get_dummies is a series level method (as the name suggests!). We are one hot encoding values in one Series (or a DataFrame column) and hence there is no need to use prefix. Pandas.get_dummies on the other hand can one hot encode multiple columns. In which case, the prefix parameter works as an identifier of the original column.
If you want to apply prefix to str.get_dummies, you can always use DataFrame.add_prefix
df['col'].str.get_dummies(sep=',').add_prefix('col_')
Qn 2: If you have more than one column to begin with, how do you merge the dummies back into the original frame?
You can use DataFrame.concat to merge one hot encoded columns with the rest of the columns in dataframe.
import pandas as pd
df = pd.DataFrame({'other':['x','y','x','x','q'],'col':['a','a,b,c','a,b,d','d','c,d']})
df = pd.concat([df, df['col'].str.get_dummies(sep=',')], axis=1).drop(columns='col')
other a b c d
0 x 1 0 0 0
1 y 1 1 1 0
2 x 1 1 0 1
3 x 0 0 0 1
4 q 0 0 1 1
| Split | 46,867,201 | 24 |
I have a pandas dataframe containing (besides other columns) full names:
fullname
martin master
andreas test
I want to create a new column which splits the fullname column along the blank space and assigns the last element to a new column. The result should look like:
fullname lastname
martin master master
andreas test test
I thought it would work like this:
df['lastname'] = df['fullname'].str.split(' ')[-1]
However, I get a KeyError: -1
I use [-1], that is the last element of the split group, in order to be sure that I get the real last name. In some cases (e.g. a name like andreas martin master), this helps to get the last name, that is, master.
So how can I do this?
| You need another str accessor to get at the last split piece for every row; what you did was essentially try to index the Series using a non-existent label:
In [31]:
df['lastname'] = df['fullname'].str.split().str[-1]
df
Out[31]:
fullname lastname
0 martin master master
1 andreas test test
| Split | 38,498,718 | 24 |
I'd like to split an input string on the first colon that still has characters after it on the same line.
For this, I am using the regular expression /:(.+)/
So given the string
aaa:
bbb:ccc
I'd expect an output of
["aaa:\nbbb", "ccc"]
And given the string
aaa:bbb:ccc
I'd expect an output of
["aaa", "bbb:ccc"]
Yet when I actually run these commands, I get
["aaa:\nbbb", "ccc", ""]
["aaa", "bbb:ccc", ""]
As output.
So somehow, javascript is adding an empty string to the end of the array.
I have checked the documentation for String.split and whilst it does mention that if you perform string.split on an empty string with a specified separator, you'll get an array with 1 empty string in it (and not empty array). It makes no mention of there always being an empty string in the output, or a warning that you may get this result if you make a common mistake or something.
I'd understand if my input string had a colon at the end or something like that; then it splits at the colon and the rest of the match is empty string. That's the issue mentioned in Splitting string with regular expression to make it array without empty element - but I don't have this issue, as my input string does not end with my separator.
I know a quick solution in my case will be to just simply limit the amount of matches, via "aaa:bbb:ccc".split(/:(.+)/, 2), but I'm still curious:
Why does this string.split call return an array ending with an empty string?
| If we change the regex to /:.+/ and perform a split on it you get:
["aaa", ""]
This makes sense as the regex is matching the :bbb:ccc.
And gives you the same output, if you were to manually split that string.
>>> 'aaa:bbb:ccc'.split(':bbb:ccc')
['aaa', '']
Adding the capture group in just saves the bbb:ccc, but shouldn't change the original split behaviour.
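If the trailing empty string is unwanted, either pass the limit as noted in the question, or drop empty entries afterwards:
"aaa:bbb:ccc".split(/:(.+)/).filter(s => s !== "");
// ["aaa", "bbb:ccc"]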
| Split | 38,261,359 | 24 |
I've been using the Split() method to split strings, but that only works if you specify a delimiter character for string.Split(). Is there any way to split a string at each uppercase letter?
Is it possible to get the individual words out of an unseparated string like:
DeleteSensorFromTemplate
And the resulting string should look like:
Delete Sensor From Template
| Use Regex.Split:
string[] split = Regex.Split(str, @"(?<!^)(?=[A-Z])"); // needs using System.Text.RegularExpressions;
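To get the spaced form asked for, join the pieces back together (a sketch):
var result = string.Join(" ", Regex.Split("DeleteSensorFromTemplate", @"(?<!^)(?=[A-Z])"));
// "Delete Sensor From Template"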
| Split | 36,147,162 | 24 |
I have this mystring with the delimiter _. The condition here is if there are two or more delimiters, I want to split at the second delimiter and if there is only one delimiter, I want to split at ".Recal" and get the result as shown below.
mystring<-c("MODY_60.2.ReCal.sort.bam","MODY_116.21_C4U.ReCal.sort.bam","MODY_116.3_C2RX-1-10.ReCal.sort.bam","MODY_116.4.ReCal.sort.bam")
result
"MODY_60.2" "MODY_116.21" "MODY_116.3" "MODY_116.4"
| You can do this using gsubfn
library(gsubfn)
f <- function(x,y,z) if (z=="_") y else strsplit(x, ".ReCal", fixed=T)[[1]][[1]]
gsubfn("([^_]+_[^_]+)(.).*", f, mystring, backref=2)
# [1] "MODY_60.2" "MODY_116.21" "MODY_116.3" "MODY_116.4"
This allows for cases when you have more than two "_", and you want to split on the second one, for example,
mystring<-c("MODY_60.2.ReCal.sort.bam",
"MODY_116.21_C4U.ReCal.sort.bam",
"MODY_116.3_C2RX-1-10.ReCal.sort.bam",
"MODY_116.4.ReCal.sort.bam",
"MODY_116.4_asdfsadf_1212_asfsdf",
"MODY_116.5.ReCal_asdfsadf_1212_asfsdf", # split by second "_", leaving ".ReCal"
"MODY")
gsubfn("([^_]+_[^_]+)(.).*", f, mystring, backref=2)
# [1] "MODY_60.2" "MODY_116.21" "MODY_116.3" "MODY_116.4"
# [5] "MODY_116.4" "MODY_116.5.ReCal" "MODY"
In the function, f, x is the original string, y and z are the next matches. So, if z is not a "_", then it proceeds with the splitting by the alternative string.
| Split | 31,931,954 | 24 |
I HATE velocity and rarely ever use it but sometimes I am called upon at my job to do so. I can never really figure out just how to use it.
I have this
#foreach( $product in $browseSiteProducts )
alert("$product.productId");
#foreach( $stringList in $product.productId.split("|") )
alert("inner loop");
#end
#end
$browseSiteProducts is an Array. Or List. Or whatever. I don't even know. The first alert of the productId works fine. I get "<stuff>|<morestuff>" which is what I expected when printed out. The inner loop then should split that on the "|" as the delimiter and give me to alerts of "inner loop". But instead I always get 24 alerts because there are 24 characters in the productId. so split() is not delimiting correctly for me. What the heck am I doing wrong??
Thanks
Kyle
| Velocity has extremely few objects and methods of its own. Instead, it allows you to work with real Java objects and call real Java methods on those objects. Which Velocity documentation says that the delimiter is a string?
Moreover, since Velocity is Java-based, a string is just a data type that can hold many types of information: phone numbers, names, identifiers, regular expressions... In Java, many methods dealing with regular expressions pass those REs as String objects.
You can check the actual type that a value behind a variable has by printing its classname:
Product class is $product.class
Product ID class is $product.productId.class
If the product ID is indeed a java.lang.String, then you can check that the split method takes a String parameter, but that String is expected to be a valid regular expression.
And since | is a special character in regular expressions, you need to escape it somehow. This works:
#foreach( $stringList in $product.productId.split("[|]") )
| Split | 21,288,687 | 24 |
I have a data frame with a numerical ID variable which identify the Primary, Secondary and Ultimate Sampling Units from a multistage sampling scheme. I want to split the original ID variable into three new variables, identifying the different sampling units separately:
Example:
>df[1:2,]
ID Var var1 var2 var3 var4 var5
501901 9 SP.1 1 W 12.10
501901 9 SP.1 2 W 17.68
What I want:
>df[1:2,]
ID1 ID2 ID3 var1 var2 var3 var4 var5
5 01 901 9 SP.1 1 W 12.10
5 01 901 9 SP.1 2 W 17.68
I know there are some functions available in R to split character strings, but I could not find the same facilities for numbers.
Thank you,
Juan
| You could use for example use substring:
df <- data.frame(ID = c(501901, 501902))
splitted <- t(sapply(df$ID, function(x) substring(x, first=c(1,2,4), last=c(1,3,6))))
cbind(df, splitted)
# ID 1 2 3
#1 501901 5 01 901
#2 501902 5 01 902
| Split | 15,498,235 | 24 |
Is there a regex that would work with String.split() to break a String into contiguous characters - ie split where the next character is different to the previous character?
Here's the test case:
String regex = "your answer here";
String[] parts = "aaabbcddeee".split(regex);
System.out.println(Arrays.toString(parts));
Expected output:
[aaa, bb, c, dd, eee]
Although the test case has letters only as input, this is for clarity only; input characters may be any character.
Please do not provide "work-arounds" involving loops or other techniques.
The question is to find the right regex for the code as shown above - ie only using split() and no other methods calls. It is not a question about finding code that will "do the job".
| It is totally possible to write the regex for splitting in one step:
"(?<=(.))(?!\\1)"
Since you want to split between every group of same characters, we just need to look for the boundary between 2 groups. I achieve this by using a positive look-behind just to grab the previous character, and use a negative look-ahead and back-reference to check that the next character is not the same character.
As you can see, the regex is zero-width (only 2 look around assertions). No character is consumed by the regex.
| Split | 13,596,454 | 24 |
OK, I have a string from which I want to remove the last word, where the words are split by \.
for example:
string name ="kak\kdk\dd\ddew\cxz\"
Now I want to remove the last word so that I get a new value for name:
name= "kak\kdk\dd\ddew\"
Is there an easy way to do this?
Thanks
| How do you get this string in the first place? I assume you know that '\' is the escape character in C#. However, you should get far by using
var trimmed = name.TrimEnd('\\');
name = trimmed.Remove(trimmed.LastIndexOf('\\') + 1);
Calling LastIndexOf on the already-trimmed string matters here: on the original string the trailing backslash would make the index point one past the end of the trimmed result.
| Split | 2,155,668 | 24 |
I have a script with these two functions:
# Getting content of each page
def GetContent(url):
response = requests.get(url)
return response.content
# Extracting the sites
def CiteParser(content):
soup = BeautifulSoup(content)
print "---> site #: ",len(soup('cite'))
result = []
for cite in soup.find_all('cite'):
result.append(cite.string.split('/')[0])
return result
When I run program I have the following error:
result.append(cite.string.split('/')[0])
AttributeError: 'NoneType' object has no attribute 'split'
Output Sample:
URL: <URL That I use to search 'can be google, bing, etc'>
---> site #: 10
site1.com
.
.
.
site10.com
URL: <URL That I use to search 'can be google, bing, etc'>
File "python.py", line 49, in CiteParser
result.append(cite.string.split('/')[0])
AttributeError: 'NoneType' object has no attribute 'split'
| It can happen that the string has nothing inside; then it is of "None" type. So my suggestion is to first check that your string is not None:
# Extracting the sites
def CiteParser(content):
soup = BeautifulSoup(content)
#print soup
print "---> site #: ",len(soup('cite'))
result = []
for cite in soup.find_all('cite'):
if cite.string is not None:
result.append(cite.string.split('/')[0])
print cite
return result
| Split | 25,882,670 | 23 |
I have a string that I want to split up in 2 pieces. The first piece is before the comma (,) and the second piece is all stuff after the comma (including the commas).
I already managed to retrieve the first piece before the comma in the variable $Header, but I don't know how to retrieve the pieces after the first comma in one big string.
$string = "Header text,Text 1,Text 2,Text 3,Text 4,"
$header = $string.Split(',')[0] # $Header = "Header text"
$content = "Text 1,Text 2,Text 3,Text 4,"
# There might be more text than visible here, like say Text 5, Text 6, ...
| PowerShell's -split operator supports specifying the maximum number of sub-strings to return, i.e. how many sub-strings to return. After the pattern to split on, give the number of strings you want back:
$header,$content = "Header text,Text 1,Text 2,Text 3,Text 4," -split ',',2
| Split | 25,383,263 | 23 |
I want to split the string "aaaabbbccccaaddddcfggghhhh" into "aaaa", "bbb", "cccc". "aa", "dddd", "c", "f" and so on.
I tried this:
String[] arr = "aaaabbbccccaaddddcfggghhhh".split("(.)(?!\\1)");
But this eats away one character, so with the above regular expression I get "aaa" while I want it to be "aaaa" as the first string.
How do I achieve this?
| Try this:
String str = "aaaabbbccccaaddddcfggghhhh";
String[] out = str.split("(?<=(.))(?!\\1)");
System.out.println(Arrays.toString(out));
=> [aaaa, bbb, cccc, aa, dddd, c, f, ggg, hhhh]
Explanation: we want to split the string at groups of same chars, so we need to find out the "boundary" between each group. I'm using Java's syntax for positive look-behind to pick the previous char and then a negative look-ahead with a back reference to verify that the next char is not the same as the previous one. No characters were actually consumed, because only two look-around assertions were used (that is, the regular expresion is zero-width).
| Split | 23,523,597 | 23 |
I am wondering about the simple task of splitting a vector into two at a certain index:
splitAt <- function(x, pos){
list(x[1:pos-1], x[pos:length(x)])
}
a <- c(1, 2, 2, 3)
> splitAt(a, 4)
[[1]]
[1] 1 2 2
[[2]]
[1] 3
My question: There must be some existing function for this, but I can't find it? Is maybe split a possibility? My naive implementation also does not work if pos=0 or pos>length(a).
| An improvement would be:
splitAt <- function(x, pos) unname(split(x, cumsum(seq_along(x) %in% pos)))
which can now take a vector of positions:
splitAt(a, c(2, 4))
# [[1]]
# [1] 1
#
# [[2]]
# [1] 2 2
#
# [[3]]
# [1] 3
And it does behave properly (subjective) if pos <= 0 or pos >= length(x) in the sense that it returns the whole original vector in a single list item. If you'd like it to error out instead, use stopifnot at the top of the function.
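The edge cases then behave as described (illustrative):
splitAt(a, 0)   # whole vector in a single piece: list(c(1, 2, 2, 3))
splitAt(a, 10)  # same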
| Split | 16,357,962 | 23 |
I know that array_chunk() allows to split an array into several chunks, but the number of chunks changes according to the number of elements. What I need is to always split the array into a specific number of arrays like 4 arrays for example.
The following code splits the array into 3 chunks, two chunks with 2 elements each and 1 chunk with 1 element. What I would like is to split the array always into 4 chunks, no matter the number of total elements that the array has, but always trying to divide the elements evenly in the chunks like the array_chunck function does. How can I accomplish this? Is there any PHP function for this?
$input_array = array('a', 'b', 'c', 'd', 'e');
print_r(array_chunk($input_array, 2));
print_r(array_chunk($input_array, 2, true));
Thank you.
| You can try
$input_array = array(
'a',
'b',
'c',
'd',
'e'
);
print_r(partition($input_array, 4));
Output
Array
(
[0] => Array
(
[0] => a
[1] => b
)
[1] => Array
(
[0] => c
)
[2] => Array
(
[0] => d
)
[3] => Array
(
[0] => e
)
)
Function Used
/**
*
* @param Array $list
* @param int $p
* @return multitype:multitype:
* @link http://www.php.net/manual/en/function.array-chunk.php#75022
*/
function partition(Array $list, $p) {
$listlen = count($list);
$partlen = floor($listlen / $p);
$partrem = $listlen % $p;
$partition = array();
$mark = 0;
for($px = 0; $px < $p; $px ++) {
$incr = ($px < $partrem) ? $partlen + 1 : $partlen;
$partition[$px] = array_slice($list, $mark, $incr);
$mark += $incr;
}
return $partition;
}
| Split | 15,723,059 | 23 |
I am using boost::split to parse a data file. The data file contains lines such as the following.
data.txt
1:1~15 ASTKGPSVFPLAPSS SVFPLAPSS -12.6 98.3
The white space between the items are tabs. The code I have to split the above line is as follows.
std::string buf;
/*Assign the line from the file to buf*/
std::vector<std::string> dataLine;
boost::split( dataLine, buf , boost::is_any_of("\t "), boost::token_compress_on); //Split data line
cout << dataLine.size() << endl;
For the above line of code I should get a print out of 5, but I get 6. I have tried to read through the documentation and this solution seems as though it should do what I want, clearly I am missing something. Thanks!
Edit:
Running a forloop as follows on dataLine you get the following.
cout << "****" << endl;
for(int i = 0 ; i < dataLine.size() ; i ++) cout << dataLine[i] << endl;
cout << "****" << endl;
****
1:1~15
ASTKGPSVFPLAPSS
SVFPLAPSS
-12.6
98.3
****
| Even though "adjacent separators are merged together", it seems like the trailing delimeters make the problem, since even when they are treated as one, it still is one delimeter.
So your problem cannot be solved with split() alone. But luckily Boost String Algo has trim() and trim_if(), which strip whitespace or delimeters from beginning and end of a string. So just call trim() on buf, like this:
std::string buf = "1:1~15 ASTKGPSVFPLAPSS SVFPLAPSS -12.6 98.3 ";
std::vector<std::string> dataLine;
boost::trim_if(buf, boost::is_any_of("\t ")); // could also use plain boost::trim
boost::split(dataLine, buf, boost::is_any_of("\t "), boost::token_compress_on);
std::cout << out.size() << std::endl;
This question was already asked: boost::split leaves empty tokens at the beginning and end of string - is this desired behaviour?
| Split | 15,690,389 | 23 |
I have these url strings
file:///home/we/Pictures/neededWord/3193_n.jpg
file:///home/smes/Pictures/neededWord/jds_22.png
file:///home/seede/kkske/Pictures/neededWord/3193_n.jpg
I want to extract the "neededWord" from each of them. As it appears from them, the name of the image is always after the "neededWord" and the changing part in the string is before the "neededWord". The way I thought of is to split the string using the "/" seperator from right and take the second element in the resulted QstringList. So how to split from right, or is there a better way to do that?
| Well you would just take the second to last element:
QStringList pieces = url.split( "/" );
QString neededWord = pieces.value( pieces.length() - 2 );
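Qt also ships QString::section(), which counts fields from the right when given negative indices:
QString neededWord = url.section('/', -2, -2); // second-to-last path segment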
Alternatively, you could use a regular expression.
| Split | 11,751,697 | 23 |
I need to split a comma-delimited string into a second column.
I have the following table :
CL1 POS POS2 LENGHT ALLELE
1 3015108,3015109 5 A
2 3015110,3015200 10 B
3 3015200,3015300 15 C
4 3015450,3015500 20 D
5 3015600,3015700 15 E
I want to split the numbers after the comma into a second column POS2
So it should like that
CL1 POS POS2 LENGHT ALLELE
1 3015108 3015109 5 A
2 3015110 3015200 10 B
3 3015200 3015300 15 C
4 3015450 3015500 20 D
5 3015600 3015700 15 E
So I've queried the following :
INSERT INTO MyTable (POS2)
SELECT RIGHT(POS, CHARINDEX(',', POS) + 1 ) FROM MyTable ;
It returns an error :
ERROR 1305 (42000): FUNCTION test.CHARINDEX does not exist
| MySQL doesn't have a built-in CHARINDEX() function. LOCATE() would be the MySQL equivalent.
Using SUBSTRING_INDEX() might be a more succinct way of doing this. Something like this (disclaimer: untested):
SUBSTRING_INDEX(POS, ',', 1) for POS
SUBSTRING_INDEX(POS, ',', -1) for POS2
As an aside, I may be misunderstanding what you're trying to accomplish, but it looks like you might want to UPDATE existing rows, not INSERT new ones? Something like:
UPDATE MyTable SET POS2 = SUBSTRING_INDEX(POS, ',', -1);
UPDATE MyTable SET POS = SUBSTRING_INDEX(POS, ',', 1);
| Split | 9,953,114 | 23 |
I have multiple strings in different cells like
CO20: 20 YR CONVENTIONAL
FH30: 30 YR FHLMC
FHA31
I need to get the substring from the start up to the index of ':', or the whole string if there is no colon (as in the third string). I need help writing this in VBA.
| Shorter:
Split(stringval,":")(0)
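For example, reading from a worksheet (hypothetical cell reference):
Dim beforeColon As String
beforeColon = Split(Range("A1").Value, ":")(0) ' "CO20"; a string with no colon, like "FHA31", comes back whole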
| Split | 6,052,337 | 23 |
Is it possible to take two files that are open in separate tabs in gVim and combine them into one tab with a split/vsplit window? I'd prefer if there was a way to specify which tabs to join, but even something that is the opposite of :tab ball would be good enough.
Thanks
| Lots of handwork but...
:tabnew
:buffers "note the numbers
:split
:bn " where n is the number of
<CTRL-W><CTRL-W>
:bn " for the other file
:tabonly " not necessary, closes every other tab
Or you can create a function for it which asks for buffer numbers, then creates the tab, and closes every other tab (for the opened files)...
| Split | 4,615,856 | 23 |
I'm writing a program that requires a string to be inputted, then broken up into individual letters. Essentially, I need help finding a way to turn "string" into ["s","t","r","i","n","g"]. The strings are also stored using the string data type instead of just an array of chars by default. I would like to keep it that way and avoid char but will use it if necessary.
| Assuming you already have the string inputted:
string s("string");
vector<char> v(s.begin(), s.end());
This will fill the vector v with the characters from a string.
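A quick check of the contents (assuming <iostream> is included):
for (char c : v) std::cout << c;  // prints: string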
| Split | 2,158,943 | 23 |
Say, I have a string
"hello is it me you're looking for"
I want to cut part of this string out and return the new string, something like
s = string.cut(0,3);
s would now be equal to:
"lo is it me you're looking for"
EDIT: It may not be from 0 to 3. It could be from 5 to 7.
s = string.cut(5,7);
would return
"hellos it me you're looking for"
| You're almost there. What you want is:
http://www.w3schools.com/jsref/jsref_substr.asp
So, in your example:
var string = "hello is it me you're looking for";
var s = string.substr(3);
As only providing a start (the first arg) takes from that index to the end of the string.
Update, how about something like:
function cut(str, cutStart, cutEnd){
return str.substr(0,cutStart) + str.substr(cutEnd+1);
}
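Since substr is a legacy API these days, the same function can be written with slice (a sketch):
function cut(str, cutStart, cutEnd){
  return str.slice(0, cutStart) + str.slice(cutEnd + 1);
}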
| Split | 1,707,527 | 23 |
I got a string of such format:
"Wilbur Smith (Billy, son of John), Eddie Murphy (John), Elvis Presley, Jane Doe (Jane Doe)"
so basicly it's list of actor's names (optionally followed by their role in parenthesis). The role itself can contain comma (actor's name can not, I strongly hope so).
My goal is to split this string into a list of pairs - (actor name, actor role).
One obvious solution would be to go through each character, check for occurrences of '(', ')' and ',', and split whenever a comma occurs outside parentheses. But this seems a bit heavy...
I was thinking about spliting it using a regexp: first split the string by parenthesis:
import re
x = "Wilbur Smith (Billy, son of John), Eddie Murphy (John), Elvis Presley, Jane Doe (Jane Doe)"
s = re.split(r'[()]', x)
# ['Wilbur Smith ', 'Billy, son of John', ', Eddie Murphy ', 'John', ', Elvis Presley, Jane Doe ', 'Jane Doe', '']
The odd elements here are actor names, even are the roles. Then I could split the names by commas and somehow extract the name-role pairs. But this seems even worse then my 1st approach.
Are there any easier / nicer ways to do this, either with a single regexp or a nice piece of code?
| One way to do it is to use findall with a regex that greedily matches things that can go between separators. eg:
>>> s = "Wilbur Smith (Billy, son of John), Eddie Murphy (John), Elvis Presley, Jane Doe (Jane Doe)"
>>> import re
>>> r = re.compile(r'(?:[^,(]|\([^)]*\))+')
>>> r.findall(s)
['Wilbur Smith (Billy, son of John)', ' Eddie Murphy (John)', ' Elvis Presley', ' Jane Doe (Jane Doe)']
The regex above matches one or more:
non-comma, non-open-paren characters
strings that start with an open paren, contain 0 or more non-close-parens, and then a close paren
One quirk about this approach is that adjacent separators are treated as a single separator. That is, you won't see an empty string. That may be a bug or a feature depending on your use-case.
Also note that regexes are not suitable for cases where nesting is a possibility. So for example, this would split incorrectly:
"Wilbur Smith (son of John (Johnny, son of James), aka Billy), Eddie Murphy (John)"
If you need to deal with nesting your best bet would be to partition the string into parens, commas, and everthing else (essentially tokenizing it -- this part could still be done with regexes) and then walk through those tokens reassembling the fields, keeping track of your nesting level as you go (this keeping track of the nesting level is what regexes are incapable of doing on their own).
| Split | 1,648,537 | 23 |
Use Case
I need to split large files (~5G) of JSON data into smaller files with newline-delimited JSON in a memory efficient way (i.e., without having to read the entire JSON blob into memory). The JSON data in each source file is an array of objects.
Unfortunately, the source data is not newline-delimited JSON and in some cases there are no newlines in the files at all. This means I can't simply use the split command to split the large file into smaller chunks by newline. Here are examples of how the source data is stored in each file:
Example of a source file with newlines.
[{"id": 1, "name": "foo"}
,{"id": 2, "name": "bar"}
,{"id": 3, "name": "baz"}
...
,{"id": 9, "name": "qux"}]
Example of a source file without newlines.
[{"id": 1, "name": "foo"}, {"id": 2, "name": "bar"}, ...{"id": 9, "name": "qux"}]
Here's an example of the desired format for a single output file:
{"id": 1, "name": "foo"}
{"id": 2, "name": "bar"}
{"id": 3, "name": "baz"}
Current Solution
I'm able to achieve the desired result by using jq and split as described in this SO Post. This approach is memory efficient thanks to the jq streaming parser. Here's the command that achieves the desired result:
cat large_source_file.json \
| jq -cn --stream 'fromstream(1|truncate_stream(inputs))' \
| split --line-bytes=1m --numeric-suffixes - split_output_file
The Problem
The command above takes ~47 mins to process through the entire source file. This seems quite slow, especially when compared to sed which can produce the same output much faster.
Here are some performance benchmarks to show processing time with jq vs. sed.
export SOURCE_FILE=medium_source_file.json # smaller 250MB
# using jq
time cat ${SOURCE_FILE} \
| jq -cn --stream 'fromstream(1|truncate_stream(inputs))' \
| split --line-bytes=1m - split_output_file
real 2m0.656s
user 1m58.265s
sys 0m6.126s
# using sed
time cat ${SOURCE_FILE} \
| sed -E 's#^\[##g' \
| sed -E 's#^,\{#\{#g' \
| sed -E 's#\]$##g' \
| sed 's#},{#}\n{#g' \
| split --line-bytes=1m - sed_split_output_file
real 0m25.545s
user 0m5.372s
sys 0m9.072s
Questions
Is this slower processing speed expected for jq compared to sed? It makes sense jq would be slower given it's doing a lot of validation under the hood, but 4X slower doesn't seem right.
Is there anything I can do to improve the speed at which jq can process this file? I'd prefer to use jq to process files because I'm confident it could seamlessly handle other line output formats, but given I'm processing thousands of files each day, it's hard to justify the speed difference I've observed.
| jq's streaming parser (the one invoked with the --stream command-line option) intentionally sacrifices speed for the sake of reduced memory requirements, as illustrated below in the metrics section. A tool which strikes a different balance (one which seems to be closer to what you're looking for) is jstream, the homepage of which is https://github.com/bcicen/jstream
Running the sequence of commands in a bash or bash-like shell:
cd
go get github.com/bcicen/jstream
cd go/src/github.com/bcicen/jstream/cmd/jstream/
go build
will result in an executable, which you can invoke like so:
jstream -d 1 < INPUTFILE > STREAM
Assuming INPUTFILE contains a (possibly ginormous) JSON array, the above will behave like jq's .[], with jq's -c (compact) command-line option. In fact, this is also the case if INPUTFILE contains a stream of JSON arrays, or a stream of JSON non-scalars ...
Illustrative space-time metrics
Summary
For the task at hand (streaming the top-level items of an array):
mrss u+s
jq --stream: 2 MB 447
jstream : 8 MB 114
jm : 13 MB 109
jq : 5,582 MB 39
In words:
space: jstream is economical with memory, but not as much as jq's streaming parser.
time: jstream runs slightly slower than jq's regular parser
but about 4 times faster than jq's streaming parser.
Interestingly, space*time is about the same for jstream and jq's streaming parser.
Characterization of the test file
The test file consists of an array of 10,000,000 simple objects:
[
{"key_one": 0.13888342355537053, "key_two": 0.4258700286271502, "key_three": 0.8010012924267487}
,{"key_one": 0.13888342355537053, "key_two": 0.4258700286271502, "key_three": 0.8010012924267487}
...
]
$ ls -l input.json
-rw-r--r-- 1 xyzzy staff 980000002 May 2 2019 input.json
$ wc -l input.json
10000001 input.json
jq times and mrss
$ /usr/bin/time -l jq empty input.json
43.91 real 37.36 user 4.74 sys
4981452800 maximum resident set size
$ /usr/bin/time -l jq length input.json
10000000
48.78 real 41.78 user 4.41 sys
4730941440 maximum resident set size
/usr/bin/time -l jq type input.json
"array"
37.69 real 34.26 user 3.05 sys
5582196736 maximum resident set size
/usr/bin/time -l jq 'def count(s): reduce s as $i (0;.+1); count(.[])' input.json
10000000
39.40 real 35.95 user 3.01 sys
5582176256 maximum resident set size
/usr/bin/time -l jq -cn --stream 'fromstream(1|truncate_stream(inputs))' input.json | wc -l
449.88 real 444.43 user 2.12 sys
2023424 maximum resident set size
10000000
jstream times and mrss
$ /usr/bin/time -l jstream -d 1 < input.json > /dev/null
61.63 real 79.52 user 16.43 sys
7999488 maximum resident set size
$ /usr/bin/time -l jstream -d 1 < input.json | wc -l
77.65 real 93.69 user 20.85 sys
7847936 maximum resident set size
10000000
jm times and mrss
jm is another command-line utility for "streaming" JSON arrays and objects, using minimal memory.
$ /usr/bin/time -lp jm --count input.json
real 110.75 user 107.67 sys 1.22
13 295 616 maximum resident set size
7 303 168 peak memory footprint
10000000
| Split | 62,825,963 | 22 |
If I want to take
"hi, my name is foo bar"
and split it on "foo", and have that split be case insensitive (split on any of "foO", "FOO", "Foo", etc), what should I do? Keep in mind that although I would like to have the split be case insensitive, I also DO want to maintain the case sensitivity of the rest of the string.
So if I have:
test = "hi, my name is foo bar"
print test.split('foo')
print test.upper().split("FOO")
I would get
['hi, my name is ', ' bar']
['HI, MY NAME IS ', ' BAR']
respectively.
But what I want is:
['hi, my name is ', ' bar']
every time. The goal is to maintain the case sensitivity of the original string, except for what I am splitting on.
So if my test string was:
"hI MY NAME iS FoO bar"
my desired result would be:
['hI MY NAME iS ', ' bar']
| You can use the re.split function with the re.IGNORECASE flag (or re.I for short):
>>> import re
>>> test = "hI MY NAME iS FoO bar"
>>> re.split("foo", test, flags=re.IGNORECASE)
['hI MY NAME iS ', ' bar']
>>>
| Split | 30,834,159 | 22 |
In Scala, I want to split a string at a specific character like so:
scala> val s = "abba.aadd"
s: String = abba.aadd
scala> val (beforeDot,afterDot) = (s takeWhile (_!='.'), s dropWhile (_!='.'))
beforeDot: String = abba
afterDot: String = .aadd
This solution is slightly inefficient (maybe not asymptotically), but I have the feeling something like this might exist in the standard library already. Any ideas?
| There is a span method:
scala> val (beforeDot, afterDot) = s.span{ _ != '.' }
beforeDot: String = abba
afterDot: String = .aadd
From the Scala documentation:
c span p is equivalent to (but possibly more efficient than) (c takeWhile p, c dropWhile p), provided the evaluation of the predicate p does not cause any side-effects.
| Split | 23,489,107 | 22 |
I want to break a string up into lines of a specified maximum length, without splitting any words, if possible (if there is a word that exceeds the maximum line length, then it will have to be split).
As always, I am acutely aware that strings are immutable and that one should preferably use the StringBuilder class. I have seen examples where the string is split into words and the lines are then built up using the StringBuilder class, but the code below seems "neater" to me.
I mentioned "best" in the description and not "most efficient" as I am also interested in the "eloquence" of the code. The strings will never be huge, generally splitting into 2 or three lines, and it won't be happening for thousands of lines.
Is the following code really bad?
private static IEnumerable<string> SplitToLines(string stringToSplit, int maximumLineLength)
{
stringToSplit = stringToSplit.Trim();
var lines = new List<string>();
while (stringToSplit.Length > 0)
{
if (stringToSplit.Length <= maximumLineLength)
{
lines.Add(stringToSplit);
break;
}
var indexOfLastSpaceInLine = stringToSplit.Substring(0, maximumLineLength).LastIndexOf(' ');
lines.Add(stringToSplit.Substring(0, indexOfLastSpaceInLine >= 0 ? indexOfLastSpaceInLine : maximumLineLength).Trim());
stringToSplit = stringToSplit.Substring(indexOfLastSpaceInLine >= 0 ? indexOfLastSpaceInLine + 1 : maximumLineLength);
}
return lines.ToArray();
}
| Even when this post is 3 years old I wanted to give a better solution using Regex to accomplish the same:
If you want the string to be splitted and then use the text to be displayed you can use this:
public string SplitToLines(string stringToSplit, int maximumLineLength)
{
return Regex.Replace(stringToSplit, @"(.{1," + maximumLineLength +@"})(?:\s|$)", "$1\n");
}
If on the other hand you need a collection you can use this:
public MatchCollection SplitToLines(string stringToSplit, int maximumLineLength)
{
return Regex.Matches(stringToSplit, @"(.{1," + maximumLineLength +@"})(?:\s|$)");
}
NOTES
Remember to import regex (using System.Text.RegularExpressions;)
You can use string interpolation on the match:
$@"(.{{1,{maximumLineLength}}})(?:\s|$)"
The MatchCollection works almost like an Array
Matching example with explanation here
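A quick usage sketch of the first overload:
// "The quick brown fox" wrapped at a maximum of 10 characters per line:
var wrapped = SplitToLines("The quick brown fox", 10);
// => "The quick\nbrown fox\n"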
| Split | 22,368,434 | 22 |
split will always order the splits lexicographically. There may be situations where one would rather preserve the natural order. One can always implement a hand-rolled function but is there a base R solution that does this?
Reproducible example:
Input:
Date.of.Inclusion Securities.Included Securities.Excluded yearmon
1 2013-04-01 INDUSINDBK SIEMENS 4 2013
2 2013-04-01 NMDC WIPRO 4 2013
3 2012-09-28 LUPIN SAIL 9 2012
4 2012-09-28 ULTRACEMCO STER 9 2012
5 2012-04-27 ASIANPAINT RCOM 4 2012
6 2012-04-27 BANKBARODA RPOWER 4 2012
split output:
R> split(nifty.dat, nifty.dat$yearmon)
$`4 2012`
Date.of.Inclusion Securities.Included Securities.Excluded yearmon
5 2012-04-27 ASIANPAINT RCOM 4 2012
6 2012-04-27 BANKBARODA RPOWER 4 2012
$`4 2013`
Date.of.Inclusion Securities.Included Securities.Excluded yearmon
1 2013-04-01 INDUSINDBK SIEMENS 4 2013
2 2013-04-01 NMDC WIPRO 4 2013
$`9 2012`
Date.of.Inclusion Securities.Included Securities.Excluded yearmon
3 2012-09-28 LUPIN SAIL 9 2012
4 2012-09-28 ULTRACEMCO STER 9 2012
Note that yearmon is already sorted in a particular order I will like. This can be taken as given because the question is slightly mis-specified if this does not hold.
Desired output:
$`4 2013`
Date.of.Inclusion Securities.Included Securities.Excluded yearmon
1 2013-04-01 INDUSINDBK SIEMENS 4 2013
2 2013-04-01 NMDC WIPRO 4 2013
$`9 2012`
Date.of.Inclusion Securities.Included Securities.Excluded yearmon
3 2012-09-28 LUPIN SAIL 9 2012
4 2012-09-28 ULTRACEMCO STER 9 2012
$`4 2012`
Date.of.Inclusion Securities.Included Securities.Excluded yearmon
5 2012-04-27 ASIANPAINT RCOM 4 2012
6 2012-04-27 BANKBARODA RPOWER 4 2012
Thanks.
PS: I know there are better ways to create yearmon to preserve that order but I am looking for a generic solution.
| split converts the f (second) argument to factors, if it isn't already one. So, if you want the order to be retained, factor the column yourself with the desired level. That is:
df$yearmon <- factor(df$yearmon, levels=unique(df$yearmon))
# now split
split(df, df$yearmon)
# $`4_2013`
# Date.of.Inclusion Securities.Included Securities.Excluded yearmon
# 1 2013-04-01 INDUSINDBK SIEMENS 4_2013
# 2 2013-04-01 NMDC WIPRO 4_2013
# $`9_2012`
# Date.of.Inclusion Securities.Included Securities.Excluded yearmon
# 3 2012-09-28 LUPIN SAIL 9_2012
# 4 2012-09-28 ULTRACEMCO STER 9_2012
# $`4_2012`
# Date.of.Inclusion Securities.Included Securities.Excluded yearmon
# 5 2012-04-27 ASIANPAINT RCOM 4_2012
# 6 2012-04-27 BANKBARODA RPOWER 4_2012
But do not use split. Use data.table instead:
However normally, split tends to be terribly slow as the levels increase. So, I'd suggest using data.table to subset to a list. I'd suppose that'd be much faster!
require(data.table)
dt <- data.table(df)
dt[, grp := .GRP, by = yearmon]
setkey(dt, grp)
o2 <- dt[, list(list(.SD)), by = grp]$V1
Benchmarking on huge data:
set.seed(45)
dates <- seq(as.Date("1900-01-01"), as.Date("2013-12-31"), by = "days")
ym <- do.call(paste, c(expand.grid(1:500, 1900:2013), sep="_"))
df <- data.frame(x1 = sample(dates, 1e4, TRUE),
x2 = sample(letters, 1e4, TRUE),
x3 = sample(10, 1e4, TRUE),
yearmon = sample(ym, 1e4, TRUE),
stringsAsFactors=FALSE)
require(data.table)
dt <- data.table(df)
f1 <- function(dt) {
dt[, grp := .GRP, by = yearmon]
setkey(dt, grp)
o1 <- dt[, list(list(.SD)), by=grp]$V1
}
f2 <- function(df) {
df$yearmon <- factor(df$yearmon, levels=unique(df$yearmon))
o2 <- split(df, df$yearmon)
}
require(microbenchmark)
microbenchmark(o1 <- f1(dt), o2 <- f2(df), times = 10)
# Unit: milliseconds
expr min lq median uq max neval
# o1 <- f1(dt) 43.72995 43.85035 45.20087 715.1292 1071.976 10
# o2 <- f2(df) 4485.34205 4916.13633 5210.88376 5763.1667 6912.741 10
Note that the solution from o1 will be an unnamed list. But you can set the names simply by doing names(o1) <- unique(dt$yearmon)
| Split | 17,611,734 | 22 |
When I perform
String test="23x34 ";
String[] array=test.split("x"); //splitting using simple letter
I got two items in array as 23 and 34
but when I did
String test="23x34 ";
String[] array=test.split("X"); //splitting using capitalletter
I got one item in array 23x34
So is there any way I can make the split method case insensitive, or is there some other method that can help?
| split uses, as the documentation suggests, a regexp. A regexp for your example would be:
"[xX]"
Also, the (?i) flag toggles case insensitivty. Therefore, the following is also correct :
"(?i)x"
In this case, x can be any literal, properly escaped.
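A minimal example of the inline-flag form:
String test = "23x34X45";
String[] array = test.split("(?i)x");
// array => ["23", "34", "45"]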
| Split | 16,581,977 | 22 |
Just wanted to verify some thoughts regarding the split function. I have constructed some simple code.
var array1 = [{}];
var string1 = "A, B, C, D";
array1 = string1.split(",");
This kind of code works, for example, in Flash: string1 is split at every "," and the pieces are transferred into array1 in the format ["A", "B", "C", "D"]. Does the same concept apply in Google Apps Script for Spreadsheets? If yes, can you cite some example? Thanks a lot, guys.
P.S.: When I tried splitting on ",", it only returned the value "A B C D" as a single element.
Thanks,
Nash :)
| Your code should definitely work. I just ran this with a breakpoint on Logger.log(array1); the debugger shows it as an array, and the log prints it as [A, B, C, D]. Note that to get the output you wanted I had to include the space in the separator: string1.split(", ");
function myFunction() {
var array1 = splitTest();
Logger.log(array1);
}
function splitTest() {
var array1 = [{}];
var string1 = "A, B, C, D";
array1 = string1.split(", ");
return array1
}
| Split | 11,752,911 | 22 |
I am trying to get the name of a File object without its extension, e.g. getting "vegetation" when the filename is "vegetation.txt". I have tried implementing this code:
openFile = fileChooser.getSelectedFile();
String[] tokens = openFile.getName().split(".");
String name = tokens[0];
Unfortunately, it returns a null object. The problem must be in how I define the String, I guess, because the method getName() works correctly; it gives me the name of the file with its extension.
Do you have any idea?
| If you want to implement this yourself, try this:
String name = file.getName();
int pos = name.lastIndexOf(".");
if (pos > 0) {
name = name.substring(0, pos);
}
(This variation doesn't leave you with an empty string for an input filename like ".txt". If you want the string to be empty in that case, change > 0 to >= 0.)
You could replace the if statement with an assignment using a conditional expression, if you thought it made your code more readable; see @Steven's answer for example. (I don't think it does ... but it is a matter of opinion.)
It is arguably a better idea to use an implementation that someone else has written and tested. Apache FilenameUtils is a good choice; see @slachnick's Answer, and also the linked Q&A.
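For reference, a sketch of the Commons IO approach mentioned above (FilenameUtils.removeExtension is part of Apache Commons IO):
import org.apache.commons.io.FilenameUtils;

String base = FilenameUtils.removeExtension("vegetation.txt"); // "vegetation"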
| Split | 8,393,849 | 22 |
say I have the following string:
"Hello there. My name is Fred. I am 25.5 years old."
I want to split this into sentences, so that I have the following list:
["Hello there", "My name is Fred", "I am 25.5 years old"]
As you can see, I want to split the string on all occurrences of the string ". ", not any occurrence of either "." or " ". Python's str.split() will not work in this case because it will treat each character of the string as a separate delimiter, rather than the whole string as a multi-character delimiter. Is there a simple way to solve this problem?
| Works for me
>>> "Hello there. My name is Fr.ed. I am 25.5 years old.".split(". ")
['Hello there', 'My name is Fr.ed', 'I am 25.5 years old.']
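Note that the last element keeps its trailing period, since only ". " (dot followed by space) acts as a delimiter. If unwanted, it can be stripped afterwards, e.g. (with text as a hypothetical name for the input string):
sentences = [s.rstrip(".") for s in text.split(". ")]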
| Split | 8,081,569 | 22 |
I am trying to split a string using the split-string function at the . character, but
(split-string "1.2.3" ".") doesn't work at all. It just returns a list containing a variable number of empty strings. Is . a special character that needs to be escaped or specified in some different way?
| Here is the official documentation for split-string function -
https://www.gnu.org/software/emacs/manual/html_node/elisp/Creating-Strings.html
The second argument to the split-string function in (split-string "1.2.3" "\.") is a regular expression, and as a result both the '.' character and the '\' character have special meaning. So the '.' character needs to be escaped, and in a Lisp string the '\' itself needs to be escaped with another '\'. (split-string "1.2.3" "\\.") will work fine, as expected.
| Split | 6,236,196 | 22 |
I have a string like this:
key=value, key2=value2
and I would like to parse it into something like this:
array(
"key" => "value",
"key2" => "value2"
)
I could do something like
$parts = explode(",", $string);
$parts = array_map("trim", $parts);
foreach($parts as $currentPart)
{
list($key, $value) = explode("=", $currentPart);
$keyValues[$key] = $value;
}
But this seems ridiculous. There must be some way to do this smarter with PHP, right?
| If you don't mind using regex ...
$str = "key=value, key2=value2";
preg_match_all("/([^,= ]+)=([^,= ]+)/", $str, $r);
$result = array_combine($r[1], $r[2]);
var_dump($result);
| Split | 4,923,951 | 22 |
As shown below,
Is it possible to split a Polygon by a Line (into two Polygons)? If the line doesn't go all the way across the polygon, the split would fail.
Is this possible? If so, how would I do this?
I had to do this recently. Just walking the polygon won't work for concave polygons, as in your diagram. Below is an outline of my algorithm, inspired by the Greiner-Hormann polygon clipping algorithm; a small code sketch of the intersection step follows the outline. Splitting is both easier and harder than polygon clipping. Easier because you only clip against a line instead of a rect or another polygon; harder because you need to keep both sides.
Create an empty list of output polygons
Create an empty list of pending crossbacks (one for each polygon)
Find all intersections between the polygon and the line.
Sort them by position along the line.
Pair them up as alternating entry/exit points.
Append a polygon to the output list and make it current.
Walk the polygon. For each edge:
Append the first point to the current polygon.
If there is an intersection with the split line:
Add the intersection point to the current polygon.
Find the intersection point in the intersection pairs.
Set its partner as the crossback point for the current polygon.
If there is an existing polygon with a crossback for this edge:
Set the current polygon to be that polygon.
Else
Append a new polygon and new crossback point to the output lists and make it current.
Add the intersection point to the now current polygon.
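For illustration, here is a minimal, hedged sketch (in Python) of steps 3-5: finding, sorting, and pairing the intersections. The polygon is assumed to be a list of (x, y) tuples, the split line is given by two points a and b, and degenerate cases such as vertices lying exactly on the line are ignored:
def side(p, a, b):
    # Sign of the cross product: which side of the infinite line a->b the point p is on.
    return (b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0])

def paired_crossings(polygon, a, b):
    crossings = []
    for i in range(len(polygon)):
        p, q = polygon[i], polygon[(i + 1) % len(polygon)]
        sp, sq = side(p, a, b), side(q, a, b)
        if sp * sq < 0:  # this edge strictly crosses the line
            t = sp / (sp - sq)  # interpolation parameter along the edge p->q
            x = (p[0] + t * (q[0] - p[0]), p[1] + t * (q[1] - p[1]))
            # Scalar position of x along the split line, used for sorting.
            d = (x[0] - a[0]) * (b[0] - a[0]) + (x[1] - a[1]) * (b[1] - a[1])
            crossings.append((d, i, x))
    crossings.sort()
    # Consecutive crossings along the line alternate entry/exit.
    return list(zip(crossings[0::2], crossings[1::2]))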
| Split | 3,623,703 | 22 |
I have code for reading files with float numbers per line, stored like this: "3.34|2.3409|1.0001|...|1.1|". I would like to read them using istringstream, but it doesn't work as I would expect:
string row;
string strNum;
istringstream separate; // text stream for conversion
while ( getline(file,row) ) {
separate.str(row); // = HERE is PROBLEM =
while( getline(separate, strNum, '|') ) { // using delimiter
flNum = strToFl(strNum); // my conversion
insertIntoMatrix(i,j,flNum); // some function
j++;
}
i++;
}
At the marked point, row is copied into the separate stream only the first time; in subsequent iterations nothing happens. I expected the stream could be reused without constructing a new istringstream object in every iteration.
| After setting the row into the istringstream...
separate.str(row);
... reset it by calling
separate.clear();
This clears any iostate flags that are set in the previous iteration or by setting the string.
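A minimal sketch of the fixed loop from the question (only the clear() call is new):
while (getline(file, row)) {
    separate.clear();   // reset eofbit/failbit left over from the previous row
    separate.str(row);  // load the new row
    while (getline(separate, strNum, '|')) {
        flNum = strToFl(strNum);        // conversion as before
        insertIntoMatrix(i, j, flNum);  // store as before
        j++;
    }
    i++;
}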
http://www.cplusplus.com/reference/iostream/ios/clear/
| Split | 2,767,298 | 22 |
How do you take a paragraph or a large amount of text and break it into sentences (preferably using Ruby), taking into account cases such as Mr. and Dr. and U.S.A? (Assuming you just put the sentences into an array of arrays.)
UPDATE:
One possible solution I thought of involves using a parts-of-speech tagger (POST) and a classifier to determine the end of a sentence:
Given input text such as:
Mr. Jones felt the warm sun on his face as he stepped out onto the balcony of his summer home in Italy. He was happy to be alive.
CLASSIFIER
Mr./PERSON Jones/PERSON felt/O the/O warm/O sun/O on/O his/O face/O as/O he/O stepped/O out/O onto/O the/O balcony/O of/O his/O summer/O home/O in/O Italy/LOCATION ./O He/O was/O happy/O to/O be/O alive/O ./O
POST
Mr./NNP Jones/NNP felt/VBD the/DT warm/JJ sun/NN on/IN his/PRP$ face/NN as/IN he/PRP stepped/VBD out/RP onto/IN the/DT balcony/NN of/IN his/PRP$ summer/NN home/NN in/IN Italy./NNP He/PRP was/VBD happy/JJ to/TO be/VB alive./IN
Can we assume that, since Italy is a location, the period after it is a valid end of the sentence? Since ending on "Mr." would leave no following part of speech, can we assume this is not a valid end-of-sentence period? Is this the best answer to my question?
Thoughts?
| Try looking at the Ruby wrapper around the Stanford Parser. It has a getSentencesFromString() function.
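For illustration, a hedged usage sketch: only getSentencesFromString() is taken from the wrapper's documentation; the require name and the DocumentPreprocessor class are assumptions and may differ in the actual gem:
# Assumed wrapper API -- only getSentencesFromString() is named above;
# the require name and class here are guesses.
require "stanfordparser"

text = "Mr. Jones felt the warm sun on his face. He was happy to be alive."
preprocessor = StanfordParser::DocumentPreprocessor.new
sentences = preprocessor.getSentencesFromString(text)
# expected: ["Mr. Jones felt the warm sun on his face.", "He was happy to be alive."]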
| Split | 860,809 | 22 |
I want to add a space between every 4 characters. I'm developing a webpage about credit card numbers.
For example:
var x = document.getElementById("card_number").value;
For example, if the value of x is 1234567890123456, I need to split this number every 4 characters and add a space, like this:
1234 5678 9012 3456
and I need the result in a variable, because I have to set it back into the textbox.
Please help me. thanks
| You can use RegEx for this
const dummyTxt = '1234567890123456';
const joy = dummyTxt.match(/.{1,4}/g);
console.log(joy.join(' '));
| Split | 53,427,046 | 21 |
What would be an idiomatic way to split a string into strings of 2 characters each?
Examples:
"" -> [""]
"ab" -> ["ab"]
"abcd" -> ["ab", "cd"]
We can assume that the string has a length which is a multiple of 2.
I could use a regex like in this Java answer but I was hoping to find a better way (i.e. using one of kotlin's additional methods).
| Once Kotlin 1.2 is released, you can use the chunked function that is added to kotlin-stdlib by the KEEP-11 proposal. Example:
val chunked = myString.chunked(2)
You can already try this with Kotlin 1.2 M2 pre-release.
Until then, you can implement the same with this code:
fun String.chunked(size: Int): List<String> {
val nChunks = length / size
return (0 until nChunks).map { substring(it * size, (it + 1) * size) }
}
println("abcdef".chunked(2)) // [ab, cd, ef]
This implementation drops a remainder of fewer than size elements. You can modify it to add the remainder to the result as well.
| Split | 45,659,916 | 21 |
I am trying to split (or explode) a string in Swift (1.2) using multiple delimiters, or separators as Apple calls them.
My string looks like this:
KEY1=subKey1=value&subkey2=valueKEY2=subkey1=value&subkey2=valueKEY3=subKey1=value&subkey3=value
I have formatted it for easy reading:
KEY1=subKey1=value&subkey2=value
KEY2=subkey1=value&subkey2=value
KEY3=subKey1=value&subkey3=value
The uppercase "KEY" are predefined names.
I was trying to do this using:
var splittedString = string.componentsSeparatedByString("KEY1")
But as you can see, I can only do this with one KEY as the separator, so I am looking for something like this:
var splittedString = string.componentsSeperatedByStrings(["KEY1", "KEY2", "KEY3"])
So the result would be:
[
"KEY1" => "subKey1=value&subkey2=value",
"KEY2" => "subkey1=value&subkey2=value",
"KEY3" => "subkey1=value&subkey2=value"
]
Is there anything built into Swift 1.2 that I can use?
Or is there some kind of extension/library that can do this easily?
Thanks for your time, and have a great day!
| One can also use the following approach to split a string with multiple delimiters in case keys are single characters:
//swift 4+
let stringData = "K01L02M03"
let res = stringData.components(separatedBy: CharacterSet(charactersIn: "KLM"))
//older swift syntax
let res = stringData.componentsSeparatedByCharactersInSet(NSCharacterSet(charactersInString: "KLM"));
res will contain ["", "01", "02", "03"]; filter out the empty strings (e.g. with .filter { !$0.isEmpty }) to get ["01", "02", "03"]
If anyone knows special syntax to extend the approach to multiple characters per key, you are welcome to suggest it and improve this answer; one possible extension is sketched below.
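A hedged sketch (in current Swift syntax, not the Swift 1.2 API above): replace every multi-character key with a single sentinel character, then split once on that sentinel. The key-to-value association is lost, as with the single-character approach.
import Foundation

let keys = ["KEY1", "KEY2", "KEY3"]
var s = "KEY1=subKey1=value&subkey2=valueKEY2=subkey1=value&subkey2=value"
for key in keys {
    s = s.replacingOccurrences(of: key, with: "\u{1F}") // unit-separator sentinel
}
let parts = s.components(separatedBy: "\u{1F}").filter { !$0.isEmpty }
// parts => ["=subKey1=value&subkey2=value", "=subkey1=value&subkey2=value"]
// (each value keeps its leading "=", which can be dropped with dropFirst())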
| Split | 32,465,121 | 21 |
I have one big array:
[(1.0, 3.0, 1, 427338.4297000002, 4848489.4332)
(1.0, 3.0, 2, 427344.7937000003, 4848482.0692)
(1.0, 3.0, 3, 427346.4297000002, 4848472.7469) ...,
(1.0, 1.0, 7084, 427345.2709999997, 4848796.592)
(1.0, 1.0, 7085, 427352.9277999997, 4848790.9351)
(1.0, 1.0, 7086, 427359.16060000006, 4848787.4332)]
I want to split this array into multiple arrays based on the 2nd value in each row (3.0, 3.0, 3.0, ..., 1.0, 1.0, 1.0).
Every time the 2nd value changes, I want a new array, so basically each new array has the same 2nd value. I've looked this up on Stack Overflow and know of the command
np.split(array, number)
but I'm not trying to split the array into a certain number of arrays, but rather by a value. How would I be able to split the array in the way specified above?
Any help would be appreciated!
| You can find the indices where the values differ by using numpy.where and numpy.diff on the second column:
>>> arr = np.array([(1.0, 3.0, 1, 427338.4297000002, 4848489.4332),
(1.0, 3.0, 2, 427344.7937000003, 4848482.0692),
(1.0, 3.0, 3, 427346.4297000002, 4848472.7469),
(1.0, 1.0, 7084, 427345.2709999997, 4848796.592),
(1.0, 1.0, 7085, 427352.9277999997, 4848790.9351),
(1.0, 1.0, 7086, 427359.16060000006, 4848787.4332)])
>>> np.split(arr, np.where(np.diff(arr[:,1]))[0]+1)
[array([[ 1.00000000e+00, 3.00000000e+00, 1.00000000e+00,
4.27338430e+05, 4.84848943e+06],
[ 1.00000000e+00, 3.00000000e+00, 2.00000000e+00,
4.27344794e+05, 4.84848207e+06],
[ 1.00000000e+00, 3.00000000e+00, 3.00000000e+00,
4.27346430e+05, 4.84847275e+06]]),
array([[ 1.00000000e+00, 1.00000000e+00, 7.08400000e+03,
4.27345271e+05, 4.84879659e+06],
[ 1.00000000e+00, 1.00000000e+00, 7.08500000e+03,
4.27352928e+05, 4.84879094e+06],
[ 1.00000000e+00, 1.00000000e+00, 7.08600000e+03,
4.27359161e+05, 4.84878743e+06]])]
Explanation:
First, we fetch the items in the second column:
>>> arr[:,1]
array([ 3., 3., 3., 1., 1., 1.])
Now to find out where the items actually change we can use numpy.diff:
>>> np.diff(arr[:,1])
array([ 0., 0., -2., 0., 0.])
Anything non-zero means that the item next to it was different. We can use numpy.where to find the indices of non-zero items and then add 1, because the actual index of such an item is one more than the returned index:
>>> np.where(np.diff(arr[:,1]))[0]+1
array([3])
| Split | 31,863,083 | 21 |
Really, pretty much what the title says.
Say you have this string:
var theString = "a=b=c=d";
Now, when you run theString.split("=") the result is ["a", "b", "c", "d"] as expected. And, of course, when you run theString.split("=", 2) you get ["a", "b"], which after reading the MDN page for String#split() makes sense to me.
However, the behavior I'm looking for is more like Java's String#split(): Instead of building the array normally, then returning the first n elements, it builds an array of the first n-1 matches, then adds all the remaining characters as the last element of the array. See the relevant docs for a better description.
How can I get this effect in Javascript?
I'm looking for the answer with the best performance that works like the Java implementation, though the actual way it works can be different.
I'd post my attempt, but I don't know how to go about writing this at all.
| If you want the exact equivalent of the Java implementation (no error checking or guard clauses etc):
function split(str, sep, n) {
var out = [];
while(n--) out.push(str.slice(sep.lastIndex, sep.exec(str).index));
out.push(str.slice(sep.lastIndex));
return out;
}
console.log(split("a=b=c=d", /=/g, 2)); // ['a', 'b', 'c=d']
This has the added benefit of not computing the complete split beforehand, as you mentioned in your question. Note that sep must be a global regex (built with the g flag), since the function relies on its lastIndex advancing between exec calls.
| Split | 29,998,343 | 21 |
In my SQL statement I have to extract a substring from a string at the character '_'. Strings can be for example 'A_XXX' 'AB_XXX' 'ABC_XXXX', so the extracted substrings should be like 'A' 'AB' 'ABC'.
In Oracle this is easy with the substr() and instr() functions:
select substr('AB_XXX', 1, instr('AB_XXX', '_')-1) as substring
from dual;
The result would be:
SUBSTRING
------------------------
AB
I need this query to check if a specific substring is in an array of strings.
The whole query would look like:
select 'AB_XXX' from dual
where (instr('ABC_AB_A', substr('AB_XXX', 1, instr('AB_XXX', '_')-1))>0);
Is there a standard-compliant way to write this in SQL?
If PostgreSQL provides an alternative function, that also helps; the rest could be solved with e.g. IN.
The really important part is to get the substring.
| tl;dr
Use split_part which was purposely built for this:
split_part(string, '_', 1)
Explanation
Quoting the PostgreSQL documentation:
SPLIT_PART() function splits a string on a specified delimiter and returns the nth substring.
The 3 parameters are the string to be split, the delimiter, and the part/substring number (starting from 1) to be returned.
So if you have a field named string that contains stuff like AB_XXX and you would like to get everything before _, then you split by that and get the first part/substring: split_part(string, '_', 1).
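For the concrete strings in the question, a minimal example (the membership check rewritten with split_part):
-- Extract the prefix before '_':
SELECT split_part('AB_XXX', '_', 1);   -- => 'AB'

-- The membership check from the question, rewritten:
SELECT 'AB_XXX'
WHERE split_part('AB_XXX', '_', 1) IN ('ABC', 'AB', 'A');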
| Split | 29,895,896 | 21 |
For example, if I have text like this:
214asd
df5df8
d66f66
I want to split it into 3 strings using VB.NET.
| Assuming you want to split on new lines, String.Split will return an array containing the parts:
Dim parts As String() = myString.Split(new String() {Environment.NewLine},
StringSplitOptions.None)
This will be platform specific, so you may want to split on "\r\n", "\n", "\r" or a combination of them; String.Split has an overload that takes a string array with the strings you wish to split on. For example:
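' A hedged sketch covering all three line-ending styles at once:
Dim parts As String() = myString.Split(
    New String() {vbCrLf, vbLf, vbCr},
    StringSplitOptions.None)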
| Split | 14,795,943 | 21 |