_id | partition | text | language | title
---|---|---|---|---|
d3101 | train | you can use it like this:
os.system('code test_01.py')
A: You can use either one, but first, read the docs on what os.system and os.startfile do.
os.system(command)
Execute the command (a string) in a subshell. This is implemented by calling the Standard C function system(), and has the same limitations. Changes to sys.stdin, etc. are not reflected in the environment of the executed command.
So this basically runs the command string you pass to it. If your intention is to open a file in VS Code, then you need to check if you can use the VS Code command for opening files/folders from the command line:
code myfile.py
If that works on your terminal, then your Python script would basically just be:
os.system("code myfile.py")
os.startfile(path[, operation])
Start a file with its associated application.
When operation is not specified or 'open', this acts like double-clicking the file in Windows Explorer, or giving the file name as an argument to the start command from the interactive command shell: the file is opened with whatever application (if any) its extension is associated.
I assume you are on Windows, because startfile is only available on Windows.
The main thing here is that startfile is the same behavior as double-clicking the file in Windows Explorer. So, first make sure that when you double-click on a file, it opens in VS Code. If it doesn't, then you need to associate that file with VS Code first. This is usually done by right-click > "Open with..." then selecting VS Code from the list.
Once double-clicking on a file opens it in VS Code, then your Python script would simply be:
os.startfile("myfile.py", "open")
The "open" here is optional, but I prefer to be explicit. | unknown | |
d3102 | train | Not really enough information to give a definitive assessment, but some things to consider:
* You're unlikely to get a skip scan benefit, so if you want snappy response from predicates with leading E or leading D, that will be 2 indexes: one leading with D, and one leading with E (see the sketch after this list).
* If A/B are updated frequently (although that's a generic term), you might choose to leave them out of the index definition in order to reduce index maintenance overhead.
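For illustration, the two-index approach might look like this (the table and column names are hypothetical):
CREATE INDEX t_d_first ON t (D, E);  -- serves predicates leading with D
CREATE INDEX t_e_first ON t (E, D);  -- serves predicates leading with E
Whether to also cover A/B in these definitions is the maintenance trade-off mentioned above. | unknown | |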
d3103 | train | Be careful with that.
These are generic instructions and they assume a non-CentOS/fedora distro. In CentOS/fedora, /lib is a symlink to /usr/lib (and /lib64 is a symlink to /usr/lib64). So, the instructions won't work.
On other distros, /lib and /usr/lib are distinct directories. The command is trying to create a symlink from /usr/lib to /lib, such that:
/usr/lib/libpcre.so --> /lib/libpcre.so
And, /lib/libpcre.so is the real file.
On CentOS/fedora, this would create a symlink cycle:
/usr/lib/libpcre.so --> /usr/lib/libpcre.so
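A quick way to check which layout your distro uses before creating any links:
ls -ld /lib /usr/lib   # on CentOS/fedora, /lib shows up as a symlink: lib -> usr/lib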
Unless you have a specific reason not to, just do "yum install pcre" and let the distro install handle it. You may also want the development package: "yum install pcre-devel"
Also, note that the instructions are for a 32 bit system. If yours is 64 bit, then we're talking about /lib64 and /usr/lib64. If your system is 64 bit, but you want to create 32 bit apps, you'll need "yum install pcre.i686"
IMO, in the instructions, using "&&" is "getting cute" and makes me distrust them a bit. For example, "./configure -blah && make" is technically correct, but it's usually better to do these as separate steps:
- ./configure -blah
- check for errors
- make
- check for errors
The reason that it's relative (using ../..) is so that you could install under /usr/local or other (e.g /usr/local/lib and /usr/local/usr/lib) | unknown | |
d3104 | train | The point around which rotation occurs is affected by the layer's position and anchorPoint properties, see Anchor Points Affect Geometric Manipulations. The default values for these properties do not appear to match the documentation, at least under macOS 10.11 and whichever version you used.
Try setting them by adding four lines as follows:
[btnScan setWantsLayer:YES];
CALayer *btnLayer = btnScan.layer;
NSRect frame = btnLayer.frame;
btnLayer.position = NSMakePoint(NSMidX(frame), NSMidY(frame)); // position to centre of frame
btnLayer.anchorPoint = NSMakePoint(0.5, 0.5); // anchor point to centre - specified in unit coordinates
[btnScan.layer addAnimation:animation forKey:@"transform.rotation.z"];
It doesn't actually matter in which order you set these properties; the result should be an image rotation around the centre point of the button. | unknown | |
d3105 | train | I don't have a 14GB file to try this with, so memory footprint is a concern. Someone who knows regex better than myself might have some performance tweaking suggestions.
The main concept is don't iterate through each line when avoidable. Let re do its magic on the whole body of text, then write that body to the file.
import re
newdate = "20150201,"
f = open('sample.csv', 'r')
g = open('result.csv', 'w')
body = f.read()
## keeps the original csv
g.write(body)
# strip off the header -- we already have one.
header, mainbody = body.split('\n', 1)
# replace all the dates
newbody = re.sub(r"20150131,", newdate, mainbody)
#end of the body didn't have a newline. Adding one back in.
g.write('\n' + newbody)
f.close()
g.close()
A: Batch writing your rows isn't really going to be an improvement because your write IO's are still going to be the same size. Batching up writes only gives you an improvement if you can increase your IO size, which reduces the number of system calls and allows the IO system to deal with fewer but larger writes.
Honestly, I wouldn't complicate the code with batch writing for maintainability reasons, but I can certainly understand the desire to experiment with trying to improve the speed, if only for educational reasons.
What you want to do is batch up your writes -- batching up your csv rows doesn't really accomplish this.
[Example using StringIO removed .. there's a better way.]
Python write() uses buffered I/O. It just by default buffers at 4k (on Linux). If you open the file with a buffering parameter you can make it bigger:
with open("/tmp/x", "w", 1024*1024) as fd:
for i in range(0, 1000000):
fd.write("line %d\n" %i)
Then your writes will be 1MB. strace output:
write(3, "line 0\nline 1\nline 2\nline 3\nline"..., 1048576) = 1048576
write(3, "ine 96335\nline 96336\nline 96337\n"..., 1048576) = 1048576
write(3, "1\nline 184022\nline 184023\nline 1"..., 1048576) = 1048576
write(3, "ne 271403\nline 271404\nline 27140"..., 1048576) = 1048576
write(3, "58784\nline 358785\nline 358786\nli"..., 1048576) = 1048576
write(3, "5\nline 446166\nline 446167\nline 4"..., 1048576) = 1048576
write(3, "ne 533547\nline 533548\nline 53354"..., 1048576) = 1048576
[...]
Your simpler original code will work and you only need to change the blocksize for the open() calls (I would change it for both source and destination.)
My other suggestion is to abandon csv, but that potentially carries some risk. If you have quoted strings with commas in them you have to create the right kind of parser.
BUT -- since the field you want to modify is fairly regular and the first field, you may find it much simpler to just have a readline/write loop where you just replace the first field and ignore the rest.
#!/usr/bin/python
import datetime
import re
with open("/tmp/out", "w", 1024*1024) as fdout, open("/tmp/in", "r", 1024*1024) as fdin:
for i in range(0, 6):
fdin.seek(0)
for line in fdin:
if i == 0:
fdout.write(line)
continue
match = re.search(r"^(\d{8}),", line)
if match:
date = datetime.datetime.strptime(match.group(1), "%Y%m%d")
fdout.write(re.sub("^\d{8},", (date + datetime.timedelta(days=i)).strftime("%Y%m%d,"), line))
else:
if line.startswith("TIME_SK,"):
continue
raise Exception("Could not find /^\d{8},/ in '%s'" % line)
If order doesn't matter, then don't reread the file over and over:
#!/usr/bin/python
import datetime
import re
with open("/tmp/in", "r", 1024*1024) as fd, open("/tmp/out", "w", 1024*1024) as out:
for line in fd:
match = re.search("^(\d{8}),", line)
if match:
out.write(line)
date = datetime.datetime.strptime(match.group(1), "%Y%m%d")
for days in range(1, 6):
out.write(re.sub("^\d{8},", (date + datetime.timedelta(days=days)).strftime("%Y%m%d,"), line))
else:
if line.startswith("TIME_SK,"):
out.write(line)
continue
raise Exception("Could not find /^\d{8},/ in %s" % line)
I went ahead and profiled one of these with python -mcProfile and was surprised how much time was spent in strptime. Also try caching your strptime() calls by using this memoized strptime():
_STRPTIME = {}
def strptime(s):
    if s not in _STRPTIME:
        _STRPTIME[s] = datetime.datetime.strptime(s, "%Y%m%d")
    return _STRPTIME[s]
A: First of all, you're going to be limited by the write speed. Typical write speed for a desktop machine is on the order of about 40 seconds per gigabyte. You need to write 4,000 gigabytes, so it's going to take on the order of 160,000 seconds (44.5 hours) just to write the output. The only way to reduce that time is to get a faster drive.
To make a 4 TB file by replicating a 14 GB file, you have to copy the original file 286 (actually 285.71) times. The simplest way to do things is:
open output file
starting_date = date on first transaction
for pass = 1 to 286
open original file
while not end of file
read transaction
replace date
write to output
increment date
end while
end for
close output file
But with a typical read speed of about 20 seconds per gigabyte, that's 80,000 seconds (22 hours and 15 minutes) just for reading.
You can't do anything about the writing time, but you probably can reduce the reading time by a lot.
If you can buffer the whole 14 GB input file, then reading time becomes about five minutes.
If you don't have the memory to hold the 14 GB, consider reading it into a compressed memory stream. That CSV should compress quite well--to less than half of its current size. Then, rather than opening the input file every time through the loop, you just re-initialize a stream reader from the compressed copy of the file you're holding in memory.
In C#, I'd just use the MemoryStream and GZipStream classes. A quick Google search indicates that similar capabilities exist in Python, but since I'm not a Python programmer I can't tell you exactly how to use them.
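For illustration, a minimal Python sketch of that idea using gzip and io (hypothetical paths; the date-replacement logic from the answers above would go inside the inner loop):
import gzip
import io
with open("/tmp/in", "rb") as f:
    compressed = gzip.compress(f.read())   # hold a compressed copy of the input in memory
with open("/tmp/out", "w", 1024*1024) as out:
    for p in range(286):
        # re-initialize a reader over the in-memory compressed copy for each pass
        reader = io.TextIOWrapper(gzip.GzipFile(fileobj=io.BytesIO(compressed)))
        for line in reader:
            out.write(line)   # date replacement goes here
The reads then come from memory while the writes still stream to disk. | unknown | |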
d3106 | train | So, your data seems to be reasonably correct, just that you are using an old reference. Unfortunately, Intel's website is either broken presently or it doesn't like Firefox and/or Linux.
76036301
76 means trace cache with 64K ops.
03 means 4 way DATA TLB with 64 entries.
63 is 32KB L1 cache - the source here shows that value, which is not in your docs.
01 means 4 way Instruction TLB with 32 entries.
00f0b5ff gives
00 "nothing"
f0 prefetch, 64 entries.
0b Instruction 4 way TLB for large pages, 4 entries.
b5 is not documented even on that link. [guessing small data TLB]
To get L2 and L3 cache sizes, you need to use CPUID with EAX=4, and set ECX to 0, 1, 2, ... for each caching level. The linked code shows this, and Intel's docs have details on which bits mean what.
A: Intel's Instruction Set Reference has all the relevant information you need (at around page 263), and is actually up to date unlike every other source I have found.
Probably the best way to get the cache info is mentioned in that reference.
When eax = 4 and ecx is the cache level,
Ways = EBX[31:22]
Partitions = EBX[21:12]
LineSize = EBX[11:0]
Sets = ECX
Total Size = (Ways + 1) * (Partitions + 1) * (Line_Size + 1) * (Sets + 1)
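As a minimal sketch of that computation in C (an assumption: GCC/Clang's cpuid.h is available; subleaf 3 selects the L3 on typical Intel parts):
#include <stdio.h>
#include <cpuid.h>
int main(void) {
    unsigned eax, ebx, ecx, edx;
    __get_cpuid_count(4, 3, &eax, &ebx, &ecx, &edx);  /* EAX=4, ECX=3 -> L3 */
    unsigned ways       = (ebx >> 22) & 0x3ff;
    unsigned partitions = (ebx >> 12) & 0x3ff;
    unsigned line_size  =  ebx        & 0xfff;
    unsigned sets       =  ecx;
    printf("L3 size: %u bytes\n", (ways + 1) * (partitions + 1) * (line_size + 1) * (sets + 1));
    return 0;
}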
So when CPUID is called with eax = 4 and ecx = 3, you can get your L3 cache size by doing the computation above. Using the OP's posted data:
ebx: 02c0003f
ecx: 00001fff
Ways = 11
Partitions = 0
LineSize = 63
Sets = 8191
Total L3 cache size = 6291456
Which is what was expected. | unknown | |
d3107 | train | Suppose this simplified form of data represents your actual data:
dat <- structure(list(State = c("Alabama", "Alaska", "Arizona", "Others"
), average_aqi = c(300, 550, 150, 1000)), class = "data.frame", row.names = c(NA,
-4L))
If I understand your purpose correctly, you want to get the proportion of average_aqi in this way:
dat |> mutate(avaqi_perc = average_aqi/sum(average_aqi))
# State average_aqi avaqi_perc
#1 Alabama 300 0.150
#2 Alaska 550 0.275
#3 Arizona 150 0.075
#4 Others 1000 0.500 | unknown | |
d3108 | train | Sounds like you need to use jQuery deferred. It basically allows you to chain multiple event handlers to the jQuery Ajax object and gives you finer control over when the callbacks are invoked.
Further reading:
* http://msdn.microsoft.com/en-us/scriptjunkie/gg723713
* http://www.erichynds.com/jquery/using-deferreds-in-jquery/
A: It's asynchronous - the "success" fires sometime in the future. The script does not wait for it to respond. Since you're firing off three requests in your loop, they will all be "scan1".
"scan_2" will be called as each request completes.
Change the request to synchronous if you want to control the order of events.
A: You are starting by sending off three ajax calls at once.
Scan1 (loop 1)
Scan1 (loop 2)
Scan1 (loop 3)
When each Scan 1 completes, it's subsequent Scan 2, and then Scan 3 are called.
What did you actually want to happen? Scan 1 2 and 3 of loop 1, then 1 2 and 3 of loop 2, and then 1 2 and 3 of loop 3? That would require more nesting, or possibly deferred objects.
A: Instead of using the success callback for each $.ajax() call, you can store each set of AJAX requests (their jqXHR objects) in an array and wait for all of them to resolve:
function scan_1 () {
    //setup array to store jqXHR objects (deferred objects)
    var jqXHRs = [];
    for (var i = 1; i <= 3; i++) {
        //push a new index onto the array, `$.ajax()` returns an object that will resolve when the response is returned
        jqXHRs[jqXHRs.length] = $.ajax({
            type: 'GET',
            url: url,
            data: 'do=scan&step=1&' + string,
            dataType: 'json'
        });
    }
    //wait for all three of the AJAX requests to resolve before running `scan_2()`
    //note: `$.when()` expects the deferreds as separate arguments, hence `.apply()`
    $.when.apply(null, jqXHRs).then(function () {
        //each argument here is a [data, statusText, jqXHR] array, one per request
        var result = arguments[0][0];
        if (result.proceed == 'true') {
            scan_2();
        }
    });
}
A: I've had similar problems working heavily with SharePoint web services - you often need to pull data from multiple sources to generate input for a single process.
To solve it I embedded this kind of functionality into my AJAX abstraction library. You can easily define a request which will trigger a set of handlers when complete. However each request can be defined with multiple http calls. Here's the component (and detailed documentation):
DPAJAX at DepressedPress.com
This simple example creates one request with three calls and then passes that information, in the call order, to a single handler:
// The handler function
function AddUp(Nums) { alert(Nums[1] + Nums[2] + Nums[3]) };
// Create the pool
myPool = DP_AJAX.createPool();
// Create the request
myRequest = DP_AJAX.createRequest(AddUp);
// Add the calls to the request
myRequest.addCall("GET", "http://www.mysite.com/Add.htm", [5,10]);
myRequest.addCall("GET", "http://www.mysite.com/Add.htm", [4,6]);
myRequest.addCall("GET", "http://www.mysite.com/Add.htm", [7,13]);
// Add the request to the pool
myPool.addRequest(myRequest);
Note that unlike many of the other solutions provided this method does not force single threading of the calls being made - each will still run as quickly (or as slowly) as the environment allows but the single handler will only be called when all are complete. It also supports the setting of timeout values and retry attempts if your service is a little flakey.
I've found it insanely useful (and incredibly simple to understand from a code perspective). No more chaining, no more counting calls and saving output. Just "set it and forget it". | unknown | |
d3109 | train | I suspect the answer is that you can't include aggregate functions such as SUM() in a query unless you can guarantee (usually by adding a GROUP BY clause) that the values of the non-aggregated columns are the same for all rows included in the SUM().
The aggregate functions effectively condense a column over many rows into a single value, which cannot be done for non-aggregated columns unless SQL knows that they are guaranteed to have the same value for all considered rows (which a GROUP BY will do, but this may not be what you want).
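For example (a hypothetical table; the first query is rejected by most SQL engines, the second works):
SELECT customer_name, SUM(amount) FROM orders;                        -- error: customer_name is neither aggregated nor grouped
SELECT customer_name, SUM(amount) FROM orders GROUP BY customer_name; -- fine: customer_name is constant within each group
The GROUP BY is what guarantees that each non-aggregated column has a single value for every summed group. | unknown | |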
d3110 | train | You can set the look and feel to reflect the platform:
try {
UIManager.setLookAndFeel(UIManager.getSystemLookAndFeelClassName());
} catch (Exception e) {
e.printStackTrace();
}
If this is not nice enough for you, take a look at SWT for Eclipse.
A: If you're good with Photoshop, you could declare the JFrame as undecorated and create your own image that will serve as your custom GUI.
in this example refer to the JFrame as frame.
JFrame frame = new JFrame();
frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
frame.setSize(800, 600);
//If you want full screen enter frame.setExtendedState(JFrame.MAXIMIZED_BOTH);
frame.setUndecorated(true);
now that the JFrame has its own border and custom GUI, if you defined your own custom buttons for minimize, maximize, and exit you need to lay a JPanel over the buttons and declare what those buttons will do in a listener (in Java this is a method on a listener class).
for example we will refer to the JPanel as panel and set it up as an exit button. Note that a JPanel takes a mouse listener rather than an action listener:
panel.addMouseListener(new MouseAdapter(){
    @Override
    public void mouseClicked(MouseEvent e){
        System.exit(0);
    }
}); | unknown | |
d3111 | train | So, you'll want to subscribe to your observable in the component. This is typically done so the component can determine when the http request should be run and so the component can wait for the http request to finish (and follow with some logic).
// subscribe to observable somewhere in your component (like ngOnInit)
this.setorService.obterSetores().subscribe(
    res => {
        this.myObjects = res;
    },
    err => {}
);
now the component knows when the http request finishes | unknown | |
d3112 | train | I had the same issue occurring when added this to the info plist -
Application does not run in background - YES or source code
<key>UIApplicationExitsOnSuspend</key>
<true/>
Set this to NO / <false/> and the problem goes away.
Hope this helps someone; obviously you have to weigh whether you need this setting in the plist or not. | unknown | |
d3113 | train | Regex to match the whole Openings tag is,
<Openings>.*?<\/Openings>
If you want to capture the contents inside the Openings tag then try the below,
<Openings>(.*?)<\/Openings>
A: ([\<Openings\>])\w+
The brackets mean "Match any character in this". You should use
(\<Openings\>)\w+
which matches specifically "<Openings>" plus one or more word characters. | unknown | |
d3114 | train | Have you tried with the official API?
Getting started with the API is on MSDN.
A: I could not find the Xbox Music API working for Windows Phone 8.1, but there are packages for Windows Phone 8 and Windows 8.1 | unknown | |
d3115 | train | If you want to interact with a web site, filling text boxes, clicking buttons etc, I think a more logical solution would be using and managing an actual web browser.
Selenium.WebDriver NuGet Package
C# Tutorial 1
C# Tutorial 2
A: Well - it looks like I underestimated the power of AngleSharp
There's a wonderful post here which describes how to use it to log into a website, and post forms.
The library has been updated since so a few things have changed, but the capability and approach is the same.
I'll include my "test" code here which demonstrates usability.
public async Task LogIn()
{
//Sets up the context to preserve state from one request to the next
var configuration = Configuration.Default.WithDefaultLoader().WithDefaultCookies();
var context = BrowsingContext.New(configuration);
//Loads the login page
await context.OpenAsync("https://my.website.com/login/");
//Identifies the only form on the page (can use CSS selectors to choose one if multiple), fills in the fields and submits
await context.Active.QuerySelector<IHtmlFormElement>("form").SubmitAsync(new
{
username = "CharlieChaplin",
pass = "x78gjdngmf"
});
//stores the response page body in the result variable.
var result = context.Active.Body;
}
EDIT - after working with this for a while, I've discovered that Anglesharp.IO has a more robust HttpRequester in it. The above code then becomes
public async Task LogIn()
{
var client = new HttpClient();
var requester = new HttpClientRequester(client);
//Sets up the context to preserve state from one request to the next
var configuration = Configuration.Default
.WithRequester(requester)
.WithDefaultLoader()
.WithDefaultCookies();
var context = BrowsingContext.New(configuration);
//Loads the login page
await context.OpenAsync("https://my.website.com/login/");
//Identifies the only form on the page (can use CSS selectors to choose one if multiple), fills in the fields and submits
await context.Active.QuerySelector<IHtmlFormElement>("form").SubmitAsync(new
{
username = "CharlieChaplin",
pass = "x78gjdngmf"
});
var result = context.Active.Body;
} | unknown | |
d3116 | train | You could try this one:
/(^\/\/[^\n]+$\n)+/gm
see here https://regex101.com/r/CrR9WU/1
This selects first the two / at the beginning of each line, then anything that is not a newline, and finally (at the end of the line) the newline character itself. There are two matches: rows 1 to 3 and rows 4 to 6. If you also allow empty comment lines like "//" then this will do too:
/(^\/\/[^\n]*$\n)+/gm
Edit:
I know, it is a little late now, but Casimir's helpful comment got me on to this modified solution:
/(?:^\/\/.*\n?)+/gm
It solves the problem of the final \n, does not capture groups and is simpler. (And it is pretty similar to Jan's solution ;-) ...)
A: This is what modifiers are for:
(?:^\/{2}.+\n?)+
With MULTILINE mode, see a demo on regex101.com.
Broken apart, this says:
(?: # a non-capturing group
^ # start of the line
\/{2} # //
.+ # anything else in that line
\n? # a newline, eventually but greedy
)+ # repeat the group | unknown | |
d3117 | train | In the javascript put
$.get("http://example.com/foo.php?name=" + text);
instead of the alert and in the php use:
mysql_real_escape_string($_GET['name'])
instead of Trucks. | unknown | |
d3118 | train | You should parse to a DateTime and then use the ToString to go back to a string. The following works with your given input.
var dateStrings = new []{"30/04/2018", "01/03/2017","10/11/2018","12/11/2123","1/1/2018"};
foreach(var ds in dateStrings)
{
Console.WriteLine(DateTime.ParseExact(ds, "d/M/yyyy", System.Globalization.CultureInfo.InvariantCulture).ToString("ddMMyy"));
}
The only change I made is to the first date as that is not a valid date within that month (April has 30 days, not 31). If that is going to be a problem then you should change it to TryParse instead, currently I assumed your example was faulty and not your actual data.
A: Your structure varies: all of the examples above use a two-digit month and day, while the bottom one only uses a single-digit month and day. Your current code basically replaces the slash with an empty string, but when you remove index four to two your output will deviate.
The simplest approach would be:
var date = DateTime.Parse("...");
var filter = $"o/p = {date:MMddyyyy}";
Obviously you may have to validate and ensure accuracy of your date conversion, but I don't know how your applications works.
A: If you can reasonably expect that the passed in dates are actual dates (hint: there are only 30 days in April) you should make a function that parses the string into DateTimes, then uses string formats to get the output how you want:
public static string ToDateTimeFormat(string input)
{
    DateTime output;
    if (DateTime.TryParse(input, out output))
    {
        return output.ToString("MMddyy");
    }
    return input; //parse fails, return original input
}
My example will still take "bad" dates, but it will not throw an exception like some of the other answers given here (TryParse() vs Parse()).
There is obviously a small bit of overhead with parsing but its negligible compared to all the logic you would need to get the proper string manipulation.
Fiddle here
A: Parse the string as DateTime. Then run ToString with the format you desire.
var a = "1/1/2018";
var date = DateTime.Parse(a);
var result = date.ToString("ddMMyyyy");
A: You can use ParseExact to parse the input, then use ToString to format the output.
For example:
private static void Main()
{
    var testData = new List<string>
    {
        "30/04/2018",
        "01/03/2017",
        "10/11/2018",
        "12/11/2123",
        "1/1/2018",
    };
    foreach (var data in testData)
    {
        // note: uppercase "M" means month; lowercase "m" would mean minutes
        Console.WriteLine(DateTime.ParseExact(data, "d/M/yyyy", null).ToString("ddMMyy"));
    }
    Console.Write("\nDone! Press any key to exit...");
    Console.ReadKey();
}
A: You didn't specify whether these are DateTime values or just strings that look like date time values. I'll assume these are DateTime values.
Convert the string to a DateTime. Then use a string formatter. It's important to specify the culture. In this case dd/mm/yyyy is common in the UK.
var culture = new CultureInfo("en-GB");//UK uses the datetime format dd/MM/yyyy
var dates = new List<string>{"30/04/2018", "01/03/2017","10/11/2018","12/11/2123","1/1/2018"};
foreach (var date in dates)
{
//TODO: Do something with these values
DateTime.Parse(date, culture).ToString("ddMMyyyy");
}
Otherwise, running DateTime.Parse on a machine with a different culture could result in a FormatException. Parsing dates and times in .NET. | unknown | |
d3119 | train | Finally, after research, I haven't find appropriate way to update backoffice with ant updatesystem.
To update backoffice upon application start or/and login to backoffice we can use these properties:
backoffice.cockpitng.reset.triggers=start,login
backoffice.cockpitng.reset.scope=widgets,cockpitConfig
When the application starts, configuration files will be reassembled and saved. This won't take much time: I didn't notice any impact on application startup time with all OOTB backoffice extensions enabled (sometimes the app took 10 sec longer to start, sometimes 10 sec less). | unknown | |
d3120 | train | Below is an example. Since you didn't post your original query attempt, we can't really say why you were getting multiple rows. No need for a LEFT JOIN unless you are missing codes in the joined tables.
SELECT Table1.ID
, Table1.Acode
, Table2.Adescription
, Table1.Bcode
, Table3.Bdescription
, Table1.Ccode
, Table4.Cdescription
FROM dbo.Table1
JOIN dbo.Table2 ON Table2.Acode = Table1.Acode
JOIN dbo.Table3 ON Table3.Bcode = Table1.Bcode
JOIN dbo.Table4 ON Table4.Ccode = Table1.Ccode;
A: Thanks for the help.
LEFT JOIN worked well. I tried to narrow down the tables one by one and found the table where I was getting duplicate records. After finding the table, I discovered I had forgotten to add a unique key, and one record (a Description) was entered twice, which was producing duplicate records and increasing the total number of rows.
Thanks all for helping me out, and Dan Guzman for pointing me to the duplicate codes. | unknown | |
d3121 | train | I'm also in the class so this might be completely wrong but this is what I saw:
// right middle column
for (int l = j; l <= j; l++)
Your for loop for the int l doesn't seem like it's correct. Shouldn't there be 2 columns?
// lower left corner (anchored)
for (int j = 0; j < height; j++)
j should be width, but since it's a corner, based on your previous code I think it should be 1 | unknown | |
d3122 | train | I don't really understand what you are looking for, but I would have done it this way, for more readability. It may also be more efficient:
function doLoadNames() {
}
function doLoadOther() {
}
function doLoad($toLoad) {
$functionName = 'doLoad'.ucfirst($toLoad);
if (function_exists($functionName)) {
$functionName();
return true;
}
return false;
}
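For instance, calling it might look like this:
doLoad('names'); // calls doLoadNames() and returns true
doLoad('foo');   // no doLoadFoo() defined, so it returns false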
A: All you need is a function called call_user_func or call_user_func_array; the manuals are here: call_user_func, call_user_func_array | unknown | |
d3123 | train | You need e(fx)clipse or you need to add the jfxrt.jar to your classpath. It comes with your JDK.
How do I work with JavaFX in Eclipse Juno? should sort it out for you. NetBeans and IntelliJ come with built-in support for JavaFX.
A: You could also work with Java 8 as JavaFX is already included there. It also provides a lot of improvements on JavaFX components. | unknown | |
d3124 | train | Blogger conditional statements don't provide pattern matching capability. It would be easier to implement what you require via JavaScript directly in the Blogger template -
An example code would look like -
if(window.location.href.match(/Regex-Condition/)){
var script = document.createElement('script'); script.type = 'text/javascript';
script.src = 'URL-Of-The-Script-You-Want-To-Load';script.async=true;
var insertScript = document.getElementsByTagName('script')[0]; insertScript.parentNode.insertBefore(script, insertScript);
}
Other than this, you can also take a look at Lambda expressions (Refer to How to use Google Blogger Lambda Operators )
A: Hi and thank you for your feedback! While I was waiting for some reply I realized I've to use JavaScript as you said.
I was testing this code but it looks something is wrong:
<script type='text/javascript'>
var pattern = new RegExp('mywebsite\.com\/(200[8-9]|201[0-6])\/');
if (pattern.test(document.location.href)) {
}
</script>
I've two main concerns:
* if the regex is correct
* between the brackets I have to execute some code which includes Blogger conditional statements, div and plain JavaScript. Does this code need some special attention?
Thanks | unknown | |
d3125 | train | First of all, you should know that this is not a good practice of websockets, where you are forcing the client (the restaurant) to be connected.
Anyway, at the current state of your code, there is an illogical behavior: at the end of the useEffect of your "useWebSocketLite" function, you are closing the socket connection:
return () => {
ws.close();
};
Knowing that the useEffect hook is called twice (after the first render of the component, and then after every change of the dependencies, the "retry" state in your case), your code can be read like so: every time the "retry" state changes, we close the socket! So for me that is why the client got disconnected. | unknown | |
d3126 | train | I assume you didn't set timeout in your job definition.
There is another timeout setting in the Rundeck SSH plugin. You can set it at different levels (node, project, rundeck).
For node level:
ssh-connection-timeout connection timeout
ssh-command-timeout command timeout
The default value is 0 (no timeout)
The config file is framework.properties under rundeck base dir
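For example, setting both properties with a 30-second limit might look like this (hypothetical values; check the linked doc for the exact scope and units):
ssh-connection-timeout=30000
ssh-command-timeout=30000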
Specifying SSH Timeout options | unknown | |
d3127 | train | You can do that by a toast. Here is an example.
var content =
$@"
<toast activationType='foreground' launch='args'>
<visual>
<binding template='ToastGeneric'>
<text>Open app</text>
<text>Your clipboard is ready.Open app</text>
</binding>
</visual>
<actions>
<action content='ok' activationType='foreground' arguments='check'/>
<action activationType='system' arguments='dismiss' content='cancel' />
</actions>
</toast>";
Windows.Data.Xml.Dom.XmlDocument toastDOM = new Windows.Data.Xml.Dom.XmlDocument();
toastDOM.LoadXml(content);
ToastNotification toast = new ToastNotification(toastDOM);
var notifier = ToastNotificationManager.CreateToastNotifier();
notifier.Show(toast);
And then, if the user presses the ok button, you must go to App.xaml.cs and handle that:
protected override void OnActivated(IActivatedEventArgs args)
{
if (args.Kind == ActivationKind.ToastNotification)
{
var toastArgs = args as ToastNotificationActivatedEventArgs;
var arguments = toastArgs.Argument;
if (arguments == "check")
{
Frame rootFrame = Window.Current.Content as Frame;
if (rootFrame == null)
{
rootFrame = new Frame();
Window.Current.Content = rootFrame;
}
rootFrame.Navigate(typeof(YOURPAGEHERE));
Window.Current.Activate();
}
}
} | unknown | |
d3128 | train | if(sum/10!=0){
doSum(sum);
}
This is what is wrong with your logic. You recursively call doSum() on the new sum but you do nothing with the result. So you need to change this to:
if (sum / 10 != 0) {
    sum = doSum(sum);
}
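For context, a complete version might look like this (a sketch; it assumes doSum is meant to reduce a number to a single digit by repeatedly summing its digits):
static int doSum(int n) {
    int sum = 0;
    while (n != 0) {       // add up the digits of n
        sum += n % 10;
        n /= 10;
    }
    if (sum / 10 != 0) {   // still more than one digit: recurse
        sum = doSum(sum);
    }
    return sum;
}
The key point is that the recursive result must be assigned (or returned), not discarded. | unknown | |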
d3129 | train | Perhaps something like this would help. I did not test it.
public void DeleteDirectoryFolders(DirectoryInfo dirInfo)
{
    foreach (DirectoryInfo dirs in dirInfo.GetDirectories())
    {
        dirs.Delete(true);
    }
}
public void DeleteDirectoryFiles(DirectoryInfo dirInfo)
{
    foreach (FileInfo files in dirInfo.GetFiles())
    {
        files.Delete();
    }
}
public void DeleteDirectoryFilesAndFolders(string dirName)
{
    DirectoryInfo dir = new DirectoryInfo(dirName);
    DeleteDirectoryFiles(dir);
    DeleteDirectoryFolders(dir);
}
public void Main()
{
    // use verbatim strings (@) so "\t" is not treated as a tab escape
    var DirectoriesToDelete = new List<string>();
    DirectoriesToDelete.Add(@"c:\temp");
    DirectoriesToDelete.Add(@"c:\temp1");
    DirectoriesToDelete.Add(@"c:\temp2");
    DirectoriesToDelete.Add(@"c:\temp3");
    foreach (string dirName in DirectoriesToDelete)
    {
        DeleteDirectoryFilesAndFolders(dirName);
    }
}
A: Here's a recursive function that will delete all files in a given directory and navigate down the directory structure. A pattern string can be supplied to only work with files of a given extension, as per your comment to another answer.
Action<string, string> fileDeleter = null;
fileDeleter = (directoryPath, pattern) =>
{
    string[] files;
    if (!string.IsNullOrEmpty(pattern))
        files = Directory.GetFiles(directoryPath, pattern);
    else
        files = Directory.GetFiles(directoryPath);
    foreach (string file in files)
    {
        File.Delete(file);
    }
    string[] directories = Directory.GetDirectories(directoryPath);
    foreach (string dir in directories)
        fileDeleter(dir, pattern);
};
string path = @"C:\some_folder\";
fileDeleter(path, "*.bmp");
Directories are otherwise left alone, and this can obviously be used with an array or list of strings to work with multiple initial directory paths.
Here is the same code rewritten as a standard function, also with the recursion as a parameter option.
public void DeleteFilesFromDirectory(string directoryPath, string pattern, bool includeSubdirectories)
{
    string[] files;
    if (!string.IsNullOrEmpty(pattern))
        files = Directory.GetFiles(directoryPath, pattern);
    else
        files = Directory.GetFiles(directoryPath);
    foreach (string file in files)
    {
        File.Delete(file);
    }
    if (includeSubdirectories)
    {
        string[] directories = Directory.GetDirectories(directoryPath);
        foreach (string dir in directories)
            DeleteFilesFromDirectory(dir, pattern, includeSubdirectories);
    }
} | unknown | |
d3130 | train | Lines 1 through 9 are just splitting the input (A) into two pieces (L and R). Lines 10 and 11 are doing a bit of initializing to get ready for merging. The merge itself is from lines 12-17.
IOW, everything before line 12 (or arguably 10, not that it really matters) is irrelevant to analyzing the merge because none of it is really part of the merge at all.
Edit: Ultimately, however, a single iteration from 1 to 27 is linear: in lines 4-7, you walk exactly once through the input array, assigning each input exactly once to either L or R. In lines 12-27, you then walk through those two pieces, and copy them back to the original input. Ignoring a few other minor details like initializing i and j, the total number of operations is exactly 2N. For big-O notation, constant factors are ignored, so it's O(N).
A: It's confusingly written. Both loops (4-7 and 12-17) have the same length (n) and the inside of both loops are constant time (no nested loops). So they're each O(n), for a total of O(n) for the whole routine.
Regarding Jerry's answer, lines 4-7 matter because they're still O(n). If you could magically remove lines 12-17 you'd still have an O(n) routine. | unknown | |
d3131 | train | Yes.
You can fetch data from a URL using XMLHttpRequest and then use standard DOM methods to change the content of the document.
That said, meta data is generally consumed either as the document loads (so it would be too late to change it for any practical effect by the time JS ran) or is consumed by tools that are likely to not execute JavaScript.
A: Yes as Quentin said, it is possible but any content you change with JS might not be seen by Google or other search engines.
If your purpose is to load content that needs to be read by a search engine you will want to 'build' the HTML on the server side, using a server side language like PHP for instance. | unknown | |
d3132 | train | try installing from GitHub like this: npm i -D github:user-name/repo-name, or define it like this in your package.json file:
{
"dependencies": {
"repo-name": "github:user-name/repo-name"
}
}
then run npm install | unknown | |
d3133 | train | Your query isn't quoted:
$this->delete('username = ' . (string) $username);
This equates to:
WHERE username = test
If you use the where() method, it will do this for you:
$table->where('username = ?', $username);
Or (like the example in the docs):
$where = $table->getAdapter()->quoteInto('bug_id = ?', 1235);
$table->delete($where);
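Applied to your username case, that would be (a sketch using the same quoteInto approach from inside the table class):
$where = $this->getAdapter()->quoteInto('username = ?', (string) $username);
$this->delete($where);
quoteInto() supplies the quoting that the original string concatenation was missing. | unknown | |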
d3134 | train | Use the boolean variable used to count as an index:
import numpy as np
import pandas as pd
names=["start","stop","percent","order"]
vals=np.array([
[1,9,0.51, 3],
[1,9,0.29,80],
[1,10,0.92, 3],
[2,10,0.60, 3],
[2,10,0.10, 4],
[2,11,0.12, 8],
[2,11,0.60,89],
[3,11,0.30, 2],
[3,11,0.10, 3],
[3,12,0.42, 4],
[3,11,0.51, 5],
[3,12,0.51,64],
[3,11,0.51,82],
[3,11,0.10,68]
])
df = pd.DataFrame(vals, columns=names)
df
max_pos=df[['start', 'stop']].values.max()
pos_range=np.arange(1, max_pos+1)
_ix = ((df[['start']].values <= pos_range) & (pos_range <= df[['stop']].values))
counts = _ix.sum(axis=0)
sum_percent = []
for i in _ix.T:
    sum_percent.append(df["percent"].values[i].sum())
sum_order = []
for i in _ix.T:
    sum_order.append(df["order"].values[i].sum())
o = pd.DataFrame({'pos': pos_range, "counts": counts, "sum_percent":sum_percent, "sum_order":sum_order}) | unknown | |
d3135 | train | Multilevel menu markup should look like this:
<ul>
<li><a>Link 1</a></li>
<li><a>Link 2</a></li>
<li>
<a>Link 3</a>
<ul>
<li><a>Link 3.1</a></li>
<li><a>Link 3.2</a></li>
(...)
</ul>
</li>
</ul>
A: This kind of technique is broadly published in the internet.
A quick search landed me in a tutorial that captures precisely what you want to achieve.
Simple CSS Drop Down Menu by James Richardson
And here is a quick JSFiddle from the tutorial I've created.
Quick look over the CSS styling.
ul {list-style: none;padding: 0px;margin: 0px;}
ul li {display: block;position: relative;float: left;border:1px solid #000}
li ul {display: none;}
ul li a {display: block;background: #000;padding: 5px 10px 5px 10px;text-decoration: none;
white-space: nowrap;color: #fff;}
ul li a:hover {background: #f00;}
li:hover ul {display: block; position: absolute;}
li:hover li {float: none;}
li:hover a {background: #f00;}
li:hover li a:hover {background: #000;}
#drop-nav li ul li {border-top: 0px;} | unknown | |
d3136 | train | You can easily override the method in this way:
protected override void WndProc(ref Message m){...}
Here you can find some examples: http://msdn.microsoft.com/library/system.windows.forms.control.wndproc%28v=vs.110%29.aspx
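For instance, intercepting one specific message inside the override might look like this (WM_CLOSE is just a hypothetical example):
protected override void WndProc(ref Message m)
{
    const int WM_CLOSE = 0x0010;
    if (m.Msg == WM_CLOSE)
    {
        // react before the window closes
    }
    base.WndProc(ref m);   // always forward the message to the base implementation
}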
A: If you pass the form as a parameter you have the instance of the class, not the definition. You have to create your own Form, override the method as mentioned before and then pass this new Form to your MyDll constructor. for example:
Create a new class library project. Add two files:
- MyDllClass.cs
- MyForm (add new -> Windows Form)
public partial class MyForm : Form
{
protected override void WndProc(ref Message m)
{
//your code here
base.WndProc(ref m);
}
}
Then in MyDllClass.cs you will have
public MyDllClass(Form your_form_here)
{
//your code here
} | unknown | |
d3137 | train | As Andrei Stefan suggested, Kibana 4.2.0-beta solved the issue.
Wasted my whole day. | unknown | |
d3138 | train | I figured it out. My fault (as usual). Just for future reference... those are actually not nose arguments and probably shouldn't be in there. They are args for pinocchio.
pinocchio | unknown | |
d3139 | train | Most probably this is because php-amqplib could not be installed properly.
I had issues with composer install that I did not notice at first, because of which php-amqplib could not be installed.
composer.json
"php-amqplib/php-amqplib": ">=2.9.0"
Issues with composer install: (error screenshot omitted)
Then I ran composer update but that gave issues as well because of some libraries in composer.json
Then I finally I had to run following command to see successful installation of php-amqplib which resolved the issue. This command could be different for you as there could be different issues with installation on your system. Just keep an eye on composer command outputs.
Command:
composer update --no-plugins --no-scripts magento-hackathon/magento-composer-installer
Output: (screenshot of the successful installation omitted)
PHP File:
<?php
require_once('../../app/Mage.php');
Mage::app();
require_once '../../vendor/autoload.php';
use PhpAmqpLib\Connection\AMQPStreamConnection;
use PhpAmqpLib\Message\AMQPMessage;
$connection = new AMQPStreamConnection('localhost', 5672, 'guest', 'guest');
$channel = $connection->channel();
$channel->queue_declare('rabbitmq-dev', false, false, false, false);
$msg = new AMQPMessage('Hello World!');
$channel->basic_publish($msg, '', 'rabbitmq-dev'); // routing key must match the declared queue
echo " [x] Sent 'Hello World!'\n";
$channel->close();
$connection->close(); ?> | unknown | |
d3140 | train | A simple solution that works with simple documents such as the one in your question (PSv4+):
$xmlDoc = [xml] (Get-Content -Raw "C:\Test\Test.xml")
# Initialize the results hashtable; make it ordered to preserve
# input element order.
$ht = [ordered] @{}
# Loop over all child elements of <Test> and create matching
# hashtable entries.
$xmlDoc.Test.ChildNodes.ForEach({ $ht[$_.Name] = $_.InnerText })
# Output the resulting hashtable.
$ht
With your sample document this yields:
Name Value
---- -----
TLC FWE
Crew3LC KMU
MyText Hello World | unknown | |
d3141 | train | Unfortunately, no.
It's been requested from the Angular Material team but they responded with this:
https://github.com/angular/material/issues/10003#issuecomment-364730323
We have no plans to add an option to visually display the week number on the calendar as this is not part of the Material Design Spec.
Week numbers would be great feature to add, but won't happen until Google decides to add it to the Material Design Spec. | unknown | |
d3142 | train | I have managed to get it working now
Dim _cal As New Microsoft.Exchange.WebServices.Data.FolderId(Microsoft.Exchange.WebServices.Data.WellKnownFolderName.Calendar, New Microsoft.Exchange.WebServices.Data.Mailbox(_otherAddress))
Dim _calendarView As New Microsoft.Exchange.WebServices.Data.CalendarView(_startTime.Date, _endTime.Date.AddDays(1))
For Each appointmentItem As Microsoft.Exchange.WebServices.Data.Appointment In _
service.FindAppointments( _
_cal, _
_calendarView)
Next | unknown | |
d3143 | train | I can see several problems in your code:
* in the line printf("%s\n", arr[2]; you forgot a closing )
* Your arr variable local to the main function is never initialized. In C, parameters are passed by value, meaning that you are passing the NULL pointer to your function, and inside this function the local pointer arr is allocated but not the one of the main function.
The solution is to allocate the array in the main function and pass a pointer to the array and the size to the function that fills it:
#include <stdio.h>
#include <stdlib.h>
void funct(char **arr, int size);
int main(void){
    char **arr = malloc(10*sizeof(char *));
    funct(arr, 10);
    printf("%s\n", arr[2]);
}
void funct(char **arr, int size){
    // Add data to array
    arr[0] = "first data";
    ....
    arr[size - 1] = "last data";
} | unknown | |
d3144 | train | Goto SourceTree Preferences > Accounts
Add your account.
A: You could try go to:
~/Library/Application Support/SourceTree
and then look for a file similar to username\@STAuth-path.to.gitrepository.com and delete it.
You will be prompted for a new password | unknown | |
d3145 | train | You need to add them comma separated
return db.EquipmentApprovals
    .SqlQuery("select * from EquipmentApproval where rejectedReason IS NOT NULL AND createdBy = @username",
        new SqlParameter("username", username))
    .AsQueryable<EquipmentApproval>(); | unknown | |
d3146 | train | One way to create this graphical panel is through IPython widgets if you are running in a Jupyter notebook. Here is an example with Python and APM although you could just as easily create this with Gekko.
A built-in option for Gekko is the GUI interface that can be accessed with m.solve(GUI=True). There is more information on the GUI interface in the Gekko paper. | unknown | |
d3147 | train | I've not seen any proven techniques for your need.
But, it is a bit similar to how people try to track the drift in word meanings over different eras. There's been some published work like HistWords from Stanford on that task.
I have also in past answers suggested people working on the eras-drift task try probabilistically replacing words whose sense may vary with alternate, context-labeled tokens. That is, if king is one of the words that you expect to vary based on your geography-contexts, expand your training corpus to sometimes replace king in UK contexts with king_UK, and in US contexts with king_US. (In some cases, you might even repeat your texts to do this.) Then, at the end of training, you'll have separate (but close) vectors for all of king, king_UK, & king_US – and the subtle difference between them may be reflective of what you're trying to study/capture.
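As a minimal preprocessing sketch of that idea (the word list, corpus format, and replacement probability are all assumptions for illustration):
import random
REGION_SENSITIVE = {"king", "football", "chips"}   # words expected to vary by region
def regionalize(tokens, region, p=0.5):
    # probabilistically replace region-sensitive words with region-tagged variants
    return [t + "_" + region if t in REGION_SENSITIVE and random.random() < p else t
            for t in tokens]
print(regionalize("the king spoke about football".split(), "UK"))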
You can see other discussion of related ideas in previous answers:
https://stackoverflow.com/a/57400356/130288
https://stackoverflow.com/a/59095246/130288
I'm not sure how well this approach might work, nor (if it does) optimal ways to transform the corpus to capture all the geography-flavored meaning-shifts.
I suspect the extreme approach of transforming every word in a UK-context to its UK-specific token, & same for other contexts, would work less well than only sometimes transforming the tokens – because a total transformation would mean each region's tokens only get trained with each other, never with shared (non-regionalized) words that help 'anchor' variant-meanings in the same shared overall context. But, that hunch would need to be tested.
(This simple "replace-some-tokens" strategy has the advantage that it can be done entirely via corpus preprocessing, with no change to the algorithms. If willing/able to perform big changes to the library, another approach could be more fasttext-like: treat every instance of king as a sum of both a generic king_en vector and a region king_UK (etc) vector. Then every usage example would update both.) | unknown | |
d3148 | train | You can't.
Many errors, including most (if not all) status: 0 errors, are not exposed to JavaScript.
They indicate network or same origin policy errors and exposing the details to JavaScript could leak information. For example, if it was possible to distinguish between "Connection Refused", "Post doesn't speak HTTP" and "No Access-Control-Allow-Origin header", then it would be trivial to write a page that would quietly map out all the webservers on the visitor's corporate network. That information could then be used for phishing attacks. | unknown | |
d3149 | train | You can use Thread.sleep(3000) inside for loop.
Note: This will require a try/catch block.
A: public class HelloWorld extends TimerTask{
public void run() {
System.out.println("Hello World");
}
}
public class PrintHelloWorld {
public static void main(String[] args) {
Timer timer = new Timer();
timer.schedule(new HelloWorld(), 0, 5000);
while (true) {
try {
Thread.sleep(2000);
} catch (InterruptedException e) {
System.out.println("InterruptedException Exception" + e.getMessage());
}
}
}
}
An infinite loop is created and the scheduler task is configured.
A: The easiest way would be to set the main thread to sleep for 3000 milliseconds (3 seconds):
for(int i = 0; i< 10; i++) {
try {
//sending the actual Thread of execution to sleep X milliseconds
Thread.sleep(3000);
} catch(InterruptedException ie) {}
System.out.println("Hello world!");
}
This will stop the thread at least X milliseconds. The thread could be sleeping more time, but that's up to the JVM. The only thing guaranteed is that the thread will sleep at least those milliseconds. Take a look at the Thread#sleep doc:
Causes the currently executing thread to sleep (temporarily cease execution) for the specified number of milliseconds, subject to the precision and accuracy of system timers and schedulers.
A: Try doing this:
Timer t = new Timer();
t.schedule(new TimerTask() {
@Override
public void run() {
System.out.println("Hello World");
}
}, 0, 5000);
This code will run print to console Hello World every 5000 milliseconds (5 seconds).
For more info, read https://docs.oracle.com/javase/1.5.0/docs/api/java/util/Timer.html
A: Use java.util.Timer and Timer#schedule(TimerTask,delay,period) method will help you.
public class RemindTask extends TimerTask {
public void run() {
System.out.println(" Hello World!");
}
public static void main(String[] args){
Timer timer = new Timer();
timer.schedule(new RemindTask(), 3000,3000);
}
}
A: If you want to do a periodic task, use a ScheduledExecutorService. Specifically ScheduledExecutorService.scheduleAtFixedRate
The code:
Runnable helloRunnable = new Runnable() {
public void run() {
System.out.println("Hello world");
}
};
ScheduledExecutorService executor = Executors.newScheduledThreadPool(1);
executor.scheduleAtFixedRate(helloRunnable, 0, 3, TimeUnit.SECONDS);
A: You can also take a look at Timer and TimerTask classes which you can use to schedule your task to run every n seconds.
You need a class that extends TimerTask and override the public void run() method, which will be executed everytime you pass an instance of that class to timer.schedule() method..
Here's an example, which prints Hello World every 5 seconds: -
class SayHello extends TimerTask {
public void run() {
System.out.println("Hello World!");
}
}
// And From your main() method or any other method
Timer timer = new Timer();
timer.schedule(new SayHello(), 0, 5000);
A: This is the simple way to use thread in java:
for(int i = 0; i< 10; i++) {
try {
//sending the actual Thread of execution to sleep X milliseconds
Thread.sleep(3000);
} catch(Exception e) {
System.out.println("Exception : "+e.getMessage());
}
System.out.println("Hello world!");
}
A: I figure it out with a timer, hope it helps. I have used a timer from java.util.Timer and TimerTask from the same package. See below:
TimerTask task = new TimerTask() {
@Override
public void run() {
System.out.println("Hello World");
}
};
Timer timer = new Timer();
timer.schedule(task, new Date(), 3000);
A: What he said. You can handle the exceptions however you like, but
Thread.sleep(milliseconds);
is the best route to take.
public static void main(String[] args) throws InterruptedException {
A: Here's another simple way using the Runnable interface in the Thread constructor
public class Demo {
public static void main(String[] args) {
Thread t1 = new Thread(new Runnable() {
@Override
public void run() {
for(int i = 0; i < 5; i++){
try {
Thread.sleep(3000);
} catch (InterruptedException e) {
// TODO Auto-generated catch block
e.printStackTrace();
}
System.out.println("Thread T1 : "+i);
}
}
});
Thread t2 = new Thread(new Runnable() {
@Override
public void run() {
for(int i = 0; i < 5; i++) {
try {
Thread.sleep(3000);
} catch (InterruptedException e) {
// TODO Auto-generated catch block
e.printStackTrace();
}
System.out.println("Thread T2 : "+i);
}
}
});
Thread t3 = new Thread(new Runnable() {
@Override
public void run() {
for(int i = 0; i < 5; i++){
try {
Thread.sleep(3000);
} catch (InterruptedException e) {
// TODO Auto-generated catch block
e.printStackTrace();
}
System.out.println("Thread T3 : "+i);
}
}
});
t1.start();
t2.start();
t3.start();
}
}
A: Add Thread.sleep
try {
Thread.sleep(3000);
} catch(InterruptedException ie) {}
A: For small applications it is fine to use Timer and TimerTask as Rohit mentioned but in web applications I would use Quartz Scheduler to schedule jobs and to perform such periodic jobs.
See tutorials for Quartz scheduling.
A: public class TimeDelay{
public static void main(String args[]) {
try {
while (true) {
System.out.println("Hello world");
Thread.sleep(3 * 1000); // every 3 seconds
}
} catch (InterruptedException e) {
e.printStackTrace();
}
}
} | unknown | |
d3150 | train | You need to register a BroadcastReceiver to detect when your app has been updated.
<receiver android:name=".MyBroadcastReceiver">
<intent-filter>
<action android:name="android.intent.action.MY_PACKAGE_REPLACED" />
</intent-filter>
</receiver>
Take a look at
How to know my Android application has been upgraded in order to reset an alarm?
A: Is launchBackgroundService in a service or run from the activity? If it is run from the service please check this answer Background Service getting killed in android
START_STICKY was the thing I missed. | unknown | |
d3151 | train | The username & password in the /login is for Azure DevOps Server. For Azure DevOps you should use OAuth:
param ($oauth)
/loginType:OAuth /login:.,$oauth
In the agent job options you need to enable the "Allow scripts to access the OAuth token":
And pass the $(System.AccessToken) as oauth parameter:
A: So I wasn't able to get this to work with tf.exe as Microsoft Docs advised us to, but I was able to figure it out with the help of a blog here.
Step 1
Get the $descriptor value for the Contributors group
$descriptor = az devops security group list --org https://dev.azure.com/$organization --project $project | ConvertFrom-Json | select -expand graphGroups | where principalName -eq "[$project]\Contributors"
Step 2
Create your token values for the branches you want. Unfortunately, you cannot explicitly state which branch pattern you would like. Azure DevOps requires the Hex value (don't ask)
#Create the tokens
function hexify($string) {
return ($string | Format-Hex -Encoding Unicode | Select-Object -Expand Bytes | ForEach-Object { '{0:x2}' -f $_ }) -join ''
}
$hexFeatureBranch = hexify -string "feature"
$featureToken = "refs/heads/$hexFeatureBranch"
$hexSprintBranch = hexify -string "sprint"
$sprintToken = "refs/heads/$hexSprintBranch"
$hexHotfixBranch = hexify -string "hotfix"
$hotfixToken = "refs/heads/$hexHotfixBranch"
Step 3
Get the Namespace ID for your Git Repos
#Get the namespace ID for the organization's Git repos
$namespaceId = az devops security permission namespace list --org "https://dev.azure.com/$organization/" --query "[?@.name == 'Git Repositories'].namespaceId | [0]"
Step 4
Get the project's JSON object from the server
#Get the Project's JSON object
$projectObj = az devops project show --org https://dev.azure.com/$organization --project $project | ConvertFrom-Json
$projectid = $projectObj.id
Step 5
Put a Deny on CreateBranch for every branch in your repo. The Azure CLI requires you to know which bits for allow and deny need to be. Luckily, I was able to do this by capturing the JSON response when I did it via the UI.
$denyBranch = az devops security permission update --id $namespaceId --org https://dev.azure.com/$organization --subject $descriptor.descriptor --token "repoV2/" --deny-bit 16 --allow-bit 16494
Step 6
Put an Allow on CreateBranch for every branch that has the pattern feature/, sprint/ and hotfix/*
$featureTokenBuild = "repoV2/$projectid/$repoid/$featureToken"
$sprintTokenBuild = "repoV2/$projectid/$repoid/$sprintToken"
$hotfixTokenBuild = "repoV2/$projectid/$repoid/$hotfixToken"
$allowCreateBranchFeature = az devops security permission update --id $namespaceId --org https://dev.azure.com/$organization --subject $descriptor.descriptor --token $featureTokenBuild --deny-bit 0 --allow-bit 16
$allowCreateBranchSprint = az devops security permission update --id $namespaceId --org https://dev.azure.com/$organization --subject $descriptor.descriptor --token $sprintTokenBuild --deny-bit 0 --allow-bit 16
$allowCreateBranchHotfix = az devops security permission update --id $namespaceId --org https://dev.azure.com/$organization --subject $descriptor.descriptor --token $hotfixTokenBuild --deny-bit 0 --allow-bit 16
It wasn't easy but at least for anyone in the future looking to do the same, here is the solution! | unknown | |
d3152 | train | Please try the code below.
const {BlobServiceClient, StorageSharedKeyCredential} = require('@azure/storage-blob');
const createCsvStringifier = require('csv-writer').createObjectCsvStringifier;
const accountName = 'account-name';
const accountKey = 'account-key';
const container = 'container-name';
const blobName = 'text.csv';
const csvStringifier = createCsvStringifier({
header: [
{id: 'name', title: 'NAME'},
{id: 'lang', title: 'LANGUAGE'}
]
});
const records = [
{name: 'Bob', lang: 'French, English'},
{name: 'Mary', lang: 'English'}
];
const headers = csvStringifier.getHeaderString();
const data = csvStringifier.stringifyRecords(records);
const blobData = `${headers}${data}`;
const credentials = new StorageSharedKeyCredential(accountName, accountKey);
const blobServiceClient = new BlobServiceClient(`https://${accountName}.blob.core.windows.net`, credentials);
const containerClient = blobServiceClient.getContainerClient(container);
const blockBlobClient = containerClient.getBlockBlobClient(blobName);
const options = {
blobHTTPHeaders: {
blobContentType: 'text/csv'
}
};
blockBlobClient.uploadData(Buffer.from(blobData), options)
.then((result) => {
console.log('blob uploaded successfully!');
console.log(result);
})
.catch((error) => {
console.log('failed to upload blob');
console.log(error);
});
Two things essentially in this code:
*
*Use createObjectCsvStringifier if you don't want to write the data to disk.
*Use the @azure/storage-blob node package instead of the azure-storage package, as the former is newer and the latter is being deprecated.
Update
Here's the code using azure-storage package.
const azure = require('azure-storage');
const createCsvStringifier = require('csv-writer').createObjectCsvStringifier;
const accountName = 'account-name';
const accountKey = 'account-key';
const container = 'container-name';
const blobName = 'text.csv';
const csvStringifier = createCsvStringifier({
header: [
{id: 'name', title: 'NAME'},
{id: 'lang', title: 'LANGUAGE'}
]
});
const records = [
{name: 'Bob', lang: 'French, English'},
{name: 'Mary', lang: 'English'}
];
const headers = csvStringifier.getHeaderString();
const data = csvStringifier.stringifyRecords(records);
const blobData = `${headers}${data}`;
const blobService = azure.createBlobService(accountName, accountKey);
const options = {
contentSettings: {
contentType: 'text/csv'
}
}
blobService.createBlockBlobFromText(container, blobName, blobData, options, (error, response, result) => {
if (error) {
console.log('failed to upload blob');
console.log(error);
} else {
console.log('blob uploaded successfully!');
console.log(result);
}
}); | unknown | |
d3153 | train | If your sheet isn't active it can't be found by "ActiveSheet". Make your sheet "PivotTable" active:
Sheets("PivotTable").Select
With ActiveSheet.pivottables("MFTPiv1").PivotFields("Wholesaler")
.Orientation = xlRowField
.Position = 1
End With
Or not:
With Sheets("PivotTable").pivottables("MFTPiv1").PivotFields("Wholesaler")
.Orientation = xlRowField
.Position = 1
End With
Additionally, I've never used that "xlConsolidation" source type. I tried it on a bit of data I have here and it created a pivot table that is not conducive to what you're trying to accomplish. A "standard" pivot cache uses SourceType:=xlDatabase. Maybe try that. | unknown | |
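For reference, a minimal sketch of creating a standard pivot cache with SourceType:=xlDatabase (the sheet and range names here are illustrative, not from your workbook):
Dim pc As PivotCache
Set pc = ActiveWorkbook.PivotCaches.Create( _
    SourceType:=xlDatabase, _
    SourceData:=Sheets("Data").Range("A1").CurrentRegion)
pc.CreatePivotTable TableDestination:=Sheets("PivotTable").Range("A3"), _
    TableName:="MFTPiv1"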
d3154 | train | Did you check the hidden folder named ".bitnami" in your profile root folder? If not, try to find the "xampp" folder inside ".bitnami/machines" and copy it to another folder to back up the current xampp data.
After installing/reinstalling xampp, just put the folder back in the same place, ".bitnami/machines".
Steps:
*
*Open Finder and make hidden files visible (cmd + shift + .)
*Go to folder /Users/USERNAME/.bitnami/stackman/machines and backup/copy complete xampp folder to a safe place
*Delete everything in folder /Users/USERNAME/.bitnami/stackman
*Download from https://sourceforge.net/projects/xamp...
*Install newest version of XAMPP
*Run XAMPP once for all folders to be created
*Quit XAMPP
*Rename new folder /Users/USERNAME/.bitnami/stackman/machines/xampp to /Users/USERNAME/.bitnami/stackman/machines/xampp_original
*Copy saved folder xampp to /Users/USERNAME/.bitnami/stackman/machines
*Run XAMPP
PS: If you have another Mac, maybe it is a good idea to test this on it first with a simulated xampp installation!
d3155 | train | Usually mocking some object looks like this:
public class TestClass {
private HttpServer server;
public HttpServer getServer() {
return server;
}
public void setServer(HttpServer server) {
this.server = server;
}
public void method(){
//some action with server
}
}
And test class:
public class TestClassTest {
//class under test
TestClass test = new TestClass();
@org.junit.Test
public void testMethod() throws Exception {
HttpServer mockServer = Mockito.mock(HttpServer.class);
test.setServer(mockServer);
//set up mock, test and verify
}
}
Here are some useful links:
*
*Code example
*Official documentation | unknown | |
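To flesh out the "//set up mock, test and verify" placeholder above, a sketch could look like this (someMethod() is hypothetical here, standing in for whatever your HttpServer exposes):
Mockito.when(mockServer.someMethod()).thenReturn("stubbed"); // stub behaviour
test.method();                                               // exercise the code under test
Mockito.verify(mockServer).someMethod();                     // verify the interaction happened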
d3156 | train | # perf stat ls 2>&1 >/dev/null | tail -n 2 | sed 's/ \+//' | sed 's/ /,/'
0.002272536,seconds time elapsed
A: Starting with kernel 5.2-rc1, a new event called duration_time is exposed by perf stat to solve exactly this problem. The value of this event is exactly equal to the time elapsed value, but the unit is nanoseconds instead of seconds.
d3157 | train | Client A: 192.168.1.60
Client B: 192.168.1.61
In the logs, we see the following:
2018-03-12T07:19:11.607+0000 I ACCESS [conn119719] Successfully authenticated as principal SOMEUSER on SOMEDB
2018-03-12T07:19:11.607+0000 I NETWORK [conn119719] end connection 192.168.1.60 (2 connections now open)
2018-03-12T07:19:17.087+0000 I NETWORK [listener] connection accepted from 192.168.1.60:47806 #119720 (3 connections now open)
2018-03-12T07:19:17.371+0000 I ACCESS [conn119720] Successfully authenticated as principal SOMEUSER on SOMEDB
So if the other MongoDB instances were connecting, that would be fine, but my question is regarding why the clients are able to connect even when the hidden option is true and if that behavior is normal.
Thank You | unknown | |
d3158 | train | The way you have it set up, you have a ResourceDictionary inside another ResourceDictionary with no key/name/reference. When you call Application.Current.Resources["FrameBorder"], you are accessing the upper-most level of the dictionary and looking for "FrameBorder", not its sub-levels. However, calling TryGetValue goes through all levels of the Dictionary.
Read through the Docs to understand how to access values from Dictionaries.
Ok, so I have played around with the ResourceDictionary tags and files, and I can see where you're going wrong.
According to the Xamarin Forms Docs on ResourceDictionary, you can add stand-alone ResourceDictionary XAML files to your App.xaml like so:
<App ...>
<App.Resources>
<!-- Add more resources here -->
<ResourceDictionary Source="MyResourceDictionary.xaml" />
<!-- Add more resources here -->
</App.Resources>
...
</App>
You can replace App with ContentPage or ContentView, depending on your use.
What does this do?
Simply put, this creates a new ResourceDictionary object and merges other ResourceDictionary files into it. Let's go into a bit more detail.
Let's start by looking at the Application.Current.Resources property, which is of type ResourceDictionary and implements ICollection, IDictionary, and IResourceDictionary. Given the interfaces it implements, you can obtain the values by numeric or string ID (ex: App.Current.Resources[0] or App.Current.Resources["LabelStyle"]). This accesses the top-level contents of the dictionary, so only those that have been created in the <[App/ContentPage/ContentView].Resources> tag.
You also have a property called MergedDictionaries, which implements ICollection. This means that you can only access the list items by numeric ID (ex: MyMergedDictionary[0]).
When you add a ResourceDictionary file like:
<ResourceDictionary Source="MyResourceDictionary.xaml" />
You are actually merging this file with the current ResourceDictionary. To then access the contents of MyResourceDictionary you would call (in C#):
App.Current.Resources.MergedDictionaries[0]["InternalKeyName"]
This is probably better explained like so:
<App ...>
<App.Resources>
<!-- Called using App.Current.Resources["PrimaryColor"] -->
<Color x:Key="PrimaryColor">#2196F3</Color>
<!-- Called using App.Current.Resources.MergedDictionaries[0]["InternalKeyName"] -->
<ResourceDictionary Source="MyResourceDictionary.xaml" />
</App.Resources>
...
</App>
Your case
In your case you have two dictionaries merged into your Application Resources: FooterRes and FrameRes.
For better management, I would create an enum like:
public enum MergedResourcesEnum
{
Footer,
Frame
}
and use this enum when calling the resources in C# like so:
Application
.Current
.Resources
.MergedDictionaries[(int)MergedResourcesEnum.Frame]["FrameBorder"];
A: You can try to delete the outer ResourceDictionary tag and both of
var fb2 = (Style)Application.Current.Resources["FrameBorder"];
and
Application.Current.Resources.TryGetValue("FrameBorder", out object frameBorder);
var fb = (Style)frameBorder;
work right.
So your xaml code is like this:
(Please ignore the unrelated package name and FooterRes.xaml, because you did not share your FooterRes.xaml file)
<Application xmlns="http://xamarin.com/schemas/2014/forms"
xmlns:x="http://schemas.microsoft.com/winfx/2009/xaml"
x:Class="App75.App">
<Application.Resources>
<ResourceDictionary Source="/Resources/FooterRes.xaml" />
</Application.Resources>
</Application> | unknown | |
d3159 | train | It's probably best to parse the file in some other language and then invoke INSERT from there, but since the order of fields within the file is predictable, you could go via user variables with something like:
LOAD DATA INFILE '/path/to/file.txt' INTO TABLE my_table
FIELDS TERMINATED BY '\n' LINES TERMINATED BY 0x1e
(@dummy, @dummy, @name, @dummy, @email, @city, @zip, @dummy)
SET Name = SUBSTRING(@name, LOCATE(' >>> ', @name ) + 5),
Email = SUBSTRING(@email, LOCATE(' >>> ', @email) + 5),
City = SUBSTRING(@city, LOCATE(' >>> ', @city ) + 5),
ZIP = SUBSTRING(@zip, LOCATE(' >>> ', @zip ) + 5); | unknown | |
d3160 | train | I need to make the img source show the checkbox_yellow image instead of the checkbox-empty image when the div containing the image and text is clicked, and to change it back when the div is clicked again.
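A minimal sketch of one way to do this with jQuery (the selector and image paths are illustrative, not from the original markup):
$('.toggle-div').on('click', function () {
  var $img = $(this).find('img');
  // swap between the two image files on each click
  if ($img.attr('src') === 'img/checkbox-empty.png') {
    $img.attr('src', 'img/checkbox_yellow.png');
  } else {
    $img.attr('src', 'img/checkbox-empty.png');
  }
});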
d3161 | train | I understood that by declaring a method final, the class designer promises the method will always work as described or implied. But Spring applies the validation behaviour by subclassing the controller with a proxy and overriding its methods; a final method cannot be overridden, so the call runs against the proxy, whose injected fields (like createUser) are null. The customization is only possible without final.
changing from:
public final ResponseEntity<Response> createUser(@RequestHeader("token") final String token, @RequestBody final Request request) {
return createUser.execute( ... ); // createUser is null
}
for
public ResponseEntity<Response> createUser(@RequestHeader("token") final String token, @RequestBody final Request request) {
return createUser.execute( ... ); // createUser is null
} | unknown | |
d3162 | train | What you need to do is add a private probing path into the application configuration file. This tells the CLR which directories to look in for extra assemblies.
*
*http://msdn.microsoft.com/en-us/library/823z9h8w.aspx
Sample app.config
<configuration>
<runtime>
<assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1">
<probing privatePath="Common;Modules"/>
</assemblyBinding>
</runtime>
</configuration> | unknown | |
d3163 | train | this is not automatically a reference to the right object in the ajax callback. You can change that by closing over a variable that does have the right value:
$("#someDiv .myClass").each(function() {
var $this = $(this);
var ajaxData = "myAjaxData";
$.ajax({
type: "POST",
url: "somefile.php",
data: ajaxData,
success: function(data) {
$this.removeClass('classBlackFont').addClass('classGreenFont');
}
});
});
or by using the context option of $.ajax():
$("#someDiv .myClass").each(function() {
var ajaxData = "myAjaxData";
$.ajax({
type: "POST",
url: "somefile.php",
data: ajaxData,
context: this,
success: function(data) {
$(this).removeClass('classBlackFont').addClass('classGreenFont');
}
});
});
A: You can also set context: this globally via $.ajaxSetup(), like this:
/**
* Default ajax
*/
$.ajaxSetup({
type: 'post',
dataType: 'json',
context: this
}); | unknown | |
d3164 | train | On Pages.php you have
$r = mysqli_query($dbc, $q);
$q is fine, but you have not defined $dbc anywhere.
On your setup page, create a class for the connection, declaring a connection method, and then on Pages.php:
$db_obj = new setup(); /* create object for setup class */
$dbc = $db_obj -> connect_db();/* call connection method */ | unknown | |
d3165 | train | I managed to solve the issue by explicitly setting the env variable LC_ALL to LC_ALL=en_US.UTF-8 in the rbenv-vars plugin file. | unknown | |
d3166 | train | it's 5 in the morning so this might be all wrong, but here goes:
the key is what we are sorting by, the values aren't interesting
your insert function should probably look something like this:
def insert(self, key, value):
    if self.root is None:
        self.root = Node(key, value)
        return
    # regular binary tree traversal (comparing the key) to find where to insert; let's assume we need to insert on the left
    parent.left = Node(key, value)
can you figure it out from here or would you like more direction
A: You didn't specify, but I'm guessing the point of the keys is to determine if a particular key is already in the tree, and if so, replace the related node's value in O(1) runtime complexity.
So when you're inserting a node, you will first check the dictionary for the key (you will initialize an empty dictionary yourself in __init__). If it is already there, then you simply need to replace the value of the node for that particular key. Otherwise, you add the new node the same way that you would in any BST, and also remember to update your dictionary to map the key to its node.
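A rough sketch of that idea (names like self.nodes are illustrative):
def insert(self, key, value):
    if key in self.nodes:              # self.nodes is the dict set up in __init__
        self.nodes[key].value = value  # O(1) replacement of an existing key's value
        return
    node = Node(key, value)
    self.nodes[key] = node             # remember key -> node for O(1) lookups
    # ...then attach `node` with the usual BST traversal, as in the other answer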
d3167 | train | Ugly:
i%3==0 ? cout<< "Hello\n" : cout<<i;
Nice:
if ( i%3 == 0 )
cout << "Hello\n";
else
cout << i;
Your version doesn't work because the result types of the expressions on each side of : need to be compatible.
A: You can't use the conditional operator if the two alternatives have incompatible types. The clearest thing is to use if:
if (i%3 == 0)
cout << "Hello\n";
else
cout << i;
although in this case you could convert the number to a string:
cout << (i%3 == 0 ? "Hello\n" : std::to_string(i));
In general, you should try to maximise clarity rather than minimise characters; you'll thank yourself when you have to read the code in the future.
A: operator<< is overloaded, and the two execution paths don't use the same overload. Therefore, you can't have << outside the conditional.
What you want is
if (i%3 == 0) cout << "Hello\n"; else cout << i;
This can be made a bit shorter by reversing the condition:
if (i%3) cout << i; else cout << "Hello\n";
And a few more characters saved by using the ternary:
(i%3)?(cout<<i):(cout<<"Hello\n");
A: std::cout << (i % 3 == 0 ? "Hello" : std::to_string(i));
But, as all the other answers have said, you probably shouldn't do this, because it quickly turns into spaghetti code. | unknown | |
d3168 | train | This script works on a MacBook Air M1.
Dockerfile
FROM ubuntu:20.04
RUN apt-get update && apt-get -y install libpq-dev gcc python3-pip && pip3 install psycopg2
COPY requirements.txt /cs_account/
RUN pip3 install -r /cs_account/requirements.txt
requirements.txt
psycopg2-binary~=2.8.6
Updated answer from the answer of Zoltán Buzás
A: I made it work. This is the code:
# python:3.9.5-slim and python:3.9.5-slim-buster also work as the base image
FROM python:3.8.3-slim
RUN apt-get update \
&& apt-get -y install libpq-dev gcc \
&& pip install psycopg2
A: This worked for me. Try slim-buster image.
In your Dockerfile
FROM python:3.8.7-slim-buster
and in your requirements.txt file
psycopg2-binary~= <<version_number>>
A: On Alpine Linux, you will need to compile all packages, even if a pre-compiled binary wheel is available on PyPI. On standard Linux-based images, you won't (https://pythonspeed.com/articles/alpine-docker-python/ - there are also other articles I've written there that might be helpful, e.g. on security).
So change your base image to python:3.8.3-slim-buster or python:3.8-slim-buster and it should work.
A: I added this to the top answer because I was getting other errors like below:
gcc: error trying to exec 'cc1plus': execvp: No such file or directory
error: command 'gcc' failed with exit status 1
and
src/pyodbc.h:56:10: fatal error: sql.h: No such file or directory
#include <sql.h>
This is what I did to fix this, so I am not sure how others were getting that to work, however maybe it was some of the other things I was doing?
My solution that I found from other posts when googling those two errors:
FROM python:3.8.3-slim
RUN apt-get update \
&& apt-get -y install g++ libpq-dev gcc unixodbc unixodbc-dev
A: I've made a custom image with
FROM python:alpine
ADD requirements.txt /
RUN apk update --no-cache \
&& apk add build-base postgresql-dev libpq --no-cache --virtual .build-deps \
&& pip install --no-cache-dir --upgrade pip \
&& pip install --no-cache-dir -r /requirements.txt \
&& apk del .build-deps
RUN apk add postgresql-libs libpq --no-cache
and requirements.txt
django
djangorestframework
psycopg2-binary | unknown | |
d3169 | train | (I don't know what class you mean by Int - perhaps you mean java.lang.Integer, but perhaps you mean some custom class. It's not totally relevant to the answer, however)
You always need a parameter list when you invoke a constructor, even if it is empty:
new Int()
Or, if you mean to create an array, you need to specify the number of elements:
new Int[10]
However, you don't need the first assignment:
Queue[y]=new Int();
Queue[y]=num;
The second line overwrites the value in the first line, so it's actually just creating an object and then immediately discarding it.
You could simply write:
Queue[y]=num;
Note that this isn't actually assigning an int to an Object array element: due to autoboxing, the compiler automatically converts this to:
Queue[y]=Integer.valueOf(num);
so an instance of Integer is being added to the array. However, this conversion isn't something that you need to do yourself. | unknown | |
d3170 | train | From your description you seem to mean this, which is a list of name_all that does not match table1 name.
SELECT table2.Name_all
FROM table2
LEFT JOIN table1 ON table2.Name_all = table1.Name
WHERE table1.Name Is Null
If you need a count as well, you can say:
SELECT table2.Name_all, Count(table2.Name_all) AS CountOf
FROM table2
LEFT JOIN table1 ON table2.Name_all= table1.Name
WHERE table1.Name Is Null
GROUP BY table2.Name_all;
A: Try this
This query will fetch common names from table 1 and table 2. It will also fetch names from table 2 which are not there in table 1.
SELECT table1.Name,
table2.Name,
COUNT(*)
FROM table2
LEFT JOIN table1
ON table2.Name_all = table1.Name
GROUP BY table1.Name, table2.Name
Let me know if this works.
A: I would do a simple in check then
SELECT table1.Name
FROM table1
WHERE table1.Name In (SELECT table2.Name_all FROM table2)
GROUP BY table1.Name
There are other ways, but without more information this is a simple way to do it.
d3171 | train | Change fadeInAnimation to take a boolean argument: if true, do the fade-in animation, else the fade-out animation. A code sample is given below. Usage: fadeAnimation(true) for fade-in and fadeAnimation(false) for fade-out. Hope this helps.
private Animation fadeAnimation(boolean fadeIn) {
Animation animation = null;
if(fadeIn)
animation = new AlphaAnimation(0f, 1.0f);
else
animation = new AlphaAnimation(1.0f, 0f);
animation.setDuration(1000);
animation.setFillEnabled(true);
animation.setFillAfter(true);
return animation;
}
A: Check out the code below:
viewObject.animate()
.alpha(0.0f)
.setStartDelay(10000)
.setDuration(2000)
.setListener(new AnimatorListenerAdapter() {
@Override
public void onAnimationEnd(Animator animation) {
super.onAnimationEnd(animation);
// do your stuff if any, after animation ends
}
}).start();
A: Possible duplicate of View.setVisibility(View.INVISIBLE) does not work for animated view. Visibility changes are not honoured while an animation is in progress, even if the animation is cancelled.
d3172 | train | In your update code, $stmt->bind_param('i', $now, $id), you forgot to add another i.
It should be
$stmt->bind_param('ii', $now, $id); | unknown | |
d3173 | train | This works like a "catch all" statement, reading in plain English:
*
*wait for the request to be processed by other parts of the app
(await next())
*when done, check if the app responded with a body or the request does not require a response body
*if none is true, return HTTP code 404 "Not Found" | unknown | |
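In Koa, for example, such a catch-all middleware might look like this (a minimal sketch, not necessarily the exact code being discussed):
app.use(async (ctx, next) => {
  await next();                    // let the rest of the app handle the request first
  if (ctx.body != null) return;    // something downstream already set a response body
  if (ctx.status !== 404) return;  // a status was set deliberately (e.g. 204), no body needed
  ctx.status = 404;                // nothing responded: answer 404 "Not Found"
  ctx.body = 'Not Found';
});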
d3174 | train | ScalaCheck is a framework to generate data; you generate raw data based on the schema using your custom generators.
Visit ScalaCheck Documentation.
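For instance, a tiny generator for rows with a name and an age could look like this (the fields are made up for illustration):
import org.scalacheck.Gen

val rowGen: Gen[(String, Int)] = for {
  name <- Gen.alphaStr          // random alphabetic string
  age  <- Gen.choose(0, 100)    // random int in [0, 100]
} yield (name, age)

println(rowGen.sample)          // e.g. Some((abc,42))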
A: Using @JacekLaskowski's advice, you could generate dynamic data using generators with ScalaCheck (Gen) based on field/types you are expecting.
It could look like this:
import org.apache.spark.sql.types._
import org.apache.spark.sql.{Row, SaveMode}
import org.scalacheck._
import scala.collection.JavaConverters._
val dynamicValues: Map[(String, DataType), Gen[Any]] = Map(
("a", DoubleType) -> Gen.choose(0.0, 100.0),
("aa", StringType) -> Gen.oneOf("happy", "sad", "glad"),
("p", LongType) -> Gen.choose(0L, 10L),
("pp", StringType) -> Gen.oneOf("Iam", "You're")
)
val schemas = Map(
"schema1" -> StructType(
List(
StructField("a", DoubleType, true),
StructField("aa", StringType, true),
StructField("p", LongType, true),
StructField("pp", StringType, true)
)),
"schema2" -> StructType(
List(
StructField("a", DoubleType, true),
StructField("pp", StringType, true),
StructField("p", LongType, true)
)
)
)
val numRecords = 1000
schemas.foreach {
case (name, schema) =>
// create a data frame
spark.createDataFrame(
// of #numRecords records
(0 until numRecords).map { _ =>
// each of them a row
Row.fromSeq(schema.fields.map(field => {
// with fields based on the schema's fieldname & type else null
dynamicValues.get((field.name, field.dataType)).flatMap(_.sample).orNull
}))
}.asJava, schema)
// store to parquet
.write.mode(SaveMode.Overwrite).parquet(name)
}
A: You could do something like this
import org.apache.spark.SparkConf
import org.apache.spark.sql.types._
import org.apache.spark.sql.{DataFrame, SparkSession}
import org.json4s
import org.json4s.JsonAST._
import org.json4s.jackson.JsonMethods._
import scala.util.Random
object Test extends App {
val structType: StructType = StructType(
List(
StructField("a", DoubleType, true),
StructField("aa", StringType, true),
StructField("p", LongType, true),
StructField("pp", StringType, true)
)
)
val spark = SparkSession
.builder()
.master("local[*]")
.config(new SparkConf())
.getOrCreate()
import spark.implicits._
val df = createRandomDF(structType, 1000)
def createRandomDF(structType: StructType, size: Int, rnd: Random = new Random()): DataFrame ={
spark.read.schema(structType).json((0 to size).map { _ => compact(randomJson(rnd, structType))}.toDS())
}
def randomJson(rnd: Random, dataType: DataType): JValue = {
dataType match {
case v: DoubleType =>
json4s.JDouble(rnd.nextDouble())
case v: StringType =>
JString(rnd.nextString(10))
case v: IntegerType =>
JInt(rnd.nextInt())
case v: LongType =>
JInt(rnd.nextLong())
case v: FloatType =>
JDouble(rnd.nextFloat())
case v: BooleanType =>
JBool(rnd.nextBoolean())
case v: ArrayType =>
val size = rnd.nextInt(10)
JArray(
(0 to size).map(_ => randomJson(rnd, v.elementType)).toList
)
case v: StructType =>
JObject(
v.fields.flatMap {
f =>
if (f.nullable && rnd.nextBoolean())
None
else
Some(JField(f.name, randomJson(rnd, f.dataType)))
}.toList
)
}
}
} | unknown | |
d3175 | train | You can do logic on the Key returned from the object list:
first_dummy = None
first_real = None
for obj in s3_resource.Bucket(BUCKET_NAME).objects.filter(Prefix='data/'):
    if not first_dummy and 'date=1900-01-01-00' in obj.key:
        first_dummy = obj.key
    elif not first_real and 'date=1900-01-01-00' not in obj.key:
        first_real = obj.key
    if first_dummy and first_real:
        break
print(first_dummy, first_real)
d3176 | train | Here is the R version of b-h-'s function, just in case:
measure <- function(lon1,lat1,lon2,lat2) {
R <- 6378.137 # radius of earth in Km
dLat <- (lat2-lat1)*pi/180
dLon <- (lon2-lon1)*pi/180
a <- sin((dLat/2))^2 + cos(lat1*pi/180)*cos(lat2*pi/180)*(sin(dLon/2))^2
c <- 2 * atan2(sqrt(a), sqrt(1-a))
d <- R * c
return (d * 1000) # distance in meters
}
A: The earth is an annoyingly irregular surface, so there is no simple formula to do this exactly. You have to live with an approximate model of the earth, and project your coordinates onto it. The model I typically see used for this is WGS 84. This is what GPS devices usually use to solve the exact same problem.
NOAA has some software you can download to help with this on their website.
A: There are many tools that will make this easy. See monjardin's answer for more details about what's involved.
However, doing this isn't necessarily difficult. It sounds like you're using Java, so I would recommend looking into something like GDAL. It provides java wrappers for their routines, and they have all the tools required to convert from Lat/Lon (geographic coordinates) to UTM (projected coordinate system) or some other reasonable map projection.
UTM is nice, because it's meters, so easy to work with. However, you will need to get the appropriate UTM zone for it to do a good job. There are some simple codes available via googling to find an appropriate zone for a lat/long pair.
A: For approximating short distances between two coordinates I used formulas from
http://en.wikipedia.org/wiki/Lat-lon:
m_per_deg_lat = 111132.954 - 559.822 * cos( 2 * latMid ) + 1.175 * cos( 4 * latMid);
m_per_deg_lon = 111132.954 * cos ( latMid );
In the code below I've left the raw numbers to show their relation to the formula from wikipedia.
double latMid, m_per_deg_lat, m_per_deg_lon, deltaLat, deltaLon,dist_m;
latMid = (Lat1+Lat2 )/2.0; // or just use Lat1 for slightly less accurate estimate
m_per_deg_lat = 111132.954 - 559.822 * cos( 2.0 * latMid ) + 1.175 * cos( 4.0 * latMid);
m_per_deg_lon = (3.14159265359/180 ) * 6367449 * cos ( latMid );
deltaLat = fabs(Lat1 - Lat2);
deltaLon = fabs(Lon1 - Lon2);
dist_m = sqrt ( pow( deltaLat * m_per_deg_lat,2) + pow( deltaLon * m_per_deg_lon , 2) );
The wikipedia entry states that the distance calcs are within 0.6m for 100km longitudinally and 1cm for 100km latitudinally but I have not verified this as anywhere near that accuracy is fine for my use.
A: Here is a javascript function:
function measure(lat1, lon1, lat2, lon2){ // generally used geo measurement function
var R = 6378.137; // Radius of earth in KM
var dLat = lat2 * Math.PI / 180 - lat1 * Math.PI / 180;
var dLon = lon2 * Math.PI / 180 - lon1 * Math.PI / 180;
var a = Math.sin(dLat/2) * Math.sin(dLat/2) +
Math.cos(lat1 * Math.PI / 180) * Math.cos(lat2 * Math.PI / 180) *
Math.sin(dLon/2) * Math.sin(dLon/2);
var c = 2 * Math.atan2(Math.sqrt(a), Math.sqrt(1-a));
var d = R * c;
return d * 1000; // meters
}
Explanation: https://en.wikipedia.org/wiki/Haversine_formula
The haversine formula determines the great-circle distance between two points on a sphere given their longitudes and latitudes.
A: One nautical mile (1852 meters) is defined as one arcminute of longitude at the equator. However, you need to define a map projection (see also UTM) in which you are working for the conversion to really make sense.
A: There are quite a few ways to calculate this. All of them use approximations from spherical trigonometry where the radius is that of the earth.
try http://www.movable-type.co.uk/scripts/latlong.html for a number of methods and code in different languages.
A: Given you're looking for a simple formula, this is probably the simplest way to do it, assuming that the Earth is a sphere with a circumference of 40075 km.
Length in km of 1° of latitude = always 111.32 km
Length in km of 1° of longitude = 40075 km * cos( latitude ) / 360
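As a quick sketch, those two rules translate to something like this (Python, spherical approximation only):
import math

def degrees_to_meters(d_lat_deg, d_lon_deg, lat_deg):
    m_per_deg_lat = 111320.0                                        # 40075 km / 360, in meters
    m_per_deg_lon = 40075000.0 * math.cos(math.radians(lat_deg)) / 360.0
    return d_lat_deg * m_per_deg_lat, d_lon_deg * m_per_deg_lon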
A: 'below is from
'http://www.zipcodeworld.com/samples/distance.vbnet.html
Public Function distance(ByVal lat1 As Double, ByVal lon1 As Double, _
ByVal lat2 As Double, ByVal lon2 As Double, _
Optional ByVal unit As Char = "M"c) As Double
Dim theta As Double = lon1 - lon2
Dim dist As Double = Math.Sin(deg2rad(lat1)) * Math.Sin(deg2rad(lat2)) + _
Math.Cos(deg2rad(lat1)) * Math.Cos(deg2rad(lat2)) * _
Math.Cos(deg2rad(theta))
dist = Math.Acos(dist)
dist = rad2deg(dist)
dist = dist * 60 * 1.1515
If unit = "K" Then
dist = dist * 1.609344
ElseIf unit = "N" Then
dist = dist * 0.8684
End If
Return dist
End Function
Public Function Haversine(ByVal lat1 As Double, ByVal lon1 As Double, _
ByVal lat2 As Double, ByVal lon2 As Double, _
Optional ByVal unit As Char = "M"c) As Double
Dim R As Double = 6371 'earth radius in km
Dim dLat As Double
Dim dLon As Double
Dim a As Double
Dim c As Double
Dim d As Double
dLat = deg2rad(lat2 - lat1)
dLon = deg2rad((lon2 - lon1))
a = Math.Sin(dLat / 2) * Math.Sin(dLat / 2) + Math.Cos(deg2rad(lat1)) * _
Math.Cos(deg2rad(lat2)) * Math.Sin(dLon / 2) * Math.Sin(dLon / 2)
c = 2 * Math.Atan2(Math.Sqrt(a), Math.Sqrt(1 - a))
d = R * c
Select Case unit.ToString.ToUpper
Case "M"c
d = d * 0.62137119
Case "N"c
d = d * 0.5399568
End Select
Return d
End Function
Private Function deg2rad(ByVal deg As Double) As Double
Return (deg * Math.PI / 180.0)
End Function
Private Function rad2deg(ByVal rad As Double) As Double
Return rad / Math.PI * 180.0
End Function
A: To convert latitude and longitude into an x and y representation you need to decide what type of map projection to use. As for me, Elliptical Mercator seems to work very well. Here you can find an implementation (in Java too).
A: Here is a MySQL function:
SET @radius_of_earth = 6378.137; -- In kilometers
DROP FUNCTION IF EXISTS Measure;
DELIMITER //
CREATE FUNCTION Measure (lat1 REAL, lon1 REAL, lat2 REAL, lon2 REAL) RETURNS REAL
BEGIN
-- Multiply by 1000 to convert kilometers to meters
RETURN 2 * @radius_of_earth * 1000 * ASIN(SQRT(
POW(SIN((lat2 - lat1) / 2 * PI() / 180), 2) +
COS(lat1 * PI() / 180) *
COS(lat2 * PI() / 180) *
POW(SIN((lon2 - lon1) / 2 * PI() / 180), 2)
));
END; //
DELIMITER ;
A: Why limit yourself to one degree?
The formula is based on the proportion:
distance[m] : distance[deg] = max circumference[m] : 360[deg]
Let's say you are given an angle for a latitude and one for longitude, both in degrees: (longitude[deg], latitude[deg])
For the latitude, the max circumference is always the one passing through the poles. In a spherical model with radius R (in meters), the max circumference is 2 * pi * R and the proportion resolves to:
latitude[m] = ( 2 * pi * R[m] * latitude[deg] ) / 360[deg]
(note that deg and deg cancel, and what remains is meters on both sides).
For the longitude, the max circumference is proportional to the cosine of the latitude (as you can imagine, running in a circle around the north pole is shorter than running in a circle around the equator), so it is 2 * pi * R * cos(latitude[rad]).
Therefore
longitude distance[m] = ( 2 * pi * R[m] * cos(latitude[rad]) * longitude[deg] ) / 360[deg]
Note that you will have to convert the latitude from deg to rad before computing the cos.
Omitting details, for those just looking for the formulas (111132.954 is already meters per degree, so no further division by 360 is needed):
lat_in_m = 111132.954 * lat_in_deg
lon_in_m = 111132.954 * cos(lat_in_radians) * lon_in_deg
A: If it's sufficiently close, you can get away with treating them as coordinates on a flat plane. This works at, say, street or city level if perfect accuracy isn't required and all you need is a rough guess of the distance involved to compare with an arbitrary limit.
A: Here is a version in Swift:
import CoreLocation

extension CLLocationDistance {
    // converts a distance in meters (self) to degrees at the given point's latitude
    func toDegreeAt(point: CLLocationCoordinate2D) -> CLLocationDegrees {
        let latitude = point.latitude * Double.pi / 180 // radians, as sin/cos expect
        let earthRadiusInMetersAtSeaLevel = 6378137.0
        let earthRadiusInMetersAtPole = 6356752.314
        let r1 = earthRadiusInMetersAtSeaLevel
        let r2 = earthRadiusInMetersAtPole
        let beta = latitude
        let earthRadiusAtGivenLatitude = (
            ( pow(pow(r1, 2) * cos(beta), 2) + pow(pow(r2, 2) * sin(beta), 2) ) /
            ( pow(r1 * cos(beta), 2) + pow(r2 * sin(beta), 2) )
        )
        .squareRoot()
        let metersInOneDegree = (2 * Double.pi * earthRadiusAtGivenLatitude * 1.0) / 360.0
        let value: CLLocationDegrees = self / metersInOneDegree
        return value
    }
}
A: Original poster asked
"If I have a latitude or longitude reading in standard NMEA format is there an easy way / formula to convert that reading to meters"
I haven't used Java in a while so I did the solution here in "PARI".
Just plug your point's latitude and longitudes
into the equations below to get
the exact arc lengths and scales
in meters per (second of Longitude)
and meters per (second of Latitude).
I wrote these equations for
the free-open-source-mac-pc math program "PARI".
You can just paste the following into it
and the I will show how to apply them to two made up points:
\\=======Arc lengths along Latitude and Longitude and the respective scales:
\p300
default(format,"g.42")
dms(u)=[truncate(u),truncate((u-truncate(u))*60),((u-truncate(u))*60-truncate((u-truncate(u))*60))*60];
SpinEarthRadiansPerSec=7.292115e-5;\
GMearth=3986005e8;\
J2earth=108263e-8;\
re=6378137;\
ecc=solve(ecc=.0001,.9999,eccp=ecc/sqrt(1-ecc^2);qecc=(1+3/eccp^2)*atan(eccp)-3/eccp;ecc^2-(3*J2earth+4/15*SpinEarthRadiansPerSec^2*re^3/GMearth*ecc^3/qecc));\
e2=ecc^2;\
b2=1-e2;\
b=sqrt(b2);\
fl=1-b;\
rfl=1/fl;\
U0=GMearth/ecc/re*atan(eccp)+1/3*SpinEarthRadiansPerSec^2*re^2;\
HeightAboveEllipsoid=0;\
reh=re+HeightAboveEllipsoid;\
longscale(lat)=reh*Pi/648000/sqrt(1+b2*(tan(lat))^2);
latscale(lat)=reh*b*Pi/648000/(1-e2*(sin(lat))^2)^(3/2);
longarc(lat,long1,long2)=longscale(lat)*648000/Pi*(long2-long1);
latarc(lat1,lat2)=(intnum(th=lat1,lat2,sqrt(1-e2*(sin(th))^2))+e2/2*sin(2*lat1)/sqrt(1-e2*(sin(lat1))^2)-e2/2*sin(2*lat2)/sqrt(1-e2*(sin(lat2))^2))*reh;
\\=======
To apply that to your type of problem I will make up
that one of your data points was at
[Latitude, Longitude]=[+30, 30]
and the other at
[Latitude, Longitude]=[+30:00:16.237796,30:00:18.655502].
To convert those points to meters in two coordinates:
I can setup a system of coordinates in meters
with the first point being at the origin: [0,0] meters.
Then I can define the coordinate x-axis as due East-West,
and the y-axis as due North-South.
Then the second point's coordinates are:
? [longarc(30*Pi/180,30*Pi/180,((18.655502/60+0)/60+30)*Pi/180),latarc(30*Pi/180,((16.237796/60+0)/60+30)*Pi/180)]
%9 = [499.999998389040060103621525561027349597207, 499.999990137812119668486524932382720606325]
Warning on precision:
Note however:
Since the surface of the Earth is curved,
2-dimensional coordinates obtained on it can't follow
the same rules as cartesian coordinates
such as the Pythagorean Theorem perfectly.
Also lines pointing due North-South
converge in the Northern Hemisphere.
At the North Pole it becomes obvious
that North-South lines won't serve well for
lines parallel to the y-axis on a map.
At 30 degrees Latitude with 500 meter lengths,
the x-coordinate changes by 1.0228 inches if the scale is set from [0,+500] instead of [0,0]:
? [longarc(((18.655502/60+0)/60+30)*Pi/180,30*Pi/180,((18.655502/60+0)/60+30)*Pi/180),latarc(30*Pi/180,((16.237796/60+0)/60+30)*Pi/180)]
%10 = [499.974018595036400823218815901067566617826, 499.999990137812119668486524932382720606325]
? (%10[1]-%9[1])*1000/25.4
%12 = -1.02282653557713702372872677007019603860352
?
The error there of 500meters/1inch is only about 1/20000,
good enough for most diagrams,
but one might want to reduce the 1 inch error.
For a completely general way to convert
lat,long to orthogonal x,y coordinates
for any point on the globe, I would chose to abandon
aligning coordinate lines with East-West
and North-South, except still keeping the center
y-axis pointing due North. For example you could
rotate the globe around the poles (around the 3-D Z-axis)
so the center point in your map is at longitude zero.
Then tilt the globe (around the 3-D y-axis) to
bring your center point to lat,long = [0,0].
On the globe points at lat,long = [0,0] are
farthest from the poles and have a lat,long
grid around them that is most orthogonal
so you can use these new "North-South", "East-West"
lines as coordinate x,y lines without incurring
the stretching that would have occurred doing
that before rotating the center point away from the pole.
Showing an explicit example of that would take a lot more space.
A: You need to convert the coordinates to radians to do the spherical geometry. Once converted, then you can calculate a distance between the two points. The distance then can be converted to any measure you want.
A: Based on the average distance per degree on the Earth:
1° = 111km;
Inverting this to degrees per meter gives a magic number RAD, in degrees per meter: 0.000008998719243599958;
then:
const RAD = 0.000008998719243599958;
Math.sqrt(Math.pow(lat1 - lat2, 2) + Math.pow(long1 - long2, 2)) / RAD;
A: If you want a simple solution then use the Haversine formula as outlined by the other comments. If you have an accuracy-sensitive application, keep in mind that the Haversine formula does not guarantee an accuracy better than 0.5%, as it assumes the earth is a sphere. To account for the Earth being an oblate spheroid, consider using Vincenty's formulae.
Additionally, I'm not sure what radius we should use with the Haversine formula: {Equator: 6,378.137 km, Polar: 6,356.752 km, Volumetric: 6,371.0088 km}. | unknown | |
d3177 | train | Your code is right, but you will get 0 once the running product overflows the Int range. You can get the Int max value with:
Int.MAX_VALUE
So if y * z crosses Int.MAX_VALUE = 2147483647, the function's result wraps around.
For numbers bigger than 16 the function will return a negative number, and for numbers greater than 33 it will return 0. You can check this with:
for(x in 5..50){
log.i("$x! : ${fact(x)}")
}
So you can handle this by changing the variables from Int to Long:
fun fact(x: Long): Long {
    fun factTail(y: Long, z: Long): Long {
        return if (y == 0L) z else factTail(y - 1, y * z)
    }
    return factTail(x, 1)
}
But Long also has its limits. Hope you get the point.
d3178 | train | Compared to SVN, which I recently worked with again after quite a while, Mercurial is amazing. It gave me a feeling of "Why would anyone use SVN anymore". SVN is pretty good, but Mercurial really does just work better.
For personal projects I would switch without a doubt to a DVCS. It does everything SVN does but better, and much faster. The "learning curve" is just understanding some terminology.
In reality the difference between SVN and a DVCS is that everyone has a full working repository on their system. If you decide to have a "master server", it is exactly the same as what you have, except it is setup to continually serve over a network. To sync these all you do is send/receive(push/pull) the changes between these repositories.
A: About comparing Mercurial with Git - see this SO question: Git and Mercurial - Compare and Contrast (and my long answer there).
About comparing Mercurial with svn - see this SO question: For home projects, can Mercurial or Git (or other DVCS) provide more advantages over Subversion? (theoretically this question is limited in range, though; I wrote about Git vs Subversion in my answer).
A: As far as comparing to Git, Google recently published an interesting comparison of Git and Mercurial based on their evaluation: http://code.google.com/p/support/wiki/DVCSAnalysis
A: One thing not mentioned in the Google comparison was that Git appears to be much faster. Mercurial seems fast enough (with small projects at least) but Git is simply lightning-fast no matter what size the project.
A: It's probably just me, but I have been using Mercurial for six months, after several years of using SVN exclusively, and for some reason it just doesn't fit my mental model as well. I know exactly what I'm doing in SVN, and if something goes wrong, I pretty much always know how to fix it. Conceptually I have no problem with Mercurial - love that I have a local copy of the repository, for example - but in practise I am always losing things. I think it might be because a merge in SVN is quite a big deal, whereas in Hg it's the normal way of things. I want more control over my merges. In SVN it's always clear which changeset precedes which, but Mercurial seems to lack this. Even TortoiseHg, which is quite nice visually, doesn't seem to offer enough opportunities to see exactly what's being merged.
A: SVN has lots of support by 3rd party tools including IDE and bug tracking systems etc including the rather nice TortoiseSVN.
Most developers have used SVN in the past, so getting new developers up to speed on your team is quicker with SVN.
How important this sort of thing is to you, only you can decide. | unknown | |
d3179 | train | In onCreate (or in your onClick if you want) you should new up a Handler bound to the main looper:
Handler mMainHandler = new Handler(Looper.getMainLooper());
then you can use
private void updateUI() {
    mMainHandler.post(new Runnable() {
        @Override
        public void run() {
            // touch the dialog / update views here
        }
    });
}
d3180 | train | The default axis labeling policy puts the ticks a constant distance apart. You might try the CPTAxisLabelingPolicyAutomatic labeling policy. This policy will automatically adjust the tick spacing as the plot range changes. | unknown | |
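A short sketch of switching the policy in Swift (assuming a CPTXYGraph named graph; a sketch only, adjust to your setup):
if let axisSet = graph.axisSet as? CPTXYAxisSet, let x = axisSet.xAxis {
    x.labelingPolicy = .automatic   // re-spaces the ticks as the plot range changes
}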
d3181 | train | If your goal is essentially to combine your first two User classes into a single class, you could do this:
class User(suggestion: String? = null) {
private val name: NameGenerator = suggestion?.let { NameGenerator(it) } ?: NameGenerator()
fun sayName() {
println(this.name.fakeName)
}
}
A: Would it be acceptable to make the name provisioning part of the constructor signature by taking a function as a parameter? The following approach enables the desired abstraction imho:
class User(private val nameProvider: () -> Name) {
private val name: Name by lazy(nameProvider)
fun sayName() {
println(this.name.fakeName)
}
}
You can create Users with any nameProvider such as:
User { MockName() }
User { NameGenerator("blah") } | unknown | |
d3182 | train | The TypeConverter class is the generic .NET way of converting types. The System.ComponentModel namespace includes implementations for primitive types, and WPF ships with some more (but I am not sure in which namespace). Further, there is the static Convert class offering primitive type conversions, too. It handles some simple conversions itself and falls back to IConvertible.
A: First off, you're using a struct to pass complex data around. That is a really bad idea. Make it a class instead.
That said, it sounds like you need a factory to create instances of a parser interface:
interface IColumnTypeParser
{
// A stateles Parse method that takes input and returns output
DataColumn Parse(string input);
}
class ColumnTyeParserFactory
{
IColumnTypeParser GetParser(Type columnType)
{
// Implementation can be anything you want...I would recommend supporting
// a configuration file that maps types to parsers, and use pooling or caching
// so you are not constantly recreating instances of your parsers (make sure your
// parser implementations are stateless to support caching/pooling and reuse)
// Dummy implementation:
if (columnType == typeof(string)) return new StringColumnTypeParser();
if (columnType == typeof(float)) return new FloatColumnTypeParser();
// ...
}
}
Your FillDataRow implementation would then use the factory:
m_columnTypeParserFactory = new ColumnTypeParserFactory();
private void FillDataRow(List<ColumnTypeStrinRep> rowInput, DataRow row)
{
foreach (ColumnTypeStrinRep columnInput in rowInput)
{
var parser = m_columnTypeParserFactory.GetParser(columnInput.type);
row[columnInput.columnName] = parser.Parse(columnInput.stringRep);
}
}
A: In short there is nothing like that - no automagical conversion. You have to specify the conversion yourself. That said:
static class Converters{
static Dictionary<Type, Func<string, object>> Converters = new ...
static Converters{
Converters.Add(typeof(string), input => input);
Converters.Add(typeof(int), input => int.Parse(input));
Converters.Add(typeof(double), input => double.Parse(input));
}
}
void FillDataRow(IList<string> rowInput, DataRow row){
for(int i = 0; i < rowInput.Count; i++){
var converter = Converters.Converters[Column[i].DataType];
row[i] = converter(rowInput[i])
}
}
A: How about Convert.ChangeType?
You might consider something fancy with generics btw.
foreach (ColumnTypeStrinRep columnInput in rowInput)
{
Debug.Assert(row.Table.Columns.Contains(columnInput.columnName));
Debug.Assert(row.Table.Columns[columnInput.columnName].DataType == columnInput.type);
...
row[columnInput.columnName] = Convert.ChangeType(columnInput.stringRep, columnInput.type);
}
More on Convert.ChangeType:
http://msdn.microsoft.com/en-us/library/dtb69x08.aspx | unknown | |
d3183 | train | How about that:
dict( [ (n, a.get(n, 0)+b.get(n, 0)) for n in set(a)|set(b) ] )
Or without creating an intermediate list (generator is enough):
dict( (n, a.get(n, 0)+b.get(n, 0)) for n in set(a)|set(b) )
Post Scriptum:
As a commenter correctly pointed out, there is an easier way to implement this with the collections.Counter class (new in Python 2.7). As far as I remember, it was not available when I wrote the answer:
from collections import Counter
dict(Counter(a)+Counter(b))
A: One liner (as sortof requested): get key lists, add them, discard duplicates, iterate over result with list comprehension, return (key,value) pairs for the sum if the key is in both dicts, or just the individual values if not. Wrap in dict.
>>> dict([(x,a[x]+b[x]) if (x in a and x in b) else (x,a[x]) if (x in a) else (x,b[x]) for x in set(a.keys()+b.keys())])
{'aardvark': 3000, 'fish': 10, 'dog': 200, 'cat': 3}
A: result in a:
for elem in b:
    a[elem] = a.get(elem, 0) + b[elem]
result in c:
c = dict(a)
for elem in b:
    c[elem] = a.get(elem, 0) + b[elem]
A: Not in one line, but ...
import itertools
import collections
a = dict()
a['cat'] = 1
a['fish'] = 10
a['aardvark'] = 1000
b = dict()
b['cat'] = 2
b['dog'] = 200
b['aardvark'] = 2000
c = collections.defaultdict(int)
for k, v in itertools.chain(a.iteritems(), b.iteritems()):
    c[k] += v
You can easily extend it to a larger number of dictionaries. | unknown | |
d3184 | train | I found a solution!
I used this to bypass the error:
On Error GoTo ErrorHandler
ErrorHandler:
If Err.Number = 5991 Or Err.Number = 5941 Then
Err.Clear
Resume byebye
End If
For Ro = 4 To ActiveDocument.Tables(4).Rows.Count
    [My ugly code]
    'in my case I can use a cell to determine whether the row is merged or not
    If Len(ActiveDocument.Tables(4).Cell(Ro, 5).Range.Text) > 0 Then
        [My Ugly Code]
    End If
byebye:
Next Ro
Hope my solution can help you... | unknown | |
d3185 | train | jQuery:
Using jQuery to change all images on page http://jsfiddle.net/aamir/FnPd5/
$('img').each(function(){ this.src='prefix/'+this.src })
jQuery way to find images in divs starting with div_: http://jsfiddle.net/aamir/FnPd5/2/
jQuery("[id^='div_']").each(function(){
var img = $(this).find('img')[0];
img.src='prefix/'+img.src
})
Vanilla Javascript
Vanilla Javascript: http://jsfiddle.net/aamir/FnPd5/1/
var items = document.getElementsByTagName("img");
for (var i = items.length; i--;) {
var img = items[i];
img.src='prefix/'+img.src;
}
Vanilla Javascript to find images in divs starting with div_: http://jsfiddle.net/aamir/FnPd5/3/
var divs = document.getElementsByTagName("div");
for(var i = 0; i < divs.length; i++) {
if(divs[i].id.indexOf('div_') == 0) {
var img = divs[i].getElementsByTagName("img")[0];
img.src='prefix/'+img.src;
}
}
A: DEMO HERE
$("[id^='div_'] img").each(function(){
var src = $(this).attr("src");
var newSrc="prefix/"+src;
$(this).attr("src",newSrc)
})
A: $(function(){
$('img').each(function(i,e){
$(e).attr('src','prefix/'+$(e).attr('src'));
});
});
A: My solution based on your code:
$(document).ready(function(){
$('div div img').each(function(){
var imagen = $(this).attr('src') ;
$(this).attr("src","prefix/"+imagen);
});
});
Working fiddle: http://jsfiddle.net/robertrozas/kR5ZW/
A: Try this solution.
$('img').each(function(){
var imgSrc = $(this).attr('src')
$(this).attr('src','prefix/'+imgSrc);
})
Fiddle Demo
A: Another option:
$('img').attr('src', function(ii, oldSrc) {
return 'prefix/' + oldSrc;
});
A: function addPrefixToImg(prefix) {
$( "img" ).each(function( index ) {
this.src = prefix + this.src;
});
}
addPrefixToImg('prefix/');
you can test on http://jsfiddle.net/f32MQ/ | unknown | |
d3186 | train | This function works!
o.destroy = function(task) {
return $http.delete('/tasks/' + task.id + '.json').success(function(data){
console.log("Task " + task.title + " has been deleted!")
});
};
...I did have to make changes to my app/controllers/tasks_controller.rb:
def destroy
task = Task.find(params[:id])
task.destroy
respond_with task
end | unknown | |
d3187 | train | Maybe this package (or a similar one) is required to run PHP scripts as CLI?
A: Solution is: every time, solve your own things.
Anyway,
My Linux locale was UTF-8, so I changed it to an 8859-9 locale setting. There are many places to change locale settings in Ubuntu, but the easiest way is to change it in /etc/default/locale.
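For example, /etc/default/locale might then contain something like this (the exact locale name depends on your system; the values below are illustrative):
LANG="tr_TR.ISO-8859-9"
LC_ALL="tr_TR.ISO-8859-9"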
And I am happy, it's working now.
Thank you. | unknown | |
d3188 | train | From developer.mozilla.org:
clamp() enables selecting a middle value within a range of values between a defined minimum and maximum. It takes three parameters: a minimum value, a preferred value, and a maximum allowed value.
minmax() only accepts the two extreme values, plus the difference that you already mentioned.
A: minmax() may only be used in Grid and picks between two (2) parameters.
clamp() can be used anywhere in CSS and picks between three (3) parameters: minimum, preferred, and maximum, selecting the middle (preferred) parameter when possible. | unknown | |
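For example (the values are arbitrary):
/* anywhere in CSS: at least 200px, prefer 50%, at most 600px */
.box { width: clamp(200px, 50%, 600px); }

/* Grid only: each column at least 200px, at most 1fr */
.grid { grid-template-columns: repeat(3, minmax(200px, 1fr)); }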
d3189 | train | It might be because when you cast explicitly from the list to PDPage, it loses its acrofields.
A: Your code doesn't appear to be saving the result. Are you?
Here is my answer to a similar scenario which may help you. | unknown | |
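If you're using PDFBox, a sketch of saving would look like this (the file name is illustrative):
document.save(new java.io.File("out.pdf")); // persist the modified PDDocument
document.close();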
d3190 | train | Try to add drop-shadow instead of shadow
<script src="https://cdn.tailwindcss.com"></script>
<div class="m-6 space-x-3">
<div v-for="m in media" class="w-[200px] h-[200px] inline-block">
<img class="object-contain w-full h-full drop-shadow-[0_5px_5px_rgba(0,0,0,0.5)]" src="http://placekitten.com/200/300">
</div>
<div v-for="m in media" class="w-[200px] h-[200px] inline-block">
<img class="object-contain w-full h-full drop-shadow-[0_5px_5px_rgba(0,0,0,0.5)]" src="http://placekitten.com/300/200">
</div>
</div>
A: You can instead just extend the theme and use tailwind internal custom properties to get the color you want for the shadow.
Here is an example in Play Tailwind.
EDIT: Tailwind sample code turned into a snippet:
/* ! tailwindcss v3.2.6 | MIT License | https://tailwindcss.com */
/* Tailwind's standard preflight (CSS reset) and custom-property defaults
   are omitted here for brevity; only the utility classes used by the
   markup are kept below. */
.m-6 {
margin: 1.5rem;
}
.inline-block {
display: inline-block;
}
.h-\[200px\] {
height: 200px;
}
.h-full {
height: 100%;
}
.w-\[200px\] {
width: 200px;
}
.w-full {
width: 100%;
}
.space-x-3 > :not([hidden]) ~ :not([hidden]) {
--tw-space-x-reverse: 0;
margin-right: calc(0.75rem * var(--tw-space-x-reverse));
margin-left: calc(0.75rem * calc(1 - var(--tw-space-x-reverse)));
}
.object-contain {
object-fit: contain;
}
.shadow-blue-400 {
--tw-shadow-color: #60a5fa;
--tw-shadow: var(--tw-shadow-colored);
}
.shadow-gray-600 {
--tw-shadow-color: #4b5563;
--tw-shadow: var(--tw-shadow-colored);
}
.drop-shadow-custom {
--tw-drop-shadow: drop-shadow(0 5px 5px var(--tw-shadow-color));
filter: var(--tw-blur) var(--tw-brightness) var(--tw-contrast) var(--tw-grayscale) var(--tw-hue-rotate) var(--tw-invert) var(--tw-saturate) var(--tw-sepia) var(--tw-drop-shadow);
}
<div class="m-6 space-x-3">
<div v-for="m in media" class="w-[200px] h-[200px] inline-block">
<img class="object-contain w-full h-full shadow-blue-400 drop-shadow-custom" src="http://placekitten.com/200/300">
</div>
<div v-for="m in media" class="w-[200px] h-[200px] inline-block">
<img class="object-contain w-full h-full shadow-gray-600 drop-shadow-custom" src="http://placekitten.com/300/200">
</div>
</div>
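For reference, a similar effect can probably be achieved without the custom class by combining the shadow color utilities with a Tailwind arbitrary-value drop shadow (this assumes Tailwind v3's arbitrary value support; the markup is only a sketch):
<img class="object-contain w-full h-full shadow-blue-400 drop-shadow-[0_5px_5px_var(--tw-shadow-color)]" src="http://placekitten.com/200/300">
The arbitrary-value class should compile to the same --tw-drop-shadow declaration as .drop-shadow-custom above, so the color still comes from whichever shadow-* utility is applied. | unknown |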
d3191 | train | I'm not clear on what exactly you're trying to do; maybe something like this?
// set the default value of yourTextField
yourTextField.defaultValue = '\u00AE';
// or set the current value of yourTextField
yourTextField.value = '\u00AE';
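If this is Acrobat JavaScript (an assumption on my part, based on the property names), you would normally look the field up first; 'Text1' is a placeholder for your field's real name:
var yourTextField = this.getField('Text1');
yourTextField.value = '\u00AE'; // \u00AE is the registered trademark sign ® | unknown |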
d3192 | train | I'd go line by line down your string and, when a line matches your regex, take the next line
string str;
// if you're not reading from a file, String.Split('\n') can help you
using (StreamReader sr = new StreamReader("doc.txt"))
{
    while ((str = sr.ReadLine()) != null)
    {
        if (str.Trim() == "Note:") // you may also use a regex here if applicable
        {
            str = sr.ReadLine();
            break;
        }
    }
}
Console.WriteLine(str);
A: This can be done with a multiline regex, but are you sure you want to? It sounds more like a case for line-by-line processing.
The Regex would be something like:
new Regex(@"Note:$^(?<capture>.*)$", RegexOptions.MultiLine);
although you might need to make that first $ a \r?$ or a \s*$ because $ only matches \n not the \r in \r\n.
Your "Just some text on this line" would be in the group named capture. You might also need to strip a trailing \r from this because of the treatment of $.
A: You can do this
(?<=Note:(\r?\n)).*?(?=(\r?\n)|$)
----------------- --- -------------
        |          |        |-> lookahead for \r\n or the end of the string
        |          |-> the text you want
        |-> lookbehind for Note: followed by a line break; \r is optional in some cases
So, it would be like this:
string nextLine = Regex.Match(input, regex, RegexOptions.Singleline | RegexOptions.IgnoreCase).Value;
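To tie it together, a minimal self-contained sketch (the input string is made up for illustration):
using System;
using System.Text.RegularExpressions;
class Demo
{
    static void Main()
    {
        string input = "Header\r\nNote:\r\nJust some text on this line\r\nTrailer";
        string regex = @"(?<=Note:(\r?\n)).*?(?=(\r?\n)|$)";
        string nextLine = Regex.Match(input, regex, RegexOptions.Singleline | RegexOptions.IgnoreCase).Value;
        Console.WriteLine(nextLine); // prints: Just some text on this line
    }
} | unknown |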
d3193 | train | Yup – you'll need to put it in ~/.bash_profile:
alias ls='ls --color=auto'
A: It depends on your shell/system; e.g., for bash on Ubuntu, check out the ~/.bashrc and ~/.bash_profile files.
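In either case, the alias only takes effect in new shells; to apply it to the current one, re-source the file you edited, e.g.:
source ~/.bash_profile | unknown |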
d3194 | train | This command is the most traditional and portable one; it works on any Unix without requiring a GNU version of grep with special features.
It is efficient because xargs passes the grep command as many filenames as the system's limit on command-line length allows, so grep is executed as few times as possible.
With grep's -l option, each file that contains the pattern is listed by name only once.
find /path -type f -print | xargs grep -l pattern
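If your filenames may contain spaces or newlines, the NUL-separated variant is safer (assuming your find and xargs support -print0/-0, which GNU and BSD versions do):
find /path -type f -print0 | xargs -0 grep -l pattern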
A: Assuming you have GNU Grep, this will store all files containing the contents of $var1 in an array $var2.
for file in /volume1/nginx/sites-enabled/*
do
if grep --fixed-strings --quiet "$var1" "$file"
then
var2+=("$file")
fi
done
This will loop through NUL-separated paths:
while IFS= read -r -d '' -u9 path
do
…something with "$path"
done 9< <(grep --fixed-strings --files-with-matches --null --recursive "$var1" /volume1/nginx/sites-enabled)
A: find and grep can be used to produce a list of matching files:
find /volume1/nginx/sites-enabled/ -type f -exec grep -le "${var1}" {} +
The ‘trick’ is using find’s -exec and grep’s -l.
If you only want the filenames you could use:
find /volume1/nginx/sites-enabled/ -type f -exec grep -qe "${var1}" {} \; -exec basename {} \;
If you want to assign the result to a variable use command substitution ($()):
var2="$(find …)"
Don’t forget to quote your variables! | unknown | |
d3195 | train | It seems like you can use the neography gem to achieve this. Just set it up with the IP of your standalone server and you should be good to go.
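For illustration, a minimal setup sketch (the address is a placeholder for your standalone server's IP and port):
require 'neography'
neo = Neography::Rest.new("http://192.168.1.50:7474")
neo.create_node("name" => "example") # subsequent Neography calls go to the standalone server | unknown |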
d3196 | train | You can do your sensitive stuff in API routes, getServerSideProps, and getStaticProps. None of your code in /lib will be seen by the client unless your page actually imports code from there.
Since you were talking about db connections: it's very unlikely you'd be able to connect to your db from the browser by accident. Almost none of the libraries used to connect to a db will work from the browser, and on the client you can only access env variables that start with NEXT_PUBLIC_.
You also need to keep in mind that every file under /api will be an API route, so you should put your helper files inside /lib instead of /api. Putting them under /api could lead to security vulnerabilities, since anyone can trigger the default exported function of any file under /api.
If you for some reason need to be absolutely certain that some code isn't bundled into the files clients load, even if you import it by accident, that can be done with a custom webpack config. I'd only look into this option if the code itself is very sensitive, i.e. if someone merely being able to read it would have consequences. This isn't about code doing db queries or the like: even if you imported that into client bundles by accident, it wouldn't pose a threat, since the client cannot connect to your database.
A: The /pages/api and /lib folders should be safe enough. These files are not exposed by Next.js.
Next.js exposes the files in your public folder.
What you have said about the lib is correct. It is just a folder that can be used to house helper functions that you can reuse within your code.
getStaticProps only runs on the server side; it will never run on the client side and won't even be included in the JS bundle for the browser. That means you can write code such as direct database queries without it being sent to the browser.
You can safely make your calls with this function.
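For illustration, a minimal sketch (the lib/db module and its getUsers helper are hypothetical names, not part of Next.js):
// pages/index.js
import { getUsers } from '../lib/db'; // hypothetical server-only helper
export async function getStaticProps() {
  const users = await getUsers(); // runs only on the server, at build time
  return { props: { users } };
}
export default function Home({ users }) {
  return <ul>{users.map((u) => <li key={u.id}>{u.name}</li>)}</ul>;
}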
There is a tool you can use to validate that code in getStaticProps only runs serverside and never gets exposed client side.
Link to tool: https://next-code-elimination.vercel.app/
A: I've used /lib in much the same way you intend on a number of Next projects and haven't had any problems. As others have mentioned, if you are server-side generating everything with getStaticProps, you should be fine.
One thing I've run into is client and server side getting out of sync between the actual client and server (especially with iFrame or data that gets manipulated after a fetch). That doesn't cause security issues but it is something to think through architecturally. Next exposes its router if you need to sync effects to URL changes. | unknown | |
d3197 | train | You can try the following.
int @value = 19091507;
Console.WriteLine(@value.ToString("#,#.##", System.Globalization.CultureInfo.CreateSpecificCulture("en-US")));
The following also works for me.
int @value = 19091507;
Console.WriteLine(string.Format("{0:#,#.##}", @value));
You can also try:
int @value = 19091507;
Console.WriteLine(string.Format(System.Globalization.CultureInfo.CreateSpecificCulture("en-US"), "{0:#,#.##}", @value));
Without an explicit culture, the output depends on your current culture.
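For example, to see the culture dependence directly (a quick sketch; "de-DE" is an arbitrary choice):
Console.WriteLine(@value.ToString("#,#.##", System.Globalization.CultureInfo.CreateSpecificCulture("de-DE"))); // 19.091.507 | unknown |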
d3198 | train | *
*First you need the latitude and longitude of your places. If you don't have them yet, use Geocoder, which returns a latlng for a place name; add each latlng to a list.
*Use the Distance Matrix API to find the distances/durations between your places: build the request URL from your latlng list and call the API.
*Parse the returned JSON objects, extract the durations/distances, and build a matrix from them.
*Send your matrix to your algorithm (Dijkstra or Prim); a minimal Dijkstra sketch follows below.
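A sketch of that last step, assuming you have already filled an int matrix durations[i][j] from the API responses (all names are illustrative; durations[i][j] < 0 means "no direct route"):
import java.util.Arrays;
public class RoutePlanner {
    // Plain O(n^2) Dijkstra over a duration matrix.
    static int[] dijkstra(int[][] durations, int source) {
        int n = durations.length;
        int[] best = new int[n];         // best known total duration from source
        boolean[] done = new boolean[n]; // places whose best duration is final
        Arrays.fill(best, Integer.MAX_VALUE);
        best[source] = 0;
        for (int i = 0; i < n; i++) {
            int u = -1;
            for (int v = 0; v < n; v++)  // pick the closest unfinished place
                if (!done[v] && (u == -1 || best[v] < best[u])) u = v;
            if (u == -1 || best[u] == Integer.MAX_VALUE) break;
            done[u] = true;
            for (int v = 0; v < n; v++)  // relax all edges leaving u
                if (durations[u][v] >= 0 && best[u] + durations[u][v] < best[v])
                    best[v] = best[u] + durations[u][v];
        }
        return best;
    }
}
best[k] then holds the shortest total duration from the source to place k. | unknown |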
d3199 | train | If the information above is correct, I assume a filename typo: rename conf/META-INF/persistance.xml to conf/META-INF/persistence.xml.
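For reference, a minimal persistence.xml skeleton to check the renamed file against (the unit name is a placeholder):
<?xml version="1.0" encoding="UTF-8"?>
<persistence xmlns="http://java.sun.com/xml/ns/persistence" version="2.0">
  <persistence-unit name="myUnit"/>
</persistence> | unknown |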
d3200 | train | Yes, consider the following:
#include <iostream>
using namespace std;
class A
{
public:
    static void func()
    {
        static int a = 10;
        int b = 10;
        a++;
        b++;
        std::cout << a << " " << b << endl;
    }
};
int main() {
    A a, b;
    a.func();
    b.func();
    a.func();
    return 0;
}
a is shared across all calls to func (and across all objects of A), but b is local to each call, so the output is:
11 11
12 11
13 11
http://ideone.com/kwlra3
A: Yes, both are different. b will be created anew for every call, whereas a will be created only once and it is the same for all the objects of type A. By same I mean, all the objects share a single copy of a. | unknown |