Difference between two forms of Perl filename wildcard command
There are two forms of the Perl filename wildcard command: `<>` and `glob`. But I found there is a difference between the effects of these two forms:
I want to retrieve all the files with similar names, using following code:
```
my @files = <"rawdata/*_${term}_*.csv">; #(1)
```
and another format:
```
my @files = glob "rawdata/*_${term}_*.csv"; #(2)
```
I expect to get the same result from these two snippets. But there is a difference: if `$term` is a string without spaces (that is, one word), then (2) works well but (1) doesn't; if `$term` is a string with spaces (that is, several words), then (1) works well but (2) doesn't.
Is there any difference between these two expressions?
Thanks a lot.
|
`<SomeStuff>` is equivalent to `glob "SomeStuff"` (apart from all the ambiguities with `<>` also being used for reading from file handles -- see `perldoc perlop` and look for `I/O Operators` there). Therefore your examples aren't equivalent. You should use
```
my @files = glob "\"rawdata/*_${term}_*.csv\""; #(2)
```
instead.
However, as to why a space in the pattern makes a difference: `perldoc -f glob` tells the story. The normal `glob` (and therefore `<>`, which is implemented via `glob`) treats whitespace as a pattern separator. The documentation also mentions `File::Glob` and its function `bsd_glob`, which does not treat spaces as pattern separators. Therefore consider using this instead:
```
use File::Glob ':glob';
my $term1 = "some stuff";
my @files1 = glob "dir/${term1}*";
my $term2 = "more";
my @files2 = glob "dir/${term2}*";
print join(' :: ', sort @files1), "\n", join(' :: ', sort @files2), "\n";
```
Possible output with some files I just created:
```
[0 mosu@tionne ~/tmp] ~/test/test1.pl
dir/some stuff is betther than other stuff.doc :: dir/some stuffy teachers.txt
dir/more beer.txt :: dir/more_is_less.csv
```
|
elementToBeClickable issues with Selenium + Java
So, I have an element which is hidden under an alert. The alert stays for 10 seconds and the user can click the element after that. Here is my code to deal with this situation:
```
WebElement create = driver.findElement(By.cssSelector("div.action_menu_trigger"));
WebDriverWait wait = new WebDriverWait(driver, 20);
wait.until(ExpectedConditions.elementToBeClickable(create));
create.click();
```
but I get this exception as soon as WebDriver reaches this point; it seems like Selenium doesn't honor the wait at all:
```
org.openqa.selenium.ElementClickInterceptedException:
Element <div class="action_menu_trigger"> is not clickable at point (1710.224952697754,140) because another element <div class="noty_body"> obscures it
Build info: version: '3.13.0', revision: '2f0d292', time: '2018-06-25T15:24:21.231Z'
```
I have tried using `Thread.sleep(10000)` and it works fine, but I don't want to use sleep.
|
The problem here is that the element under the alert *IS* clickable as far as Selenium knows: it is visible and enabled, so it should be clickable. Your code waits for the element to be clickable (assuming it will wait for the alert to disappear), but Selenium already considers the element clickable, so it attempts the click immediately. The result is the error message, because the alert is still up and blocks the click.
The way around this is to wait for the alert to appear then disappear, wait for the element to be clickable, and click it. I don't know that I have all the locators but the code below should get you pointed in the right direction.
```
// define locators for use later
// this also makes maintenance easier because locators are in one place, see Page Object Model
By alertLocator = By.cssSelector("div.noty_body");
By createLocator = By.cssSelector("div.action_menu_trigger");
// do something that triggers the alert
// wait for the alert to appear and then disappear
WebDriverWait shortWait = new WebDriverWait(driver, 3);
WebDriverWait longWait = new WebDriverWait(driver, 30);
shortWait.until(ExpectedConditions.visibilityOfElementLocated(alertLocator));
longWait.until(ExpectedConditions.invisibilityOfElementLocated(alertLocator));
// now we wait for the desired element to be clickable and click it
shortWait.until(ExpectedConditions.elementToBeClickable(createLocator)).click();
```
|
How to listen to camera's world position in A-Frame?
How can I get the current position of the camera, such that I can rotate my sky entity?
Assume I have:
```
<a-scene>
<a-camera id="camera"></a-camera>
<a-sky id="mySky"></a-sky>
</a-scene>
```
|
To get the position of the camera:
```
var pos = document.querySelector('#camera').getAttribute('position');
```
To get the world position of the camera, we can convert the local position of the camera:
```
var cameraEl = document.querySelector('#camera');
var worldPos = new THREE.Vector3();
worldPos.setFromMatrixPosition(cameraEl.object3D.matrixWorld);
console.log(worldPos.x);
```
To listen to changes, use the `componentchanged` event:
```
cameraEl.addEventListener('componentchanged', function (evt) {
if (evt.detail.name !== 'position') { return; }
console.log(evt.detail.newData);
});
```
A more performant approach may be to poll within a component's `tick` handler:
```
AFRAME.registerComponent('camera-listener', {
tick: function () {
var cameraEl = this.el.sceneEl.camera.el;
cameraEl.getAttribute('position');
cameraEl.getAttribute('rotation');
// Do something.
}
});
```
|
R cumulative sum with condition
(For those of you that are familiar with MCMC I am trying to write (a step of) the Metropolis-Hastings algorithm).
I am trying to do a cumulative sum of a vector of small random values, with a starting value of 0.5. However, if the cumulative sum at any point gets under 0 or over 1, I need to keep the previous value and continue the cumulative sum from there, skipping any value that would break this condition.
Note: I need a vectorized solution (no loops or indices) for performance reasons, or something similarly fast. Bonus points for using only base R functions.
Example:
```
set.seed(1)
temp=c(0.5,runif(20,-0.3,0.3))
cumsum(temp)
[1] 0.5000000 0.3593052 0.2825795 0.3262916 0.5712162 0.3922254 0.6312592
[8] 0.8980644 0.9945430 1.0720115 0.8090832 0.6326680 0.4386020 0.5508157
[15] 0.4812780 0.6431828 0.6418024 0.7723735 1.0675171 0.9955382 1.1620054
```
But what I need is
```
[1] 0.5000000 0.3593052 0.2825795 0.3262916 0.5712162 0.3922254 0.6312592
[8] 0.8980644 0.9945430 0.9945430 0.7316148 0.5551995 0.3611336 0.4733473
[15] 0.4038095 0.5657144 0.5643339 0.6949050 0.9900487 0.9180698 0.9180698
```
Using a for loop we could do this with
```
for (i in 2:21) {
temp[i]=temp[i-1]+temp[i]
if(temp[i]<0 | temp[i]>1) {
temp[i]=temp[i-1]
}
}
```
|
A faster C++ version:
```
library(Rcpp)
Cpp_boundedCumsum <- cppFunction('NumericVector boundedCumsum(NumericVector x){
int n = x.size();
NumericVector out(n);
double tmp;
out[0] = x[0];
for(int i = 1; i < n; ++i){
tmp = out[i-1] + x[i];
if(tmp < 0.0 || tmp > 1.0)
out[i] = out[i-1];
else
out[i] = tmp;
}
return out;
}')
```
Comparison with R version:
```
R_boundedCumsum <- function(x){
for (i in 2:length(x)){
x[i] <- x[i-1]+x[i]
if(x[i]<0 || x[i]>1)
x[i] <- x[i-1]
}
x
}
x <- runif(1000)
all.equal(R_boundedCumsum(x), Cpp_boundedCumsum(x))
[1] TRUE
library(microbenchmark)
microbenchmark(R_boundedCumsum(x), Cpp_boundedCumsum(x))
Unit: microseconds
expr min lq mean median uq max neval
R_boundedCumsum(x) 2062.629 2262.2225 2460.65661 2319.358 2562.256 4112.540 100
Cpp_boundedCumsum(x) 3.636 4.3475 7.06454 5.792 9.127 25.703 100
```
|
How to run React Native app on Android Phone
I am building a React Native app.
It works well when I launch the app from the terminal on my Mac using `react-native run-android`.
But when I built the APK file and installed it on another Android device manually, it does not work.
It looks like this.
[](https://i.stack.imgur.com/DBJY1.jpg)
|
Looks like the code in your APK is still trying to reach the development packager server.
Read this on how to build APKs for react-native: [React-Native - Generating Signed APK](https://facebook.github.io/react-native/docs/signed-apk-android.html#content)
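As a rough sketch, once signing is configured per that guide, the release build is generally produced with Gradle (the exact output path may vary by React Native version):
```
cd android
./gradlew assembleRelease
# the signed APK typically lands in android/app/build/outputs/apk/release/
```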
If your device is connected via cable:
> - If you're on a physical device connected to the same machine, run 'adb reverse tcp:8081 tcp:8081' to forward requests from your device
Otherwise, you can still do this via Wifi by following the last point in the error:
> - If your device is on the same Wi-Fi network, set 'Debug server host & port for device' in 'Dev settings' to your machine's IP address and
> the port of the local dev server - e.g. 10.0.1.1:8081
|
spring mvc declaring all beans singleton
I have a new MVC project where all beans are default scoped (no prototype or session), with a single application context.
I want to know:
By making all beans default scoped, are we trying to make the whole application run in a single thread?
If so, will that make each HTTP request (from multiple or the same sessions) be queued until the previous one completes? How can such a scenario be avoided? Any advice or link would be helpful.
I am relatively new to Spring and Java development.
|
Because Spring beans are typically stateless, you can safely call them from multiple threads. That's how your application works: there is only one instance of every controller, service, DAO, etc. But your servlet container (through Spring) calls these beans from multiple threads - and it's completely thread safe.
In fact, in plain servlets the situation is the same: there is only one instance of each servlet, and it can be accessed by any number of threads, as long as the servlet is stateless or properly synchronized.
Do not confuse Spring with stateless session beans in [ejb](/questions/tagged/ejb "show questions tagged 'ejb'") that are pooled, where each client gets its own instance from the pool.1
1 - In fact that's a bit dumb - since the beans are stateless by definition, there is no point in pooling them and preventing concurrent access...
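For illustration, here is a minimal sketch (the class and names are hypothetical, not from the question) of the kind of stateless singleton bean that can safely serve many threads at once:
```
import org.springframework.stereotype.Service;

// One shared instance serves every request thread. It is thread safe
// because it keeps no per-request state in fields: all working data
// lives in local variables on each thread's own stack.
@Service
public class PriceService {

    // Safe to share: effectively immutable after class loading.
    private static final double TAX_RATE = 0.21;

    public double priceWithTax(double netPrice) {
        double tax = netPrice * TAX_RATE; // local variable, one per calling thread
        return netPrice + tax;
    }

    // Unsafe counterpart (do NOT do this in a singleton bean):
    // private double lastResult; // shared mutable state -> race conditions
}
```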
|
frameworks for representing data processing as a pipeline
Most data processing can be envisioned as a pipeline of components, the output of one feeding into the input of another. A typical processing pipeline is:
```
reader | handler | writer
```
As a foil for starting this discussion, let's consider an object-oriented implementation of this pipeline where each segment is an object. The `handler` object contains references to both the `reader` and `writer` objects and has a `run` method which looks like:
```
define handler.run:
while (reader.has_next) {
data = reader.next
output = ...some function of data...
writer.put(output)
}
```
Schematically the dependencies are:
```
reader <- handler -> writer
```
Now suppose I want to interpose a new pipeline segment between the reader and the handler:
```
reader | tweaker | handler | writer
```
Again, in this OO implementation, `tweaker` would be a wrapper around the `reader` object, and the `tweaker` methods might look something like (in some pseudo-imperative code):
```
define tweaker.has_next:
return reader.has_next
define tweaker.next:
value = reader.next
result = ...some function of value...
return result
```
I'm finding that this is not a very composable abstraction. Some issues are:
1. `tweaker` can only be used on the left hand side of `handler`, i.e. I can't use the above implementation of `tweaker` to form this pipeline:
reader | handler | tweaker | writer
2. I'd like to exploit the associative property of pipelines, so that this pipeline:
reader | handler | writer
could be expressed as:
```
reader | p
```
where `p` is the pipeline `handler | writer`. In this OO implementation I would have to partially instantiate the `handler` object.
3. Somewhat of a restatement of (1): the objects have to know whether they "push" or "pull" data.
I'm looking for a framework (not necessarily OO) for creating data processing pipelines which addresses these issues.
I've tagged this with `Haskell` and `functional programming` because I feel functional programming concepts might be useful here.
As a goal, it would be nice to be able to create a pipeline like this:
```
handler1
/ \
reader | partition writer
\ /
handler2
```
For some perspective, Unix shell pipes solves a lot of these problems with the following implementation decisions:
1. Pipeline components run asynchronously in separate processes
2. Pipe objects mediate passing data between "pushers" and "pullers"; i.e. they block writers which write data too fast and readers who try to read too fast.
3. You use special connectors `<` and `>` to connect passive components (i.e. files) to the pipeline
I am especially interested in approaches which do not use threading or message-passing among agents. Maybe that's the best way to do this, but I'd like to avoid threading if possible.
Thanks!
|
Yeah, [arrows](http://haskell.org/arrows) are almost surely your man.
I suspect that you are fairly new to Haskell, just based on the kinds of things you are saying in your question. Arrows will probably seem fairly abstract, especially if what you are looking for is a "framework". I know it took me a while to really grok what was going on with arrows.
So you may look at that page and say "yes, that looks like what I want", and then find yourself rather lost as to how to begin to use arrows to solve the problem. So here is a little bit of guidance so you know what you are looking at.
Arrows will not solve your problem. Instead, they give you a language in which you can phrase your problem. You may find that some predefined arrow will do the job -- some Kleisli arrow, maybe -- but at the end of the day you are going to want to *implement* an arrow (the predefined ones just give you easy ways to implement them) which expresses what you mean by a "data processor". As an almost trivial example, let's say you want to implement your data processors by simple functions. You would write:
```
newtype Proc a b = Proc { unProc :: a -> b }
-- I believe Arrow has recently become a subclass of Category, so assuming that.
instance Category Proc where
    id = Proc (\x -> x)
    Proc f . Proc g = Proc (\x -> f (g x))

instance Arrow Proc where
    arr f = Proc f
    first (Proc f) = Proc (\(x,y) -> (f x, y))
```
This gives you the machinery to use the various arrow combinators `(***)`, `(&&&)`, `(>>>)`, etc., as well as the arrow notation which is rather nice if you are doing complex things. So, as Daniel Fischer points out in the comment, the pipeline you described in your question could be composed as:
```
reader >>> partition >>> (handler1 *** handler2) >>> writer
```
But the cool thing is that it is up to you what you mean by a processor. It is possible to implement what you mentioned about each processor forking a thread in a similar way, using a different processor type:
```
newtype Proc' a b = Proc' (Source a -> Sink b -> IO ())
```
And then implementing the combinators appropriately.
So that is what you are looking at: a vocabulary for talking about composing processes, which has a little bit of code to reuse, but primarily will help guide your thinking as you implement these combinators for the definition of processor that is useful in your domain.
One of my first nontrivial Haskell projects was to implement an [arrow for quantum entanglement](http://hackage.haskell.org/package/quantum-arrow); that project was the one that caused me to really start to understand the Haskell way of thinking, a major turning point in my programming career. Maybe this project of yours will do the same for you? :-)
|
Automatically import modules when entering the python or ipython interpreter
I find myself typing `import numpy as np` almost every single time I fire up the python interpreter. How do I set up the python or ipython interpreter so that numpy is automatically imported?
|
Use the environment variable [PYTHONSTARTUP](http://docs.python.org/using/cmdline.html#envvar-PYTHONSTARTUP). From the official documentation:
> If this is the name of a readable file, the Python commands in that
> file are executed before the first prompt is displayed in interactive
> mode. The file is executed in the same namespace where interactive
> commands are executed so that objects defined or imported in it can be
> used without qualification in the interactive session.
So, just create a Python script with the import statement and point the environment variable to it. Having said that, remember that 'explicit is better than implicit', so don't rely on this behavior for production scripts.
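For example (the file name here is just an illustration):
```
# ~/.pythonstartup.py -- executed before the first interactive prompt
import numpy as np
print("numpy imported as np")
```
Then point the variable at it, e.g. `export PYTHONSTARTUP=~/.pythonstartup.py` in your shell profile.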
For IPython, see [this](http://ipython.readthedocs.io/en/stable/config/intro.html) tutorial on how to create an `ipython_config` file.
|
curl: (2) Failed Initialization
I have installed libcurl 7.33.0 on Linux. I used the following commands to install:
```
./configure
make
make install
```
If I run `curl http://www.google.com` I get following error:
**curl: (2) Failed initialization**
curl is installed at /usr/local/bin and header files at /usr/local/include/curl.
curl-config:
```
sandesh@ubuntu:~$ curl-config --features
IPv6
libz
sandesh@ubuntu:~$ curl-config --protocols
DICT
FILE
FTP
GOPHER
HTTP
IMAP
POP3
RTSP
SMTP
TELNET
TFTP
sandesh@ubuntu:~$ curl-config --ca
/etc/ssl/certs/ca-certificates.crt
sandesh@ubuntu:~$ curl-config --cflags
-I/usr/local/include
sandesh@ubuntu:~$ curl-config --configure
sandesh@ubuntu:~$ curl-config --libs
-L/usr/local/lib -lcurl
sandesh@ubuntu:~$ curl-config --static-libs
/usr/local/lib/libcurl.a -lz -lrt
```
I believe it is something to do with my configuration.
|
At a wild guess, you've linked the `/usr/local/bin/curl` binary to the system curl library.
To verify that this is the case, you should do:
```
ldd /usr/local/bin/curl
```
If it indicates a line like:
```
libcurl.so.4 => /usr/lib/x86_64-linux-gnu/libcurl.so.4 (0x00007fea7e889000)
```
It means that the curl binary is picking up the system curl library. While it was linked at compile time to the correct library, at run-time it's picking up the incorrect library, which seems to be a pretty typical reason for this error happening.
If you run the configure with `--disable-shared`, then it will produce a `.a`, which, when linked into the curl binary, will not depend on the system `libcurl.so` but will instead have its own private copy of the code.
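A minimal sketch of that rebuild (assuming the same source tree and default prefix as above):
```
./configure --disable-shared
make
sudo make install
ldd /usr/local/bin/curl   # the system libcurl.so should no longer appear
```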
If you're cross-compiling, then you'll also need to cross-compile all the dependent libraries, and that is another question.
|
Unknown KieSession name in drools 6.0 (while trying to add drools to existing maven/eclipse project)
I am trying to adapt drools6.0 for an existing code base (it is maven project under eclipse).
I didn't need to learn Drools or Maven before (though they were part of my previous project); suffice it to say I am lost in what I want to do.
Based on my understanding (googling), Java class files get hooked to rules based on the package name(?). That takes care of compile-time issues, but I am seeing a null pointer exception at run time. In order to adapt Drools into my existing code base, I: 1) created a hello-world Drools project and ran it successfully; 2) copied the Java file to my existing package; 3) created a rule file in Eclipse with the correct package (File->New->Other->Rule Resource); 4) converted the existing project into a Drools project by right-clicking the project and choosing Configure->Convert to Drools Project.
This all takes care of compilation issues, but I get the following run-time error:
```
[main] ERROR org.drools.compiler.kie.builder.impl.KieContainerImpl - Unknown KieSession name: ksession-rules
java.lang.NullPointerException
at main.java.com.harmonia.cbm.afloat.dataaquisition.dql.DroolsTest.main(DroolsTest.java:23)
```
This is because the ksession returned from the kcontainer is null, which causes the null pointer exception on the last line:
```
KieServices ks = KieServices.Factory.get();
KieContainer kContainer = ks.getKieClasspathContainer();
KieSession kSession = kContainer.newKieSession("ksession-rules");
// above line is returning null
Message message = new Message();
message.setMessage("Hello World");
message.setStatus(Message.HELLO);
kSession.insert(message);
```
I have already spent more than a day trying to figure out how Drools works and how the above can be fixed. Please suggest:
1) Am I taking the right approach to convert an existing project into a Drools project? I want to keep all the existing functionality of my code base, but I want to add a rules-based approach for future enhancements. I came across the following link, but it's not clear if it helps my situation:
<http://drools.46999.n3.nabble.com/Retrofitting-a-project-with-JBoss-Rules-td48656.html>
2) Any useful Drools tutorials for better understanding the following 3 lines (besides the Javadocs)?
```
KieServices ks = KieServices.Factory.get();
KieContainer kContainer = ks.getKieClasspathContainer();
KieSession kSession = kContainer.newKieSession("ksession-rules");
```
3) Any hints on resolving the null pointer exception (assuming I am taking the right and easy approach of converting an existing project into a Drools project)?
**UPDATE**
@David: thanks for the detailed post. I realized that converting the existing project into a Maven project, while it works, did not appeal to me, since the existing directory structure/naming is preserved (most likely different from what Maven creates by default). I posted an alternative solution where I thought this problem had to do with classpath issues: <http://drools.46999.n3.nabble.com/Null-pointer-exception-when-adding-drools-to-existing-project-td4027944.html#a4028011>
|
I hit similar problems.
I think that part of the problem is trying to live in both worlds: the JBoss Drools Eclipse plugin world and the Maven world.
I have Eclipse 4.3.1 (Kepler) with various Jboss/Drools plugins installed.
I took a working eclipse example and made sure I could run it in maven.
1. Created a demo drools project File->New->Other..->Drools->Drools Project
2. Ensured you could run the test programs DroolsTest
3. Converted project to maven project - Configure->Convert To Maven Project
(This will create a pom.xml file with many dependencies. These can be pruned.)
4. Removed the Drools Library from the build path - in the project properties Build Path -> Libraries - select Drools Library and click Remove
5. Disable the Drools builder - in project properties Builders -> uncheck Drools Builder
6. Comment out the dependency jsr94 in the pom.xml (not retrievable).
7. Run Maven from the command line: `mvn clean install`.
This should give you a project that builds and runs entirely from Maven.
Add to your pom.xml
```
<plugin>
<groupId>org.codehaus.mojo</groupId>
<artifactId>exec-maven-plugin</artifactId>
<version>1.2.1</version>
</plugin>
```
And
```
<dependency>
<groupId>org.slf4j</groupId>
<artifactId>slf4j-simple</artifactId>
<version>1.7.2</version>
<scope>runtime</scope>
</dependency>
```
Try:
mvn -e exec:java -Dexec.mainClass="com.sample.DroolsTest"
It should produce:
```
...
[com.sample.DroolsTest.main()] INFO org.drools.compiler.kie.builder.impl.ClasspathKieProject - Found kmodule: file:/Users/davidbernard/Projects/action-deducing-diff/xx/target/classes/META-INF/kmodule.xml
[com.sample.DroolsTest.main()] INFO org.drools.compiler.kie.builder.impl.KieRepositoryImpl - KieModule was added:FileKieModule[ ReleaseId=x:x:1.0file=/Users/davidbernard/Projects/action-deducing-diff/xx/target/classes]
[com.sample.DroolsTest.main()] INFO org.drools.compiler.kie.builder.impl.ClasspathKieProject - Found kmodule: file:/Users/davidbernard/Projects/action-deducing-diff/xx/target/classes/META-INF/kmodule.xml
[com.sample.DroolsTest.main()] INFO org.drools.compiler.kie.builder.impl.KieRepositoryImpl - KieModule was added:FileKieModule[ ReleaseId=x:x:1.0file=/Users/davidbernard/Projects/action-deducing-diff/xx/target/classes]
Hello World
Goodbye cruel world
...
```
You should now also be able to run DroolsTest from eclipse.
You will have a rules->Sample.drl file and a kmodule.xml file.
```
<?xml version="1.0" encoding="UTF-8"?>
<kmodule xmlns="http://jboss.org/kie/6.0.0/kmodule">
<kbase name="rules" packages="rules">
<ksession name="ksession-rules"/>
</kbase>
</kmodule>
```
The "ksession" name should match the code creating the ksession:
```
KieSession kSession = kContainer.newKieSession("ksession-rules");
```
The "packages" should match the directory the rule file is in.
|
Sorted order in Flux of custom class
Suppose I have a class Student with attributes name and height.
```
class Student{ String name; double height;}
```
If I have a Flux of Student objects and I want the output to be sorted in ascending order of the students' names, how do I do that?
|
Suppose you have an array of student objects as follows:
```
Student[] students = {studentObj1, studentObj2, studentObj3};
```
You just need to use a Comparator, which in Java 8 can be written as a lambda function, in the [sort](https://projectreactor.io/docs/core/release/api/reactor/core/publisher/Flux.html#sort--) method provided by Flux.
Here `obj1` and `obj2` are objects of the Student class which are compared to each other.
`obj1.getName().compareTo(obj2.getName())` sorts them in ascending order of name.
```
Flux.fromIterable(Arrays.asList(students)).sort((obj1, obj2) -> obj1.getName().compareTo(obj2.getName()));
```
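A small runnable sketch of the same idea (a constructor and a `getName` getter on `Student` are assumed here, since the original class declares neither):
```
import java.util.Arrays;
import reactor.core.publisher.Flux;

public class SortStudents {
    public static void main(String[] args) {
        Student[] students = {
            new Student("Carol", 1.68),
            new Student("Alice", 1.74),
            new Student("Bob", 1.81)
        };

        Flux.fromIterable(Arrays.asList(students))
            // ascending by name; swap obj1 and obj2 for descending order
            .sort((obj1, obj2) -> obj1.getName().compareTo(obj2.getName()))
            .map(Student::getName)
            .subscribe(System.out::println); // prints Alice, Bob, Carol
    }
}
```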
|
Checking to see if a value exists in Javascript
How do I prevent a Javascript alert from firing if the alert value is undefined? In other words, something like this:
```
if (alert(message) != 'undefined') {
alert(message);
}
```
|
Use [`typeof`](https://developer.mozilla.org/en/JavaScript/Reference/Operators/Special/typeof):
```
if (typeof message !== 'undefined')
```
Don't put `alert(message)` into the `if` expression, otherwise you will *execute* `alert` (which we want to avoid before we know the type of `message`) and its return value (which is also `undefined`, btw ;)) will be compared to the string `'undefined'`.
**Update** Clarification for `!==`:
This operator not only compares the value of two operands but also the **type**. That means no [*type coercion*](http://en.wikipedia.org/wiki/Type_conversion) is done:
```
42 == "42" // true
42 === "42" // false
```
In this case it is not really necessary because we know that `typeof` always returns a string but it is good practice and if you use it thoroughly and consistently, it is more clear where you really want to have type coercion and where not.
|
Can anyone give me an example for PHP's CURLFile class?
I had some very simple PHP code to upload a file to a remote server; the way I was doing it (as has been suggested here in some other solutions) is to use cURL to upload the file.
Here's my code:
```
$ch = curl_init("http://www.remotesite.com/upload.php");
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_POST, true);
curl_setopt($ch, CURLOPT_POSTFIELDS, array('fileupload' => '@'.$_FILES['Filedata']['tmp_name']));
echo curl_exec($ch);
```
The server is running PHP 5.5.0 and it appears that @filename has been deprecated in PHP >= 5.5.0 as stated [here](http://www.php.net/manual/en/function.curl-setopt.php) under the `CURLOPT_POSTFIELDS` description, and therefore, I'm getting this error:
```
Deprecated: curl_setopt(): The usage of the @filename API for file uploading is deprecated. Please use the CURLFile class instead in ...
```
Interestingly, there is absolutely nothing about this class on php.net aside from a basic class overview: no examples, no description of methods or properties. It's basically blank [here](http://www.php.net/manual/en/class.curlfile.php). I understand that it is a brand new class with little to no documentation and very little real-world use, which is why practically nothing relevant comes up in searches on Google or here on Stack Overflow for this class.
I'm wondering if there's anyone who has used this CURLFile class and can possibly help me or give me an example of using it in place of @filename in my code.
Edit:
I wanted to add my "upload.php" code as well; this code would work with the traditional @filename method but is no longer working with the CURLFile class code:
```
$folder = "try/";
$path = $folder . basename( $_FILES['file']['tmp_name']);
if(move_uploaded_file($_FILES['file']['tmp_name'], $path)) {
echo "The file ". basename( $_FILES['file']['tmp_name']). " has been uploaded";
}
```
***Final Edit***:
I wanted to add the final/working code for others looking for a similar working example of the scarcely documented CURLFile class...
**curl.php (local server)**
```
<form action="<?php echo $_SERVER['PHP_SELF']; ?>" method="post" enctype="multipart/form-data">
<label for="file">Filename:</label> <input type="file" name="Filedata" id="Filedata" />
<br />
<input type="submit" name="submit" value="Submit" />
</form>
<?php
if ($_POST['submit']) {
$uploadDir = "/uploads/";
$RealTitleID = $_FILES['Filedata']['name'];
$ch = curl_init("http://www.remotesite.com/upload.php");
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_POST, 1);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
$args['file'] = new CurlFile($_FILES['Filedata']['tmp_name'],'file/exgpd',$RealTitleID);
curl_setopt($ch, CURLOPT_POSTFIELDS, $args);
$result = curl_exec($ch);
}
?>
```
**upload.php (remote server)**
```
$folder = "try/";
$path = $folder . $_FILES['file']['name'];
if(move_uploaded_file($_FILES['file']['tmp_name'], $path)) {
echo "The file ". basename( $_FILES['file']['name']). " has been uploaded";
}
```
|
There is a snippet on the RFC for the code: <https://wiki.php.net/rfc/curl-file-upload>
```
curl_setopt($curl_handle, CURLOPT_POST, 1);
$args['file'] = new CurlFile('filename.png', 'image/png', 'filename.png');
curl_setopt($curl_handle, CURLOPT_POSTFIELDS, $args);
```
You can also use the seemingly pointless function `curl_file_create( string $filename [, string $mimetype [, string $postname ]] )` if you have a phobia of creating objects.
```
curl_setopt($curl_handle, CURLOPT_POST, 1);
$args['file'] = curl_file_create('filename.png', 'image/png', 'filename.png');
curl_setopt($curl_handle, CURLOPT_POSTFIELDS, $args);
```
|
Docker Rocket chat Rest api upload file error 413 Entity too large
I am using the Rocket.Chat REST API; everything works well, but when I upload a file to the REST API, it shows the error `413 Request Entity Too Large`, whereas uploading a file of any size from the website works.
After checking all scenarios, I concluded that files up to and including 1 MB upload successfully, while anything greater than 1 MB shows this `413 Request Entity Too Large` error.
I upload the file from Postman using this URL:
<https://rocket.chat.url/api/v1/rooms.upload/RoomId>
**Headers:**
> Content-Type: application/x-www-form-urlencoded
>
> X-Auth-Token: User-Token
>
> X-User-Id: User-Id

**Form-Data:**

> file - selected file
**HTML error result**
```
<html>
<head><title>413 Request Entity Too Large</title></head>
<body bgcolor="white">
<center><h1>413 Request Entity Too Large</h1></center>
<hr><center>nginx/1.10.3 (Ubuntu)</center>
</body>
</html>
```
When the file is inserted successfully, it shows the following:
```
{
"success": true
}
```
|
After checking many scenarios and searching many URLs, I got the solution from [this](https://github.com/RocketChat/Rocket.Chat.Android/issues/1050#issuecomment-384117403).
I have used the [Rocket.Chat Docker image](https://rocket.chat/docs/installation/docker-containers/) and appended one line to the [nginx](https://nginx.org/en/) config file.
**Solution:**
1. login to ubuntu server
2. write `sudo nano /etc/nginx/nginx.conf` and hit enter
3. Add or update `client_max_body_size` inside the `http` block:

   ```
   http {
       client_max_body_size 8M; # use your required limit instead of 8M
       # other lines...
   }
   ```
4. Restart [nginx](https://nginx.org/en/) by command `service nginx restart` or `systemctl restart nginx`
5. Upload the larger file again; it now succeeds.
|
Receiving Chunked HTTP Data With Winsock
I'm having trouble reading in some chunked HTTP response data using winsock.
I send a request fine and get the following back:
```
HTTP/1.1 200 OK
Server: LMAX/1.0
Content-Type: text/xml; charset=utf-8
Transfer-Encoding: chunked
Date: Mon, 29 Aug 2011 16:22:19 GMT
```
using winsock `recv`. At this point, however, it just hangs. I have the listener running in an infinite loop but nothing is ever picked up.
I think it's a C++ issue, but it could also be related to the fact that I am pushing the connection through stunnel to wrap it up inside HTTPS. I have a test application using some libs in C# which works perfectly through stunnel. I'm confused as to why my C++ loop is not receiving the chunked data after the initial recv.
This is the loop in question... it is called after the chunked OK response above:
```
while(true)
{
recvBuf= (char*)calloc(DEFAULT_BUFLEN, sizeof(char));
iRes = recv(ConnectSocket, recvBuf, DEFAULT_BUFLEN, 0);
cout << WSAGetLastError() << endl;
cout << "Recv: " << recvBuf << endl;
if (iRes==SOCKET_ERROR)
{
cout << recvBuf << endl;
err = WSAGetLastError();
wprintf(L"WSARecv failed with error: %d\n", err);
break;
}
}
```
Any ideas?
|
You need to change your reading code. You cannot read `chunked` data using a fixed-length buffer like you are trying to do. The data is sent in variable-length chunks, where each chunk has a header that specifies the actual length of the chunk in bytes, and the final chunk of the data has a length of 0. You need to read the chunked headers in order to process the chunks properly. Please read [RFC 2616 Section 3.6.1](https://www.rfc-editor.org/rfc/rfc2616#section-3.6.1). Your logic needs to be more like the following pseudo-code:
```
send request;
status = recv() a line of text until CRLF;
parse status as needed;
response-code = extract response-code from status;
response-version = extract response-version from status;
do
{
line = recv() a line of text until CRLF;
if (line is blank)
break;
store line in headers list;
}
while (true);
parse headers list as needed;
if ((response-code is not in [1xx, 204, 304]) and (request was not "HEAD"))
{
if (Transfer-Encoding header is present and not "identity")
{
do
{
line = recv a line of text until CRLF;
length = extract length from line;
extensions = extract extensions from line;
process extensions as needed; // optional
if (length == 0)
break;
recv() length number of bytes into destination buffer;
recv() and discard bytes until CRLF;
}
while (true);
do
{
line = recv a line of text until CRLF;
if (line is blank)
break;
store line in headers list as needed;
}
while (true);
re-parse headers list as needed;
}
else if (Content-Length header is present)
{
recv() Content-Length number of bytes into destination buffer;
}
else if (Content-Type header starts with "multipart/")
{
boundary = extract boundary from Content-Type's "boundary" attribute;
recv() data into destination buffer until MIME termination boundary is reached;
}
else
{
recv() data into destination buffer until disconnected;
}
}
if (not disconnected)
{
if (response-version is "HTTP/1.1")
{
if (Connection header is "close")
close connection;
}
else
{
if (Connection header is not "keep-alive")
close connection;
}
}
check response-code for errors;
process destination buffer, per info in headers list;
```
|
Is a "object constructor" a shorter name for a "function with name `object` returning type `object`"?
I mean, it's more a matter of choosing words than any real difference between a function call and a constructor call. The thing which is named "constructor of an object" can also be named "function with name `object` returning type `object`".
One could argue that C++ does not allow one to have the same function and type name; however, there is a workaround for that. C++ has special syntactic sugar (which is named a constructor) with which you can create a function with name `object` returning type `object`. So I think a constructor can be seen and used as a free-standing function.
Are there some important semantic differences I am missing here?
|
A constructor is basically a method, yes, but it is a special method.
For example, in C++ a constructor isn't simply a function that returns a new instance of that type. If it was, inheritance wouldn't work. You couldn't call into the base constructors, because they'd return a new instance as well. You'd end up with a new instance of A, which is then returned to the constructor of B which creates a new instance of B, which is then returned to the constructor of C, etc.
Instead a constructor is more of an initialization method that is automatically called by the instance allocator. When you, say, call "new" to make an object on the heap, it allocates the memory for the type you asked for (using a memory allocator, like malloc), then calls the constructor method on it. The compiler has special rules for how, and in what order, that constructor can call the other constructors. For example, in C#, if you don't explicitly call a base constructor the compiler will add a call to the base default constructor.
It is those rules about how the compiler handles constructors that make it different from "a function named .ctor which returns an instance of the type".
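To make the distinction concrete, here is a minimal C++ sketch (the class names are hypothetical) showing that a derived-class constructor does not *receive* a new object from the base constructor; both constructors initialize subobjects of the same allocation, in base-to-derived order:
```
#include <iostream>

struct Base {
    int x;
    Base() : x(42) { std::cout << "Base initialized\n"; }
};

struct Derived : Base {
    // The compiler implicitly calls Base() before this body runs.
    // Base() does not return a new object; it initializes the Base
    // subobject living inside this same Derived allocation.
    Derived() { std::cout << "Derived initialized, x = " << x << "\n"; }
};

int main() {
    Derived d; // one allocation, two constructors run: Base, then Derived
    return 0;
}
```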
|
How can I populate a list box with many categories using recursive programming
I have a categories table which is set up to allow an infinite number of sub category levels. I would like to mimic the following:

It should be clarified that sub categories can have sub categories. E.g. Parent cat -> level 1 -> level 2 -> level 3 etc.
My categories table has two columns, `CategoryName` and `ParentID`.
This list box will be used when assigning the correct category to a product.
How can I write this?
**Edit**
In response to `thedugas` I had to modify your answer to work with my situation. I found some errors that needed to be fixed, but below is a final, working solution.
```
protected void Page_Load(object sender, EventArgs e)
{
using (DataClasses1DataContext db = new DataClasses1DataContext())
{
var c = db.Categories.Select(x => x);
List<Category> categories = new List<Category>();
foreach (var n in c)
{
categories.Add(new Category()
{
categoryID = n.categoryID,
title = n.title,
parentID = n.parentID,
isVisible = n.isVisible
});
}
List<string> xx = new List<string>();
foreach (Category cat in categories)
{
BuildCatString(string.Empty, cat, categories, xx);
}
ListBox1.DataSource = xx;
ListBox1.DataBind();
}
}
private void BuildCatString(string prefix, Category cat, IEnumerable<Category> categories, List<string> xx)
{
if (cat.parentID == 0)
{
xx.Add(cat.title);
prefix = cat.title;
}
var children = categories.Where(x => x.parentID == cat.categoryID);
if (children.Count() == 0)
{
return;
}
foreach (Category child in children)
{
if(prefix.Any())
{
xx.Add(prefix + "/" + child.title);
BuildCatString(prefix + "/" + child.title,
child, categories, xx);
}
}
}
```
Here is the almost finished work:

|
Nick asked me in a comment to [another question](https://stackoverflow.com/questions/4073713/is-there-a-good-linq-way-to-do-a-cartesian-product/4073806#4073806) how this sort of problem might be solved using LINQ to Objects without using any recursion. Easily done.
Let's suppose that we have a `Dictionary<Id, Category>` that maps ids to categories. Each category has three fields: Id, ParentId and Name. Let's presume that ParentId can be null, to mark those categories that are "top level".
The desired output is a sequence of strings where each string is the "fully-qualified" name of the category.
The solution is straightforward. We begin by defining a helper method:
```
public static IEnumerable<Category> CategoryAndParents(this Dictionary<Id, Category> map, Id id)
{
Id current = id;
while(current != null)
{
Category category = map[current];
yield return category;
current = category.ParentId;
}
}
```
And this helper method:
```
public static string FullName(this Dictionary<Id, Category> map, Id id)
{
return map.CategoryAndParents(id)
.Aggregate("", (string name, Category cat) =>
cat.Name + (name == "" ? "" : @"/") + name);
}
```
Or, if you prefer avoiding the potentially inefficient naive string concatenation:
```
public static string FullName(this Dictionary<Id, Category> map, Id id)
{
return string.Join(@"/", map.CategoryAndParents(id)
.Select(cat=>cat.Name)
.Reverse());
}
```
And now the query is straightforward:
```
fullNames = from id in map.Keys
select map.FullName(id);
listBox.DataSource = fullNames.ToList();
```
No recursion necessary.
|
How to set right method's parameter in Interface class
Each inherited class's methods need a different type of parameter.
In this case, how should I define the parameter in the interface method so that all child methods can accept it?
```
public interface IPayment
{
void MakePayment(OrderInfo orderInfo); // !!
void MakeRefund (OrderInfo orderInfo); // !!
}
public class OrderInfo
{
protected string OrderNo {get; set;}
protected string CustomerNo { get; set;}
protected decimal Amount {get; set;}
}
public class CreditCardPaymentInfo : OrderInfo
{
string CCNum {get; set;}
string ExpDate { get; set;}
}
public class GooglePaymentInfo : OrderInfo
{
string GoogleOrderID {get; set;}
}
public class PaypalPaymentInfo : OrderInfo
{
string PaypalID {get; set;}
}
public void MakePayment()
{
IPayment paymentModule;
// Get Order Info
if(orderType == "Paypal"){
paymentModule = new PaypalPayment();
PaypalPaymentInfo orderInfo = new PaypalPaymentInfo();
orderInfo.PaypalID = "TEST";
}else if(orderType == "Google"){
paymentModule = new GooglePayment();
GooglePaymentInfo orderInfo = new GooglePaymentInfo();
orderInfo.GoogleOrderID = "TEST";
}else{
paymentModule = new CreditCardPayment();
CreditCardPaymentInfo orderInfo = new CreditCardPaymentInfo();
orderInfo.CCNum = "1111111111111111";
orderInfo.ExpDate = "11/11";
}
orderInfo.OrderNo = "123";
orderInfo.CustomerNo = "ABC";
orderInfo.Amount = 12.20m;
paymentModule.MakePayment();
}
```
It produces an error:
> Error 1 'com.WebUI.Models.CreditCardPaymentInfo' does not implement interface member 'com.WebUI.Models.IPaymentProcess.makeRefund(WebUI.Models.RefundModel)'
***[Edit]***
Oh, I forgot my model code; it's like this:
```
public class CreditCardPayment: IPayment
{
public void MakePayment(CreditCardPaymentInfo creditCardPaymentInfo ){...}
//The parameter type is NOT OrderInfo
//public void MakePayment(OrderInfo orderInfo){...}
public void MakeRefund(CreditCardPaymentInfo creditCardPaymentInfo ){...}
}
```
but in the CreditCardPayment case, I need to pass a **CreditCardPaymentInfo parameter, not OrderInfo**, which only contains the common fields.
|
```
public interface IPayment<T>
where T: OrderInfo
{
void MakePayment( T orderInfo );
void MakeRefund ( T orderInfo );
}
```
Then:
```
public class CreditCardPayment
: IPayment<CreditCardPaymentInfo>
{
public void MakePayment( CreditCardPaymentInfo creditCardPaymentInfo ) {
// ...
}
public void MakeRefund( CreditCardPaymentInfo creditCardPaymentInfo ) {
// ...
}
}
```
And:
```
public class CreditCardPaymentInfo
: OrderInfo
{
public string CCNum { get; set; }
public string ExpDate { get; set; }
}
```
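A minimal usage sketch of the generic interface (the values are placeholders):
```
var payment = new CreditCardPayment();
var info = new CreditCardPaymentInfo {
    CCNum = "1111111111111111",
    ExpDate = "11/11"
};
// Compile-time checked: this CreditCardPayment only accepts CreditCardPaymentInfo.
payment.MakePayment( info );
```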
|
Random Sample with multiple probabilities in R
I need to get a sample of subjects from a list to assign them as a control group for a study, which has to have a similar composition of variables. I am trying to do this in R with the sample function, but I don't know how to specify the different probabilities for each variable.
Let's say I have a table with the following headers:
> ID Name Campaign Gender
I need a sample of 10 subjects with the following composition of Campaign attributes:
> D2D --> 25%
>
> F2F --> 38%
>
> TM --> 17%
>
> WW --> 21%
This means that in my data set, 25% of the subjects come from a door-to-door campaign (D2D), 38% from a face-to-face campaign (F2F), etc.
And the gender composition is as follows:
> Male --> 54%
>
> Female --> 46%
When I get a random sample of 10 subjects I need it to have a similar composition.
I have been searching for hours and the closest I was able to get to anything similar was this answer: [taking data sample in R](https://stackoverflow.com/questions/10240991/taking-data-sample-in-r)
but I need to assign more than one probability.
I am sure that this could help anyone who wants to get a representative sample from a Data Set.
|
It sounds like you are interested in taking a random stratified sample. You could do this using the `stratsample()` function from the `survey` package.
In the example below, I create some fake data to mimic what you have, then I define a function to take a random proportional stratified random sample, then I apply the function to the fake data.
```
library(survey) # provides stratsample()
# example data
ndf <- 1000
df <- data.frame(ID=sample(ndf), Name=sample(ndf),
Campaign=sample(c("D2D", "F2F", "TM", "WW"), ndf, prob=c(0.25, 0.38, 0.17, 0.21), replace=TRUE),
Gender=sample(c("Male", "Female"), ndf, prob=c(0.54, 0.46), replace=TRUE))
# function to take a random proportional stratified sample of size n
rpss <- function(stratum, n) {
props <- table(stratum)/length(stratum)
nstrat <- as.vector(round(n*props))
nstrat[nstrat==0] <- 1
names(nstrat) <- names(props)
stratsample(stratum, nstrat)
}
# take a random proportional stratified sample of size 10
selrows <- rpss(stratum=interaction(df$Campaign, df$Gender, drop=TRUE), n=10)
df[selrows, ]
```
|
How to compute rowSums in rcpp
I'm converting an *R* function into Rcpp, where I have used the R function `rowSums`, which appears not to be a valid sugar expression in Rcpp. I found code for an Rcpp version of `rowSums` [here](http://adv-r.had.co.nz/Rcpp.html). But I'm getting
> error: use of undeclared identifier
when I use `rowSumsC()` in my main Rcpp function.
Is there an easy fix?
Edit: The code
```
cppFunction(
"NumericMatrix Expcpp(NumericVector x, NumericMatrix w,
NumericVector mu, NumericVector var, NumericVector prob, int k) {
for (int i=1; i<k; ++i){
w(_,i) = prob[i] * dnorm(x,mu[i], sqrt(var[i]));
}
w = w / rowSums(w)
return w;
}")
```
|
[Rcpp officially added `rowSums` support in 0.12.8](https://github.com/RcppCore/Rcpp/blob/5a99a862c132b21f0c728919cc9a227f4c528d18/inst/NEWS.Rd#L78). Therefore, there is no need to use the `rowSumsC` function devised by Hadley in Advanced R.
Having said this, there are a few issues with the code.
---
Rcpp presently does *not* support `Matrix` to `Vector` or `Matrix` to `Matrix` computations. (Support for the latter may be added per [#583](https://github.com/RcppCore/Rcpp/issues/583), though if needed one should consider using [`RcppArmadillo`](https://cran.r-project.org/package=RcppArmadillo) or [`RcppEigen`](https://cran.r-project.org/package=RcppEigen).) Therefore, the following line is problematic:
```
w = w / rowSums(w)
```
To address this, first compute the `rowSums` and then standardize the matrix using a traditional `for` loop. **Note:** Looping in C++ is very fast, unlike in *R*.
```
NumericVector summed_by_row = rowSums(w);
for (int i = 0; i < k; ++i) {
w(_,i) = w(_,i) / summed_by_row; // elementwise: each entry divided by its own row's sum
}
```
---
Next, C++ indices begin at `0` not `1`. Therefore, the following for loop is problematic:
```
for (int i=1; i<k; ++i)
```
The fix:
```
for (int i=0; i<k; ++i)
```
---
Lastly, the parameters of the function can be reduced as some of the values are not relevant or are overridden.
The function declaration goes from:
```
NumericMatrix Expcpp(NumericVector x, NumericMatrix w,
NumericVector mu, NumericVector var, NumericVector prob, int k)
```
To:
```
NumericMatrix Expcpp(NumericVector x, NumericVector mu, NumericVector var, NumericVector prob) {
int n = x.size();
int k = mu.size();
NumericMatrix w = no_init(n,k);
.....
```
---
Putting all of the above feedback together, we get the desired function.
```
Rcpp::cppFunction(
'NumericMatrix Expcpp(NumericVector x, NumericVector mu, NumericVector var, NumericVector prob) {
int n = x.size();
int k = mu.size();
NumericMatrix w = no_init(n,k);
for (int i = 0; i < k; ++i) { // C++ indices start at 0
w(_,i) = prob[i] * dnorm(x, mu[i], sqrt(var[i]));
}
Rcpp::Rcout << "Before: " << std::endl << w << std::endl;
NumericVector summed_by_row = rowSums(w);
Rcpp::Rcout << "rowSum: " << summed_by_row << std::endl;
// divide each entry by its own row's sum, to mimic R's w / rowSums(w)
for (int i = 0; i < k; ++i) {
w(_,i) = w(_,i) / summed_by_row; // elementwise vector division down column i
}
Rcpp::Rcout << "After: " << std::endl << w << std::endl;
return w;
}')
set.seed(51231)
# Test values
n <- 2
x <- seq_len(n)
mu <- x
var <- x
prob <- runif(n)
mat <- Expcpp(x, mu, var, prob)
```
**Output**
```
Before:
0.0470993 0.125384
0.0285671 0.160996
rowSum: 0.172483 0.189563
After:
0.273066 0.726936
0.1507 0.849300
```
|
Passing list as multiple parameter URL in snap
Is it possible to pass a list parameter from the browser to a handler function in Snap?
How do I construct a multiple parameters URL from a list and send it to a handler function?
For instance, I need to delete some table rows or any other objects.
I can not do it with the usual REST route:
```
("/objects/:id", method DELETE deleteObject)
```
simply because there could be too many and deleting 100 rows one by one can get a bit tedious.
I choose the doomed objects via checkbox inputs; say rows [3,4,6,8] need to be deleted.
So how do I pass that list to the handler within the URL, and what would the route look like for the action?
### UPDATE
Well, I finally did it with jQuery and an Ajax call.
Snap's `getParams` function can process a multiple-parameter URL, but I still cannot figure out how to actually construct the URL without jQuery and Ajax.
I used JavaScript to collect the items to be deleted and build an array of the items.
I then used Ajax to construct the multiple-parameter URL and send it to the handler.
A few things to note with this method and Snap:
-- Snap's `getParams` function only supports the old-style multiple-parameter URL:
```
"a=1&a=2&a=3&a=4"
```
and not the new one:
```
"a[]=1&a[]=2&a[]=3&a[]=4"
```
which makes passing complex parameters impossible.
-- The route should be:
```
("/objects/", method DELETE deleteObject)
```
and not the:
```
("/objects/:ids", method DELETE deleteObject)
```
I did not post this as an answer to my question because I don't believe it is the only way to pass a multiple-parameter URL with Snap.
Although `getParams` can process it, my question still stands: how do I construct the URL and send it off to a handler?
For instance, Rails uses the `link_to` function within view logic to construct the URL. Snap does not use any logic inside templates, so how does it work then?
It just can't be that the only way to pass a multiple-parameter URL in Snap is with the help of JavaScript...?
Please, someone confirm this for me?
|
You're pretty much there. The following form...
```
<form action="/foo">
<ul>
<li>Row 1: <input type="checkbox" name="a" value="1"/></li>
<li>Row 2: <input type="checkbox" name="a" value="2"/></li>
<li>Row 3: <input type="checkbox" name="a" value="3"/></li>
<li>Row 4: <input type="checkbox" name="a" value="4"/></li>
<li>Row 5: <input type="checkbox" name="a" value="5"/></li>
</ul>
<input type="submit" name="submit" value="Submit"/>
</form>
```
...gets submitted like this.
```
http://localhost:8000/foo?a=2&a=3&a=5&submit=Submit
```
Then, inside your handler, this will get you a list of ByteStrings.
```
fooHandler = do
    as <- getsRequest (rqParam "a")
```
So this doesn't require JavaScript at all. But it works with JavaScript as well. If you use jQuery to submit a list like this...
```
var fieldData = { rows: [0,1,4], cols: [2,3,5] };
$.getJSON('http://localhost:8000/foo', fieldData, ...);
```
...then you'll have to make an adjustment for the brackets
```
rs <- getsRequest (rqParam "rows[]")
cs <- getsRequest (rqParam "cols[]")
```
|
How to use google speech recognition api in c#?
I want to read an audio file in C# and send it to the Google speech recognition API to get the speech-to-text answer.
My code is like this:
```
try
{
byte[] BA_AudioFile = GetFile(filename);
HttpWebRequest _HWR_SpeechToText = null;
_HWR_SpeechToText =
(HttpWebRequest)HttpWebRequest.Create(
"https://www.google.com/speech-api/v2/recognize?output=json&lang=" + DEFAULT_LANGUAGE + "&key=" + key);
_HWR_SpeechToText.Credentials = CredentialCache.DefaultCredentials;
_HWR_SpeechToText.Method = "POST";
_HWR_SpeechToText.ContentType = "audio/x-flac; rate=44100";
_HWR_SpeechToText.ContentLength = BA_AudioFile.Length;
Stream stream = _HWR_SpeechToText.GetRequestStream();
stream.Write(BA_AudioFile, 0, BA_AudioFile.Length);
stream.Close();
HttpWebResponse HWR_Response = (HttpWebResponse)_HWR_SpeechToText.GetResponse();
if (HWR_Response.StatusCode == HttpStatusCode.OK)
{
StreamReader SR_Response = new StreamReader(HWR_Response.GetResponseStream());
Console.WriteLine(SR_Response.ToString());
}
}
catch (Exception ex)
{
Console.WriteLine(ex.ToString());
}
```
This part uploads the file.wav and gets the response from the Google API; I found it on the Internet.
But my code always catches the exception "you must write content length bytes to the request stream before calling GetResponse" at `_HWR_SpeechToText.GetResponse();`, even though I already wrote the ContentLength.
So my question is: why does my program fail? Is it because of the Google link, or am I using HttpWebRequest inappropriately?
Is this the right place to get the API key?

|
Just tested this myself, below is a working solution if you have a valid API key.
```
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
using System.Net;
using System.IO;
namespace GoogleRequest
{
class Program
{
static void Main(string[] args)
{
try
{
FileStream fileStream = File.OpenRead("good-morning-google.flac");
MemoryStream memoryStream = new MemoryStream();
memoryStream.SetLength(fileStream.Length);
fileStream.Read(memoryStream.GetBuffer(), 0, (int)fileStream.Length);
byte[] BA_AudioFile = memoryStream.GetBuffer();
HttpWebRequest _HWR_SpeechToText = null;
_HWR_SpeechToText =
(HttpWebRequest)HttpWebRequest.Create(
"https://www.google.com/speech-api/v2/recognize?output=json&lang=en-us&key=YOUR_API_KEY_HERE");
_HWR_SpeechToText.Credentials = CredentialCache.DefaultCredentials;
_HWR_SpeechToText.Method = "POST";
_HWR_SpeechToText.ContentType = "audio/x-flac; rate=44100";
_HWR_SpeechToText.ContentLength = BA_AudioFile.Length;
Stream stream = _HWR_SpeechToText.GetRequestStream();
stream.Write(BA_AudioFile, 0, BA_AudioFile.Length);
stream.Close();
HttpWebResponse HWR_Response = (HttpWebResponse)_HWR_SpeechToText.GetResponse();
if (HWR_Response.StatusCode == HttpStatusCode.OK)
{
StreamReader SR_Response = new StreamReader(HWR_Response.GetResponseStream());
Console.WriteLine(SR_Response.ReadToEnd());
}
}
catch (Exception ex)
{
Console.WriteLine(ex.ToString());
}
Console.ReadLine();
}
}
}
```
|
Comparing double with literal value in C gives different results on 32 bit machines
Can someone please explain why:
```
double d = 1.0e+300;
printf("%d\n", d == 1.0e+300);
```
Prints "1" as expected on a 64-bit machine, but "0" on a 32-bit machine? (I got this using GCC 6.3 on Fedora 25)
To my best knowledge, floating point literals are of type `double` and there is no type conversion happening.
**Update:** This only occurs when using the `-std=c99` flag.
|
The C standard allows a floating-point constant to be silently propagated to `long double` *precision* in some expressions (notice: precision, not type). The corresponding macro is `FLT_EVAL_METHOD`, defined in `<float.h>` since C99.
As per C11 (N1570), §5.2.4.2.2, the semantics of value `2` are:

> evaluate all operations and **constants** to the range and precision of
> the `long double` type.
From the technical viewpoint, on the x86 architecture (32-bit) GCC compiles the given code into FPU instructions using x87 with 80-bit stack registers, while for the x86-64 architecture (64-bit) it prefers the SSE unit (as scalars within XMM registers).
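As a quick check, a small C program can print the evaluation method in effect (the reported value depends on the target and compiler flags):
```
#include <stdio.h>
#include <float.h>

int main(void) {
    /* 2 means operations and constants are evaluated to long double
       precision, which is what makes d == 1.0e+300 compare unequal:
       d holds the constant rounded to double, while the literal in the
       comparison is kept at long double precision. */
    printf("FLT_EVAL_METHOD = %d\n", FLT_EVAL_METHOD);

    double d = 1.0e+300;
    printf("%d\n", d == 1.0e+300);
    return 0;
}
```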
The current implementation was introduced in GCC 4.5 along with `-fexcess-precision=standard` option. From the [GCC 4.5 release notes](https://gcc.gnu.org/gcc-4.5/changes.html):
> GCC now supports handling floating-point excess precision arising from
> use of the x87 floating-point unit in a way that conforms to ISO C99.
> This is enabled with `-fexcess-precision=standard` and with standards
> conformance options such as `-std=c99`, and may be disabled using
> `-fexcess-precision=fast`.
|
How to restart/stop arangodb server on mac osx
I'm following the first section of the documentation for arangodb 2.7.3. I've made it as far as
```
brew install
/usr/local/sbin/arangod &
```
The very next section after install, on basic cluster setup, is written for folks using Linux. It asks you to modify the configuration file, which I've done, followed by restarting ArangoDB via `/etc/init.d/arangodb`. What is the correct way to restart the ArangoDB daemon on Mac OS X?
|
You should use the [regular homebrew way to start/stop services](https://serverfault.com/questions/194832/how-to-start-stop-restart-launchd-services-from-the-command-line) which also works for ArangoDB.
Quoting `brew install arangodb`:
To have launchd start arangodb at login:
```
ln -sfv /usr/local/opt/arangodb/*.plist ~/Library/LaunchAgents
```
Then to load arangodb now:
```
launchctl load ~/Library/LaunchAgents/homebrew.mxcl.arangodb.plist
```
Or, if you don't want/need launchctl, you can just run:
```
/usr/local/opt/arangodb/sbin/arangod --log.file -
```
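On more recent Homebrew versions, the `brew services` wrapper manages the same launchd plists, so (assuming the service is named `arangodb`, as the formula suggests) start/stop/restart reduce to:
```
brew services start arangodb
brew services stop arangodb
brew services restart arangodb
```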
You should refrain from killing services (be it ArangoDB or anything else) with `-9` unless it's really necessary - no clean shutdown will be possible, and you may lose data integrity. Killing without a specified signal will default to signal 15 (`SIGTERM`), which tells the service to shut itself down cleanly.
|
Django Charfield null=False Integrity Error not raised
I have a model:
```
class Discount(models.Model):
code = models.CharField(max_length=14, unique=True, null=False, blank=False)
email = models.EmailField(unique=True)
discount = models.IntegerField(default=10)
```
In my shell when I try and save a Discount object with no input, it doesn't raise an error. What am I doing wrong?
```
> e = Discount()
> e.save()
```
|
No default Django behavior will save `CHAR` or `TEXT` types as `Null` - it will always use an empty string (`''`). `null=False` has no effect on these types of fields.
`blank=False` means that the field will be required by default when the model is used to render a ModelForm. It does not prevent you from manually saving a model instance without that value.
What you want here is a custom model validator:
```
from django.core.exceptions import ValidationError
def validate_not_empty(value):
if value == '':
        raise ValidationError('%(value)s is empty!', params={'value': value})
```
Then add the validator to your model:
```
code = models.CharField(max_length=14, unique=True, validators=[validate_not_empty])
```
This will take care of the form validation you want, but validators don't automatically run when a model instance is saved. [Further reading here.](https://docs.djangoproject.com/en/dev/ref/validators/#how-validators-are-run) If you want to validate this every time an instance is saved, I suggest overriding the default `save` behavior, checking the value of your string there, and interrupting the save by raising an error if necessary. [Good post on overriding `save` here.](https://stackoverflow.com/questions/9953427/django-custom-save-model)
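As a minimal sketch of that approach: calling `full_clean()` inside `save` is one way to trigger the validators on every save (whether you want validation on *every* save is a design decision):
```
class Discount(models.Model):
    code = models.CharField(max_length=14, unique=True, validators=[validate_not_empty])
    email = models.EmailField(unique=True)
    discount = models.IntegerField(default=10)

    def save(self, *args, **kwargs):
        self.full_clean()  # runs field, unique, and custom validators; raises ValidationError
        super(Discount, self).save(*args, **kwargs)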
[Further reading on `null`:](https://docs.djangoproject.com/en/dev/ref/models/fields/#null)
>
> Avoid using null on string-based fields such as CharField and TextField. If a string-based field has null=True, that means it has two possible values for “no data”: NULL, and the empty string. In most cases, it’s redundant to have two possible values for “no data;” the Django convention is to use the empty string, not NULL. One exception is when a CharField has both unique=True and blank=True set. In this situation, null=True is required to avoid unique constraint violations when saving multiple objects with blank values.
>
>
>
[And on validators.](https://docs.djangoproject.com/en/dev/ref/validators/)
|
Changing Joomla Database
I'm new with Joomla and I need to know:
- What happen if i change the default joomla installation database for
anotherone?
- What is stored in that database?
- Could I use my own database according to my needs?
|
```
What happen if i change the default joomla installation database for another one?
```
Joomla will be installed to the given database, nothing else. If you change the database after you have installed Joomla, it's not going to work: Joomla has a uniquely structured database.
```
What is stored in that database?
```
Data which is needed to run the Joomla CMS. Component, plugin, and module data is also stored there.
```
Could I use my own database according to my need ?
```
Yes, you can do anything you like to the tables you create. It's not a good idea to change the core ones, though.
Why don't you try it yourself and see? If you have any specific issues, let me know. This is too broad to explain fully here.
Read the Joomla documentation and follow a few tutorials.
|
How to improve the performance of CNN Model for a specific Dataset? Getting Low Accuracy on both training and Testing Dataset
We were given an assignment in which we were supposed to implement our own neural network and two other, already developed, neural networks. I have done that; however, this isn't a requirement of the assignment, but I would still like to know what steps/procedures I can follow to improve the accuracy of my models.
I am fairly new to deep learning and machine learning as a whole, so I don't have much background.
The given dataset contains a total of 15 classes (airplane, chair etc.) and we are provided with about 15 images of each class in training dataset. The testing dataset has 10 images of each class.
Complete github repository of my code can be found here (Jupyter Notebook file): <https://github.com/hassanashas/Deep-Learning-Models>
I tried it out with my own CNN first (made one using YouTube tutorials).
Code is as follows,
```
X_train = X_train/255.0
model = Sequential()
model.add(Conv2D(64, (3, 3), input_shape = X_train.shape[1:]))
model.add(Activation("relu"))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Conv2D(128, (3, 3)))
model.add(Activation("relu"))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Flatten())
model.add(Dense(64))
model.add(Dense(16)) # added 16 because model.fit gave an error with 15
model.add(Activation('softmax'))
```
For the compiling of Model,
```
from tensorflow.keras.optimizers import SGD
model.compile(loss='sparse_categorical_crossentropy',
optimizer=SGD(learning_rate=0.01),
metrics=['accuracy'])
```
I used sparse categorical crossentropy because my "y" labels are integer values, ranging from 1 to 15.
I ran this model with following way,
```
model_fit = model.fit(X_train, y_train, batch_size=32, epochs=30, validation_split=0.1)
```
It gave me an accuracy of **0.2030** on `training dataset` and only **0.0733** on the `testing dataset` (both the datasets are present in the github repository)
Then, I tried out the **AlexNet CNN** (followed a Youtube tutorial for its code)
I ran the AlexNet on the same dataset for 15 epochs. It improved the accuracy on training dataset to **0.3317**, however accuracy on testing dataset was even worse than my own CNN, at only **0.06**
Afterwards, I tried out the **VGG16 CNN**, again following a Youtube Tutorial.
I ran the code on Google Colab for 10 Epochs. It managed to improve to **100%** accuracy on `training dataset` in the 8th epoch. But this model gave the worst accuracy of all three on testing dataset with only **0.0533**
I am unable to understand this contrasting behavior of all these models. I have tried out different epoch values, loss functions etc. but the current ones gave the best result relatively. My own CNN was able to get to 100% accuracy when I ran it on 100 epochs (however, it gave very poor results on the testing dataset)
What can I do to improve the performance of these models? And specifically, what are the few crucial things one should always try in order to improve the efficiency of a deep learning model? I have looked up multiple similar questions on Stack Overflow, but almost all of them were working on datasets provided by TensorFlow, like the MNIST dataset, and I didn't find much help there.
|
**Disclaimer:** it's been a few years since I've played with CNNs myself, so I can only pass on some general advice and suggestions.
First of all, I would like to talk about the results you've gotten so far. The first two networks you've trained seem to at least learn something from the training data because they perform better than just randomly guessing.
However: the performance on the test data indicates that the network has not learned anything *meaningful* because those numbers suggest the network is as good as (or only marginally better than) a random guess.
As for the third network: high accuracy for training data combined with low accuracy for testing data means that your network has overfitted. This means that the network has memorized the training data but has not learned any meaningful patterns.
There's no point in continuing to train a network that has started overfitting. So once the training accuracy increases and testing accuracy decreases for a few epochs consecutively, you can stop training.
# Increase the dataset size
Neural networks rely on loads of good training data to learn patterns from. Your dataset contains 15 classes with 15 images each, that is very little training data.
Of course, it would be great if you could get hold of additional high-quality training data to expand your dataset, but that is not always feasible. So a different approach is to artificially expand your dataset. You can easily do this by applying a bunch of transformations to the original training data. Think about: mirroring, rotating, zooming, and cropping.
Remember to not just apply these transformations willy-nilly, they must make sense! For example, if you want a network to recognize a chair, do you also want it to recognize chairs that are upside down? Or for detecting road signs: mirroring them makes no sense because the text, numbers, and graphics will never appear mirrored in real life.
From the brief description of the classes you have (planes and chairs and whatnot...), I think mirroring horizontally could be the best transformation to apply initially. That will already double your training dataset size.
Also, keep in mind that an artificially inflated dataset is never as good as one of the same size that contains all authentic, real images. A mirrored image contains much of the same information as its original, we merely hope it will delay the network from overfitting and hope that it will learn the important patterns instead.
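For example, a minimal Keras sketch of such augmentation, reusing the `X_train`/`y_train` arrays from your code (`X_val`/`y_val` stand for an assumed held-out split, since `validation_split` doesn't work with generators, and the parameter values are only starting points):
```
from tensorflow.keras.preprocessing.image import ImageDataGenerator

datagen = ImageDataGenerator(
    horizontal_flip=True,  # mirror left/right
    rotation_range=10,     # small random rotations, in degrees
    zoom_range=0.1,        # mild random zoom in/out
)

# feed augmented batches to the model instead of the raw arrays
model.fit(datagen.flow(X_train, y_train, batch_size=32),
          epochs=30, validation_data=(X_val, y_val))
```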
# Lower the learning rate
This is a bit of side note, but try lowering the learning rate. Your network seems to overfit in only a few epochs which is very fast. Obviously, lowering the learning rate will not combat overfitting but it will happen more slowly. This means that you can hopefully find an epoch with better overall performance before overfitting takes place.
Note that a lower learning rate will never magically make a bad-performing network good. It's just one way to locate a set of parameters that performs a tad bit better.
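Concretely, with your existing compile call that could be as small as this (0.001 is just a guess to start from):
```
model.compile(loss='sparse_categorical_crossentropy',
              optimizer=SGD(learning_rate=0.001),  # 10x lower than your 0.01
              metrics=['accuracy'])
```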
# Randomize the training data order
During training, the training data is presented in batches to the network. This often happens in a fixed order over all iterations. This may lead to certain biases in the network.
First of all, make sure that the training data is shuffled at least once. You do not want to present the classes one by one, for example first all plane images, then all chairs, etc... This could lead to the network unlearning much of the first class by the end of each epoch.
Also, reshuffle the training data between epochs. This will again avoid potential minor biases because of training data order.
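Note that Keras' `model.fit` already reshuffles batches each epoch by default (`shuffle=True`), but `validation_split` slices the data *before* shuffling, so a one-time shuffle of your arrays is still worth doing (a sketch):
```
import numpy as np

# shuffle once before fit(); otherwise validation_split=0.1 takes the LAST 10%
# of the arrays, which may be a single class if the data is ordered
perm = np.random.permutation(len(X_train))
X_train, y_train = X_train[perm], y_train[perm]
```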
# Improve the network design
You've designed a convolutional neural network with only two convolution layers and two fully connected layers. Maybe this model is too shallow to learn to differentiate between the different classes.
Know that the convolution layers tend to first pick up small visual features and then tend to combine these in higher level patterns. So maybe adding a third convolution layer may help the network identify more meaningful patterns.
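In the style of your existing code, a third block could look like this (the filter count of 256 is just an assumption to experiment with; it goes before `Flatten()`):
```
model.add(Conv2D(256, (3, 3)))
model.add(Activation("relu"))
model.add(MaxPooling2D(pool_size=(2, 2)))
```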
Obviously, network design is something you'll have to experiment with and making networks overly deep or complex is also a pitfall to watch out for!
|
join multiple iterators in java
Does anybody know how to join multiple iterators in Java? The solution I found iterates through one iterator first and then moves on to the next one. However, what I want is that when next() gets called, it first returns the first element from the first iterator. The next time next() gets called, it returns the first element from the second iterator, and so on.
Thanks
|
Using [Guava's](http://github.com/google/guava/) [`AbstractIterator`](https://google.github.io/guava/releases/snapshot-jre/api/docs/com/google/common/collect/AbstractIterator.html) for simplicity:
```
final List<Iterator<E>> theIterators;
return new AbstractIterator<E>() {
private Queue<Iterator<E>> queue = new LinkedList<Iterator<E>>(theIterators);
@Override protected E computeNext() {
while(!queue.isEmpty()) {
Iterator<E> topIter = queue.poll();
if(topIter.hasNext()) {
E result = topIter.next();
queue.offer(topIter);
return result;
}
}
return endOfData();
}
};
```
This will give you the desired "interleaved" order, it's smart enough to deal with the collections having different sizes, and it's quite compact. (You may wish to use `ArrayDeque` in place of `LinkedList` for speed, assuming you're on Java 6+.)
If you really, really can't tolerate another third-party library, you can more or less do the same thing with some additional work, like so:
```
return new Iterator<E>() {
private Queue<Iterator<E>> queue = new LinkedList<Iterator<E>>(theIterators);
public boolean hasNext() {
// If this returns true, the head of the queue will have a next element
while(!queue.isEmpty()) {
if(queue.peek().hasNext()) {
return true;
}
queue.poll();
}
return false;
}
public E next() {
if(!hasNext()) throw new NoSuchElementException();
Iterator<E> iter = queue.poll();
E result = iter.next();
queue.offer(iter);
return result;
}
public void remove() { throw new UnsupportedOperationException(); }
};
```
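A quick usage sketch (here `interleave` is a hypothetical method wrapping either snippet above):
```
import java.util.Arrays;
import java.util.Iterator;
import java.util.List;

public class InterleaveDemo {
    public static void main(String[] args) {
        List<Iterator<Integer>> theIterators = Arrays.asList(
                Arrays.asList(1, 4, 6).iterator(),
                Arrays.asList(2, 5).iterator(),
                Arrays.asList(3).iterator());

        // interleave(...) stands for either snippet above, wrapped in a method
        Iterator<Integer> it = interleave(theIterators);
        while (it.hasNext()) {
            System.out.print(it.next() + " "); // prints: 1 2 3 4 5 6
        }
    }
}
```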
For reference, the "all of iter1, all of iter2, etc" behavior can also be obtained using [`Iterators.concat(Iterator<Iterator>)`](https://google.github.io/guava/releases/snapshot-jre/api/docs/com/google/common/collect/Iterators.html#concat-java.util.Iterator-) and its overloads.
|
Javascript difference between {} and []
I am working with JavaScript and I ran into this:
if i do
```
let object = {};
object.length
```
It will complain that object.length is undefined
But
```
let object = [];
object.length
```
works
Any know why?
Thanks
|
In JavaScript, virtually everything is an object (there are exceptions however, most notably `null` and `undefined`) which means that nearly all values have properties and methods.
```
var str = 'Some String';
str.length // 11
```
`{}` is shorthand for creating an empty object. You can consider this as the base for other object types. `Object` provides the last link in the prototype chain that can be used by all other objects, such as an `Array`.
`[]` is shorthand for creating an empty array. While also a data structure similar to an object (in fact `Object` as mentioned previously is in its prototype chain), it is a special form of an object that stores sequences of values.
```
typeof [] // "object"
```
When an array is created it automatically has a special property added to it that will reflect the number of elements stored: [`length`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Array#Relationship_between_length_and_numerical_properties). This value is [automatically updated](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Array#Relationship_between_length_and_numerical_properties) by certain methods while also used by others.
```
var arr = [];
arr.hasOwnProperty('length'); // true
arr.length; // 0
```
In fact, aside from the engine using the index properties and `length` for those methods, there is nothing special about adding your own properties to arrays (although there are few if any good reasons to do so).
```
var arr = [];
arr.foo = 'hello';
arr // []
arr.foo // "hello"
arr.length // 0
```
This is not true of an `Object` though. It does not have a `length` property added to it as it does not expect a sequence of values. This is why when you try to access `length` the value returned is `undefined` which is the same for any unknown property.
```
var obj = {};
obj.hasOwnProperty('length'); // false
obj.length; // undefined
obj.foo; // undefined
```
So, basically an array is a special data structure that expects a sequence of data. Because of this a property is added automatically that represents the length of the data structure.
BONUS: You can use `length` to trim an array:
```
var a = [1,2,3,4,5];
a.length; // 5
a.length = 2;
a; // [1, 2]
a.length; // 2
```
|
How can we make the saveFrame() method in ExtractMpegFramesTest more efficient?
[edit] Reformatting into question and answer format following fadden@ suggestion.
In [ExtractMpegFramesTest\_egl14.java.txt](http://bigflake.com/mediacodec/ExtractMpegFramesTest_egl14.java.txt), method saveFrame(), there is a loop for reordering RGBA into ARGB for Bitmap png compression (see the quotes from that file below). How can this be optimised?
```
// glReadPixels gives us a ByteBuffer filled with what is essentially big-endian RGBA
// data (i.e. a byte of red, followed by a byte of green...). We need an int[] filled
// with little-endian ARGB data to feed to Bitmap.
//
```
...
```
// So... we set the ByteBuffer to little-endian, which should turn the bulk IntBuffer
// get() into a straight memcpy on most Android devices. Our ints will hold ABGR data.
// Swapping B and R gives us ARGB. We need about 30ms for the bulk get(), and another
// 270ms for the color swap.
```
...
```
for (int i = 0; i < pixelCount; i++) {
int c = colors[i];
colors[i] = (c & 0xff00ff00) | ((c & 0x00ff0000) >> 16) | ((c & 0x000000ff) << 16);
}
```
|
It turns out there's an even faster approach.
Using the suggestion in @elmiguelao's answer, I modified the fragment shader to do the pixel swap. This allowed me to remove the swap code from saveFrame(). Since I no longer needed a temporary copy of the pixels in memory, I eliminated the `int[]` buffer entirely, switching from this:
```
int[] colors = [... copy from mPixelBuf, swap ...]
Bitmap.createBitmap(colors, mWidth, mHeight, Bitmap.Config.ARGB_8888);
```
to this:
```
Bitmap bmp = Bitmap.createBitmap(mWidth, mHeight, Bitmap.Config.ARGB_8888);
bmp.copyPixelsFromBuffer(mPixelBuf);
```
As soon as I did that, all of my colors were wrong.
It turns out that `Bitmap#copyPixelsFromBuffer()` wants the pixels in RGBA order, *not* ARGB order. The values coming out of `glReadPixels()` are already in the right format. So by doing it this way I avoid the swap, avoid an unnecessary copy, and don't need to tweak the fragment shader at all.
|
CSS difference between attribute selectors with tilde and star?
Given the following CSS selectors
```
[attribute~=value] { }
[attribute*=value] { }
```
Should both of the selectors above do exactly the same thing? Or is there a difference?
I believe that there is some kind of a difference, but what? The only one I spot is that the first is in the CSS 2 spec and the second in the CSS 3 spec.
Is there anything else?
[**Fiddle**](http://jsfiddle.net/4g0vx5t2/2/)
|
[The asterisk attribute selector `*=`](https://developer.mozilla.org/en-US/docs/Web/CSS/Attribute_selectors#Syntax) matches any substring occurrence. You can think of it as a string contains call.
| Input | Matches `*=bar` |
| --- | --- |
| `foo` | No |
| `foobar` | Yes |
| `foo bar` | Yes |
| `foo barbaz` | Yes |
| `foo bar baz` | Yes |
[The tilde attribute selector `~=`](https://developer.mozilla.org/en-US/docs/Web/CSS/Attribute_selectors#Syntax) matches whole words only.
| Input | Matches `~=bar` |
| --- | --- |
| `foo` | No |
| `foobar` | No |
| `foo bar` | Yes |
| `foo barbaz` | No |
| `foo bar baz` | Yes |
```
div {
padding: 10px;
border: 2px solid white;
}
[attribute*=bar] {
background: lightgray;
}
[attribute~=bar] {
border-color: red;
}
```
```
<div>no attribute</div>
<div attribute="foo">attribute="foo"</div>
<div attribute="foobar">attribute="foobar"</div>
<div attribute="foo bar">attribute="foo bar"</div>
<div attribute="foo barbaz">attribute="foo barbaz"</div>
<div attribute="foo bar baz">attribute="foo bar baz"</div>
```
|
Why can't I set a default output audio device in Ubuntu 19.10?
My laptop with Ubuntu 19.04 detected and set my HDMI output on every boot. But since I upgraded to 19.10, I need to set it manually every boot as follows:
From

to

I already tried every single solution proposed in [How do you set a default audio output device in Ubuntu 18.04?](https://askubuntu.com/questions/1038490/how-do-you-set-a-default-audio-output-device-in-ubuntu-18-04), but apparently there's something different in 19.10. What I did specifically:
1. pactl
```
$ pactl list short sinks
9 alsa_output.pci-0000_00_1f.3.hdmi-stereo-extra1 module-alsa-card.c s16le 2ch 48000Hz SUSPENDED
$ pactl set-default-sink 'alsa_output.pci-0000_00_1f.3.hdmi-stereo-extra1'
```
2. Add either the device number and device name in `/etc/pulse/default.pa` like:
```
set-default-sink 9
and
set-default-sink alsa_output.pci-0000_00_1f.3.hdmi-stereo-extra1
and
set-default-sink 'alsa_output.pci-0000_00_1f.3.hdmi-stereo-extra1'
```
3. Comment the line `load-module module-switch-on-connect`.
4. Switch profiles in PulseAudio Volume Control to HDMI2.
None of these persisted after reboot.
|
This is a bug reported here three days ago:
- [Audio / Sound reverts to HDMI when power event occurs](https://bugs.launchpad.net/ubuntu/+source/pulseaudio/+bug/1850887)
>
> PulseAudio reverts the sound to HDMI all the time when a HDMI related
> power event occurs. That means, although I have set another USB sound
> device plugged in and set as default under sound settings, when an
> application like Kodi or the system shuts off the HDMI monitor and I
> reactivate the monitor, the sound is set to HDMI output again and
> again.
>
>
> That probably has to do with the fix to the reported Bug # 1711101 and
> definitely not happened at Ubuntu 19.04. I switched to Ubuntu 19.10
> two days ago.
>
>
> Setting the USB device as default does not help, even when done by
> PulseAudio mixer (gui) and removing HDMI output from the alternatives
> option.
>
>
>
Only one person was affected by the bug (on November 4, 2019). Visit the link, click that it affects you, and subscribe to the bug email.
11 people are now affected as of November 8, 2019. Comment #11 presents a solution though:
>
> I think i found a solution. I'm commenting this lines
>
>
>
> ```
> #load-module module-switch-on-port-available
> #load-module module-switch-on-connect
>
> ```
>
> in `etc/pulse/default.pa` and all
> work for me.
>
>
>
|
ipyparallel parallel function calls example in Jupyter Lab
I'm finding it difficult to figure out how to use ipyparallel from jupyter lab to execute two functions in parallel. Could someone please give me an example of how this should be done? For example, running these two functions at the same time:
```
import time
def foo():
print('foo')
time.sleep(5)
def bar():
print('bar')
time.sleep(10)
```
|
So first you will need to ensure that `ipyparallel` is installed and an `ipcluster` is running - [instructions here](https://ipython-books.github.io/59-distributing-python-code-across-multiple-cores-with-ipython/).
Once you have done that, here is some adapted code that will run your two functions in parallel:
```
from ipyparallel import Client
rc = Client()
def foo():
import time
time.sleep(5)
return 'foo'
def bar():
import time
time.sleep(10)
return 'bar'
res1 = rc[0].apply(foo)
res2 = rc[1].apply(bar)
results = [res1, res2]
while not all(map(lambda ar: ar.ready(), results)):
pass
print(res1.get(), res2.get())
```
N.B. I removed the print statements, since you can't print from the child processes back into the parent Jupyter session, but we can of course return a result. I block here until both results are complete, but you could instead print each result as it becomes available.
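If I recall the `ipyparallel` API correctly, `Client.wait` can also replace the busy loop:
```
rc.wait(results)  # blocks until both AsyncResults are done
print(res1.get(), res2.get())
```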
|
C# max() not returning the natural highest Value
I have a Dictionary of `<string, Point>` and the keys in this dictionary look like this:
- char\_1\_1
- char\_1\_2
- ...
- char\_1\_9
- char\_1\_10
Now I am using the following statement to retrieve the highest value of the keys:
```
pointsWhichBelongTogether.Keys.Where(key => key.Contains(searchStringPattern[0] + "_" + searchStringPattern[1] + "_")).OrderBy(x => x, new NaturalStringComparer()).Max()
```
Which uses the following Compare-Class:
```
public class NaturalStringComparer : IComparer<string>
{
private static readonly Regex _re = new Regex(@"(?<=\D)(?=\d)|(?<=\d)(?=\D)", RegexOptions.Compiled);
public int Compare(string x, string y)
{
x = x.ToLower();
y = y.ToLower();
if (string.Compare(x, 0, y, 0, Math.Min(x.Length, y.Length)) == 0)
{
if (x.Length == y.Length) return 0;
return x.Length < y.Length ? -1 : 1;
}
var a = _re.Split(x);
var b = _re.Split(y);
int i = 0;
while (true)
{
int r = PartCompare(a[i], b[i]);
if (r != 0) return r;
++i;
}
}
private static int PartCompare(string x, string y)
{
int a, b;
if (int.TryParse(x, out a) && int.TryParse(y, out b))
return a.CompareTo(b);
return x.CompareTo(y);
}
}
```
So I assumed that the list is naturally sorted and that `Max()` just takes the highest value. But `max()` returns `char_1_9` as the highest value. Does this `max()` function re-sort?
|
You have two options:
- Take the `Last` from the sorted list or
- Order descending and take the first
Right now you sort using your comparer, but `Max` does not use it; `Max` uses the default comparer to find the maximum.
```
pointsWhichBelongTogether.Keys.Where(key => key.Contains(searchStringPattern[0] + "_" + searchStringPattern[1] + "_"))
.OrderBy(x => x, new NaturalStringComparer()).Last();
```
or
```
pointsWhichBelongTogether.Keys.Where(key => key.Contains(searchStringPattern[0] + "_" + searchStringPattern[1] + "_"))
.OrderByDescending(x => x, new NaturalStringComparer()).First();
```
---
Edit: in a previous version I suggested passing the comparer to `Max`, but it seems there is no overload that takes a comparer, only a selector.
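(For readers on newer frameworks: .NET 6, if I recall correctly, added a `Max` overload that accepts an `IComparer<T>`, so there the original one-liner works as intended:)
```
// .NET 6+ only: Max overload taking an IComparer<T>
pointsWhichBelongTogether.Keys.Where(key => key.Contains(searchStringPattern[0] + "_" + searchStringPattern[1] + "_"))
    .Max(new NaturalStringComparer());
```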
|
How to make a line or curve emit light with Three JS?
I'm looking for a way to make a line emit light, creating an effect such as this:
[](https://i.stack.imgur.com/NJ5Cj.png)
Here is what I am doing to create my line:
```
createLine() {
// Create a curve with the points
var curve = new THREE.CatmullRomCurve3(this.points);
// Get the points
var curvePoints = curve.getPoints(this.pointCount);
// Create the geometry
var curveGeometry = new THREE.BufferGeometry().setFromPoints(curvePoints);
// Create the material
var curveMaterial = new THREE.LineBasicMaterial({
color : 0x00AAFF,
});
// Create the line
var line = new THREE.Line(curveGeometry, curveMaterial);
return line;
}
```
|
Three.js does not give materials a "glow" effect on their own. What you need is a post-processing effect called "bloom", which can be added after the first render pass. See this example: <https://threejs.org/examples/?q=bloom#webgl_postprocessing_unreal_bloom>
That example essentially does the following:
1. Sets up effect composer.
2. Renders normal scene
3. Takes result of first render, and adds "bloom" effect to it
4. Renders that result to screen
In that example's source code, the magic happens on lines 104 - 115, here it is commented for clarity:
```
// Set up an effect composer
composer = new THREE.EffectComposer( renderer );
composer.setSize( window.innerWidth, window.innerHeight );
// Tell composer that first pass is rendering scene to buffer
var renderScene = new THREE.RenderPass( scene, camera );
composer.addPass( renderScene );
// Tell composer that second pass is adding bloom effect
var bloomPass = new THREE.UnrealBloomPass( new THREE.Vector2( window.innerWidth, window.innerHeight ), 1.5, 0.4, 0.85 );
composer.addPass( bloomPass );
// Tells composer that second pass gets rendered to screen
bloomPass.renderToScreen = true;
```
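One detail that's easy to miss: once the composer is set up, the animation loop must render through the composer instead of the renderer, otherwise the bloom pass never runs (a minimal sketch):
```
function animate() {
    requestAnimationFrame( animate );
    // render through the composer so the bloom pass is applied
    composer.render();
}
animate();
```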
|
What's the difference between : 1. (ajaxStart and ajaxSend) and 2. (ajaxStop and ajaxComplete)?
Basically that's the question (parentheses are important)
|
[`.ajaxStart()`](http://api.jquery.com/ajaxStart/) and [`.ajaxStop()`](http://api.jquery.com/ajaxStop/) are for *all* requests **together**, [`ajaxStart`](http://api.jquery.com/ajaxStart/) fires when the *first* simultaneous request starts, [`ajaxStop`](http://api.jquery.com/ajaxStop/) fires then the *last* of that simultaneous batch finishes.
So say you're making 3 requests all at once, `ajaxStart()` fires when the first starts, `ajaxStop()` fires when the last one (they don't necessarily finish in order) comes back.
These events *don't* get any arguments because they're for a batch of requests:
```
.ajaxStart( handler() )
.ajaxStop( handler() )
```
---
[`.ajaxSend()`](http://api.jquery.com/ajaxSend/) and [`.ajaxComplete()`](http://api.jquery.com/ajaxComplete/) fire once **per request** as they send/complete. This is why these handlers are passed arguments and the global/batch ones are not:
```
.ajaxSend( handler(event, XMLHttpRequest, ajaxOptions) )
.ajaxComplete( handler(event, XMLHttpRequest, ajaxOptions) )
```
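A small usage sketch (the `#spinner` element is hypothetical; note that as of jQuery 1.9 these global handlers must be attached to `document`):
```
// batch-level: show a spinner while ANY request is in flight
$(document).ajaxStart(function() { $('#spinner').show(); });
$(document).ajaxStop(function() { $('#spinner').hide(); });

// per-request: fires once for every completed request, with details
$(document).ajaxComplete(function(event, xhr, settings) {
    console.log('completed: ' + settings.url);
});
```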
For a single documentation source, the [Global Ajax Events](http://api.jquery.com/category/ajax/global-ajax-event-handlers/) section of [the API](http://api.jquery.com/) is what you're after.
|
More idiomatic line-by-line handling of a file in Clojure
I'm trying to read a file that (may or may not) have [YAML frontmatter](http://jekyllrb.com/docs/frontmatter/) line-by-line using Clojure, and return a hashmap with two vectors, one containing the frontmatter lines and one containing everything else (i.e., the body).
An example input file would look like this:
```
---
key1: value1
key2: value2
---
Body text paragraph 1
Body text paragraph 2
Body text paragraph 3
```
I have functioning code that does this, but to my (admittedly inexperienced with Clojure) nose, it reeks of code smell.
```
(defn process-file [f]
(with-open [rdr (java.io.BufferedReader. (java.io.FileReader. f))]
(loop [lines (line-seq rdr) in-fm 0 frontmatter [] body []]
(if-not (empty? lines)
(let [line (string/trim (first lines))]
(cond
(zero? (count line))
(recur (rest lines) in-fm frontmatter body)
(and (< in-fm 2) (= line "---"))
(recur (rest lines) (inc in-fm) frontmatter body)
(= in-fm 1)
(recur (rest lines) in-fm (conj frontmatter line) body)
:else
(recur (rest lines) in-fm frontmatter (conj body line))))
(hash-map :frontmatter frontmatter :body body)))))
```
Can someone point me to a more elegant way to do this? I'm going to be doing a decent amount of line-by-line parsing in this project, and I'd like a more idiomatic way of going about it if possible.
|
Firstly, I'd put line-processing logic in its own function to be called from a function actually reading in the files. Better yet, you can make the function dealing with IO take a function to map over the lines as an argument, perhaps along these lines:
```
(require '[clojure.java.io :as io])
(defn process-file-with [f filename]
(with-open [rdr (io/reader (io/file filename))]
(f (line-seq rdr))))
```
Note that this arrangement makes it the duty of `f` to realize as much of the line seq as it needs before it returns (because afterwards `with-open` will close the underlying reader of the line seq).
Given this division of responsibilities, the line processing function might look like this, assuming the first `---` must be the first non-blank line and all blank lines are to be skipped (as they would be when using the code from the question text):
```
(require '[clojure.string :as string])
(defn process-lines [lines]
(let [ls (->> lines
(map string/trim)
(remove string/blank?))]
(if (= (first ls) "---")
(let [[front sep-and-body] (split-with #(not= "---" %) (next ls))]
{:front (vec front) :body (vec (next sep-and-body))})
{:body (vec ls)})))
```
Note the calls to `vec` which cause all the lines to be read in and returned in a vector or pair of vectors (so that we can use `process-lines` with `process-file-with` without the reader being closed too soon).
Because reading lines from an actual file on disk is now decoupled from processing a seq of lines, we can easily test the latter part of the process at the REPL (and of course this can be made into a unit test):
```
;; could input this as a single string and split, of course
(def test-lines
["---"
"key1: value1"
"key2: value2"
"---"
""
"Body text paragraph 1"
""
"Body text paragraph 2"
""
"Body text paragraph 3"])
```
Calling our function now:
```
user> (process-lines test-lines)
{:front ["key1: value1" "key2: value2"],
 :body ["Body text paragraph 1"
        "Body text paragraph 2"
        "Body text paragraph 3"]}
```
|
Returning XML from query result in servlet
I'm trying to return an XML file based on my query results. I'm very new to this so I'm not really sure where I'm going wrong. Is this a realistic way to go about doing this or is there something simpler? Right now I'm getting these exceptions:
```
Error performing query: javax.servlet.ServletException: org.xml.sax.SAXParseException: Content is not allowed in prolog.
```
If I run my query in isql\*plus, it does execute
```
import java.io.*;
import java.util.*;
import java.sql.*; // JDBC packages
import javax.servlet.*;
import javax.servlet.http.*;
import javax.xml.parsers.*;
import org.xml.sax.*;
import org.xml.sax.helpers.*;
public class step5 extends HttpServlet {
public static final String DRIVER = "sun.jdbc.odbc.JdbcOdbcDriver";
public static final String URL = "jdbc:odbc:rreOracle";
public static final String username = "cm485a10";
public static final String password = "y4e8f7s5";
SAXParserFactory factory;
public void init() throws ServletException {
factory = SAXParserFactory.newInstance();
}
public void doGet (HttpServletRequest request,
HttpServletResponse response)
throws ServletException, IOException
{
PrintWriter out = response.getWriter();
Connection con = null;
try {
Class.forName(DRIVER);
con = DriverManager.getConnection(URL,username,password);
try {
Statement stmt = con.createStatement();
ResultSet rs = stmt.executeQuery("SELECT sale_id, home_id, agent_id, customer_id FROM sale");
String xml = "";
xml = xml + "<sales_description>";
xml = xml + "<sale>";
boolean courseDataDone = false;
while (rs.next()) {
String sale_id = rs.getString(1);
String home_id = rs.getString(2);
String agent_id = rs.getString(3);
String customer_id = rs.getString(4);
if (!courseDataDone) {
xml = xml + "<sale_id>" + sale_id + "</sale_id>" +
"<home_id>" + home_id + "</home_id>" +
"<agent_id>" + agent_id + "</agent_id>" +
"<customer_id>" + customer_id + "</customer_id>" +
"" +
"";
courseDataDone = true;
}
}
xml = xml + "</sale>" +
"</sales_description>";
try {
SAXParser parser = factory.newSAXParser();
InputSource input = new InputSource(new StringReader(xml));
parser.parse(input, new DefaultHandler());
} catch (ParserConfigurationException e) {
throw new ServletException(e);
} catch (SAXException e) {
throw new ServletException(e);
}
response.setContentType("text/xml;charset=UTF-8");
out.write(xml);
} catch(Exception ex) {
out.println("Error performing query: " + ex);
con.close();
return;
}
} catch(Exception ex) {
out.println("Error performing DB connection: " + ex);
return;
}
}
}
```
Any help/tips would be appreciated.
|
You're missing the prolog. Add this to beginning of your XML:
```
<?xml version="1.0" encoding="UTF-8"?>
```
By the way, you don't need the SAX parser here. You aren't modifying the XML at all. Get rid of the parser and just write `xml` directly to the response. You are also not closing JDBC resources properly; use [`try`-with-resources](https://docs.oracle.com/javase/tutorial/essential/exceptions/tryResourceClose.html). Here's a basic example of the improvement:
```
response.setContentType("text/xml;charset=UTF-8");
PrintWriter writer = response.getWriter();
writer.append("<?xml version=\"1.0\" encoding=\"UTF-8\"?>");
writer.append("<sales_description>");
try (
Connection connection = dataSource.getConnection();
Statement statement = connection.createStatement();
ResultSet resultSet = statement.executeQuery("SELECT sale_id, home_id, agent_id, customer_id FROM sale");
) {
if (resultSet.next()) {
writer.append("<sale>");
writer.append("<sale_id>").append(resultSet.getString("sale_id")).append("</sale_id>");
writer.append("<home_id>").append(resultSet.getString("home_id")).append("</home_id>");
writer.append("<agent_id>").append(resultSet.getString("agent_id")).append("</agent_id>");
writer.append("</sale>");
}
} catch (SQLException e) {
throw new ServletException(e);
}
writer.append("</sales_description>");
```
To write **all** records, just replace `if (resultSet.next())` by `while (resultSet.next())`.
To handle the exception more gracefully, i.e. throwing a `ServletException` that ends in an error page instead of a half-baked XML response, you'd want to build the XML using a `StringBuilder`. Just replace the `PrintWriter` by `new StringBuilder()` and then, at the end, do `response.getWriter().write(builder.toString())`.
|
fread Function in C Programming
I have two questions about C's `fread` function:
1. I have read that `fread` is used to read a binary file. However, when I read a binary file with `fgets` using read mode `"r"` and a text file with `fread` using `"rb"` mode, the results are the same as reading a text file with `fgets` and a binary file with `fread`. So, why are there different functions for reading binary and text files?
2. I am using `fread` to read 10 bytes of a file in one call. How should I stop reading at the end of file – i.e. how is `EOF` specified in `fread`?
|
**Answer to question 1:**
**1. fread**
```
size_t fread ( void * ptr, size_t size, size_t count, FILE * stream );
```
**Reads a block of data from a stream.**
It reads an array of count elements, each one size bytes long, from the stream and stores them in the block of memory pointed to by ptr.
The position indicator of the stream is advanced by the total number of bytes read.
The total number of bytes read on success is (size \* count).
**2. fgets**
```
char * fgets ( char * str, int num, FILE * stream );
```
**Reads a string from a stream.**
It reads characters from the stream and stores them as a C string into str until (num-1) characters have been read, or either a newline or the end-of-file is reached, whichever comes first.
A newline character makes fgets stop reading, but it is considered a valid character and is therefore included in the string copied to str.
A null character is automatically appended to str after the characters read, to signal the end of the C string.
In short: fgets is text-oriented (it stops at newlines and produces NUL-terminated strings), while fread is a raw, fixed-size block read. Both will happily read any file in either mode, which is why your experiments produced the same bytes; the `"r"`/`"rb"` distinction only matters on platforms where text mode translates line endings.
---
**Answer to question 2:**
fread's return value is the total number of elements successfully read, returned as a size\_t (an integral type).
**If this number differs from the count parameter, either an error occurred or the end-of-file was reached.**
**You can use either ferror or feof to check whether an error happened or the end-of-file was reached.**
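To make question 2 concrete, here is a minimal sketch of reading a file 10 bytes at a time until end-of-file (the file name is an assumption):
```
#include <stdio.h>

int main(void)
{
    FILE *fp = fopen("data.bin", "rb");
    if (fp == NULL)
        return 1;

    unsigned char buf[10];
    size_t n;
    /* fread returns the number of elements read; a short count
       means EOF or an error, distinguished by feof()/ferror() */
    while ((n = fread(buf, 1, sizeof buf, fp)) > 0) {
        /* process n bytes in buf */
    }
    if (ferror(fp))
        perror("fread");

    fclose(fp);
    return 0;
}
```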
|
FileNotFoundError: [Errno 2] No such file or directory: 'ffprobe': 'ffprobe'
When running the code snippet, I get the error seen in the title.
I have re-installed the package `pydub` and run `pip3 install ffprobe`.
```
from pydub.playback import play
from pydub import AudioSegment
def change_volume(file_name, alteration):
song = AudioSegment.from_mp3(file_name)
new_song = song + alteration
new_title = ("_%s") % (file_name)
new_song.export(new_title, format='mp3')
change_volume("test_sample.mp3", 3)
```
The output of the code should be a new mp3 file in the directory with a slightly raised volume level (`test.mp3` --> `_test.mp3`); instead I get the error:
```
FileNotFoundError: [Errno 2] No such file or directory: 'ffprobe': 'ffprobe'
```
|
First make sure you have ffprobe installed, which is part of FFmpeg, so actually you need to install FFmpeg. You can do that by following the instructions on one of these two sites.
<https://ffmpeg.org/download.html>
<https://github.com/adaptlearning/adapt_authoring/wiki/Installing-FFmpeg>
After that you need to make the binaries findable for Python. That can be done by either adding the installation path of FFmpeg to your OS `PATH` variable permanently (how to do that depends on your operating system), or by extending the `PATH` environment variable of the running process from inside Python. Note that Python's `sys.path` only controls where *modules* are imported from, so it won't help pydub locate the `ffmpeg`/`ffprobe` executables; it is the `PATH` environment variable that gets searched:
```
import os
os.environ["PATH"] += os.pathsep + "/path/to/ffmpeg"
```
For the second option you have to make sure to extend `PATH` before importing `pydub`. This is the better option if you have no way to change the configuration of the root system, but it can become very inconsistent when used by different Python scripts.
Finally make sure to have `ffprobe` installed (e.g. with `pip install ffprobe` inside a terminal, see <https://pypi.org/project/ffprobe>) so that `import ffprobe` should work inside the python environment.
|
Oracle SQL Loader - How to not display "Commit point reached - logical record count" counts
I'm loading big files via Oracle SQL Loader over vpn from home, and they're taking a lot of time. They were a lot faster to load when I loaded them from work. The files I'm loading are on my work server already.
So my thinking is that the slowdown is because of the "Commit point reached - logical record count" message that is printed for each row. It must be slow due to the messages having to be sent over the network. I googled but can't find any way to print fewer of them. I tried adding rows=5000 as a parameter, but I still get the prints for each row.
How can I print fewer of the "Commit point reached - logical record count" messages?
Thanks
|
You can use the keyword `silent`, which is available in the [options clause](http://docs.oracle.com/cd/E11882_01/server.112/e22490/ldr_control_file.htm#i1009196). You can set the [following things](http://docs.oracle.com/cd/E11882_01/server.112/e22490/ldr_params.htm#SUTIL004) to be silent:
>
> - HEADER - Suppresses the SQL\*Loader header messages that normally appear on the screen. Header messages still appear in the log file.
> - FEEDBACK - Suppresses the "commit point reached" feedback messages that normally appear on the screen.
> - ERRORS - Suppresses the data error messages in the log file that occur when a record generates an Oracle error that causes it to be
>
> written to the bad file. A count of rejected records still appears.
> - DISCARDS - Suppresses the messages in the log file for each record written to the discard file.
> - PARTITIONS - Disables writing the per-partition statistics to the log file during a direct load of a partitioned table.
> - ALL - Implements all of the suppression values: HEADER, FEEDBACK, ERRORS, DISCARDS, and PARTITIONS.
>
>
>
You would want to suppress `feedback`.
You can either use on the command line, for instance:
```
sqlldr schema/pw@db silent=(feedback, header)
```
**Or** in the options clause of the control file, for instance:
```
options (bindsize=100000, silent=(feedback, errors) )
```
|
Discrete Color Bar with Tick labels in between colors
I am trying to plot some data with a discrete color bar. I was following the example given (<https://gist.github.com/jakevdp/91077b0cae40f8f8244a>), but the issue is that this example does not map 1-to-1 onto different spacings. For example, the spacing in the linked example only increases by 1, but my data increases by 0.5. You can see the output from the code I have. [](https://i.stack.imgur.com/ppGjJ.png) Any help with this would be appreciated. I know I am missing something key here but can't figure it out.
```
import matplotlib.pylab as plt
import numpy as np
def discrete_cmap(N, base_cmap=None):
"""Create an N-bin discrete colormap from the specified input map"""
# Note that if base_cmap is a string or None, you can simply do
# return plt.cm.get_cmap(base_cmap, N)
# The following works for string, None, or a colormap instance:
base = plt.cm.get_cmap(base_cmap)
color_list = base(np.linspace(0, 1, N))
cmap_name = base.name + str(N)
return base.from_list(cmap_name, color_list, N)
num=11
x = np.random.randn(40)
y = np.random.randn(40)
c = np.random.randint(num, size=40)
plt.figure(figsize=(10,7.5))
plt.scatter(x, y, c=c, s=50, cmap=discrete_cmap(num, 'jet'))
plt.colorbar(ticks=np.arange(0,5.5,0.5))
plt.clim(-0.5, num - 0.5)
plt.show()
```
|
Not sure what version of matplotlib/pyplot introduced this, but `plt.get_cmap` now supports an `int` argument specifying the number of colors you want to get, for discrete colormaps.
This automatically results in the colorbar being discrete.
By the way, `pandas` has an even better handling of the colorbar.
```
import numpy as np
from matplotlib import pyplot as plt
plt.style.use('ggplot')
# remove if not using Jupyter/IPython
%matplotlib inline
# choose number of clusters and number of points in each cluster
n_clusters = 5
n_samples = 20
# there are fancier ways to do this
clusters = np.array([k for k in range(n_clusters) for i in range(n_samples)])
# generate the coordinates of the center
# of each cluster by shuffling a range of values
clusters_x = np.arange(n_clusters)
clusters_y = np.arange(n_clusters)
np.random.shuffle(clusters_x)
np.random.shuffle(clusters_y)
# get dicts like cluster -> center coordinate
x_dict = dict(enumerate(clusters_x))
y_dict = dict(enumerate(clusters_y))
# get coordinates of cluster center for each point
x = np.array(list(x_dict[k] for k in clusters)).astype(float)
y = np.array(list(y_dict[k] for k in clusters)).astype(float)
# add noise
x += np.random.normal(scale=0.5, size=n_clusters*n_samples)
y += np.random.normal(scale=0.5, size=n_clusters*n_samples)
### Finally, plot
fig, ax = plt.subplots(figsize=(12,8))
# get discrete colormap
cmap = plt.get_cmap('viridis', n_clusters)
# scatter points
scatter = ax.scatter(x, y, c=clusters, cmap=cmap)
# scatter cluster centers
ax.scatter(clusters_x, clusters_y, c='red')
# add colorbar
cbar = plt.colorbar(scatter)
# set ticks locations (not very elegant, but it works):
# - shift by 0.5
# - scale so that the last value is at the center of the last color
tick_locs = (np.arange(n_clusters) + 0.5)*(n_clusters-1)/n_clusters
cbar.set_ticks(tick_locs)
# set tick labels (as before)
cbar.set_ticklabels(np.arange(n_clusters))
```
[](https://i.stack.imgur.com/3yTIj.png)
|
Present UIPopoverController in same position with changing just arrow offset
My goal is to keep the same coordinates for a UIPopoverController and change only the arrow offset.
So basically I have three buttons; touching each of them shows a popover. When presented, this popover changes position on the screen, but I do not want that.



|
For my popover I wanted the arrow to be top-left instead of top-center (which is default).
I've managed to get the result below (screenshot) by setting the `popoverLayoutMargins` property of the UIPopoverController. You can use it to reduce the screen-area used in the internal calculations of the UIPopoverController to determine where to show the popover.

The code:
```
// Get the location and size of the control (button that says "Drinks")
CGRect rect = control.frame;
// Set the width to 1, this will put the anchorpoint on the left side
// of the control
rect.size.width = 1;
// Reduce the available screen for the popover by creating a left margin
// The popover controller will assume that left side of the screen starts
// at rect.origin.x
popoverC.popoverLayoutMargins = UIEdgeInsetsMake(0, rect.origin.x, 0, 0);
// Simply present the popover (force arrow direction up)
[popoverC presentPopoverFromRect:rect inView:self.view permittedArrowDirections:UIPopoverArrowDirectionUp animated:YES];
```
I think you'll be able to get the desired result by tweaking the above.
|
Metadata in DynamoDB stream event for delete operation?
I intend to use DynamoDB streams to implement a log trail that tracks changes to a number of tables (and writes this to log files on S3). Whenever a modification is made to a table, a lambda function will be invoked from the stream event.
Now, I need to record the user that made the modification.
For `put` and `update`, I can solve this by including an actual table attribute holding the ID of the caller. Now the record stored in the table will include this ID, which isn't really desirable as it's more meta-data about the operation than part of the record itself, but I can live with that.
So for example:
```
put({
TableName: 'fruits',
Item: {
id: 7,
name: 'Apple',
flavor: 'Delicious',
__modifiedBy: 'USER_42'
})
```
This will result in a lambda function invocation, where I can write something like the following to my S3 log file:
```
table: 'fruits',
operation: 'put',
time: '2018-12-10T13:35:00Z',
user: 'USER_42',
data: {
id: 7,
name: 'Apple',
flavor: 'Delicious',
}
```
However, for deletes, a problem arises - how can I log the calling user of the delete operation? Of course I can make two requests, one that updates the `__modifiedBy`, and another that deletes the item, and the stream would just fetch the `__modifiedBy` value from the `OLD_IMAGE` included in the stream event. However, this is really undesirable, having to spend 2 writes on a single delete of an item.
So is there a better way, such as attaching metadata to DynamoDB operations, that are carried over into stream events, without being part of the data written to the table itself?
|
Here are 3 different options. The right one will depend on the requirements of your application. It could be that none of these will work in your specific use case, but in general, these approaches will all work.
**Option 1**
If you’re using AWS IAM at a granular enough level, then you can get the user identity from the [Stream Record](https://docs.aws.amazon.com/amazondynamodb/latest/APIReference/API_streams_Record.html#DDB-Type-streams_Record-userIdentity).
**Option 2**
If you can handle a small overhead when writing to dynamodb, you could set up a lambda function (or ec2-based service) which acts as a write proxy to your dynamodb tables. Configure your permissions so that only that Lambda can write to the table, and then you can accept any metadata you want and log it however you want. If all you need is logging of events, then you don’t need to write to S3, since AWS can handle Lambda logs for you.
Here's a sketch of such a Lambda proxy in Python with boto3, using CloudWatch logging instead of writing to S3 (the table name, key schema, and event shape are all assumptions):
```
import json
import logging
import time

import boto3

logger = logging.getLogger()
logger.setLevel(logging.INFO)
table = boto3.resource("dynamodb").Table("my-table")  # assumed table name

def handle_event(event, context):
    operation, item, user = event["operation"], event["item"], event["user"]
    log_entry(operation, item, user)
    if operation == "put":
        table.put_item(Item=item)
    elif operation == "update":
        table.put_item(Item=item)  # modelled as a full overwrite for brevity
    elif operation == "delete":
        table.delete_item(Key={"id": item["id"]})  # assumed key schema

def log_entry(operation, item, user):
    # goes to CloudWatch Logs; no S3 write needed
    logger.info(json.dumps({"time": time.time(), "user": user,
                            "operation": operation, "item": item}))
```
You are, of course, free to still log directly to S3, but if you do, you may find that the added latency is significant enough to impact your application.
**Option 3**
If you can tolerate some stale data in your table, set up [DynamoDB TTL](https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/TTL.html) on your table(s). Don't set a TTL value when creating or updating an item. Then, instead of deleting an item, update the item by adding the current time to the TTL field. As far as I can tell, DynamoDB does not use write capacity when removing items with an expired TTL, and expired items are removed within 24 hours of their expiry.
This will allow you to log the “add TTL” as the deletion and have a `last modified by` user for that deletion. You can safely ignore the actual delete that occurs when dynamodb cleans up the expired items.
In your application, you can also check for the presence of a TTL value so that you don’t present users with deleted data by accident. You could also add a filter expression to any queries that will omit items which have a TTL set.
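A sketch of such a "soft delete" with boto3 (attribute and table names are assumptions; the table's TTL feature must be enabled on the `expires_at` attribute):
```
import time

import boto3

table = boto3.resource("dynamodb").Table("fruits")  # assumed table name

def soft_delete(item_id, user):
    table.update_item(
        Key={"id": item_id},
        UpdateExpression="SET expires_at = :ttl, #mb = :user",
        ExpressionAttributeNames={"#mb": "__modifiedBy"},
        ExpressionAttributeValues={
            ":ttl": int(time.time()),  # expire now; actual cleanup is async
            ":user": user,
        },
    )
```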
|
How to troubleshoot connectivity when curl gets an \*empty response\*
I want to know how to proceed in troubleshooting why a curl request to a webserver doesn't work. I'm not looking for help that would be dependent upon my environment, I just want to know how to collect information about exactly what part of the communication is failing, port numbers, etc.
```
chad-integration:~ # curl -v 111.222.159.30
* About to connect() to 111.222.159.30 port 80 (#0)
* Trying 111.222.159.30... connected
* Connected to 111.222.159.30 (111.222.159.30) port 80 (#0)
> GET / HTTP/1.1
> User-Agent: curl/7.19.0 (x86_64-suse-linux-gnu) libcurl/7.19.0 OpenSSL/0.9.8h zlib/1.2.3 libidn/1.10
> Host: 111.222.159.30
> Accept: */*
>
* Empty reply from server
* Connection #0 to host 111.222.159.30 left intact
curl: (52) Empty reply from server
* Closing connection #0
```
So, I understand that an empty response means that curl didn't get any response from the server. No problem, that's precisely what I'm trying to figure out.
But what more specific info can I derive from cURL here?
It was able to successfully "connect", so doesn't that involve some bidirectional communication? If so, then why does the response not come also? Note, I've verified my service is up and returning responses.
Note, I'm a bit green at this level of networking, so feel free to provide some general orientation material.
|
You likely will need to troubleshoot this from the server side, not the client side. I believe you are confusing an 'empty response' with 'no response'. They do not mean the same thing. Likely you are getting a reply that does not contain any data.
You can test this by simply using telnet instead of going through curl:
```
telnet 111.222.159.30 80
```
Once connected, paste the following (taken from your curl output):
```
GET / HTTP/1.1
User-Agent: curl/7.19.0 (x86_64-suse-linux-gnu) libcurl/7.19.0 OpenSSL/0.9.8h zlib/1.2.3 libidn/1.10
Host: 111.222.159.30
Accept: */*
```
You should see the response exactly as curl sees it.
One possible reason you are getting an empty reply is that you're trying to hit a website that is a name-based virtual host. If that's the case, then unless the site you're trying to reach is configured as the server's default, you cannot reach it by IP address without a little bit of work.
You can test that on the client side by simply changing the 'Host' line above; replace www.example.com with the site you're trying to reach:
```
GET / HTTP/1.1
User-Agent: curl/7.19.0 (x86_64-suse-linux-gnu) libcurl/7.19.0 OpenSSL/0.9.8h zlib/1.2.3 libidn/1.10
Host: www.example.com
Accept: */*
```
|
vertically center text in navigation bar
I am trying to make a navigation bar in which the buttons' text will be aligned in the center vertically.
Currently, everything is working fine with the navigation bar besides the vertical align.
I have tried many methods such as with line height, padding to the top and bottom (messes up my heights so the text divs overflow), flex, and table display.
```
html,
body {
height: 100%;
margin: 0px;
}
#nav {
height: 10%;
background-color: rgb(52, 152, 219);
position: fixed;
top: 0;
left: 0;
width: 100%;
color: white;
font-family: Calibri;
font-size: 200%;
text-align: center;
display: flex;
}
#nav div {
display: inline-block;
height: 100%;
align-items: stretch;
flex: 1;
}
#nav div:hover {
background-color: rgb(41, 128, 185);
cursor: pointer;
}
```
```
<div id="main">
<div id="nav">
<div><a>Home</a></div>
<div><a>Page2</a></div>
<div><a>Page3</a></div>
<div><a>Page4</a></div>
<div><a>Page5</a></div>
</div>
</div>
```
All help is appreciated, thank you!
|
You can use the table and table-cell method. Basically you need to add the CSS property `display: table` to the parent element and `display: table-cell; vertical-align: middle` to the child elements.
*Increased height for demo purpose.*
```
#nav {
height: 50%;
background-color: rgb(52, 152, 219);
position: fixed;
top: 0;
left: 0;
width: 100%;
color: white;
font-family: Calibri;
font-size: 200%;
text-align: center;
display: table;
}
#nav div {
display: table-cell;
height: 100%;
vertical-align: middle;
}
#nav div:hover {
background-color: rgb(41, 128, 185);
cursor: pointer;
}
```
```
<div id="main">
<div id="nav">
<div><a>Home</a>
</div>
<div><a>Page2</a>
</div>
<div><a>Page3</a>
</div>
<div><a>Page4</a>
</div>
<div><a>Page5</a>
</div>
</div>
</div>
```
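Alternatively, since you already had `display: flex` on the bar, you can keep flexbox and make each item a flex container as well; this centers the text on both axes (only the relevant rules shown):
```
#nav {
  display: flex;
}
#nav div {
  flex: 1;
  display: flex;
  align-items: center;     /* vertical centering */
  justify-content: center; /* horizontal centering */
}
```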
|
Best method to determin distance with WI-FI signal strength
I started working on an Android application that determines position using Wi-Fi signal strength. I've read a lot of theses, and every time I found new ideas and new concepts, which made me confused about which method to choose.
Finally, from my point of view, geometric techniques like triangulation and trilateration are not bad. (I know that to use those methods we may need root access on an Android phone.)
I've read a lot of questions and answers about this topic, and I would like to know the opinion of people who have worked in this domain, and what they think about my conclusion.
PS: I noticed that those questions were from 2011 and 2012; now it is 2014, and I hope there is a solution by now :)
Thank you
|
FSPL depends on two parameters: the first is the frequency of the radio signal; the second is the wireless transmission distance. The following formula reflects the relationship between them.
```
FSPL (dB) = 20log10(d) + 20log10(f) + K
d = distance
f = frequency
K= constant that depends on the units used for d and f
If d is measured in kilometers, f in MHz, the formula is:
FSPL (dB) = 20log10(d)+ 20log10(f) + 32.44
```
From the Fade Margin equation, Free Space Path Loss can be computed with the following equation.
Free Space Path Loss = Tx Power - Tx Cable Loss + Tx Antenna Gain + Rx Antenna Gain - Rx Cable Loss - Rx Sensitivity - Fade Margin
With the above two Free Space Path Loss equations, we can find out the Distance in km.
```
Distance (km) = 10^((Free Space Path Loss - 20log10(f) - 32.44)/20)
```
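As a quick sketch of that inversion (plain Java; the class and method names are illustrative, and it assumes you already have the measured path loss in dB and the frequency in MHz):

```
public final class Fspl {
    // Inverts FSPL(dB) = 20*log10(d_km) + 20*log10(f_MHz) + 32.44
    static double distanceKm(double fsplDb, double freqMhz) {
        double exponent = (fsplDb - 20.0 * Math.log10(freqMhz) - 32.44) / 20.0;
        return Math.pow(10.0, exponent);
    }

    public static void main(String[] args) {
        // e.g. 60 dB of path loss on a 2437 MHz (Wi-Fi channel 6) link
        System.out.printf("%.4f km%n", distanceKm(60.0, 2437.0)); // ~0.0098 km
    }
}
```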
The Fresnel Zone is the area around the visual line-of-sight that radio waves spread out into after they leave the antenna. You want a clear line of sight to maintain strength, especially for 2.4GHz wireless systems. This is because 2.4GHz waves are absorbed by water, like the water found in trees. The rule of thumb is that 60% of Fresnel Zone must be clear of obstacles. Typically, 20% Fresnel Zone blockage introduces little signal loss to the link. Beyond 40% blockage the signal loss will become significant.
```
r = 17.32 * √(d / (4f))
d = distance [km]
f = frequency [GHz]
r = radius [m]
```
Check this link
<http://www.tp-link.com/en/support/calculator/>
|
Ansible : using different sudo user for different hosts
Recently started using Ansible. We have servers where the application is set up under different users; for example, on server xyz.com the unix user is xyz_user, and so on.
So in case of xyz.com,
```
ansible xyz.com -a 'command' -u xyz_user -K
```
How can we set the sudo user in ansible config so as to automatically sudo to the particular user defined for the server?
|
You can leverage the ansible playbooks for these kind of stuffs.
e.g.
```
---
- hosts: host1:host2
user: user1
sudo: yes
tasks:
- name: update package list
action: command /usr/bin/apt-get update
- name: upgrade packages
action: command /usr/bin/apt-get -u -y dist-upgrade
- hosts: host3
user: ubuntu
sudo: yes
tasks:
- name: update package list
action: command /usr/bin/apt-get update
- name: upgrade packages
action: command /usr/bin/apt-get -u -y dist-upgrade
```
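Alternatively, the per-host remote user can live in the inventory instead of the playbook; a small sketch (group and host names are illustrative; newer Ansible releases use `ansible_user`, older ones `ansible_ssh_user`):

```
[appservers]
xyz.com ansible_user=xyz_user
abc.example.com ansible_user=abc_user
```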
Hope it works for you :)
|
What do I need to be able to "dock/tile" windows side by side
I installed Ubuntu 11.04 (Command Line System via the Alternate CD) and then the packages I need, including `ubuntu-desktop --no-install-recommends`. I find that I can't tile windows side by side by dragging a window to the side anymore. I guess I'm missing a package?
**UPDATE**: I am using Ubuntu Classic, so not Unity
|
You might be missing [compizconfig-settings-manager](http://packages.ubuntu.com/compizconfig-settings-manager) if you use Unity/Compiz.
And then have a look at the `snap` plugin.
- **Snap Type** Here you can define what types of snapping are available.
- Checking Edge Resistance makes windows snap to edges and you must move the mouse further before they un-snap
- Checking Edge Attraction makes windows snap to edges as you get close to them
- **Edges** allows you to define what is an edge
- Checking Screen Edges makes windows snap to the edges of your screen
- Checking Window Edges makes windows snap to the edges of other windows
- The value Edge Resistance Distance defines how many pixels space you must move your mouse before the window un-snaps
- The value Edge Attraction Distance defines how many pixels space windows must be next to each other before they snap.
|
Types for parser combinators
If I have a parser `a : Parser A` and a parser `b : Parser B` then I can combine it into a parser `a | b : Parser (Either A B)`. This works but gets a little tricky when you start adding more alternatives and getting types like `Either A (Either B C)`. I can imagine flattening the previous type into something like `Alternative A B C`. Is there a standard transformation I can perform or am I stuck with generating a whole bunch of boilerplate for types like `Alternative A B C ...`.
|
So the interesting thing about `Either` is that you can use it as a type-level `cons` operator.
```
A `Either` (B `Either` (C `Either` (D `Either` Void))) --> [A,B,C,D]
```
So all we need do is make that explicit. You'll need ghc-7.8 to support closed data families:
```
{-# LANGUAGE TypeFamilies #-}
{-# LANGUAGE TypeOperators #-}
{-# LANGUAGE DataKinds #-}
-- ...
type family OneOf (as :: [*]) :: * where
OneOf '[a] = a
OneOf (a ': as) = Either a (OneOf as)
```
Now you can write your types much more succinctly:
```
aorborc :: Parser (OneOf '[A, B, C])
aorborc = a | (b | c)
```
It's still `Either` under the hood, so you can still easily interoperate with all existing code that uses `Either`, which is nice.
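You can check the reduction in GHCi with `:kind!`, which normalizes type families (assuming the module above is loaded):

```
ghci> :kind! OneOf '[Int, Bool, Char]
OneOf '[Int, Bool, Char] :: *
= Either Int (Either Bool Char)
```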
|
Iterate enum values using values() and valueOf in kotlin
I am a newbie here. Can anyone give an example of iterating over an enum using the values and valueOf methods?
This is my enum class
```
enum class Gender {
Female,
Male
}
```
I know we can get the value like this
```
Gender.Female
```
But I want to iterate over and display all the values of Gender. How can I achieve this? Any help would be appreciated.
|
You can use [`values`](https://kotlinlang.org/docs/reference/enum-classes.html#working-with-enum-constants) like so:
```
val genders = Gender.values()
```
Since Kotlin 1.1 there are also helper methods available:
```
val genders = enumValues<Gender>()
```
With the above you can easily iterate over all values:
```
enumValues<Gender>().forEach { println(it.name) }
```
To map enum name to enum value use `valueOf`/[`enumValueOf`](https://kotlinlang.org/api/latest/jvm/stdlib/kotlin/enum-value-of.html) like so:
```
val male = Gender.valueOf("Male")
val female = enumValueOf<Gender>("Female")
```
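Note that `valueOf`/`enumValueOf` throw an `IllegalArgumentException` for an unknown name, so a guarded lookup can be handy; a small sketch (the helper name is illustrative):

```
fun genderOrNull(name: String): Gender? =
    Gender.values().firstOrNull { it.name.equals(name, ignoreCase = true) }

fun main() {
    println(genderOrNull("male"))  // Male (case-insensitive match)
    println(genderOrNull("other")) // null (no such constant)
}
```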
|
How to load all dart DateFormat locale in flutter?
In a Flutter app, the intl package loads only [15 languages](https://flutter.io/tutorials/internationalization/#setting-up). However, Dart's [DateFormat](https://www.dartdocs.org/documentation/intl/latest/intl/DateFormat-class.html) supports many more. I do not need translation, but I do need the correct `DateFormat`. How can I load all `DateFormat` locales?
|
In your highest StatefulWidget add these imports
```
import 'package:intl/intl.dart';
import 'package:intl/date_symbol_data_local.dart';
```
in its State, override initState and add
```
@override
void initState() {
super.initState();
initializeDateFormatting();
}
```
For example
```
import 'package:flutter/material.dart';
import 'package:intl/intl.dart';
import 'package:intl/date_symbol_data_local.dart';
void main() => runApp(new MyApp());
class MyApp extends StatelessWidget {
@override
Widget build(BuildContext context) {
return new MaterialApp(
title: 'Intl Demo',
theme: new ThemeData(
primarySwatch: Colors.blue,
),
home: new MyHomePage(title: 'Intl Demo Home Page'),
);
}
}
class MyHomePage extends StatefulWidget {
MyHomePage({Key key, this.title}) : super(key: key);
final String title;
@override
_MyHomePageState createState() => new _MyHomePageState();
}
class _MyHomePageState extends State<MyHomePage> {
DateFormat dateFormat;
DateFormat timeFormat;
@override
void initState() {
super.initState();
initializeDateFormatting();
dateFormat = new DateFormat.yMMMMd('cs');
timeFormat = new DateFormat.Hms('cs');
}
void _refresh() {
setState(() {});
}
@override
Widget build(BuildContext context) {
var dateTime = new DateTime.now();
return new Scaffold(
appBar: new AppBar(
title: new Text(widget.title),
),
body: new Center(
child: new Column(
mainAxisAlignment: MainAxisAlignment.center,
children: <Widget>[
new Text(dateFormat.format(dateTime)),
new Text(timeFormat.format(dateTime)),
],
),
),
floatingActionButton: new FloatingActionButton(
onPressed: _refresh,
tooltip: 'Refresh',
child: new Icon(Icons.refresh),
),
);
}
}
```
Dart mostly uses the [ISO 639](https://en.wikipedia.org/wiki/List_of_ISO_639-1_codes) 2 letter codes, with country variants if applicable. For example: `ja, en_US, en_GB, zh_HK, zh_CN`. Occasionally it uses a 3 letter code. To find the full list just search for `DateSymbols` in the file [date_symbol_data_local.dart](https://github.com/dart-lang/intl/blob/master/lib/date_symbol_data_local.dart)
[](https://i.stack.imgur.com/4HwfS.png)
|
Calculate quantile on grouped data in spark Dataframe
I have the following Spark dataframe :
```
agent_id|payment_amount|
+--------+--------------+
| a| 1000|
| b| 1100|
| a| 1100|
| a| 1200|
| b| 1200|
| b| 1250|
| a| 10000|
| b| 9000|
+--------+--------------+
```
my desire output would be something like
```
agen_id 95_quantile
a whatever is 95 quantile for agent a payments
b whatever is 95 quantile for agent b payments
```
For each agent_id group I need to calculate the 0.95 quantile, so I took the following approach:
```
test_df.groupby('agent_id').approxQuantile('payment_amount',0.95)
```
but I get the following error:
```
'GroupedData' object has no attribute 'approxQuantile'
```
I need to have the 0.95 quantile (percentile) in a new column so it can later be used for filtering purposes.
I am using Spark 2.0.0
|
One solution would be to use `percentile_approx` :
```
>>> test_df.registerTempTable("df")
>>> df2 = sqlContext.sql("select agent_id, percentile_approx(payment_amount,0.95) as approxQuantile from df group by agent_id")
>>> df2.show()
# +--------+-----------------+
# |agent_id| approxQuantile|
# +--------+-----------------+
# | a|8239.999999999998|
# | b|7449.999999999998|
# +--------+-----------------+
```
**Note 1 :** This solution was tested with spark 1.6.2 and requires a `HiveContext`.
**Note 2 :** `approxQuantile` isn't available in Spark < 2.0 for `pyspark`.
**Note 3 :** `percentile` returns an approximate pth percentile of a numeric column (including floating point types) in the group. When the number of distinct values in col is smaller than second argument value, this gives an exact percentile value.
**EDIT :** From **Spark 2+**, `HiveContext` is not required.
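Since the question targets Spark 2.0.0, the same aggregation can also be written with the DataFrame API instead of registering a temp table; a sketch:

```
from pyspark.sql import functions as F

df2 = test_df.groupBy("agent_id").agg(
    F.expr("percentile_approx(payment_amount, 0.95)").alias("approxQuantile")
)
df2.show()
```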
|
Angular 2 - Countdown timer
I want to build a countdown timer in Angular 2 that starts from 60 (i.e. 59, 58, 57, etc...)
For that I have the following:
```
constructor(){
Observable.timer(0,1000).subscribe(timer=>{
this.counter = timer;
});
}
```
The above ticks every second, which is fine; however, it counts upward without limit.
I am not sure if there is a way to tweak it so I can have a countdown timer.
|
There are many ways to achieve this; a basic example is to use the `take` operator:
```
import { Observable, timer } from 'rxjs';
import { take, map } from 'rxjs/operators';
@Component({
selector: 'my-app',
template: `<h2>{{counter$ | async}}</h2>`
})
export class App {
counter$: Observable<number>;
count = 60;
constructor() {
this.counter$ = timer(0,1000).pipe(
take(this.count),
map(() => --this.count)
);
}
}
```
### A better way is to create a counter directive!
```
import { Directive, Input, Output, EventEmitter, OnChanges, OnDestroy } from '@angular/core';
import { Subject, Observable, Subscription, timer } from 'rxjs';
import { switchMap, take, tap } from 'rxjs/operators';
@Directive({
selector: '[counter]'
})
export class CounterDirective implements OnChanges, OnDestroy {
private _counterSource$ = new Subject<any>();
private _subscription = Subscription.EMPTY;
@Input() counter: number;
@Input() interval: number;
@Output() value = new EventEmitter<number>();
constructor() {
this._subscription = this._counterSource$.pipe(
switchMap(({ interval, count }) =>
timer(0, interval).pipe(
take(count),
tap(() => this.value.emit(--count))
)
)
).subscribe();
}
ngOnChanges() {
this._counterSource$.next({ count: this.counter, interval: this.interval });
}
ngOnDestroy() {
this._subscription.unsubscribe();
}
}
```
**Usage:**
```
<ng-container [counter]="60" [interval]="1000" (value)="count = $event">
<span> {{ count }} </span>
</ng-container>
```
Here is a live [stackblitz](https://stackblitz.com/edit/angular-p9xny6)
|
How much RAM is available for Excel
How can I see how much RAM is available for Excel?
I am using Excel 2010 Windows (32-bit) NT 6.02 Release:14.0
I know my laptop has 4GB of RAM, but I have been told Excel 32 bit will only be able to use 2GB of this.
|
>
> Excel 2010, 2013 and 2016 are available in 2 versions: 32-bit (2
> Gigabytes of virtual memory)
>
>
>
[Out of Memory, Memory Limits, Memory Leaks, Excel will not start.](http://www.decisionmodels.com/memlimitsc.htm)
>
> How can I see how much RAM is available for Excel?
>
>
>
Task Manager will indicate how much memory Excel is using.
>
> On the other hand, the 32-bit edition of Office is limited to 2 GB of
> virtual address space, and this space is shared by Excel, the
> workbook, and add-ins that run in the same process. (Worksheets
> smaller than 2 GB on disk might still contain enough data to occupy 2
> GB or more of addressable memory.)
>
>
>
Additionally,
>
> The 2-GB limitation is per windows process instance of Excel. You can
> run multiple files in one instance. However, if the files are really
> large and have to be open, consider opening multiple instances for the
> other files. For information about limits that you may encounter, go
> to the following website:
>
>
>
[Memory usage in the 32-bit edition of Excel 2013 and 2016](https://support.microsoft.com/en-us/help/3066990/memory-usage-in-the-32-bit-edition-of-excel-2013-and-2016)
Additionally,
>
> 32-bit versions of Microsoft Excel 2013 and Excel 2016 can take
> advantage of Large Address Aware (LAA) functionality after
> installation of the latest updates. (see the "Resolution" section)
> This change lets 32-bit installations of Excel 2016 consume double the
> memory when users work on a 64-bit Windows OS. The system provides
> this capability by increasing the user mode virtual memory from 2
> gigabytes (GB) to 4 GB. This change provides 50 percent more memory
> (for example, from 2 GB to 3 GB) when users work on a 32-bit system.
>
>
> [](https://i.stack.imgur.com/GMYWH.png)
>
>
>
Additionally,
>
> If you're running 64-bit Windows, this change is applied
> automatically. No action by you is required. The available memory for
> the Excel process is automatically doubled from 2 GB to 4 GB. This
> improves support for actions that use lots of memory.
>
>
>
[Large Address Aware capability change for Excel](https://support.microsoft.com/en-gb/help/3160741/large-address-aware-capability-change-for-excel)
|
How to use typescript on redux connected components?
How do I type props that are connected to a reducer?
The following code gives TypeScript errors but works at runtime.
```
class Sidebar extends React.Component {
constructor(props) {
super(props);
}
render() {
return (
<div id="test">
<p>{this.props.value}</p> // Typescript error
<button onClick={() => this.props.setValue(3)}> click</button> // Typescript error
</div>
);
}
}
const mapStateToProps = (state: any, ownProps: any) => ({
value: state.calcReducer.value as number,
});
export default connect(mapStateToProps, {
setValue,
})(Sidebar);
```
|
You can set `type` on your props like this:
```
interface IProps {
value: number;
setValue: (value: number) => void;
}
class Sidebar extends React.Component<IProps> {
constructor(props: IProps) {
super(props);
}
// Your code ...
}
```
---
If you want to get rid of the `any` keywords as well you can give this a try.
```
interface IYourState {/* Your code */}
interface IYourOwnProps {/* Your code */}
const mapStateToProps = (state: IYourState, ownProps: IYourOwnProps) => ({
value: state.calcReducer.value as number,
});
export default connect<IYourState, {}, IYourOwnProps>(mapStateToProps, {
setValue,
})(Sidebar);
```
Here is the type definition of the `connect()` function of `react-redux` in case you want to investigate further. Note that I have marked the line `109` so you should start around there first.
<https://github.com/DefinitelyTyped/DefinitelyTyped/blob/master/types/react-redux/index.d.ts#L109>
|
ASP.NET MVC 5 + Owin + SimpleInjector
A new asp.net mvc project using owin, webapi, mvc and DI (SimpleInjector) runs fine if I remove the DI lib from the project. However, once introduced, the app blows up when registering the OWIN components for DI. The OWIN startup configuration is being hit and runs without error, but when it comes time to register the dependencies (listed below) I receive the following error:
>
> An exception of type 'System.InvalidOperationException' occurred in Microsoft.Owin.Host.SystemWeb.dll but was not handled in user code
>
>
> Additional information: No owin.Environment item was found in the context.
>
>
>
SimpleInjector Registration Code:
```
container.RegisterPerWebRequest<IUserStore<ApplicationUser>>(() => new UserStore<ApplicationUser>());
container.RegisterPerWebRequest<HttpContextBase>(() => new HttpContextWrapper(HttpContext.Current));
// app fails on call to line below...
container.RegisterPerWebRequest(() => container.GetInstance<HttpContextBase>().GetOwinContext());
container.RegisterPerWebRequest(() => container.GetInstance<IOwinContext>().Authentication);
container.RegisterPerWebRequest<DbContext, ApplicationDbContext>();
```
**Update - Full Stack Trace**
>
> at System.Web.HttpContextBaseExtensions.GetOwinContext(HttpContextBase context)
> at WebApplication1.App_Start.SimpleInjectorInitializer.<>c__DisplayClass6.b__2() in b:\temp\WebApplication1\WebApplication1\App_Start\SimpleInjectorInitializer.cs:line 41
> at lambda_method(Closure )
> at SimpleInjector.Scope.CreateAndCacheInstance[TService,TImplementation](ScopedRegistration`2 registration)
> at SimpleInjector.Scope.GetInstance[TService,TImplementation](ScopedRegistration`2 registration)
> at SimpleInjector.Scope.GetInstance[TService,TImplementation](ScopedRegistration`2 registration, Scope scope)
> at SimpleInjector.Advanced.Internal.LazyScopedRegistration`2.GetInstance(Scope scope)
> at lambda_method(Closure )
> at SimpleInjector.InstanceProducer.GetInstance()
>
>
|
I think the exception is thrown when you call `Verify()`. Probably at that line, but only when the delegate is called.
Simple Injector allows making registrations in any order and will therefore not verify the existence and correctness of a registration’s dependencies. This verification is done the very first time an instance is requested, or can be triggered by calling `.Verify()` at the end of the registration process.
I suspect you're registrering the `OwinContext` only because you need it for getting the `IAuthenticationManager`.
The problem you face is that the `OwinContext` is only available when there is a `HttpContext`. This context is not available at the time the application is build in the composition root. What you need is a delegate which checks the stage of the application and returns a component that matches this stage. You could that by registering the `IAuthenticationManager` as:
```
container.RegisterPerWebRequest<IAuthenticationManager>(() =>
AdvancedExtensions.IsVerifying(container)
? new OwinContext(new Dictionary<string, object>()).Authentication
: HttpContext.Current.GetOwinContext().Authentication);
```
The delegate will return the Owin controlled `IAuthenticationManager` when the code runs at 'normal runtime stage' and there is a `HttpContext`.
But when making an explicit call the `Verify()` (which is highly [advisable](https://simpleinjector.readthedocs.org/en/2.7/howto.html#verify-configuration) to do!) at the end of registration process there is no `HttpContext`. Therefore we will create a new `OwinContext` during verifying the container and return the Authentication component from this newly created `OwinContext`. But only if the container is indeed verifying!
A full and detailed description can be read [here](https://simpleinjector.codeplex.com/discussions/564822) as already mentioned in the comments.
|
Hide image in responsive web design layout
I want to hide an image only when it is viewed from mobile or tablets.
How can I do this?
|
Using `display: none` on imgs will still download the image, which is not recommended as it's a wasted HTTP request (you can see this by looking at the Network tab in your browser's inspector: right click, Inspect Element in Chrome, then refresh the page and you'll see each asset being downloaded and the time it takes to download it).
Rather, you could make those specific images that you do not wish to load into background images on divs.
Then all you need to do is:
```
@media (max-width: x) {
  .rwd-img {
    background-image: none;
  }
}
```
Of course you would have to define the height and width of the image for your div in your CSS, but that's a better option than sacrificing user experience. :)
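Putting it together, a minimal sketch (class name, dimensions, breakpoint, and image URL are all illustrative):

```
.rwd-img {
  width: 300px;
  height: 200px;
  background-image: url("photo.jpg");
  background-size: cover;
}
@media (max-width: 768px) {
  .rwd-img {
    background-image: none;
  }
}
```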
Links of interest:
<http://css-tricks.com/on-responsive-images/>
<http://mattstow.com/responsive-and-retina-content-images-redux.html>
|
Why does the transpose function change numeric to character in R?
I've constructed a simple matrix in Excel with some character values and some numeric values ([Screenshot of data as set up in Excel](https://i.stack.imgur.com/UpQ1I.png)). I read it into R using the openxlsx package like so:
```
library(openxlsx)
data <- read.xlsx('~desktop/data.xlsx')
```
After that I check the class:
```
sapply(data, class)
x1 a b c
"character" "numeric" "numeric" "numeric"
```
Which is exactly what I want. My problem occurs when I try to transpose the matrix, and then check for class again:
```
data <- t(data)
```
When I check with sapply now, all values are "character". Why are the classes not preserved when transposing?
|
First off, I don't get your result when I read in your spreadsheet, due to the fact that the cells with comma-separated numbers appear as characters.
```
data <- read.xlsx("data.xlsx")
data
# X1 a b c
#1 x 0,1 3 4,5
#2 y 2,4 0 6,5
#3 z 24 0 0
sapply(data,class)
# X1 a b c
#"character" "character" "numeric" "character"
```
But the issue you are really seeing is that by transposing the data frame you are mixing types in the same column so R HAS TO convert the whole column to the broadest common type, which is character in this case.
```
mydata<-data.frame(X1=c("x","y","z"),a=c(1,2,24),b=c(3,0,0),c=c(4,6,0),stringsAsFactors = FALSE)
sapply(mydata,class)
# X1 a b c
#"character" "numeric" "numeric" "numeric"
# what you showed
t(mydata)
# [,1] [,2] [,3]
#X1 "x" "y" "z"
#a " 1" " 2" "24"
#b "3" "0" "0"
#c "4" "6" "0"
mydata_t<-t(mydata)
sapply(mydata_t,class)
# x 1 3 4 y 2
#"character" "character" "character" "character" "character" "character"
# 0 6 z 24 0 0
#"character" "character" "character" "character" "character" "character"
```
Do you want to work on the numbers in the transposed matrix and transpose them back after? If so, transpose a sub-matrix that has the character columns temporarily removed, then reassemble later, like so:
```
sub_matrix<-t(mydata[,-1])
sub_matrix
# [,1] [,2] [,3]
#a 1 2 24
#b 3 0 0
#c 4 6 0
sub_matrix2<-sub_matrix*2
sub_matrix2
# [,1] [,2] [,3]
#a 2 4 48
#b 6 0 0
#c 8 12 0
cbind(X1=mydata[,1],as.data.frame(t(sub_matrix2)))
# X1 a b c
#1 x 2 6 8
#2 y 4 0 12
#3 z 48 0 0
```
|
Is there a difference between <winsock2.h> and <winsock.h>?
I'm including `<winsock2.h>` as it's required by the MySQL C library.
The auto-complete in VS2010 is also showing a `<winsock.h>` - any idea what this is?
Are they interchangeable, and are there any advantages of one over the other?
Thanks!
|
@cost's answer [links](http://cboard.cprogramming.com/networking-device-communication/71596-winsock-vs-winsock2.html) to a discussion that, amongst other things, asks this question that was never answered:
>
> Is there a reason why I can't include windows.h before winsock2.h, it gives me tons of errors, but once I switch their order everything is okay... why is that?
>
>
>
`windows.h` includes `winsock.h` by default (if `WIN32_LEAN_AND_MEAN` is not defined). The problem is not limited to just `windows.h`, though. Any time `winsock.h` gets included before `winsock2.h`, there will be compiler errors.
The reason is because the two files **DO NOT** co-exist very well. `winsock2.h` was designed to **replace** `winsock.h`, not extend it. Everything that is defined in `winsock.h` is also defined in `winsock2.h`.
If `winsock2.h` is included before `winsock.h`, `winsock2.h` defines `_WINSOCKAPI_` to prevent the compiler from processing subsequent `winsock.h` includes, and all is fine.
But if `winsock.h` is included before `winsock2.h`, `winsock2.h` does not detect that and tries to re-define everything that `winsock.h` has already defined, causing the compile to fail.
You have to be very careful when mixing code that uses `winsock.h` with code that uses `winsock2.h` in the same project. For instance, when writing your own socket code that uses `winsock2.h`, and using third-party libraries that still use `winsock.h`.
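A common way to sidestep the clash is to make sure `winsock2.h` comes first and to stop `windows.h` from pulling in the old header; a sketch:

```
// Keep windows.h from including winsock.h, and include winsock2.h first.
#define WIN32_LEAN_AND_MEAN
#include <winsock2.h>
#include <ws2tcpip.h>
#include <windows.h>
```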
|
Why am I not getting a java.util.ConcurrentModificationException in this example?
Note: I am aware of the `Iterator#remove()` method.
In the following code sample, I don't understand why the `List.remove` in `main` method throws `ConcurrentModificationException`, but **not** in the `remove` method.
```
public class RemoveListElementDemo {
private static final List<Integer> integerList;
static {
integerList = new ArrayList<Integer>();
integerList.add(1);
integerList.add(2);
integerList.add(3);
}
public static void remove(Integer toRemove) {
for(Integer integer : integerList) {
if(integer.equals(toRemove)) {
integerList.remove(integer);
}
}
}
public static void main(String... args) {
remove(Integer.valueOf(2));
Integer toRemove = Integer.valueOf(3);
for(Integer integer : integerList) {
if(integer.equals(toRemove)) {
integerList.remove(integer);
}
}
}
}
```
|
Here's why:
As it is says in the Javadoc:
>
> The iterators returned by this class's iterator and listIterator
> methods are fail-fast: if the list is structurally modified at any
> time after the iterator is created, in any way except through the
> iterator's own remove or add methods, the iterator will throw a
> ConcurrentModificationException.
>
>
>
This check is done in the `next()` method of the iterator (as you can see by the stacktrace). But we will reach the `next()` method only if `hasNext()` delivered true, which is what is called by the for each to check if the boundary is met. In your remove method, when `hasNext()` checks if it needs to return another element, it will see that it returned two elements, and now after one element was removed the list only contains two elements. So all is peachy and we are done with iterating. The check for concurrent modifications does not occur, as this is done in the `next()` method which is never called.
Next we get to the second loop. After we remove the second number the hasNext method will check again if can return more values. It has returned two values already, but the list now only contains one. But the code here is:
```
public boolean hasNext() {
return cursor != size();
}
```
1 != 2, so we continue to the `next()` method, which now realizes that someone has been messing with the list and fires the exception.
Hope that clears your question up.
### Summary
`List.remove()` will not throw `ConcurrentModificationException` when it removes the second last element from the list.
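For completeness, the safe pattern (which the question already alludes to) removes through the iterator, so its internal bookkeeping stays consistent:

```
for (Iterator<Integer> it = integerList.iterator(); it.hasNext(); ) {
    if (it.next().equals(toRemove)) {
        it.remove(); // updates the iterator's expected modCount, so no exception
    }
}
```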
|
SQL data not retrieved in Unicode Hindi
I am developing a web app in ASP.NET with C#. I am saving data in SQL tables as Unicode characters, as given by Google Transliteration. I am supposed to use Hindi. I have no issues adding data. But when I use "SELECT" statements, no data is retrieved from the database tables in any case.
My Query is as follows:
```
SELECT uid, family_head, member_name, house_no, address, f_h_name, gender, caste, dob, occupation, literacy, end_date
FROM family
WHERE (member_name = 'समर्थ अग्रवाल')
```
It returns null.
|
Change the string to start with `N` to signify it is a Unicode string:
```
SELECT uid, family_head, member_name, house_no, address, f_h_name, gender, caste, dob, occupation, literacy, end_date
FROM family
WHERE (member_name = N'समर्थ अग्रवाल')
```
Otherwise, the string will not be a Unicode string and the query will return no results.
See [Constants (Transact-SQL)](http://msdn.microsoft.com/en-us/library/ms179899.aspx) on MSDN:
>
> Unicode strings
>
>
> Unicode strings have a format similar to character strings but are preceded by an N identifier (N stands for National Language in the SQL-92 standard). The N prefix must be uppercase. For example, 'Michél' is a character constant while N'Michél' is a Unicode constant. Unicode constants are interpreted as Unicode data, and are not evaluated by using a code page.
>
>
>
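Since the question comes from an ASP.NET/C# app, the more robust route is a parameterized query, which sends the value as Unicode automatically; a sketch (`conn` is assumed to be an already-open `SqlConnection`, and the column length is illustrative):

```
using (var cmd = new SqlCommand(
    "SELECT uid, member_name FROM family WHERE member_name = @name", conn))
{
    cmd.Parameters.Add("@name", SqlDbType.NVarChar, 100).Value = "समर्थ अग्रवाल";
    using (var reader = cmd.ExecuteReader())
    {
        while (reader.Read())
        {
            // process the row...
        }
    }
}
```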
|
What kind of user data can I collect?
I was wondering, if I ever need to collect any kind of user data (like their location, which I don't think I should collect now, their IMEI, their Google account, app usage time, etc.), how much of it can I collect without doing so illegally.
I know there will be issues with all of them, but since I couldn't find any document or question on SO addressing this topic and telling me what kind of data I can collect, I am here with a question.
Hope to get nice answers.
Wish to mark it as a community wiki.
|
Without the user knowing it?
# NONE.
>
> 4.3 You agree that if you use the SDK to develop applications for general public users, you will protect the privacy and legal rights of those users. If the users provide you with user names, passwords, or other login information or personal information, you must make the users aware that the information will be available to your application, and you must provide legally adequate privacy notice and protection for those users. If your application stores personal or sensitive information provided by users, it must do so securely. If the user provides your application with Google Account information, your application may only use that information to access the user's Google Account when, and for the limited purposes for which, the user has given you permission to do so.
>
>
>
(from part 4 of the [Terms and Conditions](http://developer.android.com/sdk/terms.html))
Take a look also at [this page about searches](http://developer.android.com/guide/topics/search/index.html)
|
How to change the font color of a MenuBar?
How can I change the text color of the menu items of a QML `MenuBar`?
```
import QtQuick 2.4
import QtQuick.Controls 1.3
import QtQuick.Window 2.2
import QtQuick.Dialogs 1.2
import QtQuick.Controls.Styles 1.3 as QtQuickControlStyle
ApplicationWindow {
title: qsTr("Test")
width: 640
height: 480
visible: true
property color menuBackgroundColor: "#3C3C3C"
property color menuBorderColor: "#282828"
menuBar: MenuBar {
style: QtQuickControlStyle.MenuBarStyle {
padding {
left: 8
right: 8
top: 3
bottom: 3
}
background: Rectangle {
border.color: menuBorderColor
color: menuBackgroundColor
}
// font: // how to set font color to red?
// textColor: "red" /* does not work - results in Cannot assign to non-existent property "textColor" */
TextField { // does also not work
style: TextFieldStyle {
textColor: "red"
}
}
}
}
}
```
A similar question has been asked [here](https://stackoverflow.com/questions/18474447/change-text-color-for-qml-controls) but it seems not to work with menu items.
|
You have to redefine [`itemDelegate`](http://doc.qt.io/qt-5/qml-qtquick-controls-styles-menubarstyle.html#itemDelegate-prop) and [`itemDelegate.label`](http://doc.qt.io/qt-5/qml-qtquick-controls-styles-menustyle.html#itemDelegate-prop) for `menuStyle`. The former defines the style of the `MenuBar` text whereas the latter defines the style of menu items text.
In the following example I defined a full style for `MenuBar` and `Menu`s, not only for their text. [`scrollIndicator`](http://doc.qt.io/qt-5/qml-qtquick-controls-styles-menustyle.html#scrollIndicator-prop) is the only missing piece here. It can be represented as a `Text`/`Label` or an `Image`.
```
import QtQuick 2.4
import QtQuick.Controls 1.3
import QtQuick.Controls.Styles 1.3
import QtQuick.Window 2.2
ApplicationWindow {
title: qsTr("Test")
width: 640
height: 480
visible: true
property color menuBackgroundColor: "#3C3C3C"
property color menuBorderColor: "#282828"
menuBar: MenuBar {
Menu {
title: "File"
MenuItem { text: "Open..." }
MenuItem { text: "Close" }
}
Menu {
title: "Edit"
MenuItem { text: "Cut"; checkable: true}
MenuItem { text: "Copy" }
MenuItem { text: "Paste" }
MenuSeparator {visible: true }
Menu {
title: "submenu"
}
}
style: MenuBarStyle {
padding {
left: 8
right: 8
top: 3
bottom: 3
}
background: Rectangle {
id: rect
border.color: menuBorderColor
color: menuBackgroundColor
}
itemDelegate: Rectangle { // the menus
implicitWidth: lab.contentWidth * 1.4 // adjust width the way you prefer it
implicitHeight: lab.contentHeight // adjust height the way you prefer it
color: styleData.selected || styleData.open ? "red" : "transparent"
Label {
id: lab
anchors.horizontalCenter: parent.horizontalCenter
color: styleData.selected || styleData.open ? "white" : "red"
font.wordSpacing: 10
text: styleData.text
}
}
menuStyle: MenuStyle { // the menus items
id: goreStyle
frame: Rectangle {
color: menuBackgroundColor
}
itemDelegate {
background: Rectangle {
color: styleData.selected || styleData.open ? "red" : menuBackgroundColor
radius: styleData.selected ? 3 : 0
}
label: Label {
color: styleData.selected ? "white" : "red"
text: styleData.text
}
submenuIndicator: Text {
text: "\u25ba"
font: goreStyle.font
color: styleData.selected || styleData.open ? "white" : "red"
styleColor: Qt.lighter(color, 4)
}
shortcut: Label {
color: styleData.selected ? "white" : "red"
text: styleData.shortcut
}
checkmarkIndicator: CheckBox { // not strinctly a Checkbox. A Rectangle is fine too
checked: styleData.checked
style: CheckBoxStyle {
indicator: Rectangle {
implicitWidth: goreStyle.font.pixelSize
implicitHeight: implicitWidth
radius: 2
color: control.checked ? "red" : menuBackgroundColor
border.color: control.activeFocus ? menuBackgroundColor : "red"
border.width: 2
Rectangle {
visible: control.checked
color: "red"
border.color: menuBackgroundColor
border.width: 2
radius: 2
anchors.fill: parent
}
}
spacing: 10
}
}
}
// scrollIndicator: // <--- could be an image
separator: Rectangle {
width: parent.width
implicitHeight: 2
color: "white"
}
}
}
}
}
```
And here is the resulting `MenuBar` and `Menu`s:
[](https://i.stack.imgur.com/XezBz.png)
You can also choose to set a `MenuStyle` directly inside a `Menu`, in the `style` property. Something like this:
```
Menu {
title: "File"
MenuItem { text: "Open..." }
MenuItem { text: "Close" }
style: MenuStyle {
itemDelegate.label: Label {
color: "blue"
text: styleData.text
// stuff above here
}
}
```
In this last example only the "File" `Menu` items are styled with a `blue` color for text. One can argue how much ugly that would be, though.
|
Manage Threads Relationship in C#
I am currently learning multi-threading and its usage in C#, and I am facing the problem below:
(Sorry for such a simple question)
Suppose we have two classes named Producer and Consumer. The Producer's task is to produce 4 numbers while the program runs, and the Consumer's task is to consume those numbers and return their sum at the end of the program.
Consumer Class definition:
```
class Consumer
{
private HoldInteger sharedLocation;
private Random randomSleepTime;
public Consumer(HoldInteger shared, Random random)
{
sharedLocation = shared;
randomSleepTime = random;
}
public void Consume()
{
int sum = 0;
for (int i = 1; i <= 4; i++)
{
Thread.Sleep(randomSleepTime.Next(1, 3000));
sum += sharedLocation.Buffer;
}
}
}
```
and the definition of Producer Class is as below :
```
class Producer
{
private HoldInteger sharedLocation;
private Random randomSleepTime;
public Producer(HoldInteger shared, Random random)
{
sharedLocation = shared;
randomSleepTime = random;
}
public void Produce()
{
for (int i = 1; i <= 4; i++)
{
Thread.Sleep(randomSleepTime.Next(1, 3000));
sharedLocation.Buffer = i;
}
}
}
```
We also have a `HoldInteger` class containing a Buffer variable; the producer writes to this variable and the consumer reads from it. I combined these classes and wrote the code below in my main method:
```
static void Main(string[] args)
{
HoldInteger holdInteger = new HoldInteger();
Random random = new Random();
Producer producer = new Producer(holdInteger, random);
Consumer consumer = new Consumer(holdInteger, random);
Thread producerThread = new Thread(new ThreadStart(producer.Produce));
producerThread.Name = "producer";
Thread consumerThread = new Thread(new ThreadStart(consumer.Consume));
consumerThread.Name = "consumer";
producerThread.Start();
consumerThread.Start();
}
```
So, my question is: how can I manage this relationship with low memory usage and little time wasted?
Please note that the thread-management code will be placed in the `HoldInteger` class body.
Thanks for your attention.
|
I would replace the `HoldInteger` class with a `BlockingQueue`, [you can find an implementation here](http://pastie.org/2418829) and for more details on the reason behind the implementation, [check this question](https://stackoverflow.com/questions/530211/creating-a-blocking-queuet-in-net/530228#530228). I think .NET 4.0 might have a blocking queue too. This approach will subsequently make things a lot easier to manage:
```
class Producer
{
//...
public void Produce()
{
for (int i = 1; i <= 4; i++)
{
Thread.Sleep(randomSleepTime.Next(1, 3000));
blockingIntQueue.Enqueue(i);
}
}
}
```
Your consumer will look like this now:
```
class Consumer
{
//...
public void Consume()
{
int sum = 0;
int value = 0;
for (int i = 1; i <= 4; i++)
{
if( blockingIntQueue.TryDequeue(out value) )
{
sum += value;
}
}
}
}
```
However, if you want to keep the `HoldInteger` (if this is some sort of requirement), then you can place the blocking queue inside the `HoldInteger` class instead of having a buffer (should be trivial to do) and you will achieve the same result.
**Note:** with this approach you no longer have to worry about missing a value or reading a stale value because the threads don't wake up at exactly the right time. Here is the potential problem with using a "buffer":
Even if your integer holder does handle the underlying "buffer" safely, you are still not guaranteed that you will get all the integers that you want. Take this into consideration:
**Case 1**
```
Producer wakes up and writes integer.
Consumer wakes up and reads integer.
Consumer wakes up and reads integer.
Producer wakes up and writes integer.
```
**Case 2**
```
Consumer wakes reads integer.
Producer wakes up and writes integer.
Producer wakes up and writes integer.
Consumer wakes up and reads integer.
```
Since the timer is not precise enough, this sort of thing is entirely possible and in the first case it will cause the consumer to read a stale value, while in the second case it will cause the consumer to miss a value.
|
Delphi: How to encode TIdBytes to Base64 string?
How to encode TIdBytes to a Base64 string (not AnsiString)?
```
ASocket.IOHandler.CheckForDataOnSource(5);
if not ASocket.Socket.InputBufferIsEmpty then
begin
ASocket.Socket.InputBuffer.ExtractToBytes(data);
// here I need to encode data to base64 string, how ? I don't need AnsiString!!
// var s:string;
s := EncodeBase64(data, Length(data)); // but it will be AnsiString :(
```
Or how can I send an AnsiString via AContext.Connection.Socket.Write()?
The compiler says **Implicit string cast from 'AnsiString' to 'string'**.
The "data" variable contains UTF-8 data from a website.
|
You can use Indy's `TIdEncoderMIME` class to encode `String`, `TStream`, and `TIdBytes` data to base64 (and `TIdDecoderMIME` to decode from base64 back to `String`, `TStream`, or `TIdBytes`), eg:
```
s := TIdEncoderMIME.EncodeBytes(data);
```
As for sending `AnsiString` data, Indy in D2009+ simply does not have any `TIdIOHandler.Write()` overloads for handling `AnsiString` data at all, only `UnicodeString` data. To send an `AnsiString` as-is, you can either:
1) copy the `AnsiString` into a `TIdBytes` using `RawToBytes()` and then call `TIdIOHandler.Write(TIdBytes)`:
```
var
  s: AnsiString;  // "as" is a reserved word in Delphi, so use another name
begin
  s := ...;
  AContext.Connection.IOHandler.Write(RawToBytes(s[1], Length(s)));
end;
```
2) copy the `AnsiString` data into a `TStream` and then call `TIdIOHandler.Write(TStream)`:
```
var
  s: AnsiString;
  strm: TStream;
begin
  s := ...;
  strm := TMemoryStream.Create;
  try
    strm.WriteBuffer(s[1], Length(s));
    AContext.Connection.IOHandler.Write(strm);
  finally
    strm.Free;
  end;
end;
```
Or:
```
var
  s: AnsiString;
  strm: TStream;
begin
  s := ...;
  strm := TIdMemoryBufferStream.Create(s[1], Length(s));
  try
    AContext.Connection.IOHandler.Write(strm);
  finally
    strm.Free;
  end;
end;
```
|
How to protect power strips from accidental shut-offs/"cleaning lady" attack?
Are there any power strips or products designed to defend against a "cleaning lady" attack (cleaning crew is vacuuming the floor and bumps the power strip switch, turning off all the connected equipment)?
I have some curious felines in my house that always seem to work their way behind my desk and accidentally step on the power strip switch.
Is there some sort of switch cover I can get, or a power strip that is less susceptible to this kind of problem?
|
One cheap way is to get yourself a dozen or so 9-12" zip-ties.
- Use a staple-gun to place a 2-3 zip-ties on the underside/back of your desk. Place a couple staples spaced to the the width of your power strip.
- Look at your power strip to get the spacing right.
- Put the power-strip in place and then close the zip-tie.

A bit neater is to simply get a few cable mount rings. If get 5-6 and space these out about every 8-12" then you can collect a large number of cables to the under-side of your desk. This should make the cables/power strips immune from vacuums and so on.

|
Conditionally piping to Out-Null
I'm writing a PowerShell script to `msbuild` a bunch of solutions. I want to count how many solutions build successfully and how many fail. I also want to see the compiler errors, but only from the first one that fails (I'm assuming the others will usually have similar errors and I don't want to clutter my output).
My question is about how to run an external command (`msbuild` in this case), but conditionally pipe its output. If I'm running it and haven't gotten any failures yet, I don't want to pipe its output; I want it to output directly to the console, with no redirection, so it will color-code its output. (Like many programs, msbuild turns off color-coding if it sees that its stdout is redirected.) But if I have gotten failures before, I want to pipe to `Out-Null`.
Obviously I could do this:
```
if ($SolutionsWithErrors -eq 0) {
msbuild $Path /nologo /v:q /consoleloggerparameters:ErrorsOnly
} else {
msbuild $Path /nologo /v:q /consoleloggerparameters:ErrorsOnly | Out-Null
}
```
But it seems like there's got to be a way to do it without the duplication. (Okay, it doesn't have to be duplication -- I could leave off `/consoleloggerparameters` if I'm piping to null anyway -- but you get the idea.)
There may be other ways to solve this, but for today, I specifically want to know: is there a way to run a command, but only pipe its output if a certain condition is met (and otherwise not pipe it or redirect its output at all, so it can do fancy stuff like color-coded output)?
|
You can define the output command as a variable and use either `Out-Default` or `Out-Null`:
```
# set the output command depending on the condition
$output = if ($SolutionsWithErrors -eq 0) {'Out-Default'} else {'Out-Null'}
# invoke the command with the variable output
msbuild $Path /nologo /v:q /consoleloggerparameters:ErrorsOnly | & $output
```
---
UPDATE
The above code loses MSBuild colors. In order to preserve colors and yet avoid
duplication of code this approach can be used:
```
# define the command once as a script block
$command = {msbuild $Path /nologo /v:q /consoleloggerparameters:ErrorsOnly}
# invoke the command with output depending on the condition
if ($SolutionsWithErrors -eq 0) {& $command} else {& $command | Out-Null}
```
---
>
> is there a way to run a command, but only pipe its output if a certain condition is met (and otherwise not pipe it or redirect its output at all, so it can do fancy stuff like color-coded output)?
>
>
>
There is no such a way built-in, more likely. But it can be implemented with a function and the function is reused as such a way:
```
function Invoke-WithOutput($OutputCondition, $Command) {
if ($OutputCondition) { & $Command } else { $null = & $Command }
}
Invoke-WithOutput ($SolutionsWithErrors -eq 0) {
msbuild $Path /nologo /v:q /consoleloggerparameters:ErrorsOnly
}
```
|
Qt apps are very slow to load in Xubuntu 20.04 when export QT_QPA_PLATFORMTHEME=gtk2 is enabled
I have installed `qt5ct` to apply gtk2 theme on Qt apps (by default they follow the Fusion theme). However, Qt apps (I have tried [GNU Octave](https://www.gnu.org/software/octave/) and [Brightness Controller](https://github.com/lordamit/brightness)) are taking too much time to start up when the gtk2 theme is enabled.
For example, Brightness Controller is taking ~25 seconds to load in gtk2 theme in qt5ct, while it takes 1-2 seconds to load under Fusion or any other theme (I have measured this by looking at a watch after clicking the icon).
The same goes for GNU Octave.
How to fix this? Please let me know whether I need to post any logs.
`qt5ct` itself is also very slow to load.
I have used `qt5ct` before in previous versions of Xubuntu, and there it was much faster.
|
The startup of `qt` applications in **clean** installs of GNOME-based, **19.10+** versions of Ubuntu, its official flavors and derivatives takes much longer if one tries to style them to appear consistent with native GNOME applications.
See
- [Why does forcing Qt applications to use GTK theme makes those apps startup slowly?](https://askubuntu.com/q/1185372/248158)
- [QT Applications very slow to open on fresh install of 19.10 #712](https://github.com/pop-os/pop/issues/712) and
- "7. Fix Qt5 applications style under GNOME Shell on Ubuntu 20.04" in [Top Things To Do After Installing Ubuntu 20.04 Focal Fossa To Make The Most Of It](https://www.linuxuprising.com/2020/04/top-things-to-do-after-installing.html)
For whatever reason, users who upgrade from 19.04 don't see this problem.
Anyway, one satisfactory workaround is based on using [Kvantum](https://github.com/tsujan/Kvantum/tree/master/Kvantum). See, for example, [Use Custom Themes For Qt Applications (And Fix Qt5 Theme On GNOME) On Linux With Kvantum](https://www.linuxuprising.com/2018/05/use-custom-themes-for-qt-applications.html).
As described in the preceding link, since Kvantum isn't installed by default, it can be installed on 20.04 using
```
sudo apt install qt5-style-kvantum qt5-style-kvantum-themes
```
A ppa is available:
```
sudo add-apt-repository ppa:papirus/papirus
sudo apt update
sudo apt install qt5-style-kvantum qt5-style-kvantum-themes
```
After that, run
```
echo "export QT_STYLE_OVERRIDE=kvantum" >> ~/.profile
```
Log out and log back in. The link also has instructions for using Kvantum system-wide (with `export QT_STYLE_OVERRIDE=kvantum in /etc/environment`) and for uninstalling it.
|
If $\hat{e}$ are the OLS residuals, what is random in $\hat{\beta}_{OLS} \mid \hat{e} = e_0$?
Suppose $y \sim N(\mu, \Sigma)$, where $y \in \mathbb{R}^n$. Let $X \in \mathbb{R}^{n \times p}$ denote a full rank design matrix. By ordinary least squares, the residuals are $$\hat{e} = (I - X(X^TX)^{-1}X^T)y$$ Let $u_1 \in \mathbb{R}^p$ denote a unit vector in $\mathbb{R}^p$. Let $\hat{\beta}_1 = u_1^T (X^TX)^{-1}X^Ty$ denote the OLS estimate for the first covariate in $X$. Suppose I'm interested in the conditional distribution of $$\hat{\beta}_1 \mid \hat{e} = e_0$$
My question is: once I condition on a specific value of the residuals, $\hat{e} = e_0$, is there still randomness left in $\hat{\beta}_1$? I'm confused about this because $\hat{\beta}_1$ is a function of $y$, so any variability in $\hat{\beta}_1$ stems from the fact that $y$ is random. Since $\hat{e}$ is a function of $y$, does conditioning on a specific value of $\hat{e}$ imply that there's no more variability left in $\hat{\beta}_1$? In other words, is it possible for different $y$ vectors to give rise to the same residuals, $e_0$?
|
We know that the vector $\begin{bmatrix} (X^TX)^{-1} X^T \\ I - X (X^TX)^{-1}X^T \end{bmatrix}y$ has a multivariate normal distribution, since it's a linear transformation of a normal distribution. Now, in the case of a multivariate normal, we know that dependence is characterized simply by covariance, so let's investigate the covariance between the first and second block of this multivariable normal distribution.
We will use the result that $\mathrm{cov}(Ay, By) = A \mathrm{cov}(y) B^T$ for conformable matrices $A,B$ and assume that $\mathrm{cov}(y) = \sigma^2 I$. Then the covariance is $$(X^TX)^{-1} X^T \left( \sigma^2 I \right) \left( I - X (X^TX)^{-1} X^T \right) = 0.$$ This means that the blocks are uncorrelated and hence independent. Therefore conditioning on the second half of the block does not change the distribution of the first half of the block, i.e. $\hat\beta \stackrel{d}{=} \hat\beta \mid \hat{e}$. So the answer to the question of what is random: all of it.
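A quick numerical sanity check of that zero covariance (a simulation sketch, not part of the derivation; coefficient values are arbitrary):

```
import numpy as np

rng = np.random.default_rng(0)
n, p = 200, 3
X = rng.normal(size=(n, p))
XtX_inv = np.linalg.inv(X.T @ X)
H = X @ XtX_inv @ X.T  # hat matrix

betas, resids = [], []
for _ in range(5000):
    y = X @ np.array([1.0, -2.0, 0.5]) + rng.normal(size=n)
    betas.append((XtX_inv @ X.T @ y)[0])     # first OLS coefficient
    resids.append(((np.eye(n) - H) @ y)[0])  # first residual coordinate

print(np.corrcoef(betas, resids)[0, 1])  # ~0, consistent with independence
```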
|
What exactly happens when you use the 'copy /b' command?
Today, I just discovered I could merge certain files using the `copy /b` command. In particular, I noticed that when I merged two mp3 files, the VLC player exhibited funny behaviours with the timing:

Here, it's quite normal but the first music was getting to an end... then the funny part followed....


Here, the time seek was literally running as it was playing.
On combining pictures or PDF with this technique, I discovered that there will be a correct increase in file size but only the first picture will be displayed.
So my question is: What exactly does the `copy /b` command do? Is it really meant to be used for merging files or is this a hack?
|
The `/b` flag of the `copy` command treats the files as binary (i.e., a raw stream of meaningless bytes), and copies them byte for byte instead of the default (or the `/a`) behavior which treats them as lines of text (with end-of-line characters, end-of-files, etc.)
You can merge text files with either the default text behavior or the binary switch, but pretty much *any* binary file will not work. You cannot simply copy the bytes from two binary files and expect them to work, because binary files usually have [headers](http://en.wikipedia.org/wiki/Header_%28computing%29), [metadata](http://en.wikipedia.org/wiki/Metadata), [data structures](http://en.wikipedia.org/wiki/Data_structure), etc. that define the format of the file. If you do a binary copy, you will simply be copying all the bytes as-is, which ends up putting these structures in places that they should not be, so when you open them, the parsing function will have trouble and see what is essentially corrupt data. Some programs will ignore the parts that don't make sense and simply show what they can (which is what allows append-style steganography to work), but some will throw an error and complain that the file is corrupt. The ability to detect corruption depends on the file-type.
As an example, let’s invent a simplified PDF format:
```
Byte(s) Meaning
---------------------
File header:
0-1 # of Pages
2-3 Language
4-5 Font
6-EOF Data (each page encoded separately)
Page data:
0-1 Page number
2-3 # of characters on page
4-#chars Letters contained on the page
```
As you can see, each file will contain a file-level header with some general information, followed by data blocks for each page containing the page data. If you then take two files, each containing one page and merge them as binary files, you will not be creating one two-page file, but instead one corrupt file that starts out with one page, then has a bunch of junk (the file header makes no sense when the program tries to read page two).
The same thing happens for your MP3s. When you combined them like that, the [ID3 tags](http://en.wikipedia.org/wiki/Id3) at the start and/or end of the second file are retained, and when the player tries to read the next frame, it is expecting audio data but finds the header of the second file, which does not match the expected format for audio data, so it doesn't know what to do. Some players will play the header as audio data (which will likely play as static/noise/pops/etc.), some will cut the sound until the next correct frame, some may stop playing the song altogether, and some may even crash.
The `copy` command knows nothing about file-types other than plain-text (and even then, only ASCII text), so only plain-text can be combined correctly with it. Binary files must be combined using an editor that knows how to parse and interpret the contents correctly.
|
Is there any way to make ntpd to iteratively consult the server addresses given in the ntp.conf file?
Suppose the machine has an ntp.conf file that looks like this:
```
driftfile <path-to-drift-file>
server <NTP-server-1>
server <NTP-server-2>
server <NTP-server-3>
```
For some reason, let us say that none of the NTP servers respond at the first query. Can we make ntpd retry querying these sources (i.e. consult server-1 to server-3 again in a loop)? How do we do it?
Edit: Is there any way to quantitatively determine which server caused the actual time sync from the list of servers given in the ntp.conf in the machine?
|
All defined servers in `/etc/ntp.conf` are used to synchronize time. There's no need to have it "loop" through the servers as the algorithm already handles multiple sources.
>
> The ntpd program operates by exchanging messages with one or more configured servers at designated poll intervals.
>
>
>
From: [man ntpd](http://linux.die.net/man/8/ntpd)
You can see this by executing `ntpq -p` on the command-line to show your peers and their status.
You might see output like shown here:
```
remote refid st when poll reach delay offset disp
========================================================================
+128.4.2.6 132.249.16.1 2 131 256 373 9.89 16.28 23.25
*128.4.1.20 .WWVB. 1 137 256 377 280.62 21.74 20.23
-128.8.2.88 128.8.10.1 2 49 128 376 294.14 5.94 17.47
+128.4.2.17 .WWVB. 1 173 256 377 279.95 20.56 16.40
```
The output is explained in the man pages, too. But, over time I collected some notes:
>
> **remote**: peers specified in the ntp.conf file
>
> **\* = current time source**
>
> # = source selected, distance exceeds maximum value
>
> o = source selected, Pulse Per Second (PPS) used
>
> + = source selected, included in final set
>
> x = source false ticker
>
> . = source selected from end of candidate list
>
> - = source discarded by cluster algorithm
>
> blank = source discarded high stratum, failed sanity
>
>
> **refid**: remote source’s synchronization source
>
>
> **stratum**: stratum level of the source
>
>
> **t**: types available
>
> l = local (such as a GPS, WWVB)
>
> u = unicast (most common)
>
> m = multicast
>
> b = broadcast
>
> - = netaddr
>
>
> **when**: number of seconds passed since last response
>
>
> **poll**: polling interval, in seconds, for source
>
>
> **reach**: indicates success/failure to reach source, 377 all attempts successful
>
>
> **delay**: indicates the round trip time, in milliseconds, to receive a reply
>
>
> **offset**: indicates the time difference, in milliseconds, between the client server and source
>
>
> **disp/jitter**: indicates the difference, in milliseconds, between two samples
>
>
>
Finally, to answer the last question;
>
> Is there any way to quantitatively determine which server caused the
> actual time sync from the list of servers given in the ntp.conf in the
> machine?
>
>
>
The host indicated with the (\*) is your currently selected time source. This can change during polling.
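If you want that programmatically, a small sketch that pulls the currently selected peer out of the `ntpq` output:

```
$ ntpq -pn | awk '$1 ~ /^\*/ {sub(/^\*/, "", $1); print $1}'
128.4.1.20
```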
|
What do I need? a digital pen or a pen tablet or something else?
I use a lot of regular pen and paper. Most of my work includes:
1. I write a lot of lengthy articles in my native language (not English).
2. Solving maths problems => writing a lot of mathematical equations
3. Solving chemistry problems => chemistry especially organic chemistry
4. Design Diagrams (UML, Class, E-R etc) : simple stuff. No fancy art!
5. I read a lot of technical books. I use a regular notebook as well as a Word document for note taking. The Word document is for capturing screen shots and pictures/paragraphs from the e-book of the same. The regular notebook is for complex annotations like diagrams, etc. Now these two notes are not synchronized. I want to have one notebook. I prefer it to be in the computer because I can backup and share.
As a result of this heavy use of regular paper I have a pile of regular notebooks. I'm finding it difficult to search through them. They are very important to me. I'm worried that I may lose them (I have already lost a few).
I've looked for some solutions. I've seen videos of Pen Tablet on youtube. It seems to have too many functionalities that I don't need. I do not need different levels of pressure sensitivity. It seems that it was made for professional designers. I don't do any art stuff. I'm a bit tight on budget. So, trying to choose only that which serves my purpose.
**Queries regarding Pen Tablet:**
1. Can I use it like a regular pen? I mean, I've only seen people (on the web) using it as a marker (writing in huge sizes). As I said, I write a lot of lengthy articles in notebooks. Can it write at a normal size (like on paper; see picture below)? 
2. In my mind, the size of working area on pen tablet is the size of the page. Is that right? Do I need a large size tablet for writing lengthy pages? If not, then I can make it fit in to my budget by purchasing the smallest available tablet. *I don't mind scrolling*
3. What kind of disadvantages do I have if I purchase the smallest possible size (3" x 4") pen tablet?
The only problem with a pen tablet is that it *needs* the tablet. It doesn't work on plain paper. I cannot afford to lose the hard copies of the notes for an online-only version; it doesn't feel like real reading when reading online.
I felt a digital pen is closer to my needs after seeing [this](https://superuser.com/questions/85228/good-digital-pen-that-integrates-with-microsoft-onenote/85231#85231). There is no need for a tablet. It works on plain paper (**it does work on plain paper, right?**). With this I can use [carbon paper](http://en.wikipedia.org/wiki/Carbon_paper) underneath and make a soft copy & hard copy simultaneously.
I don't care if it works for single software (MS Word, MS OneNote or what ever). I need some form of online notes as well as offline notes.
What's the best suited gadget for these purposes?
PS: Kindly try to answer all the questions. That really helps me choosing the best option.
|
For archiving your existing notebooks I suggest either a digital camera or a scanner. The camera is a much faster way to archive dead-tree media as it captures each page at once. A basic camera setup uses a tripod or other mount, with your notebook affixed in place. If necessary, use two small weights to keep the pages you're currently on flat. Then just flip through and take photos.
I don't think you want a wacom (or other pen) tablet. The input would be awkward for your particular use and it's not particularly portable.
The [LiveScribe](http://www.livescribe.com/en-us/) pen would be a good way to go, however it requires special paper. But they do make [notebooks](http://rads.stackoverflow.com/amzn/click/B001AALJ2M) of the style it appears you use and at a similar cost to moleskines. The Livescribe uses the same technology (from [Anoto](http://en.wikipedia.org/wiki/Anoto)) as the Logitech io, Leapfrog and Adapx pens (as well as others), so it is a mature technology. As a bonus, the Livescribe also integrates seamlessly with [Evernote](http://www.evernote.com) which is an excellent note archiving platform which you could also upload your existing notebooks into.
|
How to Print Max Value of Character?
Although it is a very basic question, I find it a complex one. Actually I want to know **what is happening behind the scenes**. Why does Character.MAX\_VALUE not print the max value of char (which is 65535), while MAX\_VALUE-1 does?
```
System.out.println("Byte Max Value: "+Byte.MAX_VALUE);//print 127 Ok!
System.out.println("Character Max Value: "+Character.MAX_VALUE);//print ?(Question Mark)
System.out.println(Character.MAX_VALUE-1);//print 65534
```
|
Because in the second line, `Character.MAX_VALUE` is concatenated with the String.
As the [JLS](http://docs.oracle.com/javase/specs/jls/se7/html/jls-15.html#jls-15.18.1) states:
>
> The string concatenation operator + (§15.18.1), which, when given a
> String operand and an integral operand, will convert the integral
> operand to a String representing its value in decimal form, and then
> produce a newly created String that is the concatenation of the two
> strings
>
>
>
As the character `Character.MAX_VALUE` (`'\uffff'`) is not printable, you don't see it.
In the third case, you're doing a subtraction with an `int`, thus the whole expression is promoted to `int` and it prints an `int` value.
Also as the [JLS](http://docs.oracle.com/javase/specs/jls/se7/html/jls-15.html#jls-15.18) states:
>
> The binary + operator performs addition when applied to two operands
> of numeric type, producing the sum of the operands.
>
>
> [...]
>
>
> Binary numeric promotion is performed on the operands (§5.6.2).
>
>
> When an operator applies binary numeric promotion to a pair of operands, each of which must denote a value that is convertible to a numeric type, the following rules apply, in order:
>
>
> 1. [...]
> 2. Widening primitive conversion (§5.1.2) is applied to convert either or
> both operands as specified by the following rules:
>
>
> If either operand is of type double, the other is converted to double.
>
>
> Otherwise, if either operand is of type float, the other is converted
> to float.
>
>
> Otherwise, if either operand is of type long, the other is converted
> to long.
>
>
> **Otherwise, both operands are converted to type int**.
>
>
>
If you've done
```
System.out.println("Character Max Value: "+(Character.MAX_VALUE+0));
```
It would print `Character Max Value: 65535`
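Alternatively (a small illustration, not from the original answer), an explicit cast makes the numeric intent clearer:
```
System.out.println("Character Max Value: " + (int) Character.MAX_VALUE); // prints 65535
```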
|
Can't create an array from the output of git pull
```
use Cwd;
use strict;
use warnings;
no warnings 'numeric';
chdir("/root/dev/git_repo");
qx{git checkout master};
my $pull = qx{git pull};
if ($pull != "Already up-to-date."){
my @output = $pull;
chomp @output;
foreach my $line (@output)
{
print "<<$line>>\n";
}
}
```
`if ($pull == "Already up-to-date.")` everything is fine, but
`if ($pull != "Already up-to-date."){`
then I won't be able to enter the loop; I'm not able to convert the output to an array correctly. I tried other things like `split /^/m, $pull`.
I need to be able to parse it in order to get the files I need. I don't want to use any external libraries, because I have to use this script on old servers that are not connected to the internet.
|
This is a simplified version of your code:
```
use Cwd;
use strict;
use warnings;
chdir("/root/dev/git_repo");
qx{git checkout master};
my @lines = qx{git pull};
chomp @lines;
if ($lines[0] ne 'Already up-to-date.') {
foreach my $line (@lines) {
print "<<$line>>\n";
}
}
```
You can directly populate an array using `qx`. This eliminates the need for multiple variables.
---
On my system, `qx{git pull}` returns `Already up-to-date.` with a newline character (`\n`) appended. Yours might as well. If so, your comparison would fail.
`!=` is only for numeric comparisons, but you are trying to do a string comparison. You would have gotten a warning message, but you intentionally suppressed that warning using this line:
```
no warnings 'numeric';
```
The warning tells you that your comparison will not behave as expected. Since this is the major source of your problem, don't suppress that warning. Perl warnings almost always mean you have a bug in your code.
Another issue with your code is this line:
```
my @output = $pull;
```
It populates the array variable with a single item. It does not magically split the `git` output into several lines by `\n`.
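If you do want to keep the whole output in one scalar first, you can split it into lines yourself (a small illustrative snippet):
```
my @output = split /\n/, $pull;   # one element per line, the newline separators are removed
```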
|
Unit testing post controller .NET Web Api
I don't have much experience with .NET Web Api, but I've been working with it a while now, following John Papa's SPA application tutorial on Pluralsight. The application works fine, but the thing I'm struggling with now is unit testing POST controllers.
I have followed this incredible guide on how to unit test web api controllers. The only problem for me is when it comes to test the POST method.
My controller looks like this:
```
[ActionName("course")]
public HttpResponseMessage Post(Course course)
{
if (course == null)
throw new HttpResponseException(HttpStatusCode.NotAcceptable);
try
{
Uow.Courses.Add(course);
Uow.commit();
}
catch (Exception)
{
throw new HttpResponseException(HttpStatusCode.InternalServerError);
}
var response = Request.CreateResponse(HttpStatusCode.Created, course);
string uri = Url.Link(routeName: "ControllerActionAndId",
routeValues: new { id = course.Id });
response.Headers.Location = new Uri(uri);
return response;
}
```
And my unit test looks like this:
```
[Test]
public void PostShouldReturnHttpResponse()
{
var populatedPostController = new CoursesController(new TestUOW());
SetupPostControllerForTest(populatedPostController);
var course = new Course
{
Id = 12,
Author = new UserProfile()
{
Firstname = "John",
Lastname = "Johnson",
},
Description = "Testcourse",
Title = "Test Title"
};
var responses = populatedPostController.Post(course);
ObjectContent content = responses.Content as ObjectContent;
Course result = (Course)content.Value;
Assert.AreSame(result, course);
}
```
With the help function:
```
public static void SetupPostControllerForTest(ApiController controller)
{
var config = new HttpConfiguration();
var request = new HttpRequestMessage(HttpMethod.Post, "http://localhost/api/courses/course");
var route = config.Routes.MapHttpRoute(
name: "ControllerActionAndId",
routeTemplate: "api/{controller}/{action}/{id}",
defaults: null,
constraints: new { id = @"^\d+$" }
);
var routeData = new HttpRouteData(route, new HttpRouteValueDictionary { { "controller", "courses" }, { "action", "course" } });
controller.ControllerContext = new HttpControllerContext(config, routeData, request);
controller.Request = request;
controller.Request.Properties[HttpPropertyKeys.HttpConfigurationKey] = config;
}
```
When I debug the unit test, it seems to fail at:
```
string uri = Url.Link(routeName: "ControllerActionAndId",
routeValues: new { id = course.Id });
response.Headers.Location = new Uri(uri); //Exception because uri = null
```
It seems like the Url.Link can't find the route.
I tried [this](http://blogs.msdn.com/b/youssefm/archive/2013/01/28/writing-tests-for-an-asp-net-webapi-service.aspx) guide as well, but I really want the example I have above to work.
Am I missing something really basic here?
|
Yes, you are missing the one line in the configuration as Nemesv mentioned.
```
controller.Request.Properties[HttpPropertyKeys.HttpRouteDataKey] = routeData
```
As you can see, configuring a controller just for using the UrlHelper is extremely complex. I tend to avoid the use of UrlHelper in the controller classes for that reason. I usually introduce an external dependency to make testing easier, like an IUrlHelper, which allows me to mock the behavior in a unit test.
```
public interface IUrlHelper
{
string Link(string routeName, object routeValues);
string Route(string routeName, object routeValues);
}
public class UrlHelperWrapper : IUrlHelper
{
UrlHelper helper;
public UrlHelperWrapper(UrlHelper helper)
{
this.helper = helper;
}
public string Link(string routeName, object routeValues)
{
return this.helper.Link(routeName, routeValues);
}
public string Route(string routeName, object routeValues)
{
return this.helper.Route(routeName, routeValues);
}
}
```
I inject this UrlHelperWrapper into the real Web API, and a mock of the IUrlHelper interface in the tests. By doing that, you don't need all that complex configuration with the routes.
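For example, with a mocking library such as Moq this becomes trivial (a sketch; the two-argument controller constructor is an assumption, not part of the original code):
```
var urlHelper = new Mock<IUrlHelper>();
urlHelper
    .Setup(u => u.Link("ControllerActionAndId", It.IsAny<object>()))
    .Returns("http://localhost/api/courses/course/12");

// Hypothetical constructor overload that accepts the IUrlHelper dependency.
var controller = new CoursesController(new TestUOW(), urlHelper.Object);
```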
Regards,
Pablo.
|
Storing a message from a queue
I am doing a code review and I have encountered a class in an application that is throwing an exception in the constructor:
```
class QueueMessage
{
private:
std::string m_bucketName;
std::string m_objectName;
public:
QueueMessage(const std::string& messageIn)
{
std::stringstream ss;
ss << messageIn;
PTree pt;
json::read_json(ss, pt);
m_bucketName = pt.get<std::string>("bucket");
m_objectName = pt.get<std::string>("path");
if (m_bucketName.empty() || m_objectName.empty())
{
throw QueueMessageException("Empty fields in queue message");
}
}
std::string getBucketName() const { return m_bucketName; }
std::string getObjectName() const { return m_objectName; }
};
```
The class is used to store the message from the queue (that is a JSON-like text) in its members. Is it ok to throw an exception in the constructor if the members of the class are `string`, `int`, `vector`, `smart pointers`, etc.?
|
Throwing exceptions from a constructor is not only a perfectly legitimate pattern to use; it is also necessary to correctly implement [Resource Acquisition Is Initialization (RAII)](http://en.wikipedia.org/wiki/Resource_Acquisition_Is_Initialization) for some classes (this appears to be the case in your code). And trust me, you really do want RAII :)
However, please be aware of the caveat that the destructor of the object will not be executed if the constructor throws. But the individual members of the class will be destructed.
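A minimal sketch (not part of the reviewed code) illustrating that caveat: the fully constructed member is destroyed, while the owner's destructor never runs.
```
#include <iostream>
#include <stdexcept>

struct Member {
    ~Member() { std::cout << "Member destroyed\n"; } // runs even though Owner's ctor throws
};

struct Owner {
    Member m;
    Owner()  { throw std::runtime_error("ctor failed"); }
    ~Owner() { std::cout << "Owner destroyed\n"; }   // never runs
};

int main() {
    try {
        Owner o;
    } catch (const std::exception& e) {
        std::cout << e.what() << '\n';               // printed after "Member destroyed"
    }
}
```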
On a related note, it is also okay for the constructor to perform work if the work is necessary for RAII. Again under the caveat that you adhere to [Single Responsibility Principle (SRP)](http://en.wikipedia.org/wiki/Single_responsibility_principle) and [Dependency Injection (DI)](http://en.wikipedia.org/wiki/Dependency_Injection). See this question: <https://stackoverflow.com/questions/7048515/is-doing-a-lot-in-constructors-bad>
|
How to copy file contents to the local clipboard from a file in a remote machine over ssh
To solve this problem I always have to use `scp` or `rsync` to copy the file to my local computer, open the file, and simply copy the contents of the text file into my local clipboard. I was just wondering if there is a more clever way to do this without the need to copy the file.
|
Of course you have to read the file, but you could
```
</dev/null ssh USER@REMOTE "cat file" | xclip -i
```
though that still means opening an ssh connection and copying the contents of the file. But at least you don't see anything of it anymore ;)
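Note that `xclip -i` writes to the primary X selection by default; if you want the clipboard selection that `Ctrl+V` pastes from, say so explicitly:
```
</dev/null ssh USER@REMOTE "cat file" | xclip -i -selection clipboard
```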
And if you are connecting from an OS X computer you use `pbcopy` instead:
```
</dev/null ssh USER@REMOTE "cat file" | pbcopy
```
PS: Instead of `</dev/null` you can use `ssh -n`, but I don't like expressing things in terms of software options where I can use the system to get the same effect.
PPS: The `</dev/null` pattern for ssh is extremely useful for loops:
```
printf %s\\n '-l user host1' '-l user host2' | while read c
do </dev/null ssh $c "ip address; hostname; id"
done
```
|
Async await is not waiting for response inside useEffect
I am creating a simple clone of Hacker News. I want to use `useEffect` and inside it return the IDs, then use these IDs to get the first 50 posts.
The problem is with the second function inside `fetchHackerNewsPosts`: the code doesn't wait for it to add the data to `postsArray`, so it prints it empty.
```
import React from "react";
import logo from "./logo.svg";
import "./App.css";
function App() {
const [posts, setPosts] = React.useState([]);
React.useEffect(() => {
let postsArray = [];
async function fetchHackerNewsPosts() {
try {
// it waits this function
let postsIDsArray = await fetch(
"https://hacker-news.firebaseio.com/v0/topstories.json"
)
.then((res) => res.json())
.then((result) => {
return result;
});
// it doesn't wait this function
await postsIDsArray.slice(0, 50).map((postId) => {
fetch(`https://hacker-news.firebaseio.com/v0/item/${postId}.json`)
.then((res) => res.json())
.then((result) => {
postsArray.push(result);
});
});
} catch (error) {
console.log(error);
}
}
fetchHackerNewsPosts();
console.log(postsArray);
}, []);
return (
<div className="p-10">
<header className="">
<h1> Hacker news API </h1>
</header>
<div className="bg-blue-600">list</div>
</div>
);
}
export default App;
```
|
I think there are two problems here.
1 - the thing you're awaiting isn't a promise. You're doing `postsIDsArray.slice(0, 50).map(...)` - which just produces an array of `undefined` values (you aren't returning the fetch). You could return the fetch, and then you'd have an array of promises, which you could then await via Promise.all:
```
await Promise.all(postsIDsArray.slice(0, 50).map((postId) => fetch(`https://hacker-news.firebaseio.com/v0/item/${postId}.json`)
.then((res) => res.json())
.then((result) => {
postsArray.push(result);
})));
```
This will still log an empty array where you `console.log(postsArray);` though - because you're not awaiting fetchHackerNewsPosts.
It might be worth considering using all await/async style or all promise style, rather than mixing the two. ~~This does however usually mean switching to regular loops rather than map/forEach, so you can wait in between.~~ - EDIT - removed, cheers Lione
```
React.useEffect(() => {
  async function fetchHackerNewsPosts() {
    const posts = [];
    try {
      const postIDsResponse = await fetch(
        "https://hacker-news.firebaseio.com/v0/topstories.json"
      );
      const postIDs = await postIDsResponse.json();
      for (const postID of postIDs.slice(0, 50)) {
        const postResponse = await fetch(`https://hacker-news.firebaseio.com/v0/item/${postID}.json`);
        const post = await postResponse.json();
        posts.push(post);
      }
      return posts;
    } catch (error) {
      console.log(error);
    }
  }
  fetchHackerNewsPosts().then((posts) => setPosts(posts));
}, []);
```
As pointed out - Promise.all could actually be a lot quicker here as you don't need to wait for the previous request to finish before firing the next one.
```
React.useEffect(() => {
  async function fetchHackerNewsPosts() {
    try {
      const postIDsResponse = await fetch(
        "https://hacker-news.firebaseio.com/v0/topstories.json"
      );
      const postIDs = await postIDsResponse.json();
      const postRequests = postIDs.slice(0, 50).map(async (postID) => {
        const postResponse = await fetch(`https://hacker-news.firebaseio.com/v0/item/${postID}.json`);
        return await postResponse.json();
      });
      return await Promise.all(postRequests);
    } catch (error) {
      console.log(error);
    }
  }
  fetchHackerNewsPosts().then((posts) => setPosts(posts));
}, []);
```
|
To call a component inside Dialog component in angular material
I have 2 components called
1) `demo`
2) `add-customer`
In the `demo` component I have a **button** called Add. On clicking the button, a function (e.g. **openDialog()**) is called to open a **dialog** window (i.e. a pop-up window). Now I want to load the `add-customer` component inside this **dialog** window.
How can I do this? Here is the [stackblitz](https://stackblitz.com/edit/angular-material2-issue-sovrog?file=app/demo/demo.component.html) link.
|
In demo.component.ts you need to "insert" the component into the dialog:
```
import {AddCustomerComponent} from '../add-customer/add-customer.component'
openDialog(): void {
const dialogRef = this.dialog.open(AddCustomerComponent, {
width: '450px',
  });
}
```
app.module.ts, add the component loaded in the dialog to the entryComponents
```
declarations: [
AppComponent,
DemoComponent,
AddCustomerComponent,
],
entryComponents: [
AddCustomerComponent
],
```
EDIT: to close on cancel you must add a click function to the cancel button on the add-customer.component.html
```
<button mat-raised-button type="button" class="Discard-btn" (click)="cancel()">Cancel</button>
```
Then on the .ts file add the function and also inject the dialogRef on the constructor
```
import {MatDialogRef} from '@angular/material';
constructor(private fb: FormBuilder,
private dialogRef: MatDialogRef<AddCustomerComponent>) {}
public cancel() {
this.dialogRef.close();
}
```
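Back in demo.component.ts, if you also need to react when the dialog closes, you can subscribe to the dialog reference's `afterClosed()` observable (illustrative):
```
dialogRef.afterClosed().subscribe(result => {
  console.log('The dialog was closed', result);
});
```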
|
How do I calculate lambda to use scipy.special.boxcox1p function for my entire dataframe of 500 columns?
I have a dataframe with the total sales of around 500 product categories in each row. So there are 500 columns in my dataframe. I am trying to find the category most highly correlated with the columns of another dataframe.
So I will use Pearson correlation method for this.
But the Total sales for all the categories are highly skewed data, with the skewness level ranging from 10 to 40 for all the category columns. So I want to log transform this sales data using boxcox transformation.
Since, my sales data has 0 values as well, I want to use boxcox1p function.
Can somebody help me: how do I calculate lambda for the boxcox1p function, since it is a mandatory parameter?
Also, is this the correct approach for my problem statement of finding highly correlated categories?
|
Assume `df` is your dataframe with many columns containing numeric values, and that the lambda parameter of the Box-Cox transformation equals 0.25; then:
```
from scipy.special import boxcox1p
df_boxcox = df.apply(lambda x: boxcox1p(x,0.25))
```
Now transformed values are in `df_boxcox`.
Unfortunately there is no built-in method to find lambda of `boxcox1p` but we can use `PowerTransformer` from `sklearn.preprocessing` instead:
```
import pandas as pd
from sklearn.preprocessing import PowerTransformer
pt = PowerTransformer(method='yeo-johnson')
```
Note method 'yeo-johnson' is used because it works with both positive and negative values. Method 'box-cox' will raise error: `ValueError: The Box-Cox transformation can only be applied to strictly positive data`.
```
data = pd.DataFrame({'x':[-2,-1,0,1,2,3,4,5]}) #just sample data to explain
pt.fit(data)
print(pt.lambdas_)
[0.89691707]
```
then apply calculated lambda:
```
print(pt.transform(data))
```
result:
```
[[-1.60758267]
[-1.09524803]
[-0.60974999]
[-0.16141745]
[ 0.26331586]
[ 0.67341476]
[ 1.07296428]
[ 1.46430326]]
```
|
Spring Data Elasticsearch id vs. \_id
I am using Spring Data Elasticsearch 2.0.1 with Elastic version 2.2.0.
My DAO is similar to:
```
import org.springframework.data.annotation.Id;
import org.springframework.data.elasticsearch.annotations.Document;
@Document(indexName = "myIndex")
public class MyDao {
@Id
private String id;
public String getId() { return id; }
public void setId(String id) { this.id = id; }
<other fields, setters, getters omitted>
}
```
When saving the object to ES using a repository, the `_id` metadata field gets populated correctly. The getter and setter methods for the `id` field correctly return the value of the `_id` metadata field. But the id field within the `_source` field is null.
2 questions:
1) Why is the id field null?
2) Does it matter that the id field is null?
|
Since you're letting ES generate its own IDs (i.e. you're never calling `MyDao.setId("abcdxyz")`), the `_source` cannot have a value in the `id` field.
What is happening is that if you generate your own IDs and call `setId("yourid")`, then Spring Data ES will use it as the value for the `_id` of your document and also persist that value into the `_source.id` field. Which means that `_source.id` will not be null.
If you don't call `setId()`, then `_source.id` will be null and ES will generate its own ID. When you then call `getId()`, Spring Data ES will make sure to return you the value of the `_id` field and not `_source.id` since it's annotated with `@Id`
To answer your second question, it doesn't matter that the `_source.id` field is null... as long as you don't need to reference it. Spring Data ES will always populate it when mapping the JSON documents to your Java entities, even if the underlying `id` field in ES is null.
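So if you do need `_source.id` to be populated, assign the ID yourself before saving (a small sketch; the repository name is illustrative):
```
MyDao dao = new MyDao();
dao.setId("my-custom-id"); // becomes both the _id metadata field and _source.id
myDaoRepository.save(dao);
```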
|
Fedora vs. Ubuntu installation
I am using Fedora 13, and it was my friend who did the installation for me. I have used Ubuntu as well, and I found it easier to install than Fedora. Ubuntu offers the Wubi installer (if I am correct), which makes it easy for users to install and remove Ubuntu: a person who knows how to install and remove an application in Windows can install/remove Ubuntu too.
Why is it that it's not the same with Fedora? Are there any steps being taken to make it more user-friendly?
|
Most of the significant distributions (including Fedora and Ubuntu) prefer to install from a boot CD-ROM or USB stick these days. Windows need not be part of the process at all.
[Wubi](http://wubi-installer.org/) is a Windows application that can run Linux from a Windows file pretending to be a boot disk. Its purpose is to have zero impact on the Windows system:
>
> You keep Windows as it is, Wubi only
> adds an extra option to boot into
> Ubuntu. Wubi does not require you to
> modify the partitions of your PC, or
> to use a different bootloader, and
> does not install special drivers. It
> works just like any other application.
>
>
>
The [Fedora LiveCD](http://fedoraproject.org/get-fedora) and [Ubuntu LiveCD](http://www.ubuntu.com/desktop/get-ubuntu/download) and even the tiny [DSL LiveCD](http://damnsmalllinux.org/) are the simplest installation methods.
|
When am I allowed cross-thread access and when am I not?
It seems that C# throws exceptions for cross-threading operations sometimes. For example, you can try running this sample code in a Windows Forms program:
```
public Form1()
{
InitializeComponent();
Thread setTextT = new Thread(TextSetter);
setTextT.Start();
}
private void TextSetter()
{
//Thread.Sleep(4000);
textBox1.Text = "Hello World";
}
```
It doesn't throw any exceptions, everything seems to be working fine. Now, when you uncomment the Thread.Sleep line, it'll throw an exception for attempted cross-threading access.
Why is this the case?
|
It is because the cross-thread check only happens after [the window's handle](http://msdn.microsoft.com/en-us/library/system.windows.forms.control.handle%28v=vs.110%29.aspx) has been created, as the thing being checked is cross-thread access of the window handle. When you don't have the sleep in there, the code runs quickly enough to finish before the control is displayed for the first time (the handle gets created the first time the control is displayed).
The easiest way to know if you need to be careful about cross-thread access of a UI control is to just check [`InvokeRequired`](http://msdn.microsoft.com/en-us/library/system.windows.forms.control.invokerequired%28v=vs.110%29.aspx), and if it is true you need to call [`Invoke`](http://msdn.microsoft.com/en-us/library/zyzhdc6b%28v=vs.110%29.aspx) (or `BeginInvoke` if you don't want to wait for it to complete) to get on the thread that created the control.
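For illustration (a minimal sketch, not taken from the question), the usual pattern looks like this:
```
private void SetText(string text)
{
    if (textBox1.InvokeRequired)
    {
        // We're on a worker thread: marshal the assignment onto the UI thread
        // that owns the control's handle.
        textBox1.Invoke(new Action(() => textBox1.Text = text));
    }
    else
    {
        textBox1.Text = text;
    }
}
```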
|
Disabling RightBarButtonItems
My viewController is functioning as a container and has its own UINavigationBar. It is not in a navigation controller. My nav bar items are set up like so...
```
self.navigationItem.leftBarButtonItems = leftItems;
self.navigationItem.rightBarButtonItems = @[logout, settings];
[self.navBar setItems:@[self.navigationItem]];
```
At various points in the application this navigation bar is locked down until the user completes a task. This snippet works fine for toggling the enabled property of the buttons in the navigation bar but only on the leftBarButtonItems! Why?
```
for(UIBarButtonItem *rightButton in self.navigationItem.rightBarButtonItems){
[rightButton setEnabled:!rightButton.enabled];
}
for(UIBarButtonItem *leftButton in self.navigationItem.leftBarButtonItems){
[leftButton setEnabled:!leftButton.enabled];
}
```
|
**UPDATE:**
I created a test demo and it works well. Here are the screenshots and code; I hope they give you some help!


ViewController.h
```
#import <UIKit/UIKit.h>
@interface ViewController : UIViewController
@property(nonatomic,strong) UINavigationItem * navItem;
@property(nonatomic,assign) IBOutlet UINavigationBar * navBar;
@end
```
ViewController.m
```
#import "ViewController.h"
@implementation ViewController
- (void)viewDidLoad
{
[super viewDidLoad];
UIBarButtonItem* barItem1 = [[UIBarButtonItem alloc] initWithBarButtonSystemItem:UIBarButtonSystemItemAdd target:self action:@selector(barItemClicked:)];
UIBarButtonItem* barItem2 = [[UIBarButtonItem alloc] initWithBarButtonSystemItem:UIBarButtonSystemItemAdd target:self action:@selector(barItemClicked:)];
UIBarButtonItem* barItem3 = [[UIBarButtonItem alloc] initWithBarButtonSystemItem:UIBarButtonSystemItemAdd target:self action:@selector(barItemClicked:)];
UIBarButtonItem* barItem4 = [[UIBarButtonItem alloc] initWithBarButtonSystemItem:UIBarButtonSystemItemAdd target:self action:@selector(barItemClicked:)];
self.navItem = [[UINavigationItem alloc] init];
self.navItem.leftBarButtonItems = @[barItem1,barItem2];
self.navItem.rightBarButtonItems = @[barItem3,barItem4];
[self.navBar setItems:@[self.navItem]];
}
- (IBAction)enableSwitch:(id)sender{
    UISegmentedControl * segmentedControl = (UISegmentedControl *)sender;
    for(UIBarButtonItem *leftButton in self.navItem.leftBarButtonItems){
        [leftButton setEnabled:(segmentedControl.selectedSegmentIndex == 0)];
    }
    for(UIBarButtonItem *rightButton in self.navItem.rightBarButtonItems){
        [rightButton setEnabled:(segmentedControl.selectedSegmentIndex == 0)];
    }
}
- (void)barItemClicked:(id)sender{
NSLog(@"barItemClicked");
}
```
|
Why is X-Forwarded-Proto always set to HTTP on Elastic Beanstalk?
## Setup
Hi. I'm deploying an ASP.Net Core application to AWS Elastic Beanstalk. The platform I'm running on is 64bit Amazon Linux 2/2.1.5 using Nginx as the proxy server software. I've got a pair of listeners for my load balancer set up in the environment configuration. They are set up as follows:
- `Port=443 Protocol=HTTPS SSL=certificate Process=default`
- `Port=80 Protocol=HTTP Process=default`
And I've got a single process:
`Name=default Port=80 Protocol=HTTPS`
## Problem
On my ASP.Net Core server, I'm trying to check if the original client to the server is communicating over HTTPS or HTTP. As I understand, the `X-Forwarded-Proto` header for requests should carry this information. However, the value of `X-Forwarded-Proto` is always `http` regardless of how a client connects to the server. **Why is the `X-Forwarded-Proto` not ever set to `https` even when connected as so from my web browser?**
Thanks in advance for any help!
|
The problem was in the Nginx configuration as pointed out by @MarkB. AWS Elastic Beanstalk has a default configuration file `00_application.conf` in `/etc/nginx/conf.d/elasticbeanstalk` that is the culprit. It has a declaration:
```
proxy_set_header X-Forwarded-Proto $scheme;
```
that needed to be changed to:
```
proxy_set_header X-Forwarded-Proto $http_x_forwarded_proto;
```
---
To overwrite this file, I used the method detailed here: <https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/platforms-linux-extend.html>.
I added a file `.platform/nginx/conf.d/elasticbeanstalk` to the root of my deployed project. It contains:
```
location / {
proxy_pass http://127.0.0.1:5000;
proxy_http_version 1.1;
proxy_cache_bypass $http_upgrade;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection $http_connection;
proxy_set_header Host $host;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $http_x_forwarded_proto;
}
```
---
I also had to add a middleware to my ASP.Net Core application to use the forwarded headers as noted in this answer: [Redirect URI sent as HTTP and not HTTPS in app running HTTPS](https://stackoverflow.com/questions/50468033/redirect-uri-sent-as-http-and-not-https-in-app-running-https/50505373#50505373).
I added the following to my `Startup.cs`:
```
public void ConfigureServices(IServiceCollection services)
{
//...
services.Configure<ForwardedHeadersOptions>(options =>
{
options.ForwardedHeaders =
ForwardedHeaders.XForwardedFor |
ForwardedHeaders.XForwardedProto;
options.KnownNetworks.Clear();
options.KnownProxies.Clear();
});
//...
}
public void Configure(IApplicationBuilder app, IWebHostEnvironment env)
{
//...
app.UseForwardedHeaders();
//...
}
```
I hope this helps others!
|
Adding a Custom Object to an ArrayAdapter. How to grab Data?
This is what I have so far:
The Custom Object:
```
class ItemObject {
List<String> name;
List<String> total;
List<String> rating;
public ItemObject(List<ItemObject> io) {
this.total = total;
this.name = name;
this.rating = rating;
}
}
```
The Call to the Adapter:
```
List<String> names, ratings, totals;
ItemObject[] io= new ItemObject[3];
io[0] = new ItemObject(names);
io[1] = new ItemObject(rating);
io[2] = new ItemObject(totals);
adapter = new ItemAdapter(Items.this, io);
setListAdapter(adapter);
```
Assuming the above looks OK, my question is how I would set up the ItemAdapter and its constructor, and unwrap the three Lists from the object. And then, in the getView, assign these things:
each matching position to:
```
TextView t1 = (TextView) rowView.findViewById(R.id.itemName);
TextView t2 = (TextView) rowView.findViewById(R.id.itemTotal);
RatingBar r1 = (RatingBar) rowView.findViewById(R.id.ratingBarSmall);
```
For example, position 0 in the array "names" to t1.
Position 0 in the array "totals" to t2.
Position 0 in the array "ratings" to r1.
EDIT: I don't want someone to write the entire Adapter. I just need to know how to unwrap the Lists from the Custom Object so I can use the data. (*Something not even brought up or asked about in another question*)
|
Your code will not work in its current form. Do you really need lists of data in the `ItemObject`? My guess is no, and you simply want an `ItemObject` that holds 3 Strings corresponding to the 3 views from your row layout. If this is the case:
```
class ItemObject {
String name;
String total;
String rating;// are you sure this isn't a float
public ItemObject(String total, String name, String rating) {
this.total = total;
this.name = name;
this.rating = rating;
}
}
```
Then your lists will be merged into a list of `ItemObject`:
```
List<String> names, ratings, totals;
ItemObject[] io= new ItemObject[3];
// use a for loop
io[0] = new ItemObject(totals.get(0), names.get(0), ratings.get(0));
io[1] = new ItemObject(totals.get(1), names.get(1), ratings.get(1));
io[2] = new ItemObject(totals.get(2), names.get(2), ratings.get(2));
adapter = new ItemAdapter(Items.this, io);
setListAdapter(adapter);
```
And the adapter class:
```
public class ItemAdapter extends ArrayAdapter<ItemObject> {
public ItemAdapter(Context context,
ItemObject[] objects) {
super(context, 0, objects);
}
@Override
public View getView(int position, View convertView, ViewGroup parent) {
// do the normal stuff
ItemObject obj = getItem(position);
// set the text obtained from obj
String name = obj.name; //etc
// ...
}
}
```
|
How to use decimal type in MongoDB
How can I store decimals in MongoDB using the standard C# driver? It seems that all decimals are stored inside the database as strings.
|
MongoDB doesn't properly support decimals until MongoDB v3.4. Before this version it stored decimals as strings to avoid precision errors.
**Pre v3.4**
Store decimals as strings, but this prevents arithmetic operations. Operators as `$min`, `$avg`, ... won't be available. If precision is not a big deal, then you might be able to switch to `double`.
**v3.4+**
You need to make sure the following preconditions are true:
- MongoDB server should be at least v3.4.
- MongoCSharpDriver should be at least v2.4.3.
- Database should have `featureCompatibilityVersion` set to `'3.4'`. If your database has been created by an older MongoDB version and you have upgraded your server to v3.4 your database might still be on an older version.
If you have all the properties set, then register the following serializers to use the `decimal128` type:
```
BsonSerializer.RegisterSerializer(typeof(decimal), new DecimalSerializer(BsonType.Decimal128));
BsonSerializer.RegisterSerializer(typeof(decimal?), new NullableSerializer<decimal>(new DecimalSerializer(BsonType.Decimal128)));
```
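Once those serializers are registered, an ordinary POCO with a `decimal` property is persisted as a BSON `NumberDecimal` (an illustrative sketch; `ObjectId` comes from the `MongoDB.Bson` namespace):
```
public class Product
{
    public ObjectId Id { get; set; }
    public decimal Price { get; set; } // serialized as NumberDecimal("...") instead of a string
}
```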
|
Combining Expressions in an Expression Tree
How can I build an expression tree when parts of the expression are passed as arguments?
E.g. what if I wanted to create expression trees like these:
```
IQueryable<LxUser> test1(IQueryable<LxUser> query, string foo, string bar)
{
query=query.Where(x => x.Foo.StartsWith(foo));
return query.Where(x => x.Bar.StartsWith(bar));
}
```
but by creating them indirectly:
```
IQueryable<LxUser> test2(IQueryable<LxUser> query, string foo, string bar)
{
query=testAdd(query, x => x.Foo, foo);
return testAdd(query, x => x.Bar, bar);
}
IQueryable<T> testAdd<T>(IQueryable<T> query,
Expression<Func<T, string>> select, string find)
{
// how can I combine the select expression with StartsWith?
return query.Where(x => select(x) .. y => y.StartsWith(find));
}
```
**Result:**
While the samples didn't make much sense (sorry but I was trying to keep it simple), here's the result (thanks Quartermeister).
It can be used with Linq-to-Sql to search for a string that starts-with or is equal to the findText.
```
public static IQueryable<T> WhereLikeOrExact<T>(IQueryable<T> query,
Expression<Func<T, string>> selectField, string findText)
{
Expression<Func<string, bool>> find;
if (string.IsNullOrEmpty(findText) || findText=="*") return query;
if (findText.EndsWith("*"))
find=x => x.StartsWith(findText.Substring(0, findText.Length-1));
else
find=x => x==findText;
var p=Expression.Parameter(typeof(T), null);
var xpr=Expression.Invoke(find, Expression.Invoke(selectField, p));
return query.Where(Expression.Lambda<Func<T, bool>>(xpr, p));
}
```
e.g.
```
var query=context.User;
query=WhereLikeOrExact(query, x => x.FirstName, find.FirstName);
query=WhereLikeOrExact(query, x => x.LastName, find.LastName);
```
|
You can use [Expression.Invoke](http://msdn.microsoft.com/en-us/library/bb355170.aspx) to create an expression that represents applying one expression to another, and [Expression.Lambda](http://msdn.microsoft.com/en-us/library/bb336566.aspx) to create a new lambda expression for the combined expression. Something like this:
```
IQueryable<T> testAdd<T>(IQueryable<T> query,
Expression<Func<T, string>> select, string find)
{
Expression<Func<string, bool>> startsWith = y => y.StartsWith(find);
var parameter = Expression.Parameter(typeof(T), null);
return query.Where(
Expression.Lambda<Func<T, bool>>(
Expression.Invoke(
startsWith,
Expression.Invoke(select, parameter)),
parameter));
}
```
The inner Expression.Invoke represents the expression `select(x)` and the outer one represents calling `y => y.StartsWith(find)` on the value returned by `select(x)`.
You could also use [Expression.Call](http://msdn.microsoft.com/en-us/library/bb336138.aspx) to represent the call to StartsWith without using a second lambda:
```
IQueryable<T> testAdd<T>(IQueryable<T> query,
Expression<Func<T, string>> select, string find)
{
var parameter = Expression.Parameter(typeof(T), null);
return query.Where(
Expression.Lambda<Func<T, bool>>(
Expression.Call(
Expression.Invoke(select, parameter),
"StartsWith",
null,
Expression.Constant(find)),
parameter));
}
```
|
Redirect stdout of two processes to another process's stdin in Linux C
I'm running into problem about redirect stdout of multi process.
Assume I have process A. I use fork() in A and then I get processes A and B. Then I use fork() in B, and finally I get processes A, B and C. Both B and C run other programs via exec().
Now, I try to redirect the stdout of A and B to stdin of C with two pipes.
```
#include<unistd.h>
#include<stdio.h>
#include<sys/types.h>
int main()
{
int AtoC [2];
pipe(AtoC);
int fd1,fd2;
fd1=fork();
if(fd1>0)
{
/***In process A, I do the following steps: ***/
close(AtoC[0]);
dup2(AtoC[1], STDOUT_FILENO);
/* program running in process A */
}
else
{
int BtoC [2];
pipe(BtoC);
fd2=fork();
if(fd2>0)
{
/***In process B, I do the following steps: ***/
close(AtoC[1]);
close(BtoC[0]);
dup2(BtoC[1], STDOUT_FILENO);
/*** execute another program in process B using execl(); ***/
}
else
{
/*** In process C, I do the following steps: ***/
close(AtoC[1]);
close(BtoC[1]);
dup2(AtoC[0],STDIN_FILENO);
dup2(BtoC[0],STDIN_FILENO);
/*** execute another different program in process C using execl(); ***/
}
}
}
```
Now, after these two statements:
```
dup2(AtoC[0],STDIN_FILENO);
dup2(BtoC[0],STDIN_FILENO);
```
the stdin of process C is finally redirected to `BtoC[0]`, which is connected to the stdout of process B. The stdout of process A is not passed into process C's stdin.
My question is whether there is any solution that lets me redirect both process A's and B's stdout into process C's stdin at the same time.
Another question: if I also want to print the stdout of process A on screen, what should I do? I know the `tee` command on the command line. I tried to use the corresponding function `tee(int fd_in, int fd_out, size_t len, unsigned int flags)` in process A, but I failed to print anything of the stdout of process A.
Any suggestion is appreciated, thanks.
|
I'm not sure that it's a good idea, but it is perfectly feasible. The key observation is that you only need *one* pipe. Several processes (with a common parent) can write to a single pipe. However, only one file descriptor can be used as the standard input. In the original code, only one of the pipes was connected to C as its standard input (the others were still connected, but mainly because you hadn't closed enough descriptors).
- Rule of thumb: if you connect one end of a pipe to standard input or standard output via `dup2()` (or `dup()`), you should close both of the file descriptors returned by `pipe()`.
Try this code for size. I've reduced the bushiness of the tree, removed unused variables (nothing uses the process IDs returned by `fork()`), renamed the pipe, made sure it is closed properly in each process, and provided some writing activity in processes A and B and some reading activity in process C in lieu of running commands. I'm assuming that `usleep()` (micro-sleep, sleep time expressed in microseconds) is available; if not, try `nanosleep()` (but it has a more complex interface).
```
#include <unistd.h>
int main(void)
{
int ABtoC[2];
pipe(ABtoC);
if (fork() > 0)
{
// Process A
close(ABtoC[0]);
dup2(ABtoC[1], STDOUT_FILENO);
close(ABtoC[1]); // Close this too!
// Process A writing to C
for (int i = 0; i < 100; i++)
{
write(STDOUT_FILENO, "Hi\n", sizeof("Hi\n")-1);
usleep(5000);
}
}
else if (fork() > 0)
{
// Process B
close(ABtoC[0]);
dup2(ABtoC[1], STDOUT_FILENO);
close(ABtoC[1]);
// Process B writing to C
for (int i = 0; i < 100; i++)
{
write(STDOUT_FILENO, "Lo\n", sizeof("Lo\n")-1);
usleep(5000);
}
}
else
{
char buffer[100];
ssize_t nbytes;
close(ABtoC[1]);
dup2(ABtoC[0], STDIN_FILENO);
close(ABtoC[0]);
// Process C reading from both A and B
while ((nbytes = read(STDIN_FILENO, buffer, sizeof(buffer))) > 0)
write(STDOUT_FILENO, buffer, nbytes);
}
return(0);
}
```
The tail end of a run on my Mac (Mac OS X 10.7.5, GCC 4.7.1) produced:
```
Lo
Hi
Lo
Hi
Lo
Hi
Lo
Hi
Lo
Hi
Hi
Lo
Lo
Hi
Hi
Lo
Hi
Lo
Hi
Lo
Hi
Lo
Hi
Lo
Lo
Hi
```
|
Best way to handle Java Calendar with working days?
I need to implement a labor calendar able to count **working** days and, of course, natural days. The calendar must be able to handle national holidays and these days must be submitted by the user.
So, if I need to calculate the difference between two days the counting must ignore Saturdays, Sundays, and Holidays.
The Java class [Calendar](https://docs.oracle.com/javase/7/docs/api/java/util/Calendar.html), doesn't handle holidays or working days, so I need to make it by myself. I have think two possible ways:
# First way:
I could implement a new `Day` class which would have a boolean `isHoliday` to check if that is a working day or not, then create a new class with all the methods I'd need to handle/count the days.
### Pros:
- **Easy to handle**
- **I can override/create methods like toString, toDate, etc...**
### Cons:
- **Heavy** (Maybe?)
My doubt about this approach is how to store it. It'd mean making 365 objects and storing them in a `List` or `LinkedList`, and that's a lot of data to handle.
# Second way:
My second idea is to make it more simple. Create an array of `Strings` or [Dates](https://docs.oracle.com/javase/7/docs/api/java/util/Date.html) where I'd store the holidays.
Example: `new ArrayList<String> freeDays = ["01/01/2019", "05/01/2019", "06/01/2019"...]`, and then work with it using a new CalendarUtils class or something like that.
### Pros:
- **More readable**
- **Light**
## Cons:
- **Hard to work with**
For me the first option looks better, however, I don't want to waste memory or use bad practices.
Which option looks better? Are there any third option?
|
# Avoid legacy date-time classes
**Never use `Date` or `Calendar` classes.** Those terribly troublesome old classes are now legacy, supplanted by the *java.time* classes, specifically `Instant` and `ZonedDateTime`. You may find `LocalDate` helpful too.
# Smart objects, not dumb strings
Never use strings to represent date-time within your Java code. Use objects, the *java.time* classes.
When exchanging date-time values as text, always use the standard ISO 8601 formats. The *java.time* classes use these formats by default when parsing/generating strings. For a date that would be YYYY-MM-DD such as `2018-01-23`.
# `TemporalAdjuster` interface
To skip weekends, use the [`TemporalAdjuster`](https://docs.oracle.com/javase/10/docs/api/java/time/temporal/TemporalAdjuster.html) implementation found in the [*ThreeTen-Extra*](http://www.threeten.org/threeten-extra/) project.
- [`nextWorkingDay`](http://www.threeten.org/threeten-extra/apidocs/org/threeten/extra/Temporals.html#nextWorkingDay--)
- [`previousWorkingDay`](http://www.threeten.org/threeten-extra/apidocs/org/threeten/extra/Temporals.html#previousWorkingDay--)
Example:
```
LocalDate // Represent a date-only value, without a time-of-day and without a time zone.
.now( // Capture the current date.
ZoneId.of( "Africa/Tunis" ) // Time zone required. For any given moment the date varies around the globe by zone.
)
.with( // Invoke a `TemporalAdjuster` implementation.
org.threeten.extra.Temporals.nextWorkingDay()
) // Returns a `LocalDate`. Using immutable objects pattern, producing a fresh object based on the values of another while leaving the original unaltered.
```
To skip holidays, you must write your own code. No two people, companies, or countries share the same definition of holidays.
You’ll need to define your own list of holidays. I suggest writing that as an implementation of [`TemporalAdjuster`](https://docs.oracle.com/javase/10/docs/api/java/time/temporal/TemporalAdjuster.html) for working neatly with the *java.time* classes. Perhaps `nextBusinessDay` and `previousBusinessDay`. That *ThreeTen-Extra* project mentioned above is open-source, so look to there for code to guide you. And I vaguely recall posting one or more implementations of `TemporalAdjuster` myself here on Stack Overflow.
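As a starting point, here is a minimal sketch of such a `nextBusinessDay` adjuster (not from the original answer; the holiday dates are placeholders):
```
Set<LocalDate> holidays = new HashSet<>(Arrays.asList(
        LocalDate.of(2019, 1, 1),      // placeholder holidays
        LocalDate.of(2019, 12, 25)));

TemporalAdjuster nextBusinessDay = temporal -> {
    LocalDate date = LocalDate.from(temporal).plusDays(1);
    while (date.getDayOfWeek() == DayOfWeek.SATURDAY
            || date.getDayOfWeek() == DayOfWeek.SUNDAY
            || holidays.contains(date)) {
        date = date.plusDays(1);
    }
    return temporal.with(date);  // LocalDate itself is a TemporalAdjuster
};

LocalDate next = LocalDate.of(2018, 12, 31).with(nextBusinessDay); // 2019-01-02, skipping the Jan 1 holiday
```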
You might store those holiday dates in a database for persistence. And represent them at runtime in chronological order as a `List< LocalDate >`, sorted with `Collections.sort` and searching with [`Collections.binarySearch`](https://docs.oracle.com/javase/10/docs/api/java/util/Collections.html#binarySearch(java.util.List,T)). But beware of [thread-safety](https://en.m.wikipedia.org/wiki/Thread_safety). You’ll likely need to update that list during runtime. Writing while reading must be protected. Search for more info. And read the excellent book, [*Java Concurrency in Practice* by Brian Goetz et al.](https://www.goodreads.com/book/show/127932)
You can combine your holiday-skipping code with weekend-skipping code. Use a search engine to find my Answers on weekend-skipping using `EnumSet` and `DayOfWeek` enum. (The search feature built into Stack Overflow unfortunately skews towards Questions while ignoring Answers.)
Search Stack Overflow. All of this has been asked and answered before.
|
Example of how to use String.Create in .NET Core 2.1
Does anyone know how this method is intended to be used? The documentation is somewhat 'light'!
```
public static string Create<TState> (int length, TState state, System.Buffers.SpanAction<char,TState> action);
```
<https://learn.microsoft.com/en-us/dotnet/api/system.string.create?view=netcore-2.2>
|
The [`String.Create()` method](https://learn.microsoft.com/en-us/dotnet/api/system.string.create?view=netstandard-2.1) needs three things:
1. The final `length` of the string. You must know this in advance, because the method needs it to *safely* create an internal fixed-length buffer for the `Span<char>` instance used to construct the final string.
2. The data (`state`) which will become your string. For example, you might have an array buffer (of, say, ascii integers received over the network), but it could be *anything*. This is the raw data that will be transformed into the final string. There is an example buried deep in [this MSDN article](https://msdn.microsoft.com/en-us/magazine/mt814808.aspx?f=255&MSPPError=-2147217396) that even uses a `Random` instance. I've also seen an incomplete example used to create a base-64 encoded hash value (fixed length) of bitmap images (variable sized `state` input), but sadly I can't find it again.
3. The `action` lambda function that transforms `state` into the characters for the final string. The `Create()` method will call this function, passing the internal `Span<char>` it created for the string and your `state` data as the arguments.
For a very simple example, we can `Create()` a string from an array of characters like this:
```
char[] buffer = {'f', 'o', 'o'};
string result = string.Create(buffer.Length, buffer, (chars, buf) => {
for (int i=0;i<chars.Length;i++) chars[i] = buf[i];
});
```
Of course, the basic `string(char[])` constructor would also work here, but that shows what a correct function might look like. Or we can map an array of ascii `int` values to a new string like this:
```
int[] buffer = {102, 111, 111};
string result = string.Create(buffer.Length, buffer, (chars, buf) => {
for (int i=0;i<chars.Length;i++) chars[i] = (char)buf[i];
});
```
The function exists because there are some significant potential performance wins for this technique over traditional methods. For example, rather than reading a Stream into a buffer, you could pass the Stream object directly to `String.Create()` (assuming you know the final length). This avoids needing to allocate a separate buffer and avoids one round of copying values (stream=>buffer=>string becomes just stream=>string).
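For instance, here is a sketch under the assumption that the stream is seekable (so its length is known up front) and contains single-byte characters:
```
static string FromAsciiStream(Stream stream)
{
    return string.Create((int)stream.Length, stream, (chars, s) =>
    {
        // Write each byte from the stream straight into the new string's buffer.
        for (int i = 0; i < chars.Length; i++)
            chars[i] = (char)s.ReadByte();
    });
}
```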
---
What happens when you call `string.Create()` is the function allocates a new string that already has the size determined by your `length` argument. This is one (and only one) heap allocation. Because `Create()` is a member of the string type, it has access to private string data for this new object you and I normally can't see. It now uses this access to create an internal `Span<char>` instance pointed at the new string's internal character data.
This `Span<char>` lives on the stack, but acts on the heap memory from the new string... there is no additional allocation, and it's completely out of scope as soon as the `Create()` function returns, so everything is legal and safe. And because it's basically a pointer-with-benefits, there's virtually no risk of overflowing the stack unless you've done something else horribly wrong.
Now `Create()` calls your `action` function to do the heavy lifting of populating the string. Your `action` lambda can write into the `Span<char>`... for the duration of your lamdba's execution, strings are less-immutable than you may have heard!
When the `action` lamdba is finished, `Create()` can return the new, ready-to-use, string reference. Everything is good: we minimized heap allocations, preserved type safety and memory safety; the `Span<char>` is no longer accessible anywhere, and as a stack value is already destroyed. We also minimized unnecessary copying between buffers, depending on your `action` implementation.
|
Checking Exception Type Performance
I have a method that checks the exception passed in and returns a bool value.
Currently my implementation is like this
```
private bool ExceptionShouldNotify(Exception e)
{
return
e is FirstCustomException ||
e is SecondCustomException ||
e is ThirdCustomException ||
e is FourthCustomException ||
e is FifthCustomException ||
e is SixthCustomException ||
e is SeventhCustomException;
}
```
However, is it better performance-wise to use a dictionary lookup rather than several `OR` statements and the `is` check?
Something like this:
```
private bool ExceptionShouldNotify(Exception e)
{
var dict = new Dictionary<String, int> {
{ "FirstCustomException", 1 },
{ "SecondCustomException", 1 },
{ "ThirdCustomException", 1 },
{ "FourthCustomException", 1 },
{ "FifthCustomException", 1 },
{ "SixthCustomException", 1 },
{ "SeventhCustomException", 1 }
};
return dict.ContainsKey(e.GetType().Name);
}
```
|
*Hardcoding* (1st solution) is a bad practice, that's why I vote for the dictionary (2nd solution), but I suggest different implementation of the idea:
```
// HashSet - you don't use Value in the Dictionary, but Key
// Type - we compare types, not their names
private static HashSet<Type> s_ExceptionsToNotify = new HashSet<Type>() {
typeof(FirstCustomException),
typeof(SecondCustomException),
...
};
// static: you don't use "this" in the method
private static bool ExceptionShouldNotify(Exception e) {
return s_ExceptionsToNotify.Contains(e.GetType());
}
```
Having caught an exception (which includes *stack tracing*), you've already incurred a big *overhead*; that's why performance (7 simple comparisons versus computing a hash) is not the main issue in this context.
|
Swift get vs \_read
What's the difference between the following 2 subscripts?
```
subscript(position: Int) {
get { ... }
}
```
```
subscript(position: Int) {
_read { ... }
}
```
|
`_read` is part of the Swift ownership story that has been in development for a while now. Since `read` (the likely name once it goes through Swift Evolution) is a fairly advanced concept of the language, you will probably want to read at least where it is described in the Ownership Manifesto [here](https://github.com/apple/swift/blob/master/docs/OwnershipManifesto.md#generalized-accessors) to get a fuller answer than I'll provide here.
It is an alternative to `get` on subscripts that allows you to `yield` a value instead of `return` a value. This is essential for move-only types because they cannot be copied (that is their entire purpose), which is what happens when you `return` a value. By using `read` you could, for example, have an `Array` of move-only types and still use the values in it without taking ownership of them by moving them. The easiest (though not technically correct, since it is a coroutine) way to think about it conceptually is that you get a pointer to the object that `read` `yields`.
The sibling of `read` is `modify` which is currently in the pitch phase of Swift Evolution [here](https://forums.swift.org/t/modify-accessors/31872) so that can also give you some helpful insight into what `read` is since it is a coroutine as well.
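For illustration only (the underscored syntax is unofficial and may change), a `_read` accessor yields its value instead of returning it:
```
struct Buffer {
    private var storage = [1, 2, 3]

    subscript(position: Int) -> Int {
        _read {
            // Yields access to the element for the duration of the access,
            // rather than returning a copy.
            yield storage[position]
        }
    }
}
```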
So for now if Xcode gives you a `_read` to implement simply change it to `get` since it is a bug since it isn't an official part of the language yet.
|
distinct() function (not select qualifier) in postgres
I just came across a SQL query, specifically against a Postgres database, that uses a function named "distinct". Namely:
```
select distinct(pattern) as pattern, style, ... etc ...
from styleview
where ... etc ...
```
Note this is NOT the ordinary DISTINCT qualifier on a SELECT -- at least it's not the normal syntax for the DISTINCT qualifier, note the parentheses. It is apparently using DISTINCT as a function, or maybe this is some special syntax.
Any idea what this means?
I tried playing with it a little and if I write
```
select distinct(foo)
from bar
```
I get the same results as
```
select distinct foo
from bar
```
When I combine it with other fields in the same select, it's not clear to me exactly what it's doing.
I can't find anything in the Postgres documentation.
Thanks for any help!
|
*(The question is old, but comes high in Google results for “sql distinct is not a function” (second, first of Stack Overflow) and yet is still missing a satisfying answer, so...)*
Actually this *is* the ordinary DISTINCT qualifier on a SELECT -- but with a misleading syntax (you are right about that point).
DISTINCT is never a function, always a keyword. Here it is used (wrongly) as if it were a function, but
```
select distinct(pattern) as pattern, style, ... etc ...
from styleview
where ... etc ...
```
is in fact equivalent to all the following forms:
*-- add a space after `distinct`:*
```
select distinct (pattern) as pattern, style, ... etc ...
from styleview
where ... etc ...
```
*-- remove parentheses around column name:*
```
select distinct pattern as pattern, style, ... etc ...
from styleview
where ... etc ...
```
*-- indent clauses contents:*
```
select distinct
pattern as pattern, style, ... etc ...
from
styleview
where
... etc ...
```
*-- remove redundant alias identical to column name:*
```
select distinct
pattern, style, ... etc ...
from
styleview
where
... etc ...
```
Complementary reading:
- <http://weblogs.sqlteam.com/jeffs/archive/2007/10/12/sql-distinct-group-by.aspx>
- <https://stackoverflow.com/a/1164529>
---
Note: OMG Ponies in [an answer to the present question](https://stackoverflow.com/questions/3408037/distinct-function-not-select-qualifier-in-postgres#3408113) mentioned the `DISTINCT ON` extension featured by PostgreSQL.
But (as Jay rightly remarked in a comment) it is not what is used here, because the query (and the results) would have been different, e.g.:
```
select distinct on(pattern) pattern, style, ... etc ...
from styleview
where ... etc ...
order by pattern, ... etc ...
```
equivalent to:
```
select distinct on (pattern)
pattern, style, ... etc ...
from
styleview
where
... etc ...
order by
pattern, ... etc ...
```
Complementary reading:
- <http://www.noelherrick.com/blog/postgres-distinct-on>
---
Note: Lukas Eder in [an answer to the present question](https://stackoverflow.com/questions/3408037/distinct-function-not-select-qualifier-in-postgres#20630778) mentioned the syntax of using the DISTINCT keyword inside an aggregate function:
the `COUNT(DISTINCT (foo, bar, ...))` syntax featured by HSQLDB
(or `COUNT(DISTINCT foo, bar, ...)`, which works not only in MySQL but also in PostgreSQL, SQL Server, Oracle, and possibly others).
But (clearly enough) it is not what is used here.
|
C++ lambda operator ==
How do I compare two lambda functions in C++ (Visual Studio 2010)?
```
std::function<void ()> lambda1 = []() {};
std::function<void ()> lambda2 = []() {};
bool eq1 = (lambda1 == lambda1);
bool eq2 = (lambda1 != lambda2);
```
I get a compilation error claiming that operator == is inaccessible.
EDIT: I'm trying to compare the function instances. So lambda1 == lambda1 should return true, while lambda1 == lambda2 should return false.
|
You can't compare `std::function` objects because [`std::function` is not equality comparable](https://stackoverflow.com/questions/3629835/why-is-stdfunction-not-equality-comparable). The closure type of the lambda is also not equality comparable.
However, if your lambda does not capture anything, the lambda itself can be converted to a function pointer, and function pointers are equality comparable (however, to the best of my knowledge it's entirely unspecified whether in this example `are_1and2_equal` is `true` or `false`):
```
void(*lambda1)() = []() { };
void(*lambda2)() = []() { };
bool are_1and1_equal = (lambda1 == lambda1); // will be true
bool are_1and2_equal = (lambda1 == lambda2); // may be true?
```
[Visual C++ 2010 does not support this conversion](https://connect.microsoft.com/VisualStudio/feedback/details/572138/visual-c-2010-does-not-permit-conversion-of-captureless-lambda-to-function-pointer). The conversion wasn't added to C++0x until just before Visual C++ was released.
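If your `std::function` objects are known to wrap plain function pointers, one narrow workaround is `std::function::target` (a sketch only: it returns `nullptr` for capturing lambdas and other callable types, and its availability in Visual C++ 2010 should be verified):

```
#include <functional>
#include <iostream>

void f() {}

int main() {
    std::function<void()> a = f;
    std::function<void()> b = f;

    // target<T>() yields a pointer to the stored callable
    // if its type is exactly T, otherwise nullptr
    auto pa = a.target<void(*)()>();
    auto pb = b.target<void(*)()>();

    bool same = pa && pb && *pa == *pb; // true: both wrap &f
    std::cout << std::boolalpha << same << '\n';
}
```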
|
{% extends "base.html" %} and {% block content %} inside if statement (Django app)
In a Django template, I want to show different things to authenticated and unauthenticated users.
Specifically, something like the following:
```
{% if not user.is_authenticated %}
<div class="margin">
{{ group.topic }}
</div>
{% else %}
{% extends "base.html" %}
{% load humanize %}
{% block content %}
<div class="margin">
{{ group.topic }}
<br>
<b>members:</b>
{% for member in members %}
<a href="{% url 'profile' slug=member.username %}">{{ member.username }}</a>,
{% endfor %}
<hr size=1>
<form action="{% url 'private_group_reply' slug=unique %}" method="POST" enctype="multipart/form-data">
{% csrf_token %}
<input type="hidden" id="id_link" name="unique" class="hidden_id" value="{{ unique }}">
<br>{{ form.image }}<br>
<br>{{ form.text }}<br>
<input class="button" type="submit" value="OK" id="id_submit">
</form>
{% for reply in replies %}
{{ reply.writer }}: {{ reply.text }},{{ reply.submitted_on|naturaltime }}<br>
{% endfor %}
{% endblock %}
{% endif %}
```
In other words, I want to extend base.html only for authenticated users; for non-authenticated ones, I want a plain template.
I keep getting a **TemplateSyntaxError: invalid block tag {% endif %}**
How can I accomplish this?
*The app in question is a legacy Django app, using Django 1.5*.
|
`{% extends var %}` must always be the first tag in a template.
You can use an empty template and extend that for your purpose.
Try inverting the logic and pulling out the block to the top level:
**empty.html**
```
{% block content %}
{% endblock %}
```
**your file.html**
```
{% extends user.is_authenticated|yesno:"base.html,empty.html" %}
{% load humanize %}
{% block content %}
{% if not user.is_authenticated %}
<div class="margin">
{{ group.topic }}
</div>
{% else %}
<div class="margin">
{{ group.topic }}
<br>
<b>members:</b>
{% for member in members %}
<a href="{% url 'profile' slug=member.username %}">{{ member.username }}</a>,
{% endfor %}
<hr size=1>
<form action="{% url 'private_group_reply' slug=unique %}" method="POST" enctype="multipart/form-data">
{% csrf_token %}
<input type="hidden" id="id_link" name="unique" class="hidden_id" value="{{ unique }}">
<br>{{ form.image }}<br>
<br>{{ form.text }}<br>
<input class="button" type="submit" value="OK" id="id_submit">
</form>
{% for reply in replies %}
{{ reply.writer }}: {{ reply.text }},{{ reply.submitted_on|naturaltime }}<br>
{% endfor %}
{% endif %}
{% endblock %}
```
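An alternative that avoids the `yesno` trick is to pick the parent template in the view and pass it through the context (a sketch: the view and template names are hypothetical, and note that in Django 1.5 `is_authenticated` is a method when called from Python code):

```
# views.py -- hypothetical view for the group page
from django.shortcuts import render

def group_detail(request, slug):
    base = "base.html" if request.user.is_authenticated() else "empty.html"
    return render(request, "group_detail.html", {"base_template": base})
```

The template then simply starts with `{% extends base_template %}`.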
|
Class problems with Haskell
This time, I have these definitions:
```
data Color = Red | Green | Blue
deriving (Show, Eq)
data Suit = Club | Spade | Diamond | Heart
deriving (Show, Eq)
class Eq a => Eq (Cycle a) where
step :: a -> a
stepMany :: Integer -> a -> a
stepMany 0 x = x
stepMany steps x = stepMany (steps - 1) (step x)
instance Eq Color => Cycle Color where
step color
| color == Red = Green
| color == Green = Blue
| color == Blue = Red
instance Eq Suit => Cycle Suit where
step suit
| suit == Club = Spade
| suit == Spade = Diamond
| suit == Diamond = Heart
| suit == Heart = Club
```
My problem is that the line
```
class Eq a => Eq (Cycle a) where
```
produces the error
```
Unexpected type `Cycle a'
In the class declaration for `Eq'
A class declaration should have form
class Eq a where ...
|
7 | class Eq a => Eq (Cycle a) where
|
```
Q: What am I doing wrong here?
|
You don't need the `Eq` constraint on `Cycle`, nor on `Color` and `Suit`. You can just write the module like this:
```
data Color = Red | Green | Blue
deriving (Show, Eq)
data Suit = Club | Spade | Diamond | Heart
deriving (Show, Eq)
class Cycle a where
step :: a -> a
stepMany :: Integer -> a -> a
stepMany 0 x = x
stepMany steps x = stepMany (steps - 1) (step x)
instance Cycle Color where
step color
| color == Red = Green
| color == Green = Blue
| color == Blue = Red
instance Cycle Suit where
step suit
| suit == Club = Spade
| suit == Spade = Diamond
| suit == Diamond = Heart
| suit == Heart = Club
```
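A quick GHCi session (with the corrected module loaded) shows the default `stepMany` implementation kicking in for both instances:

```
ghci> step Red
Green
ghci> stepMany 3 Red    -- a full cycle through the three colors
Red
ghci> stepMany 4 Club   -- a full cycle through the four suits
Club
```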
|
JavaFX line chart change color shape
I have this window:
```
@Override
public void start(Stage primaryStage){
NumberAxis xAxis = new NumberAxis(1960, 2020, 10);
xAxis.setLabel("Years");
NumberAxis yAxis = new NumberAxis(0, 350, 50);
yAxis.setLabel("No.of schools");
LineChart<Number, Number> lineChart = new LineChart<>(xAxis, yAxis);
lineChart.setTitle("Chart");
XYChart.Series<Number, Number> series = new XYChart.Series<>();
series.setName("No of schools in an year");
Platform.runLater(() ->
series.getNode().lookup(".chart-series-line").setStyle("-fx-stroke: black;")
);
series.getData().add(new XYChart.Data<>(1970, 15));
series.getData().add(new XYChart.Data<>(1980, 30));
series.getData().add(new XYChart.Data<>(1990, 60));
series.getData().add(new XYChart.Data<>(2000, 120));
series.getData().add(new XYChart.Data<>(2013, 240));
series.getData().add(new XYChart.Data<>(2014, 300));
lineChart.getData().add(series);
var scene = new Scene(lineChart);
primaryStage.setScene(scene);
primaryStage.show();
}
```
This is how it looks:

I want to change the color of the series, but all I have managed is changing the color of the line; the shape keeps its default color.
How can I change the color of the shape?
|
I would suggest using `CSS`. The key here is looking up all the nodes associated with a series and changing the color of the nodes. The first series is `.series0`.
>
> Key Code:
>
>
>
```
Set<Node> nodes = lineChart.lookupAll(".series" + 0);
for (Node n : nodes) {
n.setStyle("-fx-background-color: black, white;\n"
+ " -fx-background-insets: 0, 2;\n"
+ " -fx-background-radius: 5px;\n"
+ " -fx-padding: 5px;");
}
```
>
> Full Code
>
>
>
```
import java.util.Set;
import javafx.application.Application;
import javafx.application.Platform;
import javafx.scene.Node;
import javafx.scene.Scene;
import javafx.scene.chart.LineChart;
import javafx.scene.chart.NumberAxis;
import javafx.scene.chart.XYChart;
import javafx.stage.Stage;
public class ScatterChartSample extends Application {
@Override
public void start(Stage stage) {
NumberAxis xAxis = new NumberAxis(1960, 2020, 10);
xAxis.setLabel("Years");
NumberAxis yAxis = new NumberAxis(0, 350, 50);
yAxis.setLabel("No.of schools");
LineChart<Number, Number> lineChart = new LineChart<>(xAxis, yAxis);
lineChart.setTitle("Chart");
XYChart.Series<Number, Number> series = new XYChart.Series<>();
series.setName("No of schools in an year");
Platform.runLater(() -> {
Set<Node> nodes = lineChart.lookupAll(".series" + 0);
for (Node n : nodes) {
n.setStyle("-fx-background-color: black, white;\n"
+ " -fx-background-insets: 0, 2;\n"
+ " -fx-background-radius: 5px;\n"
+ " -fx-padding: 5px;");
}
series.getNode().lookup(".chart-series-line").setStyle("-fx-stroke: black;");
});
series.getData().add(new XYChart.Data<>(1970, 15));
series.getData().add(new XYChart.Data<>(1980, 30));
series.getData().add(new XYChart.Data<>(1990, 60));
series.getData().add(new XYChart.Data<>(2000, 120));
series.getData().add(new XYChart.Data<>(2013, 240));
series.getData().add(new XYChart.Data<>(2014, 300));
lineChart.getData().add(series);
var scene = new Scene(lineChart);
stage.setScene(scene);
stage.show();
}
public static void main(String[] args) {
launch(args);
}
}
```
>
> Update: Try this!
>
>
>
```
for (XYChart.Data<Number, Number> entry : series.getData()) {
entry.getNode().setStyle("-fx-background-color: black, white;\n"
+ " -fx-background-insets: 0, 2;\n"
+ " -fx-background-radius: 5px;\n"
+ " -fx-padding: 5px;");
}
```
[](https://i.stack.imgur.com/TxiUM.png)
|
How to listen for url change with Chrome Extension
I am writing a Google Chrome extension to automate some common tasks. The functionality I want is as follows:
1. Create a new tab and navigate to my webmail
2. enter username and password
3. click "submit" button
4. Wait until the webmail page appears, and choose the "roundcube" client.
I have completed steps 1, 2, and 3 and they work. I am having a lot of trouble trying to listen for the URL change after my credentials are submitted, so that the function that selects the roundcube client can run.
I know I can run a script when the client selection page appears by declaring a content script in my manifest, but I want to use `chrome.tabs.executeScript` instead, so that roundcube is chosen only if I run the script from the chrome extension and not if I go to the client selection page manually.
Here is my manifest.json:
```
{
"manifest_version": 2,
"name" : "Chrome Autobot",
"description": "This extension will run various automation scripts for google chrome",
"version" : "1.0",
"browser_action" : {
"default_icon" : "icon.png",
"default_popup": "index.html"
},
"permissions": [
"activeTab",
"webNavigation",
"tabs",
"http://*/*",
"https://*/*"
]
}
```
Here is my chrome script:
```
jQuery(function($) {
"Use Strict";
var openWebmail = function() {
chrome.tabs.create({
url: 'http://mywebmaillogin.com:2095/'
}, function() {
chrome.tabs.executeScript(null, {file: "scripts/openEmail.js"});
});
chrome.tabs.onUpdated.addListener(function(){
chrome.tabs.executeScript(null, {file: "scripts/openEmail.js"});
alert('i work');
});
};
var init = $('.script-init');
init.on('click', function() {
openWebmail();
});
});
```
and here is the content script to be executed as a callback of tab creation (when the email login page is fetched and the DOM has loaded), and also when the email credentials are submitted and the client selection page's DOM has loaded (which is not working right now)
```
var openEmail = function() {
var loc = window.location.href;
if(loc === 'http://mywebmaillogin.com:2095/') {
var submit = document.getElementById('login_submit');
user.value = 'myusername';
pass.value = 'mypassword';
if(user.value === 'myusername' && pass.value === 'mypassword') {
submit.click();
}
else {
openEmail();
}
}
if(loc.indexOf('http://mywebmaillogin:2095/') > -1 && loc.indexOf('login=1') > -1) {
alert('I work');
}
}()
```
any help would be appreciated... thanks!
|
As mentioned by [@NycCompSci](https://stackoverflow.com/users/485953), you cannot call the chrome api from content scripts. I was able to pass api data to content scripts with message passing though, so thought I'd share that here. First call `onUpdated` in background.js:
### Manifest
```
{
"name": "My test extension",
"version": "1",
"manifest_version": 2,
"background": {
"scripts":["background.js"]
},
"content_scripts": [
{
"matches": ["http://*/*", "https://*/*"],
"js": ["contentscript.js"]
}
],
"permissions": [
"tabs"
]
}
```
### background.js
```
chrome.tabs.onUpdated.addListener(function (tabId, changeInfo, tab) {
// read changeInfo data and do something with it (like read the url)
if (changeInfo.url) {
// do something here
}
}
);
```
Then you can expand that script to send data (including the new url [and other chrome.tabs.onUpdated info](https://developer.chrome.com/extensions/tabs#event-onUpdated)) from background.js to your content script like this:
### background.js
```
chrome.tabs.onUpdated.addListener(
function(tabId, changeInfo, tab) {
// read changeInfo data and do something with it
// like send the new url to contentscripts.js
if (changeInfo.url) {
chrome.tabs.sendMessage( tabId, {
message: 'hello!',
url: changeInfo.url
})
}
}
);
```
Now you just need to listen for that data in your content script:
### contentscript.js
```
chrome.runtime.onMessage.addListener(
function(request, sender, sendResponse) {
// listen for messages sent from background.js
if (request.message === 'hello!') {
console.log(request.url) // new url is now in content scripts!
}
});
```
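Applied back to the original question, the background listener can gate on the post-login URL before messaging the content script (a sketch: the URL test mirrors the `login=1` check from the question, and the message name is made up):

```
// background.js -- only notify once the webmail has redirected after login
chrome.tabs.onUpdated.addListener(function (tabId, changeInfo, tab) {
    if (changeInfo.url && changeInfo.url.indexOf('login=1') > -1) {
        chrome.tabs.sendMessage(tabId, { message: 'chooseRoundcube' });
    }
});
```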
|