Visual Studio variable bin directory
I have a few VS 2010 C# projects that are shared between several solutions. I would like these projects to build to the directory of the solutions they are open in. How do I do this?
I considered setting up different build configurations (Debug\_Xsln, debug\_Ysln, Release\_Xsln...) but wasn't sure if there was a better way.
|
<http://msdn.microsoft.com/en-us/library/42x5kfw4(v=VS.100).aspx>
You can use a postbuild event with xcopy and the macro `$(SolutionName)` or `$(SolutionDir)` to copy the compiled files into the correct folder.
Go into properties for the project, build events tab, and in Post Build event command line enter something like this:
```
xcopy "$(ProjectDir)bin\$(ConfigurationName)\*.*" "$(SolutionDir)$(ProjectName)\bin\$(ConfigurationName)" /i /d /y
```
The benefit of this method is that you can copy the build output of one project to multiple locations.
OR
*(as Ziplin discovered)*
If you only have one location you want the build output to go, you can use the macros above to set the output path, like this:
```
$(SolutionDir)$(ProjectName)\bin\$(ConfigurationName)
```
Just go to the project properties, Build tab, and set your macro-based location as the output path.
|
How to debug my Eunit test suite run when using rebar3?
I have created a `release` app with rebar3 (beta-4), added some eunit tests, and wrote some code.
Now I have to debug one test case to see what I have to add to make the implementation work properly.
I found some articles about using `dbg` from the Erlang console and I found how to write debug info from Eunit. But I need to get info from the code under test (the actual implementation/logic).
Is there a way to debug Erlang code (actual source code, not the test one) when `rebar3` is used with `eunit` argument?
I'm using tracing in terminal like there: <https://aloiroberto.wordpress.com/2009/02/23/tracing-erlang-functions/>
|
One way to do this is to use `rebar3` to run a shell under the test profile, then start the debugger and set up your breakpoints and such:
```
$ rebar3 as test shell
...
1> debugger:start().
{ok, <0.67.0>}
```
This will pop up the [debugger GUI](http://www.erlang.org/doc/apps/debugger/debugger_chapter.html). Once the debugger is set up and ready, run your test under `eunit`:
```
2> eunit:test(your_test_module,[verbose]).
======================== EUnit ========================
your_test_module: xyz_test_ (module 'your_test_module')...
```
Assuming you set up a suitable breakpoint in the debugger, this will hit it, but you'll likely run into a problem with this approach: by default, `eunit` tests time out after 5 seconds, which doesn't give you much time for debugging. You need to specify a [longer timeout](http://erlang.org/doc/apps/eunit/chapter.html#Control) for your test, which is why the example above shows that what's running is a [test fixture](http://erlang.org/doc/apps/eunit/chapter.html#Fixtures) named `xyz_test_`, which wraps the actual test with a long timeout. Such a fixture is pretty simple:
```
-include_lib("eunit/include/eunit.hrl").
xyz_test_() ->
{timeout,3600,
[fun() -> ?assertMatch(expected_value, my_module:my_fun()), ok end]}.
```
Here, the actual test is the anonymous function, which matches the return value from `my_module:my_fun/0`, which for this example represents the business logic under test. This example fixture sets the test timeout to one hour; you can of course set it as needed for your application.
|
Converting numeric string to numeric in Ruby
I want a method like to\_numeric(str) which converts a numeric string 'str' into its numeric form, else returns nil. By numeric form I mean: if the string holds an integer the method should return an Integer, and if it holds a float it should return a Float.
I have tried the following code. It works fine, but I need a better solution if possible.
```
def to_numeric(str)
Integer(str)
rescue
Float(str) if Float(str) rescue nil
end
```
One important thing I forgot to mention is "I don't know the type of my input".
My use case:
```
arr = [1, 1.5, 2, 2.5, 4]
some_input = get_input_from_some_source
if arr.include?(to_numeric(some_input))
# do something
end
```
|
You can use [`BigDecimal#frac`](http://www.ruby-doc.org/stdlib-1.9.3/libdoc/bigdecimal/rdoc/BigDecimal.html#method-i-frac) to achieve what you want
```
require 'bigdecimal'
def to_numeric(anything)
num = BigDecimal.new(anything.to_s)
if num.frac == 0
num.to_i
else
num.to_f
end
end
```
It can handle
```
#floats
to_numeric(2.3) #=> 2.3
#scientific notation
to_numeric(0.2E-4) #=> 2.0e-05
#integers
to_numeric(1) #=> 1
#big decimals
to_numeric(BigDecimal.new("2"))
```
And floats, rationals and integers in form of strings, too
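For example (a quick check with string input, using the method above):
```
to_numeric("2") #=> 2
to_numeric("2.5") #=> 2.5
```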
|
Allow bluetooth devices to wake PC
I have a bluetooth mouse and keyboard attached to my Windows 10 desktop. However, neither of these devices is able to wake the computer from sleep, and no guide online appears accurate.
Does anyone know if and how this can be configured?
|
**Enable device Wake-on-Bluetooth**
- Connect the Bluetooth device
- Run Device Manager
- Double-click Bluetooth
- Double-click the specific device (*not the Bluetooth adapter!*)
- Click the "Power Management" tab
- Click to check "Allow this device to wake the computer"
- Click OK
- Reboot.
If the device doesn't have a "Power Management" tab, perhaps the following can still help.
**Disable hibernate**
In general, the Bluetooth device is disconnected when the system enters sleep or hibernate mode, so it cannot wake up the computer. However, if Modern Standby is supported, Bluetooth devices stay connected after the system enters sleep mode; if it enters hibernate mode, the connection is still dropped.
Here is how to disable hibernate so the system stays in Modern Standby:
- In the Start menu, run "Choose a power plan"
- Click "Change plan settings" for your power plan
- Click "Change advanced power settings"
- Expand "Hibernate after", located under "Sleep"
- Set "On battery" and "Plugged in" as "Never"
- Click OK
- Reboot.
If this still doesn't work, the chances are slim, but continue on.
**Last-ditch effort**
- Run Command Prompt (cmd) as Administrator
- Enter the command: `powercfg /devicequery wake_programmable`
- If you see the Bluetooth device in the list, run
```
powercfg /deviceenableawake "Bluetooth Device exact name"
```
*Note*: The above command is worded according to the documentation for
[deviceenableawake](https://docs.microsoft.com/en-us/windows-hardware/design/device-experiences/powercfg-command-line-options#option_deviceenablewake).
However, the help shown by `powercfg /?` says the verb is `deviceenablewake`:
```
powercfg /deviceenablewake "Bluetooth Device exact name"
```
- Run the command:
```
powercfg -devicequery wake_armed
```
If your device is now in the list, this should work. Reboot.
If nothing works, then the computer hardware probably does not
support Wake-on-Bluetooth.
|
How to make a Text content disappear after some time in JavaFX?
```
b1.setOnAction(new EventHandler<ActionEvent>() {
@Override
public void handle(ActionEvent e) {
try {
Class.forName("com.mysql.jdbc.Driver");
connect = DriverManager
.getConnection("jdbc:mysql://localhost:3306/project?"
+ "user=root&password=virus");
statement = connect.createStatement();
preparedStatement = connect
.prepareStatement("select * from mark where clsnum = " + txt1.getText() + "");
rs = preparedStatement.executeQuery();
if (rs.next()) {
delete();
} else {
msg.setText("Student Not Found...!");
}
} catch (ClassNotFoundException | SQLException ex) {
Logger.getLogger(DeleteMark.class.getName()).log(Level.SEVERE, null, ex);
}
}
});
```
This is my code to display a message if the query did not work (I mean, if no row is returned to ResultSet rs). msg is a Text object; its declaration and other details are:
```
Text msg = new Text();
msg.setFont(Font.font("Calibri", FontWeight.THIN, 18));
msg.setFill(Color.RED);
```
I want to make the Text disappear after some time, like 3 or 4 seconds. Is it possible to do this in JavaFX (with the help of a timer or something else)? If yes, how?
|
Use [Timelines](http://docs.oracle.com/javase/8/javafx/api/javafx/animation/Timeline.html) and/or [Transitions](http://docs.oracle.com/javase/8/javafx/api/javafx/animation/Transition.html).
This answer is for a previous iteration of the question.
**Sample solution code**
```
import javafx.animation.*;
import javafx.application.Application;
import javafx.scene.*;
import javafx.scene.control.Label;
import javafx.scene.layout.StackPane;
import javafx.stage.Stage;
import javafx.util.Duration;
public class BlinkingAndFading extends Application {
@Override
public void start(Stage stage) {
Label label = new Label("Blinking");
label.setStyle("-fx-text-fill: red; -fx-padding: 10px;");
Timeline blinker = createBlinker(label);
blinker.setOnFinished(event -> label.setText("Fading"));
FadeTransition fader = createFader(label);
SequentialTransition blinkThenFade = new SequentialTransition(
label,
blinker,
fader
);
stage.setScene(new Scene(new StackPane(label), 100, 50));
stage.show();
blinkThenFade.play();
}
private Timeline createBlinker(Node node) {
Timeline blink = new Timeline(
new KeyFrame(
Duration.seconds(0),
new KeyValue(
node.opacityProperty(),
1,
Interpolator.DISCRETE
)
),
new KeyFrame(
Duration.seconds(0.5),
new KeyValue(
node.opacityProperty(),
0,
Interpolator.DISCRETE
)
),
new KeyFrame(
Duration.seconds(1),
new KeyValue(
node.opacityProperty(),
1,
Interpolator.DISCRETE
)
)
);
blink.setCycleCount(3);
return blink;
}
private FadeTransition createFader(Node node) {
FadeTransition fade = new FadeTransition(Duration.seconds(2), node);
fade.setFromValue(1);
fade.setToValue(0);
return fade;
}
public static void main(String[] args) {
launch(args);
}
}
```
**Answers to additional questions**
>
> lambda expression not expected here lambda expressions are not supported in -source 1.7 (use -source 8 or higher to enable lambda expressions)
>
>
>
You should use Java 8 and not set `-source 1.7`. If you wish to stick with Java 7 (which I don't advise for JavaFX work), you can replace the Lambda:
```
blinker.setOnFinished(event -> label.setText("Fading"));
```
with:
```
blinker.setOnFinished(new EventHandler<ActionEvent>() {
@Override
public void handle(ActionEvent event) {
label.setText("Fading");
}
});
```
>
> actual and formal argument lists differ in length
>
>
>
Again, you should use Java 8. But if you wish to use Java 7, replace:
```
stage.setScene(new Scene(new StackPane(label), 100, 50));
```
with:
```
StackPane layout = new StackPane();
layout.getChildren().add(label);
stage.setScene(new Scene(layout, 100, 50));
```
**Further recommendations**
Good call on not having the text both blink and fade. Blinking text makes for pretty distracting UI, but just fading is fine.
I don't think I'd recommend fading an error message, at least until the user clicks on it or something like that. What if the user didn't see the error message before it faded away?
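If you do still want the message to simply vanish after a few seconds, a minimal sketch using a `PauseTransition`, assuming `msg` is the `Text` node from the question:
```
import javafx.animation.PauseTransition;
import javafx.util.Duration;

// clear the message 3 seconds after it is shown
PauseTransition hideDelay = new PauseTransition(Duration.seconds(3));
hideDelay.setOnFinished(event -> msg.setText(""));
hideDelay.play();
```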
|
How to sort each row of a 3D numpy array by another 2D array?
I have a 2D numpy array of 2D points:
```
np.random.seed(0)
a = np.random.rand(3, 4, 2) # each value is a 2D point
```
I would like to sort each row by the norm of every point
```
norms = np.linalg.norm(a, axis=2) # shape(3, 4)
indices = np.argsort(norms, axis=0) # indices of each sorted row
```
Now I would like to create an array with the same shape and values as `a`, but with each row of 2D points sorted by their norm.
How can I achieve that?
I tried variations of np.take & np.take\_along\_axis but with no success.
for example:
```
np.take(a, indices, axis=1) # shape (3,3,4,2)
```
This samples `a` 3 times, once for each row in `indices`.
I would like to sample `a` just once. Each row in `indices` has the columns that should be sampled from the corresponding row.
|
If I understand you correctly, you want this:
```
norms = np.linalg.norm(a, axis=2)  # shape (3, 4)
indices = np.argsort(norms, axis=1)
np.take_along_axis(a, indices[:, :, None], axis=1)
```
Note that `indices` has shape `(3, 4)` while `a` has shape `(3, 4, 2)`; `indices[:, :, None]` adds a trailing axis so the sorted column indices broadcast across the last dimension and each 2D point moves as a unit.
output for your example:
```
[[[0.4236548 0.64589411]
[0.60276338 0.54488318]
[0.5488135 0.71518937]
[0.43758721 0.891773 ]]
[[0.07103606 0.0871293 ]
[0.79172504 0.52889492]
[0.96366276 0.38344152]
[0.56804456 0.92559664]]
[[0.0202184 0.83261985]
[0.46147936 0.78052918]
[0.77815675 0.87001215]
[0.97861834 0.79915856]]]
```
|
How do layouts work in Jetpack Compose and how do they relate to XML?
I have some text.
I want to centre it on the screen.
I am using Jetpack Compose.
How do I do this?
I know that there are three types of layouts in Jetpack Compose.
- Box
- Column
- Horizontal
Which one should I use?
I don't know how layouts work.
Are they full screen by default like in XML?
If so, how do I position elements like ConstraintLayout?
How do I set padding and margin from only one side and how do I link elements?
|
I guess all your questions can be clarified if you follow the [Compose Pathway](https://developer.android.com/courses/pathways/compose). But I'll try to summarize for you...
You can organize your components using one of the following "layout managers" (which in Compose are just called layouts):
- `Column` (similar to `LinearLayout` with vertical orientation)
- `Row` (similar to `LinearLayout` with horizontal orientation)
- `Box` (similar to `FrameLayout`)
- and `ConstraintLayout`.
If you need something different from these, you can create a custom layout using the `Layout` composable.
*"Which one should I use?"*
You can use any of these, depending on the case... To simply display a text in the center of the screen, you can achieve it with any of them.
Using `Column`:
```
Column(
Modifier.fillMaxSize(), // to fill the whole screen
verticalArrangement = Arrangement.Center,
horizontalAlignment = Alignment.CenterHorizontally
) {
Text(text = "Hello")
}
```
Using `Box`
```
Box(
Modifier.fillMaxSize()
) {
Text(text = "Hello",
modifier = Modifier.align(Alignment.Center))
}
```
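Using `Row`, the same centering works with the axes swapped (a sketch along the same lines):
```
Row(
    Modifier.fillMaxSize(),
    horizontalArrangement = Arrangement.Center,
    verticalAlignment = Alignment.CenterVertically
) {
    Text(text = "Hello")
}
```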
*"Are they full screen by default like in XML?"*
No, they are "wrap\_content" by default.
*"how do I position elements like ConstraintLayout? How do I set padding and margin from only one side and how do I link elements?"*
You need to declare the references to the components and then position them accordingly.
Here is a simple example...
```
ConstraintLayout(modifier = Modifier.fillMaxSize().padding(16.dp)) {
// Creating refs...
val (text1Ref, edit1Ref, btn1Ref, btn2Ref) = createRefs()
Text("Name",
// Linking the reference to this component
modifier = Modifier.constrainAs(text1Ref) {
// linking the top of this component to the parent top
top.linkTo(parent.top)
centerHorizontallyTo(parent)
})
TextField(
value = "",
onValueChange = {},
label = { Text("Name") },
modifier = Modifier.padding(top = 8.dp)
.constrainAs(edit1Ref) {
start.linkTo(parent.start)
end.linkTo(parent.end)
// linking this component with the previous component
top.linkTo(text1Ref.bottom)
})
Button(onClick = {},
content = { Text("OK") },
modifier = Modifier.padding(top = 8.dp).constrainAs(btn1Ref) {
end.linkTo(edit1Ref.end)
top.linkTo(edit1Ref.bottom)
}
)
TextButton(onClick = {},
content = { Text("Cancel") },
modifier = Modifier.padding(end = 8.dp)
.constrainAs(btn2Ref) {
end.linkTo(btn1Ref.start)
baseline.linkTo(btn1Ref.baseline)
}
)
}
```
|
How to call C++ functions in my assembly code
I need to call PrintResult from my assembly to display the result. I know I have to use extrn \_PrintResult somewhere, and I should call the function using call \_PrintResult, but I'm not quite sure how to use it. Any suggestions?
```
public _Square
.386
.model flat
.code
_Square proc
mov eax, [esp+4]
imul eax
ret
_Square endp
```
Here is my C++ code:
```
#include <iostream>
using namespace std;
enum ResultCode {ShowSquare};
enum SuccessCode {Failure, Success};
extern "C" long Square (long);
void main ()
{
long Num1;
long Num2;
do
{
cout << "Enter Number to Square" << endl;
cin >> Num1;
Result = Square (Num1);
cout << "Square is: " << Result << endl;
} while (Result > 0);
}
void PrintResult (ResultCode PrintCode, long Value) //PrintCode, long Value)
{
switch (PrintCode)
{
case ShowSquare:
cout << "Display of square is: " << Value << endl;
break;
default:
cout << "Error in assembly routines" << endl;
}
}
```
|
I usually don't like to post full code for things, but give this a try:
## 32-bit Assembly
```
.386
.model flat
extern _PrintResult:proc ; tell MASM the symbol is defined in the C++ object file
.code
_Square proc
mov eax, [esp+4]
imul eax
push eax ; Save the calculated result
; Call PrintResult here
push eax ; value
push 0 ; ShowSquare
call _PrintResult
add esp, 8 ; Clear the stack
pop eax ; Return the calculated result
ret
_Square endp
end
```
## C++
```
#include <iostream>
using namespace std;
enum ResultCode {ShowSquare};
enum SuccessCode {Failure, Success};
extern "C" long Square(long);
int main(int argc, char* argv[])
{
long Num1, Num2;
do
{
cout << "Enter number to square" << endl;
cin >> Num1;
Num2 = Square(Num1);
cout << "Square returned: " << Num2 << endl;
}
while (Num2);
return 0;
}
extern "C"
void PrintResult(ResultCode result, long value)
{
switch (result)
{
case ShowSquare:
cout << "Square is: " << value << endl;
break;
default:
cout << "Error calculating square" << endl;
break;
}
}
```
Because the function is exposed with C linkage, the default calling convention is [cdecl](http://en.wikipedia.org/wiki/X86_calling_conventions#cdecl), which means that all the parameters are passed on the stack, the return value is passed back in `eax`, and the caller is responsible for cleaning up the stack afterward.
So in order to call PrintResult, you have to push all of your parameters onto the stack before invoking the procedure. And after the procedure returns, we have to clean up our stack (`add esp, 8`).
Because the [cdecl](http://en.wikipedia.org/wiki/X86_calling_conventions#cdecl) calling convention allows `eax` to be modified during the call, `eax` may not be preserved when PrintResult returns, so we save the calculated result before calling PrintResult and then restore it after the call returns.
I have not tried the above code, but I hope it helps get you going down the right track.
**Note:** Because you are using a C++ compiler, the `extern "C"` before PrintResult is required.
|
Maven 3.0.5 vs 3.1.1 vs 3.2.1
Today I've visited [official Maven website](http://maven.apache.org/) and was surprised to see 3 versions listed there: 3.0.5, 3.1.1, and 3.2.1
I am currently using 3.0.5, and would like to know if I should upgrade to a newer version.
Unfortunately, there is not a single word on the website about what is different between versions, and whether it is recommended to upgrade, and if upgrade to what version.
Can anyone point to the relevant resources?
|
Since Codehaus shut down in April 2015, some of the links in the historical release notes (e.g. in 3.2.1) are broken. You can browse them at:
<https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12316922>
One important point for consideration: upgrading Maven will upgrade the *default* version of core plugins. And almost everything interesting about Maven is done in the plugins (hence why you won't find many Earth-shattering changes in the release notes of Maven itself).
If you are relying on default plugin versions, you need to look at the plugin release notes as well as the Maven version. The release notes for 3.2.1 don't include the changes that come along with Wagon 2.6, for example.
I suggest not relying on Maven's plugin version defaults, as it is too easy for colleagues to use slightly different Maven versions, leading to inconsistent builds due to plugin differences.
If you're not doing so already, I suggest explicitly specifying your Maven version in your pom; e.g.:
```
<prerequisites>
<maven>3.0.5</maven>
</prerequisites>
```
Then add a dependency on versions-maven-plugin and run
```
mvn versions:display-plugin-updates
```
This will give you a list of all the updates available to plugins for your version of Maven. Google to find out if there are any changes of interest. Of course you can choose to upgrade only some plugins, but regardless, specify all your plugin versions explicitly, as sketched below.
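A hedged sketch of pinning a plugin version via `pluginManagement` (the plugin and version number here are illustrative; pin whatever `display-plugin-updates` reports):
```
<build>
  <pluginManagement>
    <plugins>
      <plugin>
        <groupId>org.apache.maven.plugins</groupId>
        <artifactId>maven-compiler-plugin</artifactId>
        <version>3.1</version>
      </plugin>
    </plugins>
  </pluginManagement>
</build>
```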
|
How is a single IP address load balanced?
I'm aware of "round robin DNS" load balancing, but how can a single IP address be load balanced?
Google's DNS servers for example, `8.8.8.8` and `8.8.4.4`. Wikipedia's load balancing article states:
>
> For Internet services, the load balancer is usually a software program that is listening on the port where external clients connect to access services. The load balancer forwards requests to one of the "backend" servers, which usually replies to the load balancer.
>
>
>
...which seems reasonable when used with round-robin DNS; however, for the likes of Google's DNS servers this doesn't seem like a very redundant or capable setup.
|
<http://en.wikipedia.org/wiki/Anycast>
>
> Anycast is a network addressing and routing methodology in which datagrams from a single sender are routed to the topologically nearest node in a group of potential receivers, though it may be sent to several nodes, all identified by the same destination address.
>
>
> ...
>
>
> Nearly all Internet root nameservers are implemented as clusters of hosts using anycast addressing. 12 of the 13 root servers A-M exist in multiple locations, with 11 on multiple continents. (Root server H exists in two U.S. locations. Root server B exists in a single, unspecified location.) The 12 servers with multiple locations use anycast address announcements to provide a decentralized service. This has accelerated the deployment of physical (rather than logical) root servers outside the United States. RFC 3258 documents the use of anycast addressing to provide authoritative DNS services. Many commercial DNS providers have switched to an IP anycast environment to increase query performance, redundancy, and to implement load balancing.
>
>
>
|
NServiceBus - replying to message after publish
I have a subscriber that successfully handles a message. The subscriber then proceeds to successfully publish another message to state that a certain event has happened. My problem is that, after the publish, I attempt to return a message to the sender of the initial message and the system fails with the following message:
>
> No destination specified for message NServiceBus.Unicast.Transport.CompletionMessage. Message cannot be sent. Check the UnicastBusConfig section in your config file and ensure that a MessageEndpointMapping exists for the message type.
>
>
>
The return code looks as follows:
```
Bus.Publish(orderMessage);
Bus.Return((int)MySendBus.Core.ErrorCode.Ok);
```
and the app.config is as follows:
```
<configuration>
<configSections>
<section name="MsmqTransportConfig" type="NServiceBus.Config.MsmqTransportConfig, NServiceBus.Core"/>
</configSections>
<MsmqTransportConfig InputQueue="MyServerInputQueue" ErrorQueue="error" NumberOfWorkerThreads="1" MaxRetries="5"/>
</configuration>
```
I've added a unicast section and still get the same error. My understanding is that NServiceBus knows how to reply to the message and I shouldn't have to specify a queue for the reply to go on other than the MsmqTransportConfig input queue found in the app.config.
Is it possible to have a subscriber publish a message and then respond to where the initial message was sent from?
|
If you use Bus.Return() then you must register a callback on the client endpoint like so:
```
Bus.Send<IRequestDataMessage>(m =>
{
m.DataId = g;
m.String = "<node>it's my \"node\" & i like it<node>";
})
.Register(i => Console.Out.WriteLine(
"Response with header 'Test' = {0}, 1 = {1}, 2 = {2}.",
Bus.CurrentMessageContext.Headers["Test"],
Bus.CurrentMessageContext.Headers["1"],
Bus.CurrentMessageContext.Headers["2"]));
```
If you want to return a full message of your choosing then use Bus.Reply() and write a handler in your client endpoint. My full sample can be found [here](https://github.com/afyles/Blog/tree/master/NServiceBusExplorer/Source/RequestResponse).
|
Do object arguments for dummy parameters get constructed?
Suppose a function template:
```
template <class T>
void foo(T /* dummy */) {...}
```
Suppose that `foo` is called like this:
```
foo(Widget());
```
Would a `Widget` object be constructed in this case?
[This](https://stackoverflow.com/questions/14291549/will-compiler-optimize-out-unused-arguments-of-static-function) post asks a similar question about unused arguments (and arguments for dummy parameters are certainly unused). The replies suggest that, unless the function is called through a function pointer, the unused arguments will be optimized out by the compiler.
However, consider the following text in Section 2.5 of **Modern C++ Design** by Alexandrescu:
>
> Now say there is a rule in your application: Objects of type Widget are untouchable legacy code and must take two arguments upon construction, the second being a fixed value such as -1. Your own classes, derived from Widget, don't have this problem.
>
>
> ...
>
>
> In the absence of partial specialization of functions, the only tool available is, again, overloading. A solution would be to pass a dummy object of type T and rely on overloading:
>
>
>
> ```
> template <class T, class U>
> T* Create(const U& arg, T /* dummy */)
> {
> return new T(arg);
> }
> template <class U>
> Widget* Create(const U& arg, Widget /* dummy */)
> {
> return new Widget(arg, -1);
> }
>
> ```
>
> **Such a solution would incur the overhead of constructing an arbitrarily complex object that remains unused**.
>
>
>
This suggests that the compilers are not smart enough to avoid the construction of the argument for the dummy parameter...
So, which is correct? If Alexandrescu is correct, then why doesn't this optimization happen?
|
Creating an object can have side effects.
Unless the compiler can prove that *no* side effects happen, or there is no part of the standard that mandates that a side effect happens, eliminating this creation of an object is not allowed under either the as-if (the compiler can do anything with your code, so long as it behaves as-if they didn't do the change, up to the requirements of the standard) or elision (you merge lifetime of some objects in some cases, even if it doesn't behave as-if you did not merge them) rules.
As an example, suppose Widgets registered their existence with a central location. When the object was created, the count of Widgets in existence would go up by 1 -- making that not happen is illegal under the standard.
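A minimal sketch of such a `Widget` (hypothetical, but any observable effect in the constructor has the same consequence):
```
#include <iostream>

struct Widget {
    static int live_count;      // the "central location" registering Widgets
    Widget()  { ++live_count; } // observable side effect on construction
    ~Widget() { --live_count; }
};
int Widget::live_count = 0;

template <class T>
void foo(T /* dummy */) {}

int main() {
    foo(Widget()); // the temporary must still be constructed and destroyed
    std::cout << Widget::live_count << '\n'; // prints 0, but the effects happened
}
```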
Even if there are no side effects, *proving* there are no side effects requires the compiler gather up all of the code involved in creating the Widget, and analyzing it for "doing nothing in the end". This can vary from hard (link-time optimization of a large amount of code with peculiar "the object will go away at time Y" constraints to determine if any side effects are mandated), to impossible (we are talking about analyzing the non-trivial properties of a Turing complete computation result).
And all of this for a relatively strange corner case, where "someone created an object for no good reason, then discarded it without using it".
|
Determine if a base64 string or a buffer contains JPEG or PNG without metadata? Possible?
Is there any way to do this using node, whether natively or with a plugin?
What I'm trying to accomplish is to choose loseless or lossy image compression depending on the input type. Loseless on a large JPEG is a storage catastrophe.
|
The first eight bytes of a PNG file always contain the following values - see [PNG Specification](http://www.libpng.org/pub/png/spec/1.2/PNG-Rationale.html#R.PNG-file-signature):
```
(decimal) 137 80 78 71 13 10 26 10
(hexadecimal) 89 50 4e 47 0d 0a 1a 0a
(ASCII C notation) \211 P N G \r \n \032 \n
```
So, if I take 8 bytes from the start of any PNG file and base64 encode it as follows, I get:
```
head -c8 test.png | base64
iVBORw0KGgo=
```
---
The first 2 bytes of every JPEG file contain `ff` `d8` in hex - see [Wikipedia entry for JPEG](https://en.wikipedia.org/wiki/JPEG). So if I take any JPEG file and base64 encode the first two bytes as follows, I get:
```
head -c2 test.jpg | base64
/9g=
```
---
So my suggestion would be to look at the first few (10 for `PNG` and 2 for `JPEG`, always excluding the `=`) characters of your base64-encoded file and see if they match what I am suggesting and then use that as the determinant - be sure to output error messages if your string matches neither in case the test is not sufficiently thorough for some reason!
---
Why 10 characters for PNG? Because the guaranteed signature is 8 bytes, i.e. 64 bits and base64 splits into 6 bits at a time to generate a character, so the first 10 characters are the first 60 bits. The 11th character will vary depending on what follows the signature.
Same logic for JPEG... 2 bytes is 16 bits, which means 2 characters each corresponding to 6 bits are guaranteed. The 3rd character will vary depending on what follows the 2-byte SOI marker.
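In Node, the same check is only a few lines. A minimal sketch (the `imageType` helper is hypothetical) that inspects the magic bytes of a decoded buffer:
```
function imageType(buf) {
    // PNG signature: 89 50 4e 47 0d 0a 1a 0a
    if (buf.length >= 8 &&
        buf[0] === 0x89 && buf[1] === 0x50 && buf[2] === 0x4e && buf[3] === 0x47 &&
        buf[4] === 0x0d && buf[5] === 0x0a && buf[6] === 0x1a && buf[7] === 0x0a) {
        return 'png';
    }
    // JPEG SOI marker: ff d8
    if (buf.length >= 2 && buf[0] === 0xff && buf[1] === 0xd8) {
        return 'jpeg';
    }
    return null; // neither signature matched
}

// for a base64 string, decode it first:
// imageType(Buffer.from(b64String, 'base64'))
```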
|
How to stop pressing button using keyboard keys like "Spacebar or Enter". C#
I have a WinForms application with several buttons. When I run this application, I am able to press these buttons using mouse clicks, but also using keyboard keys. I don't want these buttons pressed via keyboard keys.
I also want to stop the buttons from receiving focus when I press "Tab" or the arrow keys.
Regards
|
Instead of the standard `Button`, use the following when you need that behavior
```
public class NonSelectableButton : Button
{
public NonSelectableButton()
{
SetStyle(ControlStyles.Selectable, false);
}
}
```
**EDIT:**
Here is a little test proving that it's working
```
using System;
using System.Linq;
using System.Windows.Forms;
namespace Samples
{
public class NonSelectableButton : Button
{
public NonSelectableButton()
{
SetStyle(ControlStyles.Selectable, false);
}
}
static class Program
{
[STAThread]
static void Main()
{
Application.EnableVisualStyles();
Application.SetCompatibleTextRenderingDefault(false);
var form = new Form();
Control[] controls = { new TextBox(), new TextBox(), };
Button[] buttons = { new NonSelectableButton { Text = "Prev" }, new NonSelectableButton { Text = "Next" }, };
foreach (var button in buttons)
button.Click += (sender, e) => MessageBox.Show("Button " + ((Button)sender).Text + " clicked!");
int y = 0;
foreach (var item in controls.Concat(buttons))
{
item.Left = 8;
item.Top = y += 8;
form.Controls.Add(item);
y = item.Bottom;
}
Application.Run(form);
}
}
}
```
**EDIT2:** To apply the solution, you need to do the following:
(1) Add a new code file to your project, call it NonSelectableButton.cs with the following content
```
using System;
using System.Linq;
using System.Windows.Forms;
namespace YourNamespace
{
public class NonSelectableButton : Button
{
public NonSelectableButton()
{
SetStyle(ControlStyles.Selectable, false);
}
}
}
```
(2) Compile the project
(3) Now the new button will appear in the control toolbox (at the top) and you can drag it onto a form **instead** of a standard button.
|
Does this function have to use reduce() or is there a more pythonic way?
If I have a value, and a list of additional terms I want multiplied to the value:
```
n = 10
terms = [1,2,3,4]
```
Is it possible to use a list comprehension to do something like this:
```
n *= (term for term in terms) #not working...
```
Or is the only way:
```
n *= reduce(lambda x,y: x*y, terms)
```
This is on Python 2.6.2. Thanks!
|
Reduce is not the only way. You can also write it as a simple loop:
```
for term in terms:
n *= term
```
I think this is much more clear than using `reduce`, especially when you consider that many Python programmers have never seen `reduce` and the name does little to convey to people who see it for the first time what it actually does.
Pythonic does not mean *write everything as comprehensions* or *always use a functional style if possible*. Python is a multi-paradigm language and writing simple imperative code when appropriate is Pythonic.
Guido van Rossum also doesn't want `reduce` in Python:
>
> So now reduce(). This is actually the one I've always hated most, because, apart from a few examples involving + or \*, almost every time I see a reduce() call with a non-trivial function argument, I need to grab pen and paper to diagram what's actually being fed into that function before I understand what the reduce() is supposed to do. So in my mind, the applicability of reduce() is pretty much limited to associative operators, and in all other cases it's better to write out the accumulation loop explicitly.
>
>
> There aren't a whole lot of associative operators. (Those are operators X for which (a X b) X c equals a X (b X c).) I think it's just about limited to +, \*, &, |, ^, and shortcut and/or. We already have sum(); I'd happily trade reduce() for product(), so that takes care of the two most common uses. [...]
>
>
>
In Python 3 reduce has been moved to the `functools` module.
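If you do want the one-liner, `operator.mul` reads better than the lambda (a sketch; `reduce` is a builtin on Python 2.6, and lives in `functools` on Python 3):
```
import operator

n = 10
terms = [1, 2, 3, 4]
n *= reduce(operator.mul, terms, 1)  # n is now 240
```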
|
How do I let a user browse HTML/WebSite content without launching the browser (need an inline browser)?
I'm creating an app on Windows Phone 7 that lets folks read offline HTML content. Is there a way to show an inline browser or HTML viewing control?
|
The `WebBrowser` control allows you to display arbitrary HTML.
There are two methods on the `WebBrowser` control you can use to display HTML:
1. `Navigate()` will display the contents of a file located at the supplied URL. This file can be in `IsolatedStorage` or on a server.
2. `NavigateToString()` will display the html string you give it.
In your case I'm guessing you'd use a `WebClient` object to download the webpage to offline storage, then the `Navigate` method to display it.
However, the benefit of `NavigateToString` is that you can munge the string on the way in to do some cool (or essential) stuff like style the page so it matches perfectly with the current phone theme, or catch all link clicks and make them pop in the external browser (otherwise they load in the very same `WebBrowser` control).
I've documented the details of styling and link clickery [here](http://www.ben.geek.nz/2010/07/integrated-links-and-styling-for-windows-phone-7-webbrowser-control/).
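A minimal sketch of both methods (assuming a `WebBrowser` control named `webBrowser1` on your page; the isolated-storage path is illustrative):
```
// display an HTML string directly
webBrowser1.NavigateToString("<html><body><h1>Offline content</h1></body></html>");

// or display a file previously saved to isolated storage
webBrowser1.Navigate(new Uri("offline/page.html", UriKind.Relative));
```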
|
Groupby given percentiles of the values of the chosen DataFrame column
Imagine that I have a `DataFrame` with columns that contain only real values.
```
>> df
col1 col2 col3
0 0.907609 82 4.207991
1 3.743659 1523 6.488842
2 2.358696 324 5.092592
3 0.006793 0 0.000000
4 19.319746 11969 7.405685
```
I want to group it by quartiles (or any other percentiles specified by me) of the chosen column (e.g., `col1`), to perform some operations on these groups. Ideally, I would like to do something like:
```
df.groupby( quartiles_of_col1 ).mean() # not working, how to code quartiles_of_col1?
```
The output should give the mean of each of the columns for four groups corresponding to the quartiles of `col1`. Is this possible with the `groupby` command? What's the simplest way of achieving it?
|
I don't have a computer to test it right now, but I think you can do it by: `df.groupby(pd.cut(df.col0, np.percentile(df.col0, [0, 25, 75, 90, 100]), include_lowest=True)).mean()`. Will update after 150mins.
Some explanations:
```
In [42]:
#use np.percentile to get the bin edges of any percentile you want
np.percentile(df.col0, [0, 25, 75, 90, 100])
Out[42]:
[0.0067930000000000004,
0.907609,
3.7436589999999996,
13.089311200000001,
19.319745999999999]
In [43]:
#Need to use include_lowest=True
print df.groupby(pd.cut(df.col0, np.percentile(df.col0, [0, 25, 75, 90, 100]), include_lowest=True)).mean()
col0 col1 col2
col0
[0.00679, 0.908] 0.457201 41.0 2.103996
(0.908, 3.744] 3.051177 923.5 5.790717
(3.744, 13.0893] NaN NaN NaN
(13.0893, 19.32] 19.319746 11969.0 7.405685
In [44]:
#Or the smallest values will be skiped
print df.groupby(pd.cut(df.col0, np.percentile(df.col0, [0, 25, 75, 90, 100]))).mean()
col0 col1 col2
col0
(0.00679, 0.908] 0.907609 82.0 4.207991
(0.908, 3.744] 3.051177 923.5 5.790717
(3.744, 13.0893] NaN NaN NaN
(13.0893, 19.32] 19.319746 11969.0 7.405685
```
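For quartiles specifically, `pd.qcut` (quantile-based binning) saves the `np.percentile` step; a sketch:
```
df.groupby(pd.qcut(df.col0, 4)).mean()  # four equal-sized quantile bins
```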
|
TypeScript: assigning `Pick` to `Partial`
In the following example, I can't think of any situation where assigning `Pick<Object, Key>` to `Partial<Object>` would not be sound, therefore I would expect this to be allowed.
Can anyone clarify why it is not allowed?
```
const fn = <T, K extends keyof T>(partial: Partial<T>, picked: Pick<T, K>) => {
/*
Type 'Pick<T, K>' is not assignable to type 'Partial<T>'.
Type 'keyof T' is not assignable to type 'K'.
'keyof T' is assignable to the constraint of type 'K', but 'K' could be instantiated with a different subtype of constraint 'string | number | symbol'.
*/
partial = picked;
};
```
[TypeScript playground example](https://www.typescriptlang.org/play/index.html#code/MYewdgzgLgBAZmGBeGAeAKgGhgaRgUwA8p8wATCGAa3wE8Q4Z0A+ACgAcBDAJygEtOAGwBcMAAo9+QjM2zs+wGmVFiFVDNhzMAlMmYwA3gCgYpmAHoAVCbPpa7fDADkqxRtzMnMPpTAhYnBAQfADmYJwARoKOUCAwUPaOLpICgjJOAHQ2ZraJzjT0jOhePjB+AUGh4VExcQkOzjiZ2TmtTgUMTCWUgcFhkdHxdQAWjqCQUNycfGCwnfVJTdgRAK6wTk0woCuCZDARjjPQnLMCJHsA7nxQwzCcMGR8cHD43KSwECsRCzCd49BTGbrAEzEIwAA+ZRWAFsDtwITAILRYSBBM0cpZzC0uLxUsgYPJFPgyABuIwAXxJQA)
|
@TitianCernicovaDragomir is essentially correct that the compiler generally is unable to do sophisticated type analysis on unresolved generic types. It does much better with concrete types. See [Microsoft/TypeScript#28884](https://github.com/microsoft/TypeScript/issues/28884) for a discussion about this with `Pick` and `Omit` with complementary sets of keys.
In these situations the only way to proceed is for you to personally verify that the assignment is sound and then use a [type assertion](https://www.typescriptlang.org/docs/handbook/basic-types.html#type-assertions) as in `partial = picked as Partial<T>`...
---
... but I wouldn't do that in this case. The error really is a good one here, although it's hard to see why since you've essentially just overwritten the `partial` variable and done nothing with it within the function scope. So despite being unsound the code is harmless because it hasn't been allowed to wreak havoc elsewhere. Let's unchain it by making `fn()` return the modified `partial` variable:
```
const fn = <T, K extends keyof T>(partial: Partial<T>, picked: Pick<T, K>) => {
partial = picked; // error, for good reason
return partial;
};
```
So, the basic problem is that `Pick<T, K>` is a *wider* type than `T`. It contains the properties from `T` with keys in `K`, but it is not known *not* to contain properties with keys *not* in `K`. I mean, a value of type `Pick<{a: string, b: number}, "a">` may well have a `b` property. And if it does have one, it does not have to be of type `number`. So it's a mistake to assign a value of type `Pick<T, K>` to a variable of type `Partial<T>`.
Let's flesh this out with a silly example. Imagine you have a `Tree` interface and an object of type `Tree`, like this:
```
interface Tree {
type: string;
age: number;
bark: string;
}
const tree: Tree = {
type: "Aspen",
age: 100,
bark: "smooth"
};
```
And you also have a `Dog` interface and an object of type `Dog`, like this:
```
interface Dog {
name: string;
age: number;
bark(): void;
}
const dog: Dog = {
name: "Spot",
age: 5,
bark() {
console.log("WOOF WOOF!");
}
};
```
So, `dog` and `tree` both have a numeric `age` property, and they both have a `bark` property of differing types. One is a `string` and the other is a method. Do note that `dog` is a perfectly valid value of type `Pick<Tree, "age">`, but an *invalid* value of type `Partial<Tree>`. And therefore when you call `fn()`:
```
const partialTree = fn<Tree, "age">(tree, dog); // no error
```
my modified `fn()` returns `dog` as `Partial<Tree>`, and fun begins:
```
if (partialTree.bark) {
partialTree.bark.toUpperCase(); // okay at compile time
// at runtime "TypeError: partialTree.bark.toUpperCase is not a function"
}
```
That unsoundness leaked through precisely because `Pick<T, K>` is not known to exclude or otherwise constrain the "unpicked" properties. You can create your own `StrictPicked<T, K>` in which the properties from `T` not in `K` are explicitly excluded:
```
type StrictPicked<T, K extends keyof T> = Pick<T, K> &
Partial<Record<Exclude<keyof T, K>, never>>;
```
And now your code is more sound (ignoring weird things like `K` being a branded type like in the [above comment](https://stackoverflow.com/users/125734/titian-cernicova-dragomir))... but the compiler still can't verify it:
```
const fn2 = <T, K extends keyof T>(
partial: Partial<T>,
picked: StrictPicked<T, K>
) => {
partial = picked; // also error
partial = picked as Partial<T>; // have to do this
return partial;
};
```
That's still the basic issue here; the compiler can't easily deal with things like this. Maybe it will someday? But at least it's not as easily misused on the caller side:
```
fn2<Tree, "age">(tree, dog); // error, dog is not a StrictPicked<Tree, "age">
```
Anyway, hope that helps. Good luck!
[Link to code](http://www.typescriptlang.org/play//#code/MYewdgzgLgBAZmGBeGAeAKgGhgaRgUwA8p8wATCGAa3wE8Q4Z0A+ACgAcBDAJygEtOAGwBcMAAo9+QjM2zs+wGmVFiFVDNhzMAlMmYwA3gCgYMLrwGDkZtfjIBuGAHonBbtxDdscTzADmICBkMNz4nBDgJiH4UACu3IjmUoKOLjCAvBuAEHtGAL72RkZ8YCTccJzA+EyhlcamULTs+KLQ3EV++aacfk0wYLEAtgBG+NwdMIM8VM1QrWDtuQWgkLAz+D3o1da1MPWNogBEAIIQjWD7mFFdPQCMAAy3F6YT3FMw+xD9gVAAFvu5+YViiMyhUYAAREB+QxRMCcfo9FptMZXUR9IYjMbPKisbSiABuID4DgWRiW0BgZEhoghUJQ21h8IOAGV2CAoOdLt1RABWR7jSY46GmUxkkCCfAAOkEkNY+wA6gB5BUAMRgipVAEJ9toxjl-otwOSkpYNmtrAgMNVsPsrvs2Kt8NhKX4dc5XGAQG4PNwCnxGBxJCbqhKsbptsahKbJViJVAQABVdiNbgAYXC+BxqVcIConFoME4sFA-Xk4p2fHhUTShZCsWKFcq+3QDXwAFF3J5RBHBFGQ5NYwmkyM0xBKnxKB7YJx4HXgPxwH89UZdpUmTMFFBVIo7BpcARiKQKNQ6AwmPoUFv1FhcPoAGRRCQWaQAJXwoG4ZFQrcIwEEsTI+CoDQ9CMNeWjYGA+B4iMzDMACZKwAgABM1i7ngRAkOQlDAaeLCsFE3YqIG0gsHy8jbsoMBrq0c6XjuYHMEYuhIPo4bEVYKDkUoWYFoIERep4BHsdYXF2AWlCPskMg8d8nDQTsnqUjs3zjlEoRxAkZjsfkeRGMhlprNatr2laFKQq6aQjN6TqQjA469GyBZUeutG2J+UZGd0dpGEAA)
|
C# Allow user to draw on winform with mouse/screen clicks
I am working on a personal project and I need to allow the client to draw a "signature" on a new popup form, by handling events (maybe on-click and on-mouse-over).
This signature has to be stored in an image object in order to save it to a varbinary(max) field in a database.
Googling around is not working, Any idea how this can be achieved?
|
I checked on my touch-screen laptop: the touch-down event can be handled through the **MouseDown** event, touch-up through **MouseUp**, and touch-move through the **MouseMove** event of the form.
**Note:** My machine has both touch and mouse support. I am not sure about touch-only devices or machines.
The following code allows you to draw on the form through touch/mouse interactions.
```
public partial class Form1 : Form
{
Image signature;
bool clicked = false;
Point previousPoint;
public Form1()
{
this.SetStyle(ControlStyles.AllPaintingInWmPaint | ControlStyles.UserPaint | ControlStyles.DoubleBuffer, true);
InitializeComponent();
}
private void Form1_Load(object sender, EventArgs e)
{
this.Paint += Form1_Paint;
this.MouseDown += Form1_MouseDown;
this.MouseUp += Form1_MouseUp;
this.MouseMove += Form1_MouseMove;
this.MouseLeave += Form1_MouseLeave;
this.FormClosing += Form1_FormClosing;
}
void Form1_FormClosing(object sender, FormClosingEventArgs e)
{
//Dispose signature after closing the form to avoid a memory leak
if (signature != null)
signature.Dispose();
}
void Form1_Paint(object sender, PaintEventArgs e)
{
if (signature != null)
e.Graphics.DrawImage(signature, 0, 0);
}
void Form1_MouseDown(object sender, MouseEventArgs e)
{
clicked = true;
previousPoint = e.Location;
}
void Form1_MouseLeave(object sender, EventArgs e)
{
clicked = false;
}
void Form1_MouseUp(object sender, MouseEventArgs e)
{
clicked = false;
}
void Form1_MouseMove(object sender, MouseEventArgs e)
{
if (clicked)
{
if (signature == null)
signature = new Bitmap(this.Width, this.Height);
using (Graphics g = Graphics.FromImage(signature))
{
g.DrawLine(Pens.Black, previousPoint, e.Location);
previousPoint = e.Location;
this.Invalidate();
}
}
}
}
```
The signature is drawn on an image. So you can save the image as required in your database.
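To get the image into a `varbinary(max)` column, one approach (a sketch; `SignatureToBytes` is a hypothetical helper) is to serialize it to a byte array first and pass that as the query parameter:
```
// serialize the signature image to bytes for a varbinary(max) parameter
byte[] SignatureToBytes(Image image)
{
    using (var stream = new System.IO.MemoryStream())
    {
        image.Save(stream, System.Drawing.Imaging.ImageFormat.Png);
        return stream.ToArray();
    }
}
```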
|
python human readable large numbers
Is there a Python library that would make numbers such as this more human readable?
$187,280,840,422,780
Edited: for example, I want the output of this to be 187 Trillion, not just comma-separated. So I want the output to be in trillions, billions, millions, etc.
|
As I understand it, you only want the 'most significant' part. To do so, use `floor(log10(abs(n)))` to get the number of digits and then go from there. Something like this, maybe:
```
import math
millnames = ['',' Thousand',' Million',' Billion',' Trillion']
def millify(n):
n = float(n)
millidx = max(0,min(len(millnames)-1,
int(math.floor(0 if n == 0 else math.log10(abs(n))/3))))
return '{:.0f}{}'.format(n / 10**(3 * millidx), millnames[millidx])
```
Running the above function for a bunch of different numbers:
```
for n in (1.23456789 * 10**r for r in range(-2, 19, 1)):
print('%20.1f: %20s' % (n,millify(n)))
0.0: 0
0.1: 0
1.2: 1
12.3: 12
123.5: 123
1234.6: 1 Thousand
12345.7: 12 Thousand
123456.8: 123 Thousand
1234567.9: 1 Million
12345678.9: 12 Million
123456789.0: 123 Million
1234567890.0: 1 Billion
12345678900.0: 12 Billion
123456789000.0: 123 Billion
1234567890000.0: 1 Trillion
12345678900000.0: 12 Trillion
123456789000000.0: 123 Trillion
1234567890000000.0: 1235 Trillion
12345678899999998.0: 12346 Trillion
123456788999999984.0: 123457 Trillion
1234567890000000000.0: 1234568 Trillion
```
|
How to make Visual Studio Code remember previous commit messages?
I have recently started using Microsoft's open-source Visual Studio Code IDE for developing web projects, shifting from Eclipse. I find VSCode highly intuitive and very simple to use.
But one feature I miss in VSCode is the IDE's ability to remember commit messages (or have I not explored enough?!). Unlike Eclipse, which populates a dropdown list of historical commit messages, we have to manually enter commit messages in VSCode every time we commit our changes.
- Is there any VSCode extension available for this purpose?
- Can I make any entry in `settings.json` so that older commit messages
are retrieved automatically?
**Any help would be highly appreciated.**
|
VSCode 1.51 (Oct. 2020) does have a similar feature: `Alt`+`Arrow Up`:
>
> ## [Source Control input box saves commit message history](https://github.com/microsoft/vscode-docs/blob/b90c7662cebb51d859506135a03f1ff459f1c743/release-notes/v1_51.md#source-control-input-box-saves-commit-message-history)
>
>
> This addresses a [feature request](https://github.com/microsoft/vscode/issues/26952) to navigate SCM commit history.
>
>
> Press `Alt+UpArrow` and `Alt+DownArrow` to display the prior and next commits, respectively.
>
> To move directly to the first and last position of the input box, press `Alt` in conjunction with the corresponding arrow key.
>
>
> Build off of past commit messages and look back in the history without losing your drafted message.
>
>
> [](https://i.stack.imgur.com/poZz1.gif)
>
>
>
|
Adding keywords with Scintilla
I"m using ScintillaNET a wrapper for the Scintilla control. I want to change the keywords (for syntax highlighting) for a specific language, I am assuming I have to build my own version of SciLexer.dll for that. But I can't find a keyword file for the languages in the Scintilla project. Where are they and how can I change them?
|
You don't need to build your own SciLexer.dll; ScintillaNET supports XML config files. Set the properties of the Scintilla control like this:
```
// Relative to your running directory
scintilla1.ConfigurationManager.CustomLocation = "Config.xml";
//Name of the language as defined in the file
scintilla1.ConfigurationManager.Language = "MyLanguage";
```
Then create a config file like this one, which is based on lua:
```
<?xml version="1.0" encoding="utf-8"?>
<ScintillaNET>
<!--This is what you set the Language property to-->
<Language Name="lua">
<!--These are characters after which autocomplete will open-->
<AutoComplete FillUpCharacters=".([" SingleLineAccept="True" IsCaseSensitive="False">
<List>
<!--Insert autocomplete keywords here-->
and break do else elseif end false for function
if in local nil not or repeat return then true until while
</List>
</AutoComplete>
<!--Indentation width and indentation type-->
<Indentation TabWidth="4" SmartIndentType="cpp" />
<!--Comment characters and the lexer to use-->
<Lexer LexerName="lua" LineCommentPrefix="--" StreamCommentPrefix="--[[ " StreamCommentSuffix=" ]]" >
<Keywords List="0" Inherit="False">
<!--Insert highlighted keywords here-->
and break do else elseif end false for function
if in local nil not or repeat return then true until while
</Keywords>
</Lexer>
</Language>
</ScintillaNET>
```
|
DateFormat conversion problem in java?
My input String is: 2010-03-24T17:28:50.000Z
output pattern is like:
```
DateFormat formatter1 = new SimpleDateFormat("EEE. MMM. d. yyyy");
```
I convert it like this:
```
formatter1.format(new Date("2010-03-24T17:28:50.000Z"));//illegalArgumentException here the string "2010-03-24T17:28:50.000Z"
```
Output should be like this: Thu. Mar. 24. 2010
but I get an **IllegalArgumentException**. Don't know why. Any idea?
stacktrace message is:
```
04-08 19:50:28.326: WARN/System.err(306): java.lang.IllegalArgumentException
04-08 19:50:28.345: WARN/System.err(306): at java.util.Date.parse(Date.java:447)
04-08 19:50:28.355: WARN/System.err(306): at java.util.Date.<init>(Date.java:157)
04-08 19:50:28.366: WARN/System.err(306): at com.example.brown.Bru_Tube$SelectDataTask.doInBackground(Bru_Tube.java:222)
04-08 19:50:28.366: WARN/System.err(306): at com.example.brown.Bru_Tube$SelectDataTask.doInBackground(Bru_Tube.java:1)
04-08 19:50:28.405: WARN/System.err(306): at android.os.AsyncTask$2.call(AsyncTask.java:185)
04-08 19:50:28.415: WARN/System.err(306): at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:305)
04-08 19:50:28.415: WARN/System.err(306): at java.util.concurrent.FutureTask.run(FutureTask.java:137)
04-08 19:50:28.446: WARN/System.err(306): at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1068)
04-08 19:50:28.456: WARN/System.err(306): at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:561)
04-08 19:50:28.466: WARN/System.err(306): at java.lang.Thread.run(Thread.java:1096)
```
|
The problem is in this part:
```
new Date("2010-03-24T17:28:50.000Z")
```
Apparently it doesn't accept dates/times in that format.
You shouldn't be using that constructor anyway - create an appropriate formatter to parse that particular format, and then parse it with that.
Alternatively, use [Joda Time](http://joda-time.sf.net) to start with, and avoid using `DateFormat` completely. I don't know if you can use Joda Time from Android, mind you... and it's fairly large.
EDIT: To spell it out explicitly:
```
String inputText = "2010-03-24T17:28:50.000Z";
// "Z" appears not to be supported for some reason.
DateFormat inputFormat = new SimpleDateFormat("yyyy-MM-dd'T'HH:mm:ss.SSS'Z'");
inputFormat.setTimeZone(TimeZone.getTimeZone("UTC"));
DateFormat outputFormat = new SimpleDateFormat("EEE. MMM. d. yyyy");
Date parsed = inputFormat.parse(inputText);
String outputText = outputFormat.format(parsed);
// Output is Wed. Mar. 24 2010 on my box
```
|
Removing the Great Lakes from US county-level maps in R
I am using R to draw a US map at the county level. I downloaded the shapefile for the US from [GADM](https://gadm.org/download_country_v3.html). The county-level shape file is "gadm36\_USA\_2.shp". I then used the code below to draw the map:
```
library(sf)
library(tidyverse)
us2 <- st_read("<Path>\\gadm36_USA_2.shp")
mainland2 <- ggplot(data = us2) +
geom_sf(aes(fill = NAME_2), size = 0.4, color = "black") +
coord_sf(crs = st_crs(2163),
xlim = c(-2500000, 2500000),
ylim = c(-2300000, 730000)) + guides(fill = F)
```
The Great Lakes region (shown by red arrows) is plotted rather than left blank:
[](https://i.stack.imgur.com/6X08H.png)
What I want is a figure like below, where the Great Lakes region is left blank:
[](https://i.stack.imgur.com/RAkZ8.jpg)
How could I identify from the "gadm36\_USA\_2.shp" which rows correspond to the Great Lakes region so that I may delete them?
I understand there may be other ways to obtain a shapefile than GADM. I believe GADM is an excellent source that provides boundaries worldwide. I wish to take this opportunity to better acquaint myself with data downloaded from GADM.
Of course, other methods to obtain US county-level boundary data are welcome. I noted the `USAboundaries` package also provides country, state, and county level boundaries, but I am having difficulties installing the associated USAboundariesData package. Any idea to draw US counties in ways other than a shapefile from GADM is welcome. Thanks.
|
One way is to remove every feature that is tagged with `Lake` in the existing records (currently 13 features). First, you need to find the lake names in the attribute table as below:
```
# retrieving the name of lakes and excluding them from the sf
all.names = us2$NAME_2
patterns = c("Lake", "lake")
lakes.name <- unique(grep(paste(patterns, collapse="|"), all.names, value=TRUE, ignore.case = TRUE))
#[1] "Lake and Peninsula" "Lake" "Bear Lake" "Lake Michigan" "Lake Hurron" "Lake St. Clair"
#[7] "Lake Superior" "Lake of the Woods" "Red Lake" "Lake Ontario" "Lake Erie" "Salt Lake"
#[13] "Green Lake"
`%notin%` <- Negate(`%in%`)
us <- us2[us2$NAME_2 %notin% lakes.name, ]
```
Then you can map the remaining features:
```
mainland2 <- ggplot(data = us) +
geom_sf(aes(fill = NAME_2), size = 0.4, color = "black") +
coord_sf(crs = st_crs(2163),
xlim = c(-2500000, 2500000),
ylim = c(-2300000, 730000)) + guides(fill = F)
mainland2
```
[](https://i.stack.imgur.com/jxUOw.jpg)
Another way (much easier but less flexible) is to map county features by excluding `Water body` values from `ENGTYPE_2` as below:
```
us <- us2[(us2$ENGTYPE_2) != "Water body",]
mainland2 <- ggplot(data = us) +
geom_sf(aes(fill = NAME_2), size = 0.4, color = "black") +
coord_sf(crs = st_crs(2163),
xlim = c(-2500000, 2500000),
ylim = c(-2300000, 730000)) + guides(fill = F)
mainland2
```
[](https://i.stack.imgur.com/Vs0cb.jpg)
|
Mac Catalyst popover set/limit size problem
Is it possible to limit/fix (min/max) popover size on Mac Catalyst? See the attached video.
[Example video](https://i.imgur.com/4UXCiLc.mp4)
|
Yes, but it's a bit of a hack. Big Sur is loading these presented view controllers as their own windows, so we can grab the window's `windowScene` and set its `sizeRestrictions`. The best (?) place to do this is in the *presented* view controller's `viewWillLayoutSubviews` method:
```
class MyPresentedViewController: UIViewController {
override func viewWillLayoutSubviews() {
super.viewWillLayoutSubviews()
if #available(macCatalyst 14, *) {
view.window?.windowScene?.sizeRestrictions?.minimumSize = CGSize(width: 500, height: 500)
view.window?.windowScene?.sizeRestrictions?.maximumSize = CGSize(width: 800, height: 800)
}
}
}
```
If you don’t want the presented view to be resizable at all, just set the `minimumSize` and `maximumSize` to the same value.
I don't love using `viewWillLayoutSubviews` like this, but the `windowScene` is still nil in `viewDidLoad` and `viewWillAppear`, and while it is non-nil in `viewDidAppear`, setting `sizeRestrictions` there will cause a visible resize on screen.
The good news is that this problem may be fixed in Big Sur 11.1. According to the [beta release notes](https://developer.apple.com/documentation/macos-release-notes/macos-big-sur-11_1-release-notes), macOS 11.1 will respect `preferredContentSize` *and* they won't be resizable by default:
>
> When you present a view controller with page sheet or form sheet presentation style, the size of the view controller’s root view is, by default, determined by the value returned from the presented view controller’s [preferredContentSize](https://developer.apple.com/documentation/uikit/uiviewcontroller/1621476-preferredcontentsize) method and the view is not resizable. You can arrange for the presented view controller to be resizable by using Auto Layout to specify the maximum and minimum sizes of its root view. To enable this, set the [canResizeToFitContent](https://developer.apple.com/documentation/uikit/uiwindow/3368165-canresizetofitcontent) property of the application’s main window to YES. One way to do this is to override the willMove(toWindow:) or didMoveToWindow() methods of a view in the main view controller. (65254666)
>
>
>
|
ColdFusion 2016: Can you have a folder in your web root named 'api' or 'rest'?
I just installed ColdFusion 2016 (upgraded from CF10) and I noticed that whenever I try and access a folder in my webroot called 'api', I get an internal 500 error.
For example: www.mysite.com/api/
I assume this has something to do with the new ColdFusion API REST service so I created another directory called 'rest', performed the same test (www.mysite.com/rest/), and received yet another 500 error.
See the IIS error screenshot: [](https://i.stack.imgur.com/H3BJL.png)
The strange thing is that I don't use the ColdFusion REST service and I don't have it enabled in ColdFusion Administrator.
**My Question:**
Are you allowed to have folder names in your web root named "api" or "rest" anymore? Or are these now reserved folder names? Is there a workaround to disable this feature for a specific site so I can use these folder names?
|
Saw this on the Adobe Forums which should answer your question:
The reason you can't access /api/ or /rest/ is because there is a cf servlet-mapping for those folders.
You can remove the mapping by going to cfinstall/cfusion/wwwroot/WEB-INF/web.xml. Search for the api servlet-mapping and comment it out.
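For reference, the block to comment out looks roughly like the sketch below (the servlet name and patterns may differ slightly between ColdFusion versions, so check your own web.xml):

```
<!-- In cfusion/wwwroot/WEB-INF/web.xml: comment out the REST servlet-mappings -->
<!--
<servlet-mapping>
    <servlet-name>CFRestServlet</servlet-name>
    <url-pattern>/rest/*</url-pattern>
</servlet-mapping>
<servlet-mapping>
    <servlet-name>CFRestServlet</servlet-name>
    <url-pattern>/api/*</url-pattern>
</servlet-mapping>
-->
```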
There doesn't seem to be a way to do this for a specific site other than using IIS rewrite to redirect traffic to another folder. Something like this should work (redirects traffic from /api/ to /api2/):
```
<rule name="Redirect" stopProcessing="true">
<match url="^api$|^api/(.*)" />
<action type="Rewrite" url="api2/{R:1}" appendQueryString="true" />
</rule>
```
If anyone knows a way to disable this for a specific site without modifying web.config, please feel free to share your ideas.
|
Working on user in dockerfile and installing packages on it permission denied
I want to install packages in a Dockerfile as a user in /home/user.
```
FROM ubuntu:16.04
ENV user lg
RUN useradd -m -d /home/${user} ${user} \
&& chown -R ${user} /home/${user}
USER ${user}
WORKDIR /home/${user}
RUN apt-get update
RUN apt-get -y install curl
RUN apt-get -y install lsb-core
RUN apt-get -y install lsb
RUN apt-get -y upgrade -f
```
Docker throws an error when executing `apt-get update`:
>
> E: List directory /var/lib/apt/lists/partial is missing. - Acquire (13: Permission denied)
> The command '/bin/sh -c apt-get update' returned a non-zero code: 100
>
>
>
Thanks :)
|
It's because your `lg` user simply doesn't have the necessary permissions. In this case, it doesn't matter that Ubuntu is dockerized. It's like in any other Linux distro - you need permissions to do certain actions. An example: if you created a new user on your native system, I bet the command `apt-get install X` would raise the exact same error, wouldn't it?
In order to install anything, you'll need `sudo` to authenticate as root for this user. This can be achieved like so:
```
FROM ubuntu:16.04
RUN apt-get update && \
apt-get -y install sudo
ENV user lg
RUN useradd -m -d /home/${user} ${user} && \
chown -R ${user} /home/${user} && \
adduser ${user} sudo && \
echo '%sudo ALL=(ALL) NOPASSWD:ALL' >> /etc/sudoers
USER ${user}
WORKDIR /home/${user}
RUN sudo apt-get -y install curl && \
sudo apt-get -y install lsb-core && \
sudo apt-get -y install lsb && \
sudo apt-get -y upgrade -f
```
A little explanation:
1. First, you'll need to install the sudo package
2. Add your user to the sudo group
3. You also need to add NOPASSWD to the sudoers file (I've done it for ALL, but you can easily set it for a specific user). Without this, you will encounter the following error: `sudo: no tty present and no askpass program specified`
4. Now you can install stuff with this user
Also, try to avoid repeating the same Dockerfile instruction (in your case you had four redundant RUN instructions). Each instruction is a separate layer in the resulting image. This is a known [Dockerfile best practice](https://docs.docker.com/develop/develop-images/dockerfile_best-practices).
>
> Minimize the number of layers In older versions of Docker, it was
> important that you minimized the number of layers in your images to
> ensure they were performant. The following features were added to
> reduce this limitation:
>
>
> In Docker 1.10 and higher, only the instructions RUN, COPY, ADD create
> layers. Other instructions create temporary intermediate images, and
> do not directly increase the size of the build.
>
>
>
|
Numpy, why does `x += y` produce a different result than `x = x + y`?
In this case, why does `x += y` produce a different result than `x = x + y`?
```
import numpy as np
x = np.repeat([1], 10)
y = np.random.random(len(x))
x += y
print x
# Output: [1 1 1 1 1 1 1 1 1 1]
x = x + y
print x
# Output: [ 1.50859536 1.31434732 1.15147365 1.76979431 1.64727364
# 1.02372535 1.39335253 1.71878847 1.48823703 1.99458116]
```
|
Although the linked question explains the general issue, there is a numpy-specific explanation for this particular case. Basically, those answers say "it depends on the type of the variables involved", and what I'm giving below is the explanation for numpy types.
When you do `x + y`, numpy uses a "lowest common denominator" datatype for the result. Since `x` is int and `y` is float, this means it returns a float array.
But when you do `x += y`, you are forcing it to conform to the dtype of `x`, which is `int`. This truncates the decimal portion and leaves all `x` values back at 1. This is the way numpy defines the augmented assignment operators: it forces the return value to be of the same dtype as the assignment target.
You can get the first behavior from the second example by doing `x = (x + y).astype(int)` (explicitly forcing the dtype back to int). You can get the second behavior from the first example by letting `x = np.repeat([1.0], 10)` (using a float makes `x` have dtype float, so now you can add `y` to it without truncation).
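A quick way to see the dtype rule in action is to inspect the dtypes directly. (Note that recent NumPy versions refuse the unsafe in-place cast and raise a casting error for `x += y` instead of silently truncating, but the underlying rule is the same.)

```
import numpy as np

x = np.repeat([1], 10)        # integer array
y = np.random.random(len(x))  # float array

print((x + y).dtype)  # float64: the result is upcast to the common type
print(x.dtype)        # int64 (platform dependent): += must keep this dtype
```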
|
Fast calculation of image moments
I have a mask (8-bit gray image) and I need to calculate the center of the region with a given index in the mask.
To do this I need to calculate the first-order moments along the X and Y axes for this mask.
Currently I'm using the following code:
```
void GetCenter(const uint8_t * mask, size_t stride, size_t width, size_t height,
uint8_t index, double * centerX, double * centerY)
{
uint64_t sum = 0, sumX = 0, sumY = 0;
for(size_t y = 0; y < height; ++y)
{
for(size_t x = 0; x < width; ++x)
{
if(mask[x] == index)
{
sum++;
sumX += x;
sumY += y;
}
}
mask += stride;
}
*centerX = sum ? (double)sumX/sum : 0;
*centerY = sum ? (double)sumY/sum : 0;
}
```
And I have a question: is there any way to improve the performance of this algorithm?
|
There is a way to greatly (more than ten times) improve the performance of this algorithm.
To do it you need to use the CPU's SIMD instructions (SSE2, AVX2, Altivec, NEON, etc.).
I wrote an example using SSE2 instructions (AVX2 code would be similar):
```
const __m128i K_0 = _mm_setzero_si128();
const __m128i K8_1 = _mm_set1_epi8(1);
const __m128i K16_1 = _mm_set1_epi16(1);
const __m128i K16_8 = _mm_set1_epi16(8);
const __m128i K16_I = _mm_setr_epi16(0, 1, 2, 3, 4, 5, 6, 7);
inline void AddMoments(const __m128i & mask, const __m128i & x, const __m128i & y,
__m128i & sumX, __m128i & sumY)
{
sumX = _mm_add_epi32(sumX, _mm_madd_epi16(_mm_and_si128(mask, x), K16_1));
sumY = _mm_add_epi32(sumY, _mm_madd_epi16(_mm_and_si128(mask, y), K16_1));
}
inline int ExtractSum(__m128i a)
{
return _mm_cvtsi128_si32(a) + _mm_cvtsi128_si32(_mm_srli_si128(a, 4)) +
_mm_cvtsi128_si32(_mm_srli_si128(a, 8)) + _mm_cvtsi128_si32(_mm_srli_si128(a, 12));
}
void GetCenter(const uint8_t * mask, size_t stride, size_t width, size_t height,
uint8_t index, double * centerX, double * centerY)
{
size_t alignedWidth = width & ~(sizeof(__m128i) - 1);
const __m128i _index = _mm_set1_epi8(index);
uint64_t sum = 0, sumX = 0, sumY = 0;
for(size_t y = 0; y < height; ++y)
{
size_t x = 0;
__m128i _x = K16_I;
__m128i _y = _mm_set1_epi16((short)y);
__m128i _sum = K_0;
__m128i _sumX = K_0;
__m128i _sumY = K_0;
for(; x < alignedWidth; x += sizeof(__m128i))
{
__m128i _mask = _mm_and_si128(_mm_cmpeq_epi8(_mm_loadu_si128((__m128i*)(mask + x)), _index), K8_1);
_sum = _mm_add_epi64(_sum, _mm_sad_epu8(_mask, K_0));
AddMoments(_mm_cmpeq_epi16(_mm_unpacklo_epi8(_mask, K_0), K16_1), _x, _y, _sumX, _sumY);
_x = _mm_add_epi16(_x, K16_8);
AddMoments(_mm_cmpeq_epi16(_mm_unpackhi_epi8(_mask, K_0), K16_1), _x, _y, _sumX, _sumY);
_x = _mm_add_epi16(_x, K16_8);
}
sum += ExtractSum(_sum);
sumX += ExtractSum(_sumX);
sumY += ExtractSum(_sumY);
for(; x < width; ++x)
{
if(mask[x] == index)
{
sum++;
sumX += x;
sumY += y;
}
}
mask += stride;
}
*centerX = sum ? (double)sumX/sum : 0;
*centerY = sum ? (double)sumY/sum : 0;
}
```
P.S. There is a simpler and cross-platform way to improve performance, using an external library (<http://simd.sourceforge.net/>):
```
void GetCenter(const uint8_t * mask, size_t stride, size_t width, size_t height,
uint8_t index, double * centerX, double * centerY)
{
uint64_t sum, sumX, sumY, sumXX, sumXY, sumYY;
::SimdGetMoments(mask, stride, width, height, index,
&sum, &sumX, &sumY, &sumXX, &sumXY, &sumYY);
*centerX = sum ? (double)sumX/sum : 0;
*centerY = sum ? (double)sumY/sum : 0;
}
```
An implementation using `_mm_movemask_epi8` and 8-bit lookup tables:
```
uint8_t g_sum[1 << 8], g_sumX[1 << 8];
bool Init()
{
for(int i = 0, n = 1 << 8; i < n; ++i)
{
g_sum[i] = 0;
g_sumX[i] = 0;
for(int j = 0; j < 8; ++j)
{
g_sum[i] += (i >> j) & 1;
g_sumX[i] += ((i >> j) & 1)*j;
}
}
return true;
}
bool g_inited = Init();
inline void AddMoments(uint8_t mask, size_t x, size_t y,
uint64_t & sum, uint64_t & sumX, uint64_t & sumY)
{
int value = g_sum[mask];
sum += value;
sumX += x * value + g_sumX[mask];
sumY += y * value;
}
void GetCenter(const uint8_t * mask, size_t stride, size_t width, size_t height,
uint8_t index, double * centerX, double * centerY)
{
size_t alignedWidth = width & ~(sizeof(__m128i) - 1);
const __m128i _index = _mm_set1_epi8(index);
union PackedValue
{
uint8_t u8[4];
uint16_t u16[2];
uint32_t u32;
} _mask;
uint64_t sum = 0, sumX = 0, sumY = 0;
for(size_t y = 0; y < height; ++y)
{
size_t x = 0;
for(; x < alignedWidth; x += sizeof(__m128i))
{
_mask.u32 = _mm_movemask_epi8(_mm_cmpeq_epi8(
_mm_loadu_si128((__m128i*)(mask + x)), _index));
AddMoments(_mask.u8[0], x, y, sum, sumX, sumY);
AddMoments(_mask.u8[1], x + 8, y, sum, sumX, sumY);
}
for(; x < width; ++x)
{
if(mask[x] == index)
{
sum++;
sumX += x;
sumY += y;
}
}
mask += stride;
}
*centerX = sum ? (double)sumX/sum : 0;
*centerY = sum ? (double)sumY/sum : 0;
}
```
An implementation using `_mm_movemask_epi8` and 16-bit lookup tables:
```
uint16_t g_sum[1 << 16], g_sumX[1 << 16];
bool Init()
{
for(int i = 0, n = 1 << 16; i < n; ++i)
{
g_sum[i] = 0;
g_sumX[i] = 0;
for(int j = 0; j < 16; ++j)
{
g_sum[i] += (i >> j) & 1;
g_sumX[i] += ((i >> j) & 1)*j;
}
}
return true;
}
bool g_inited = Init();
inline void AddMoments(uint16_t mask, size_t x, size_t y,
uint64_t & sum, uint64_t & sumX, uint64_t & sumY)
{
int value = g_sum[mask];
sum += value;
sumX += x * value + g_sumX[mask];
sumY += y * value;
}
void GetCenter(const uint8_t * mask, size_t stride, size_t width, size_t height,
uint8_t index, double * centerX, double * centerY)
{
size_t alignedWidth = width & ~(sizeof(__m128i) - 1);
const __m128i _index = _mm_set1_epi8(index);
union PackedValue
{
uint8_t u8[4];
uint16_t u16[2];
uint32_t u32;
} _mask;
uint64_t sum = 0, sumX = 0, sumY = 0;
for(size_t y = 0; y < height; ++y)
{
size_t x = 0;
for(; x < alignedWidth; x += sizeof(__m128i))
{
_mask.u32 = _mm_movemask_epi8(_mm_cmpeq_epi8(
_mm_loadu_si128((__m128i*)(mask + x)), _index));
AddMoments(_mask.u16[0], x, y, sum, sumX, sumY);
}
for(; x < width; ++x)
{
if(mask[x] == index)
{
sum++;
sumX += x;
sumY += y;
}
}
mask += stride;
}
*centerX = sum ? (double)sumX/sum : 0;
*centerY = sum ? (double)sumY/sum : 0;
}
```
Performance comparison for 1920x1080 image:
```
Base version: 8.261 ms;
1-st optimization:0.363 ms (in 22 times faster);
2-nd optimization:0.280 ms (in 29 times faster);
3-rd optimization:0.299 ms (in 27 times faster);
4-th optimization:0.325 ms (in 25 times faster);
```
As you can see above, the code using 8-bit lookup tables performs better than the code using 16-bit lookup tables. But the external library is better anyway, even though it additionally calculates the second-order moments.
|
Clojure merge multiple map into a single map
I have the following list of maps
```
({"child.search" {:roles #{"ROLE_ADM_UNSUBSCRIBE_SUBSCRIPTION" "ROLE_ADM_SEARCH_SUBSCRIPTION" "ROLE_ADM_VIEW_SUBSCRIPTION"}},
"child.cc.search" {:roles #{"ROLE_ADM_CC_SEARCH_SUBSCRIPTION" "ROLE_ADM_CC_VIEW_SUBSCRIPTION"}}}
{"child.abusereport" {:roles #{"ROLE_ADM_ABUSE_RPT"}},
"child.manualfiltering" {:roles #{"ROLE_ADM_MANUAL_FILTERING_RPT"}}}
{"child.assigned.advertisement" {:roles #{"ROLE_ADM_CREATE_ADVERTISING"}},
"child.manage.advertisement" {:roles #{"ROLE_ADM_LIST_ADVERTISING"}}})
```
I need to have a single map like following.
```
{"child.search" {:roles #{"ROLE_ADM_UNSUBSCRIBE_SUBSCRIPTION" "ROLE_ADM_SEARCH_SUBSCRIPTION" "ROLE_ADM_VIEW_SUBSCRIPTION"}}
"child.cc.search" {:roles #{"ROLE_ADM_CC_SEARCH_SUBSCRIPTION" "ROLE_ADM_CC_VIEW_SUBSCRIPTION"}}
"child.abusereport" {:roles #{"ROLE_ADM_ABUSE_RPT"}}
"child.manualfiltering" {:roles #{"ROLE_ADM_MANUAL_FILTERING_RPT"}}
"child.assigned.advertisement" {:roles #{"ROLE_ADM_CREATE_ADVERTISING"}}
"child.manage.advertisement" {:roles #{"ROLE_ADM_LIST_ADVERTISING"}}}
```
How can I do it?
|
You can use the "**[into](http://clojuredocs.org/clojure_core/clojure.core/into)**" function, and supply an empty map `{}` as the first argument:
```
(into {} map-list)
```
Here's the output from my REPL session; I copied your code into two vars 'map-list' and 'single-map' (Clojure 1.3.0):
```
(def map-list '({"child.search" {:roles #{"ROLE_ADM_UNSUBSCRIBE_SUBSCRIPTION" "ROLE_ADM_SEARCH_SUBSCRIPTION" "ROLE_ADM_VIEW_SUBSCRIPTION"}},
"child.cc.search" {:roles #{"ROLE_ADM_CC_SEARCH_SUBSCRIPTION" "ROLE_ADM_CC_VIEW_SUBSCRIPTION"}}}
{"child.abusereport" {:roles #{"ROLE_ADM_ABUSE_RPT"}},
"child.manualfiltering" {:roles #{"ROLE_ADM_MANUAL_FILTERING_RPT"}}}
{"child.assigned.advertisement" {:roles #{"ROLE_ADM_CREATE_ADVERTISING"}},
"child.manage.advertisement" {:roles #{"ROLE_ADM_LIST_ADVERTISING"}}}))
#'user/map-list
user=>
(def single-map {"child.search" {:roles #{"ROLE_ADM_UNSUBSCRIBE_SUBSCRIPTION" "ROLE_ADM_SEARCH_SUBSCRIPTION" "ROLE_ADM_VIEW_SUBSCRIPTION"}}
"child.cc.search" {:roles #{"ROLE_ADM_CC_SEARCH_SUBSCRIPTION" "ROLE_ADM_CC_VIEW_SUBSCRIPTION"}}
"child.abusereport" {:roles #{"ROLE_ADM_ABUSE_RPT"}}
"child.manualfiltering" {:roles #{"ROLE_ADM_MANUAL_FILTERING_RPT"}}
"child.assigned.advertisement" {:roles #{"ROLE_ADM_CREATE_ADVERTISING"}}
"child.manage.advertisement" {:roles #{"ROLE_ADM_LIST_ADVERTISING"}}})
#'user/single-map
user=>
;; Check to see if we have the desired result
(= (into {} map-list)
single-map)
true
```
|
How to find the version of a Maven plugin in the project?
I'm trying to find the version of ***maven-wagon*** plugin that's being used in my project. Is there a way to find the version of a used plugin via command line?
|
There are several ways to do this:
**1) Check the dependency tree:**
To find out the libraries and versions you are using you can use the [Maven dependency tree](https://maven.apache.org/plugins/maven-dependency-plugin/examples/resolving-conflicts-using-the-dependency-tree.html), just execute this where you have your project (*pom.xml*):
```
mvn dependency:tree -Dverbose
```
This is useful to detect which version of a specific library your project is using, but I think it doesn't include plugins.
**2) Describe the specific plugin:**
If you want to know what version of a specific plugin you have installed, you can do this:
>
> mvn -Dplugin=&lt;groupId&gt;:&lt;artifactId&gt; help:describe
>
>
> mvn -Dplugin=org.codehaus.mojo:wagon-maven-plugin help:describe
>
>
>
This shows you something like this:
```
Name: Maven Wagon plugin
Description: Maven plugin that can be used to access various operations on a
given URL using a supported maven wagon. Supports recursive upload, download,
and list directory content functionality.
Group Id: org.codehaus.mojo
Artifact Id: wagon-maven-plugin
Version: 1.0
Goal Prefix: wagon
This plugin has 11 goals:
...
...
```
**3) Check the effective pom:**
Execute this:
>
> mvn help:effective-pom
>
>
>
and go through the pom looking for the plugin you need to clarify, there you will find something like this:
```
<plugin>
<groupId>org.codehaus.mojo</groupId>
<artifactId>wagon-maven-plugin</artifactId>
<version>1.0</version>
</plugin>
```
|
Any algorithm for "Flip all" (Light Out) game?
in this game: <http://www.mathsisfun.com/games/allout.html>
The solve function can solve any case, no matter how you "abuse" the original board. Please tell me the algorithm for solving this game. I have been thinking about it for days but still have no clue how to solve all cases.
**OK, after reading some answers and comments (and having a quick look at the Lights Out game), I'll expand my question:**
Will the game be different if I expand the size of the grid (say, to 25x25)? Is there still an algorithm that can solve *any* case in acceptable time (< 2s)?
|
This game is more commonly known as Lights Out, and has a number of elegant solutions, all based in some standard but somewhat advanced mathematics. I won't describe them all here but if you Google a bit you can find all kinds of explanations varying from straightforward procedures to transformations into linear algebra or group theory. A few links:
<http://www.hamusutaa.com/pilot/solution.html>
<http://www.ripon.edu/academics/macs/summation/2010/articles/M.%20Madsen%20-%20Lights%20Out.pdf>
<http://people.math.sfu.ca/~jtmulhol/math302/notes/24-Lights-Out.pdf>
**Edit**: Re: your second question. The algorithm presented in the second link I posted can solve an n x n board in O(n^6) time, meaning you should be able to quickly solve a 25 x 25 board.
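To make the linear-algebra idea concrete, here is a minimal sketch of one standard formulation: treat each button press as a 0/1 unknown and solve the toggle system by Gaussian elimination over GF(2). It assumes the usual plus-shaped toggle pattern and is an illustration, not the exact algorithm from the links:

```
import numpy as np

def solve_lights_out(board):
    """Return an n x n 0/1 press matrix that turns all lights off, or None."""
    n = len(board)
    m = n * n
    # A[i, j] = 1 if pressing button j toggles cell i (the button and its 4 neighbours).
    A = np.zeros((m, m), dtype=np.uint8)
    for r in range(n):
        for c in range(n):
            j = r * n + c
            for dr, dc in ((0, 0), (1, 0), (-1, 0), (0, 1), (0, -1)):
                rr, cc = r + dr, c + dc
                if 0 <= rr < n and 0 <= cc < n:
                    A[rr * n + cc, j] = 1
    b = np.array(board, dtype=np.uint8).reshape(m)
    A0, b0 = A.copy(), b.copy()      # keep originals for the final check
    pivot_row = [-1] * m             # column -> pivot row, -1 means free column
    row = 0
    for col in range(m):             # reduce (A | b) to RREF over GF(2)
        piv = next((r for r in range(row, m) if A[r, col]), None)
        if piv is None:
            continue
        A[[row, piv]] = A[[piv, row]]
        b[row], b[piv] = b[piv], b[row]
        for r in range(m):
            if r != row and A[r, col]:
                A[r] ^= A[row]
                b[r] ^= b[row]
        pivot_row[col] = row
        row += 1
    x = np.zeros(m, dtype=np.uint8)  # free variables default to 0
    for col in range(m):
        if pivot_row[col] != -1:
            x[col] = b[pivot_row[col]]
    if np.any((A0 @ x) % 2 != b0):   # inconsistent system: board is unsolvable
        return None
    return x.reshape(n, n)

print(solve_lights_out([[1, 1, 1], [1, 1, 1], [1, 1, 1]]))
```

At 25x25 the system has only 625 unknowns, so the O(n^6) elimination comfortably fits your 2-second budget.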
|
Strategy to avoid running out of memory in memory intensive application
My C# .NET application has to read lots of files with electrical readings. There are several calculations to be done and output files need to be generated.
Due to the nature of the input it would be very inefficient to output after each input line or even after one file, so it would be better to do so after a chunk of files.
The reason for this is, that each file contains a list of readings/electrical units for one time stamp, but the output needs to be files for each unit.
So for instance the input would be 100 files each containing e.g. volt, ampere, watt, status code for one timestamp (2015\_08\_31\_00\_00\_00.txt, 2015\_08\_31\_00\_00\_05.txt ...).
The output should be per electrical unit and e.g. one file per day
(c://ampere/2015\_08\_31.txt, c://volt/2015\_08\_31.txt ...).
Note that this is a simplified picture of the application. In reality there are several different input and output formats, directory structures etc.
What I want to do is, keep the calculated and ready for output values in memory and output them according to different strategies. This could be e.g. until all input files are processed.
As some strategies like this one can result in too much data in memory, I would like to monitor the memory usage and decide if I need to output sooner.
**I do know how to get the used memory of my program**, but how do I safely get an estimation on the available memory?
As far as I understand this is not a simple task due to memory fragmentation, paging, trashing and so on. I do know in this case that I got the 32-Bit induced memory limit with the current build settings, but I would like to get a general answer which could also be applied to a 64-Bit program.
How do I estimate when enough memory is still available to write to disk, but use a good amount of RAM to optimize I/O?
**Update**
I have not yet implemented this feature so I cannot tell if memory problems would occur under usual circumstances. Thanks to all your comments and answers, I see it isn't easily accomplished.
My current idea is to use a fixed input limit based on the size of the input files. This still would not protect against problems if other programs create heavy load.
Each file is always just a few KB.
**I will keep the SQLite idea in mind but I will have to check if I get an OK**
|
Unfortunately, there's really no good way to answer the question "how much RAM is my program using?" or "how close am I to hitting the `OutOfMemoryException` wall?", for a number of reasons. It's a lot more complicated than it looks like it should be.
One thing you might want to do, though, is make sure that you're not holding on to the input data unnecessarily. For example, depending on how you're processing the data, it could be feasible to read the file, and perform the relevant calculations, one line at a time, or one small batch of lines at a time. If you hold on to aggregate results, but don't keep the input data around, that should keep the memory usage down.
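As a minimal sketch of that idea (the folder name and the semicolon-separated layout here are hypothetical stand-ins for your real formats):

```
using System;
using System.Collections.Generic;
using System.IO;

class StreamingAggregator
{
    static void Main()
    {
        string[] units = { "volt", "ampere", "watt" };
        var sums = new Dictionary<string, double>();
        foreach (var u in units) sums[u] = 0.0;

        // File.ReadLines is lazy: only one line is held in memory at a time,
        // so only the running aggregates accumulate.
        foreach (var path in Directory.EnumerateFiles("input", "*.txt"))
        {
            foreach (var line in File.ReadLines(path))
            {
                var fields = line.Split(';');
                for (int i = 0; i < units.Length; i++)
                    sums[units[i]] += double.Parse(fields[i]);
            }
        }

        foreach (var kv in sums)
            Console.WriteLine($"{kv.Key}: {kv.Value}");
    }
}
```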
If that doesn't help, because your results just keep piling up, the best way to handle *that* is to offload them out of memory. Put them in a relational database, for example, as fast as they come in, and then they end up on disc instead. And then at the end of the day, perform a query that will pull in the day's results (HINT: this is a lot easier if you have a timestamp field that gets set when you insert the records) and save them out to their final version in your output files.
|
Why do \_token and XSRF-TOKEN differ in Laravel?
I don't understand why the token for AJAX requests (XSRF-TOKEN) is different from the `_token` that normal forms use. In addition, it's much longer. Why? And why have 2 tokens at all? Why not just use one, the same for both AJAX and normal requests?
|
# 1 Approach, 2 Techniques
Laravel uses 2 distinct techniques to prevent CSRF attacks.
The approach is the same:
>
> send a token (CSRF or XSRF) to the client, and the client has to return
> it back in the following request
>
>
>
and there are 2 steps:
- server sends a *token* (get a form) (CSRF or XSRF)
- client returns the token as an *X-token* (post a form) (X-CSRF or X-XSRF)
>
> when you see an X- token, it is the client's reply, sent with the POST to the server
>
>
>
The reason we have 2 techniques *is not that they use different approaches*;
it's because web applications use 2 different client-side architectures:
- old-fashioned: the server generates pure HTML and sends it to the client
- Single Page Application: a client SPA framework (like Vue, React, Angular) sends and receives data as JSON or XML and creates the proper HTML in the DOM
The CSRF-protection techniques adapt to these two client-side architectures as below:
```
+-------------+-----------------+-----------+------------+
| Client Arch | Protection Tech | Get Token | Post Token |
+-------------+-----------------+-----------+------------+
| old-fashion | sync-token | CSRF | X-CSRF |
| SPA | cookie-header | XSRF | X-XSRF |
+-------------+-----------------+-----------+------------+
```
# Mechanism Description
## 1.Server Generates Token
Laravel makes a CSRF token (40 chars) and stores it in the session:
```
/**
* Regenerate the CSRF token value.
*
* @return void
*/
public function regenerateToken()
{
$this->put('_token', Str::random(40));
}
```
After generating and storing the token in the session, the token will be sent to the client as both CSRF and XSRF;
the client side will decide which one to use.
## 2.Server Sends Token To Client
For the old-fashioned approach (sync-token technique), the client can receive the CSRF token in two forms, via a call to the `csrf_token()` helper method in Blade:
1. in the form body: `<input type='hidden' name='_token' value='{{csrf_token()}}' />`
2. in a meta tag, so an Ajax request can use it in its header (see the sketch below)
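The meta-tag form from the Laravel docs looks like this:

```
<meta name="csrf-token" content="{{ csrf_token() }}">
```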
Here is how this helper method returns the corresponding value:
```
/**
* Get the CSRF token value.
*
* @return string
*
* @throws \RuntimeException
*/
function csrf_token()
{
$session = app('session');
if (isset($session)) {
return $session->token();
}
throw new RuntimeException('Application session store not set.');
}
```
For the cookie-header technique (SPA frameworks), the client framework (like Angular) can receive the XSRF token in a cookie, **because**:
>
> there is no HTML form generated on the server in which the server can seed
> its hidden input, so the way it can send its token to the client is by
> sending it in a cookie. (This method is named XSRF.)
>
>
>
```
/**
* Add the CSRF token to the response cookies.
*
* @param \Illuminate\Http\Request $request
* @param \Symfony\Component\HttpFoundation\Response $response
* @return \Symfony\Component\HttpFoundation\Response
*/
protected function addCookieToResponse($request, $response)
{
$config = config('session');
$response->headers->setCookie(
new Cookie(
'XSRF-TOKEN', $request->session()->token(), $this->availableAt(60 * $config['lifetime']),
$config['path'], $config['domain'], $config['secure'], false, false, $config['same_site'] ?? null
)
);
return $response;
}
```
Laravel puts the token in both places since it's up to the client which method to use, and expects the client to respond with one of these methods.
## 3.Client Sends X- Token To Server
On the client side:
1. old-fashioned (X-CSRF):
- post the token in the POST data, or:
- make an Ajax call like this:
```
`$.ajaxSetup({
headers: {
'X-CSRF-TOKEN': $('meta[name="csrf-token"]').attr('content')
}
});`
```
2. SPA frameworks: these frameworks put the token as X-XSRF-TOKEN in the POST headers
## 4.Server Checks X- Token vs In-Session Token
---
Now it's time for Laravel to check the token.
In the `VerifyCsrfToken` middleware, Laravel checks whether the request should be checked for the CSRF protection token:
```
/**
* Handle an incoming request.
*
* @param \Illuminate\Http\Request $request
* @param \Closure $next
* @return mixed
*
* @throws \Illuminate\Session\TokenMismatchException
*/
public function handle($request, Closure $next)
{
if (
$this->isReading($request) ||
$this->runningUnitTests() ||
$this->inExceptArray($request) ||
$this->tokensMatch($request) //compares request_token vs session_token
) {
return tap($next($request), function ($response) use ($request) {
if ($this->shouldAddXsrfTokenCookie()) {
$this->addCookieToResponse($request, $response); //add cookie to response
}
});
}
throw new TokenMismatchException('CSRF token mismatch.');
}
```
Two lines are of interest:
```
$this->tokensMatch($request)
```
and
```
$this->addCookieToResponse($request, $response);
```
So there are several pieces of data the server can put in each response:
1. *html form input* **\_token** (40 chars) (**C**SRF)
2. *html meta header* **csrf-token** (40 chars) (**C**SRF)
3. *cookie* **XSRF-TOKEN** (224 chars) (**X**SRF)
and several pieces of data the client can send back to the server in response to those tokens:
1. *post parameter* **\_token** (40 chars) (X-**C**SRF)
2. *http header* **X-CSRF-TOKEN** (40 chars) (X-**C**SRF)
3. *http header* **X-XSRF-TOKEN** (224 chars) (X-**X**SRF)
>
> Why is the CSRF token 40 chars while the XSRF token is 224 chars?
> We will get to this a little bit later
>
>
>
The HTTP request has to match the token with one of the above **X-Tokens**:
```
/**
* Determine if the session and input CSRF tokens match.
*
* @param \Illuminate\Http\Request $request
* @return bool
*/
protected function tokensMatch($request)
{
$token = $this->getTokenFromRequest($request);// it get token from request
return is_string($request->session()->token()) &&
is_string($token) &&
hash_equals($request->session()->token(), $token); //checks if it is equal to session token or not
}
/**
* Get the CSRF token from the request.
*
* @param \Illuminate\Http\Request $request
* @return string
*/
protected function getTokenFromRequest($request)
{
$token = $request->input('_token') ?: $request->header('X-CSRF-TOKEN');//check sync-token
if (! $token && $header = $request->header('X-XSRF-TOKEN')) {
$token = CookieValuePrefix::remove($this->encrypter->decrypt($header, static::serialized()));
}
return $token;
}
```
The first pattern to examine is sync-token: the token from the client can be in an `<input name='_token' />`, or it can be in an *HTTP header* if requested from an Ajax method call in the client.
the line
```
$token = $request->input('_token') ?: $request->header('X-CSRF-TOKEN');
```
will check for that, and if the token can be retrieved, it will be returned and checked against the session token.
But if `$token` is `NULL`, it will check against the cookie-header pattern:
getting `$header = $request->header('X-XSRF-TOKEN')` from the header and decrypting it if it needs decryption:
```
$token = CookieValuePrefix::remove($this->encrypter->decrypt($header, static::serialized()));
```
since it was encrypted before being added to the cookie.
# Cookie Encryption
>
> This is the reason the XSRF token can be 224 chars:
> **cookie encryption**.
> You may disable cookie encryption and make the XSRF token 40 chars, like the CSRF token.
>
>
>
>
> So the difference comes down to cookie encryption.
>
>
>
# Necessity of Cookie Encryption
But **why does the cookie need to be encrypted? Why does the XSRF cookie need to be encrypted?**
In general, Laravel stores some data in cookies, and cookies can be modified by the client. Because the server does not want modifications on the client, it **encrypts cookies**.
This can be configured so that the *CSRF cookie is not encrypted*, since it is not subject to change by the user; it is only subject to being stolen by **cookie hijacking**, which encryption does not prevent.
>
> The only difference it makes is having two tokens (unencrypted and encrypted)
> for the two CSRF protection methods.
> So if attackers can access a **cookie-stored** (X-XSRF) token (since hijacking a
> cookie is much easier than hijacking runtime HTML and CSS with XSS),
> it cannot be abused with the sync-token mechanism.
> A CSRF attack with an HTTP form parameter is easier anyway, since HTML can be in
> an email etc., while running JS is less common.
>
>
>
## Conclusion
>
> So if a client uses the **old-fashioned client architecture**, the cookie-header technique (XSRF stored in a cookie) won't leave it with **a data leak in the cookie**.
>
>
>
Further information on these prevention patterns can be found here:
<https://en.wikipedia.org/wiki/Cross-site_request_forgery#Prevention>
|
Poor performance with transpose and cumulative sum in Repa
I have developed a cumulative sum function as defined below in the Haskell library Repa. However, I have run into an issue when combining this function with the transpose operation. All 3 of the following operations take well under a second:
```
cumsum $ cumsum $ cumsum x
transpose $ transpose $ transpose x
transpose $ cumsum x
```
However, if I write:
```
cumsum $ transpose x
```
performance degrades horrendously. While each individual operation in isolation takes well under a second on a 1920x1080 image, when combined they now take 30+ seconds...
Any ideas on what could be causing this? My gut tells me it has something to do with delayed arrays, not forcing at the right time, etc... But I do not have enough experience to track this down quite yet.
```
{-# LANGUAGE TypeOperators, FlexibleContexts, TypeFamilies #-}
import Data.Array.Repa as Repa
{-# INLINE indexSlice #-}
indexSlice :: (Shape sh, Elt a) => Int -> Array (sh :. Int) a -> (sh :. Int) -> a
indexSlice from arr (z :. ix) = arr `unsafeIndex` (z :. (ix + from))
{-# INLINE sliceRange #-}
sliceRange :: (Slice sh, Shape sh, Elt a) => Int -> Int -> Array (sh :. Int) a -> Array (sh :. Int) a
sliceRange from to arr = fromFunction (z :. (to - from + 1)) $ indexSlice from arr
where (z :. _) = extent arr
{-# INLINE cumsum' #-}
cumsum' :: (Slice (SliceShape sh), Slice sh, Shape (FullShape sh), Shape (SliceShape sh), Elt a, Num a) =>
Array (FullShape sh :. Int) a -> t -> (sh :. Int) -> a
cumsum' arr f (sh :. outer) = Repa.sumAll $ sliceRange 0 outer $ Repa.slice arr (sh :. All)
{-# INLINE cumsum #-}
cumsum :: (FullShape sh ~ sh, Slice sh, Slice (SliceShape sh), Shape sh, Shape (SliceShape sh), Elt a, Num a) =>
Array (sh :. Int) a -> Array (sh :. Int) a
cumsum arr = Repa.force $ unsafeTraverse arr id $ cumsum' arr
```
|
From a library implementor's perspective, the way to debug this is to create a wrapper for the suspect operation, then look at the core code to see if fusion has worked.
```
-- Main.hs ---------------------------------------------------
import Solver
import Data.Array.Repa.IO.BMP
main
= do Right img <- readImageFromBMP "whatever.bmp"
print $ cumsumBMP img
-- Solver.hs --------------------------------------------------
{-# LANGUAGE TypeOperators, FlexibleContexts, TypeFamilies #-}
module Solver (cumsumBMP) where
import Data.Array.Repa as Repa
import Data.Word
{- all your defs -}
{-# NOINLINE cumsumBMP #-}
cumsumBMP :: Array DIM3 Word8 -> Array DIM3 Word8
cumsumBMP img = cumsum $ transpose img
```
I've put the "solver" code in a separate module, so we only have to wade through the core code for the definitions we care about.
Compile like:
```
touch Solver.hs ; ghc -O2 --make Main.hs \
-ddump-simpl -dsuppress-module-prefixes -dsuppress-coercions > dump
```
Go to the definition of `cumsumBMP` and search for the `letrec` keyword. Searching for `letrec` is a quick way to find the inner loops.
Not too far down I see this: (slightly reformatted)
```
case gen_a1tr
of _ {
GenManifest vec_a1tv ->
case sh2_a1tc `cast` ... of _ { :. sh3_a1iu sh4_a1iv ->
case ix'_a1t9 `cast` ... of _ { :. sh1'_a1iz sh2'_a1iA ->
case sh3_a1iu `cast` ... of _ { :. sh5_X1n0 sh6_X1n2 ->
case sh1'_a1iz `cast` ... of _ { :. sh1'1_X1n9 sh2'1_X1nb ->
case sh5_X1n0 of _ { :. sh7_X1n8 sh8_X1na ->
...
case sh2'1_X1nb of _ { I# y3_X1nO ->
case sh4_a1iv of _ { I# y4_X1nP ->
case sh2'_a1iA of _ { I# y5_X1nX ->
...
let { x3_a1x6 :: Int# [LclId]
x3_a1x6 =
+#
(*#
(+#
(*#
y1_a1iM
y2_X1nG)
y3_X1nO)
y4_X1nP)
y5_X1nX } in
case >=#
x3_a1x6
0
of ...
```
Disaster! The `x3_a1x6` binding is clearly doing some useful work (multiplications, additions and suchlike) but it's wrapped in a long series of unboxing operations that are also executed for every loop iteration. What's worse is that it's unboxing the length and width (shape) of the array at every iteration, and this information will always be the same. GHC should really float these case expressions out of the loop, but it doesn't yet. This is an instance of [Issue #4081 on the GHC trac](http://hackage.haskell.org/trac/ghc/ticket/4081), which hopefully will be fixed sometime soon.
The work around is to apply `deepSeqArray` to the incoming array. This places a demand on its value at the top level (outside the loop) which lets GHC know it's ok to move the case matches further up. For a function like `cumsumBMP`, we also expect the incoming array to already be manifest, so we can add an explicit case match for this:
```
{-# NOINLINE cumsumBMP #-}
cumsumBMP :: Array DIM3 Word8 -> Array DIM3 Word8
cumsumBMP img@(Array _ [Region RangeAll (GenManifest _)])
= img `deepSeqArray` cumsum $ transpose img
```
Compiling again, the inner loop now looks much better:
```
letrec {
$s$wfoldlM'_loop_s2mW [...]
:: Int# -> Word# -> Word# [...]
$s$wfoldlM'_loop_s2mW =
\ (sc_s2mA :: Int#) (sc1_s2mB :: Word#) ->
case <=# sc_s2mA a_s2ji of _ {
False -> sc1_s2mB;
True ->
$s$wfoldlM'_loop_s2mW
(+# sc_s2mA 1)
(narrow8Word#
(plusWord#
sc1_s2mB
(indexWord8Array#
rb3_a2gZ
(+#
rb1_a2gX
(+#
(*#
(+#
(*#
wild19_X1zO
ipv1_X1m5)
sc_s2mA)
ipv2_X1m0)
wild20_X1Ct)))))
}; } in
```
That's a tight, tail recursive loop that only uses primitive operations. Provided you compile with `-fllvm -optlo-O3`, there's no reason that won't run as fast as an equivalent C program.
There's a slight hiccup when running it though:
```
desire:tmp benl$ ./Main
Main: Solver.hs:(50,1)-(51,45): Non-exhaustive patterns in function cumsumBMP
```
This just reminds us that we need to force the array before calling `cumsumBMP`.
```
-- Main.hs ---------------------------------------------------
...
import Data.Array.Repa as Repa
main
= do Right img <- readImageFromBMP "whatever.bmp"
print $ cumsumBMP $ Repa.force img
```
In summary:
1. You need to add some `deepSeqArray` and pattern matching goop to your top level
functions to work around a current infelicity in GHC. This is demonstrated by
the final version of the `cumsumBMP` function above. If you want GHC HQ to fix
this soon then add yourself as a cc to [Issue #4081 on the GHC trac](http://hackage.haskell.org/trac/ghc/ticket/4081). Repa programs will be much prettier when this is fixed.
2. You don't need to add the goop to every function. In this example I didn't need to touch `indexSlice` and friends. The general rule is to add the goop to functions that use `force`, `fold` or `sumAll`. These functions instantiate the actual loops that operate over the array data, that is, they convert a delayed array to a manifest value.
3. The performance of a piece of Repa code is determined as much by the context in which it's used as the actual code. If you pass your top level functions delayed arrays then they will run very slowly. There is more discussion of this in [The Repa Tutorial](http://www.haskell.org/haskellwiki/Numeric_Haskell%3a_A_Repa_Tutorial#Optimising_Repa_programs).
4. BMP files read with the repa-io library aren't pre-forced, so you need to force them before use. This is probably the wrong default, so I'll change it in the next version.
|
Stream works without filter(predicate), gives "Incompatible types" with it
Why does the code below work only when the filter-predicate line is commented out? With the filter line, it gives me this error:
```
Incompatible types: java.lang.Object cannot be converted to java.lang.String
```
What am I doing wrong?
```
import static java.util.stream.Collectors.joining;
import java.util.Arrays;
import java.util.function.Predicate;
import java.util.regex.Pattern;
public class PredicateStreamInvalidType {
public static void main(String[] ignored) {
String input = "one two three four five six seven eight";
Predicate containsAnE = Pattern.compile("e").asPredicate();
String[] splits = input.split(" ");
String output = Arrays.stream(splits).
//filter(containsAnE).
collect(joining(" "));
System.out.println(output);
}
}
```
I'm expecting this output:
```
one three five seven eight
```
|
You're using the raw form of `Predicate<T>` - using raw types is very often where you end up getting compile-time errors involving `Object` where you expected to be using a more specific type.
All you need to do is change the compile-time type of `containsAnE`:
```
Predicate<String> containsAnE = Pattern.compile("e").asPredicate();
```
[`Pattern.asPredicate`](http://docs.oracle.com/javase/8/docs/api/java/util/regex/Pattern.html#asPredicate--) is declared to return a `Predicate<String>`, so you should be good to go :)
As a way to help you avoid such problems in future, you should see if you can get whatever environment you're using (IDE, command line compilation, whatever) to warn you when you use raw types. It's *almost* always a bad idea.
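For example, `javac` itself can flag raw types (a sketch; your IDE or build tool will have an equivalent switch):

```
javac -Xlint:rawtypes PredicateStreamInvalidType.java
```

With the raw `Predicate containsAnE` declaration, this prints a `[rawtypes]` warning pointing at the offending line.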
|
Horizontal vs Vertical Fragmentation in Distributed Database Management Systems (DDBMS)
In the context of DDBMS, what is the difference between **Vertical Fragmentation** and **Horizontal Fragmentation**?
Is it such that the relation's extension is fragmented for vertical fragmentation and intension fragmentation is horizontal fragmentation?
|
Suppose you have a relation, Student.
You divide relation Student into two sets (call them set1 and set2) such that half of the attributes of Student are in set1 and half of them are in set2. This is called 'vertical fragmentation', as the relation is fragmented along columns (similar to tearing a page vertically). The original relation is obtained by taking the natural join of all the sets. For the natural join we require at least one attribute that is common to all the sets (generally it is the primary key).
But if our relation is divided so that a subset of its rows (and here all attributes are present in the rows) is stored at site1 (for example), another subset at site2, and so on, this is called 'horizontal fragmentation', and the original relation is obtained by taking the union of all the sets. It's like tearing a page horizontally.
As is clear, this is in the context of Distributed DBMS.
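A small sketch with hypothetical attributes makes the two directions concrete (π = projection, σ = selection):

```
-- Vertical: split by columns, keeping the key in every fragment
Set1 = π(roll_no, name)(Student)
Set2 = π(roll_no, dept, gpa)(Student)
Student = Set1 ⋈ Set2            -- natural join on roll_no reconstructs

-- Horizontal: split by rows, one subset per site
Site1 = σ(dept = 'CS')(Student)
Site2 = σ(dept <> 'CS')(Student)
Student = Site1 ∪ Site2          -- union reconstructs
```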
|
How can I share a git configuration?
I've initiated a new git repository using `git flow`, done a commit or two and pushed.
When I clone the repository in a new directory and run a `git flow` command I get the error:
```
Fatal: Not a gitflow-enabled repo yet. Please run 'git flow init' first.
```
The reason for the error is that the `.git/config` file in the newly cloned directory doesn't contain the git flow configuration.
**How can I push/share the configuration so any clone of the repository will have the correct configuration?**
|
## You cannot directly share your config
The contents of the `.git` folder are (intended to be) specific to an individual install.
## Alternative
Instead of trying to directly share your config, consider adding a script to the repository to setup whatever config you want to share. e.g. add a file named `bin/setup` to the repository with these contents:
```
#!/usr/bin/env bash
# simple
git flow init -d
# override stuff or whatever
git config gitflow.prefix.versiontag ""
git config gitflow.prefix.feature ""
```
Commit it:
```
-> chmod +x bin/setup
-> git add bin/setup
-> git commit -m "adding a setup script to ensure consistent config"
```
And run it on new clones:
```
-> git clone ....
-> cd project
-> bin/setup
-> git config -l --local
...
gitflow.branch.master=master
gitflow.branch.develop=development
gitflow.prefix.versiontag=
gitflow.prefix.feature=
gitflow.prefix.release=release/
gitflow.prefix.hotfix=hotfix/
gitflow.prefix.support=support/
```
|
multiple versions of a script on the same page (d3.js)
I need to have multiple versions of a javascript library on the same page. How can I accomplish this, short of manually refactoring one version to avoid naming conflicts?
There are many examples of how to do this with Jquery ([example](https://stackoverflow.com/questions/1566595/can-i-use-multiple-versions-of-jquery-on-the-same-page)). This seems to rely on jQuery goodness, however. How can I do this for an arbitrary script?
More detail: I'm using [d3.js](http://d3js.org/), and I'm plugging in visualizations others have made using d3. The issue is, one of the vizzes requires one version of d3, the other requires a newer version. Both of these vizzes are supposed to be available on the same page - the user swaps which viz is displayed by clicking a thumbnail, and then js is used to hide one viz and build the other. So, it seems like swapping the script rather than loading both in a no-conflict style could also be an option.
|
If you take a look at the main d3 source file: <https://github.com/mbostock/d3/blob/master/d3.js>
you see it begins:
```
d3 = function() {
var d3 = {
version: "3.1.5"
};
//.....
```
So d3 is just an object. I'm not sure if this is the best method, but I think the following would work:
1. include one version of d3
2. put the line: `d3versionX = d3;`
3. include the next version
4. same as 2 with different version number.
5. put `d3 = d3versionX` where X is your default version for the visualization when the page loads
6. put an event handler on the thumbnails that triggers the switching of versions, and set the d3 variable to the appropriate version as the first thing that happens.
## update with sample code
See [this jsbin](http://jsbin.com/naqidebino/edit?html,js,console,output) for a working example. The relevant code:
```
<script src="https://cdnjs.cloudflare.com/ajax/libs/d3/4.10.0/d3.min.js"></script>
<script>
d3version4 = d3
window.d3 = null
</script>
<script src="https://cdnjs.cloudflare.com/ajax/libs/d3/3.5.17/d3.min.js"></script>
<script>
d3version3 = d3
window.d3 = null
// test it worked
console.log('v3', d3version3.version)
console.log('v4', d3version4.version)
</script>
```
|
C# datagridview combobox column datasource from list/dictionary/datatable
I've got a datatable and one column is an integer ID foreign key to another database table.
I've got a datagridview and I'd like to use a combobox column to allow the user to change the value. But instead of using the integers, it would be great to use the names.
I've tried creating a simple struct with public members `int ID` and `string Name`, a dictionary, and looked into enums (however, the values are not known at compile time), but haven't got anything to work yet.
I was able to populate the combobox with struct values, but not able to programmatically set the selected item/index; i.e., if ID "5" is in the datatable, set the combo box selected item to the struct that has an ID of 5.
So to be clear i'm wanting:
```
gridview datasource's fk ID's
1
2
3
Foreign Key table:
ID Name
1 Name 1
2 Name 2
3 Name 3
```
Datagridviewcombobox column should be loaded with three items; should display as "Name 1, Name 2, Name 3". Based on the gridview datasource's FK id, the selected item for each should match.
|
You can set the [`DataGridViewComboBoxColumn.DataSource`](http://msdn.microsoft.com/en-us/library/system.windows.forms.datagridviewcomboboxcolumn.datasource.aspx) property, and then use the [`ValueMember`](http://msdn.microsoft.com/en-us/library/system.windows.forms.datagridviewcomboboxcolumn.valuemember.aspx) and [`DisplayMember`](http://msdn.microsoft.com/en-us/library/system.windows.forms.datagridviewcomboboxcolumn.displaymember.aspx) properties to determine what is shown in the `ComboBox`. Easiest is probably to load your FK values into `DataTable` and use that as the data source. For your example:
```
// assuming your DataGridViewComboBox column is fkCol
// and your fk values are in a DataTable called fkTable
fkCol.DataSource = fkTable;
fkCol.ValueMember = "ID";
fkCol.DisplayMember = "Name";
```
I am not sure how you are binding your `DataGridView` to your initial `DataTable`, but you can associate the `DataGridViewComboBox` column with a specific column in your original `DataTable` using [`DataPropertyName`](http://msdn.microsoft.com/en-us/library/system.windows.forms.datagridviewcolumn.datapropertyname.aspx):
```
fkCol.DataPropertyName = "ColumnName";
```
|
How to reuse existing user/group data in Activiti?
I have a webapp which has user/group functions, and existing user/group data.
I want to use Activiti, the process engine; however, it seems Activiti manages user/group info itself.
Should I:
1. Refactor my existing webapp, to reuse the user/group data from Activiti, or
2. Write some adapter code, to make Activiti reuse user/group data in my existing database? Maybe another implementation of `RepositoryService`, `IdentityService`, etc., and recompile? It seems `RepositoryServiceImpl` is hard coded in the Activiti sources, and there isn't a `setRepositoryService()` method in `ProcessEngine`.
I can't rename the existing db tables, because there are some other apps using them.
I have read the user guide, but I didn't find any information on how to integrate Activiti with existing apps.
|
I don't know what version you are currently using, but I used your second option successfully with version 5.5, overriding some Activiti classes:
1. Extend `GroupManager` and `UserManager` (from package `org.activiti.engine.impl.persistence.entity`), and implement the methods you need, using the required DAOs/EntityManager/whatever pointing to your database. Code here: [GroupManager](http://pastebin.com/KCV3R0dg) / [UserManager](http://pastebin.com/EK8t4sQG).
2. Implement `org.activiti.engine.impl.interceptor.SessionFactory.SessionFactory`, for groups and users. Check out code here: [ActivitiGroupManagerFactory](http://pastebin.com/6k0fEAi4) / [ActivitiUserManagerFactory](http://pastebin.com/ipfR1CtQ).
3. Finally, in your activity config you have to set your new SessionFactory classes. I was using spring, so there is my activiti-config bean code: [activiti-config.xml](http://pastebin.com/zPwYUfBK) (check line 14)
Hope this helps in some way :)
|
Is it ok to manually update a ufw profile?
I am new to linux server management and want to move my OpenSSH ufw profile to listen on port 4444 instead of port 22 when I enable the firewall.
So I am accomplishing this by updating the profile at
`/etc/ufw/applications.d/openssh-server`
to
```
[OpenSSH]
title=Secure shell server, an rshd replacement
description=OpenSSH is a free implementation of the Secure Shell protocol.
ports=4444/tcp
```
Is it okay to just do this manually? I'm afraid I might accidentally trigger some sort of firewall update in the future that changes it back to port 22, which would lock me out (as I'm also setting the ssh config to 4444).
Also is it normal practice to edit these *standard profile files* or just create a *new profile entirely* for the firewall?
|
From [ArchLinux Wiki](https://wiki.archlinux.org/index.php/Uncomplicated_Firewall#Adding_other_applications):
>
> **Warning:** If users modify any of the PKG provided rule sets, these will
> be overwritten the first time the ufw package is updated. This is why
> custom app definitions need to reside in a non-PKG file as recommended
> above!
>
>
>
I could not find a similar statement within the Ubuntu documentation. The only thing connected with this topic that I found there is in the article [Firewall](https://help.ubuntu.com/lts/serverguide/firewall.html):
>
> Applications that open ports can include an ufw profile, which details
> the ports needed for the application to function properly. **The
> profiles are kept in `/etc/ufw/applications.d`, and can be edited if
> the default ports have been changed**.
>
>
>
To be sure your edits will not be overwritten, you can create your own application profile file, for example:
```
sudo cp /etc/ufw/applications.d/openssh-server /etc/ufw/applications.d/openssh-server-custom
```
Then modify `/etc/ufw/applications.d/openssh-server-custom` in this way:
```
[CustomSSH]
title=Secure shell server, an rshd replacement
description=OpenSSH is a free implementation of the Secure Shell protocol.
ports=4444/tcp
```
After that you will be able to create rules as this:
```
sudo ufw limit CustomSSH
```
---
The main advantage of profile usage is that you can change the application's port and just reload UFW's configuration, without needing to delete rules and type new ones. If you don't intend to do that often, then you can type a rule for the custom port and leave a comment to make the rule's purpose clear:
```
sudo ufw limit 4444/tcp comment 'SSH Custom port'
```
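If you later edit the port in the custom profile, the loaded rule can be refreshed in place instead of being deleted and retyped; something like this should do it (check `man ufw` for the exact `app update` semantics on your version):

```
sudo ufw app update CustomSSH
sudo ufw reload
```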
|
Using the ST monad
In the blog post, <http://galvanist.com/post/83741037068/adding-badly-under-python-julia-go>, the author uses a simple algorithm to compare the performance of various languages (including Haskell). In the Haskell example, the author uses a recursive function. As an exercise, I wanted to use the ST monad to allow local mutable state. This works but the recursive function is MUCH faster than my function which uses the ST monad.
Recursive function-
```
peanoAdd :: Int -> Int -> Int
peanoAdd 0 y = y
peanoAdd x y = peanoAdd (x - 1) (y + 1)
main :: IO ()
main = do
let a = 64000000 :: Int
let b = 64000000 :: Int
let n = peanoAdd a b
print n
128000000
real 0m0.583s
user 0m0.480s
sys 0m0.096s
```
Using the ST monad-
```
import Control.Monad.ST
import Data.STRef
import Control.Monad.Loops
peanoAdd :: Int -> Int -> Int
peanoAdd x y = runST $ do
x' <- newSTRef x
y' <- newSTRef y
whileM_ (do x'' <- readSTRef x'
return $ x'' /= 0)
(do modifySTRef x' (subtract 1)
modifySTRef y' (+1))
readSTRef y'
main :: IO ()
main = do
let a = 64000000 :: Int
let b = 64000000 :: Int
let n = peanoAdd a b
print n
128000000
real 0m17.837s
user 0m16.412s
sys 0m1.424s
```
Is there something I am doing obviously wrong which is hurting performance in the ST monad example? (PS. I am using Stack with the simple template for both projects.)
|
One reason your ST program could be running slow is that you're using [`modifySTRef`, which is non-strict](http://hackage.haskell.org/package/base-4.8.1.0/docs/Data-STRef.html#v:modifySTRef):
>
> Be warned that `modifySTRef` does not apply the function strictly. This
> means if the program calls `modifySTRef` many times, but seldomly uses
> the value, thunks will pile up in memory resulting in a space leak.
> This is a common mistake made when using an STRef as a counter. For
> example, the following will leak memory and likely produce a stack
> overflow:
>
>
>
> ```
> print $ runST $ do
> ref <- newSTRef 0
> replicateM_ 1000000 $ modifySTRef ref (+1)
> readSTRef ref
>
> ```
>
>
Your `x'` is forced once per loop, but `y'` isn't forced until the `print`, so there's a huge chain of thunks built up.
Benchmarking it on my laptop against a version using `modifySTRef'` shows how strictness can improve runtime (though both still lose to the recursive version).
```
benchmarking rec
time 7.896 ms (7.602 ms .. 8.269 ms)
0.992 R² (0.988 R² .. 0.997 R²)
mean 7.842 ms (7.724 ms .. 8.001 ms)
std dev 404.5 μs (303.9 μs .. 523.8 μs)
variance introduced by outliers: 25% (moderately inflated)
benchmarking st
time 18.44 ms (17.84 ms .. 19.01 ms)
0.996 R² (0.993 R² .. 0.998 R²)
mean 18.03 ms (17.79 ms .. 18.41 ms)
std dev 750.4 μs (528.0 μs .. 1.110 ms)
variance introduced by outliers: 16% (moderately inflated)
benchmarking st'
time 9.191 ms (9.028 ms .. 9.437 ms)
0.996 R² (0.992 R² .. 0.999 R²)
mean 9.317 ms (9.175 ms .. 9.527 ms)
std dev 475.8 μs (311.8 μs .. 677.9 μs)
variance introduced by outliers: 25% (moderately inflated)
```
The benchmarking code:
```
import Criterion.Main
import Control.Monad.ST
import Data.STRef
import Control.Monad.Loops
peanoAddST :: Int -> Int -> Int
peanoAddST x y = runST $ do
x' <- newSTRef x
y' <- newSTRef y
whileM_ (do x'' <- readSTRef x'
return $ x'' /= 0)
(do modifySTRef x' (subtract 1)
modifySTRef y' (+1))
readSTRef y'
peanoAddST' :: Int -> Int -> Int
peanoAddST' x y = runST $ do
x' <- newSTRef x
y' <- newSTRef y
whileM_ (do x'' <- readSTRef x'
return $ x'' /= 0)
(do modifySTRef' x' (subtract 1)
modifySTRef' y' (+1))
readSTRef y'
peanoAddRec :: Int -> Int -> Int
peanoAddRec 0 y = y
peanoAddRec x y = peanoAddRec (x - 1) (y + 1)
main =
let n = 64000 in
defaultMain
[ bench "rec" $ whnf (peanoAddRec n) n
, bench "st" $ whnf (peanoAddST n) n
, bench "st'" $ whnf (peanoAddST' n) n
]
```
|
Why if (n & -n) == n then n is a power of 2?
[Line 294 of java.util.Random source](http://developer.classpath.org/doc/java/util/Random-source.html#line.294) says
```
if ((n & -n) == n) // i.e., n is a power of 2
// rest of the code
```
Why is this?
|
The description is not entirely accurate because `(0 & -0) == 0` but 0 is not a power of two. A better way to say it is
`((n & -n) == n)` when n is a power of two, or zero, or `Integer.MIN_VALUE`. (For any other negative n, `n & -n` isolates the lowest set bit, which is positive and therefore cannot equal n; `Integer.MIN_VALUE` works only because negating it overflows back to itself.)
If n is a power of two, then n in binary is a single 1 followed by zeros.
-n in two's complement is the inverse + 1 so the bits lines up thus
```
n 0000100...000
-n 1111100...000
n & -n 0000100...000
```
To see why this work, consider two's complement as inverse + 1, `-n == ~n + 1`
```
n 0000100...000
inverse n 1111011...111
+ 1
two's comp 1111100...000
```
since you carry the one all the way through when adding one to get the two's complement.
If n were anything other than a power of two† then the result would be missing a bit because the two's complement would not have the highest bit set due to that carry.
† - or zero or `Integer.MIN_VALUE` ... as explained at the top.
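A quick sketch to sanity-check those edge cases:

```
public class PowerOfTwoCheck {
    public static void main(String[] args) {
        int[] samples = { 0, 1, 2, 3, 4, 6, 8, 12, 16, -4, Integer.MIN_VALUE };
        for (int n : samples) {
            System.out.println(n + " -> " + ((n & -n) == n));
        }
        // true for 0, 1, 2, 4, 8, 16 and Integer.MIN_VALUE; false for 3, 6, 12, -4
    }
}
```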
|
Access session across subdomains (Rails 4)
Hi, I have a multitenant Rails 4 application with a simple sign-in solution. However, each user has a subdomain that the user gets redirected to after login.
The problem is that as they arrive at the subdomain they are not logged in anymore, due to the known problem that sessions are not shared across subdomains.
I have tried several different solutions to this problem; however, I do not get the session to persist across subdomains. I believe this might be due to my development environment?
I have tried all answers to this question:
[Share session (cookies) between subdomains in Rails?](https://stackoverflow.com/questions/10402777/share-session-cookies-between-subdomains-in-rails/10403338#10403338)
Nothing seems to work. Is there something I'm missing here? Is it the browser or rails 4 or....? How should i approach this problem?
**Edit:**
My sessions\_store initializer:
```
Imagesite::Application.config.session_store :cookie_store, key: '_imagesite_session', :domain => "imagesite.dev"
```
I have also tried `".imagesite.dev"` and `:all`.
I also tried the solution described by Evan at the other question linked above.
**Examples of subdomains:** "ole.imagesite.dev" or "ole2.imagesite.dev" just basic subdomain based on what the user has entered as his/her subdomain.
|
I finally solved it!
I had to set the domain when I create the auth\_token cookie, like this:
```
cookies[:auth_token] = { value: user.auth_token, domain: ".lvh.me" }
```
and like this to delete the cookie:
```
cookies.delete(:auth_token, :domain => '.lvh.me')
```
Complete example:
```
def create
user = User.find_by_username(params[:username])
user ||= User.find_by_email(params[:username])
if user && user.authenticate(params[:password])
# session[:user_id] = user.id
if params[:remember_me]
cookies.permanent[:auth_token] = { value: user.auth_token, domain: ".lvh.me" }
else
cookies[:auth_token] = { value: user.auth_token, domain: ".lvh.me" }
end
redirect_to root_url(:subdomain => "#{current_user.subdomain}"), notice: "You are now logged in."
else
flash.now.alert = "Email or password is invalid"
render "new"
end
end
def destroy
#session[:user_id] = nil
cookies.delete(:auth_token, :domain => '.lvh.me')
redirect_to root_url(:subdomain => false), notice: "Logged out"
end
```
|
binding key event in python using ctypes function
I have been trying to use Python to bind my custom event to a keyboard event with a specific event code number, like below
>
> ctypes.windll.user32.keybd_event('0x24',0,2,0)
>
>
>
but as you already know
>
> windll
>
>
>
that library only works on Windows.
How can I do something like this on Linux machines?
I read about
>
> CDLL('libc.so.6')
>
>
>
but I can't figure out whether this library is helpful or not.
Is there another way to set a keypress listener at the OS level with Python, using the virtual key code?
|
The Linux input subsystem is composed of three parts: the driver layer, the input subsystem core layer, and the event processing layer.
The keyboard and other input events are all described by `input_event`.
Use the code below and type `python filename.py | grep "keyboard"` in your terminal:
```
#!/usr/bin/env python
#coding: utf-8
import os
deviceFilePath = '/sys/class/input/'
def showDevice():
os.chdir(deviceFilePath)
for i in os.listdir(os.getcwd()):
namePath = deviceFilePath + i + '/device/name'
if os.path.isfile(namePath):
print "Name: %s Device: %s" % (i, file(namePath).read())
if __name__ == '__main__':
showDevice()
```
You should get `Name: event1 Device: AT Translated Set 2 keyboard`.
Then use:
```
#!/usr/bin/env python
#coding: utf-8
from evdev import InputDevice
from select import select
def detectInputKey():
dev = InputDevice('/dev/input/event1')
while True:
select([dev], [], [])
for event in dev.read():
print "code:%s value:%s" % (event.code, event.value)
if __name__ == '__main__':
detectInputKey()
```
`evdev` is a package that provides bindings to the generic input event interface in Linux. The evdev interface serves the purpose of passing events generated in the kernel directly to userspace through character devices that are typically located in `/dev/input/`. And `select` here is Python's standard `select` module.
|
Plot margins for support vector machine
I just set up an SVM in R with e1071.
Unfortunately the plot of the margin and the hyperplane does not look as desired. I wanted the margin to pass through the support vectors. Shouldn't this be the case?
Can anybody spot my mistake?
Here is my code:
```
rm(list=ls(all=TRUE))
x1s <- c(.5,1,1,2,3,3.5, 1,3.5,4,5,5.5,6)
x2s <- c(3.5,1,2.5,2,1,1.2, 5.8,3,4,5,4,1)
ys <- c(rep(+1,6), rep(-1,6))
my.data <- data.frame(x1=x1s, x2=x2s, type=as.factor(ys))
my.data
library('e1071')
svm.model <- svm(type ~ ., data=my.data, type='C-classification', kernel='linear',scale=FALSE)
plot(my.data[,-3],col=(ys+3)/2, pch=19, xlim=c(-1,6), ylim=c(-1,6))
points(my.data[svm.model$index,c(1,2)],col="blue",cex=2)
w <- t(svm.model$coefs) %*% svm.model$SV
b <- -svm.model$rho
p <- svm.model$SV
abline(a=-b/w[1,2], b=-w[1,1]/w[1,2], col="black", lty=1)
abline(a=--b/p[1,2], b=-w[1,1]/w[1,2], col="orange", lty=3)
abline(a=--b/p[3,2], b=-w[1,1]/w[1,2], col="orange", lty=3)
```
|
Your last 2 commands should be
```
abline(a=(-b-1)/w[1,2], b=-w[1,1]/w[1,2], col="orange", lty=3)
abline(a=(-b+1)/w[1,2], b=-w[1,1]/w[1,2], col="orange", lty=3)
```
Another way
```
plot(my.data[,-3],col=(ys+3)/2, pch=19, xlim=c(-1,6), ylim=c(-1,6))
points(my.data[svm.model$index,c(1,2)],col="blue",cex=2)
x1min = min(x1s); x1max = max(x1s);
x2min = min(x2s); x2max = max(x2s);
coef1 = sum(svm.model$coefs*x1s[svm.model$index]);
coef2 = sum(svm.model$coefs*x2s[svm.model$index]);
lines(c(x1min,x1max), (svm.model$rho-coef1*c(x1min, x1max))/coef2)
lines(c(x1min,x1max), (svm.model$rho+1-coef1*c(x1min, x1max))/coef2, lty=2)
lines(c(x1min,x1max), (svm.model$rho-1-coef1*c(x1min, x1max))/coef2, lty=2)
```
|
What is the difference between curly brace and square bracket in Python?
What is the difference between curly braces and square brackets in Python?
```
A ={1,2}
B =[1,2]
```
When I print `A` and `B` in my terminal, they show no difference. Is that really the case?
And sometimes, I noticed some code uses `{}` and `[]` to initialize different variables.
E.g. `A=[]`, `B={}`
Is there any difference there?
|
Curly braces create [dictionaries](https://docs.python.org/3/library/stdtypes.html#mapping-types-dict) or [sets](https://docs.python.org/3/library/stdtypes.html#set-types-set-frozenset). Square brackets create [lists](https://docs.python.org/3/library/stdtypes.html#sequence-types-list-tuple-range).
They are called *literals*; a set literal:
```
aset = {'foo', 'bar'}
```
or a dictionary literal:
```
adict = {'foo': 42, 'bar': 81}
empty_dict = {}
```
or a list literal:
```
alist = ['foo', 'bar', 'bar']
empty_list = []
```
To create an empty set, you can only use `set()`.
Sets are collections of *unique* elements and you cannot order them. Lists are ordered sequences of elements, and values can be repeated. Dictionaries map keys to values, keys must be unique. Set and dictionary keys must meet other restrictions as well, so that Python can actually keep track of them efficiently and know they are and will remain unique.
There is also the [`tuple` type](https://docs.python.org/3/library/stdtypes.html#tuple), using a comma for 1 or more elements, with parenthesis being optional in many contexts:
```
atuple = ('foo', 'bar')
another_tuple = 'spam',
empty_tuple = ()
WARNING_not_a_tuple = ('eggs')
```
Note the comma in the `another_tuple` definition; it is that comma that makes it a `tuple`, not the parenthesis. `WARNING_not_a_tuple` is not a tuple, it has no comma. Without the parentheses all you have left is a string, instead.
See the [data structures chapter](http://docs.python.org/3/tutorial/datastructures.html) of the Python tutorial for more details; lists are introduced in the [introduction chapter](http://docs.python.org/3/tutorial/introduction.html#lists).
Literals for containers such as these are also called [displays](https://docs.python.org/3/reference/expressions.html#displays-for-lists-sets-and-dictionaries) and the syntax allows for procedural creation of the contents based on looping, called *comprehensions*; a small illustration follows.
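For example, here are the comprehension forms of the three displays (a minimal sketch):
```
squares_list = [n * n for n in range(5)]      # [0, 1, 4, 9, 16]
squares_set  = {n * n for n in range(-2, 3)}  # {0, 1, 4} -- duplicates collapse
squares_dict = {n: n * n for n in range(3)}   # {0: 0, 1: 1, 2: 4}
```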
|
Knockout: how to update another binding from custom binding?
We have the usual problem with the optionsCaption binding: it is always shown even when there is only one element. We solved this problem using our custom binding:
```
ko.bindingHandlers.optionsAutoSelect = {
update: function (element, valueAccessor, allBindingsAccessor) {
var value = ko.utils.unwrapObservable(valueAccessor());
var allBindings = allBindingsAccessor();
if (value.length == 1) {
allBindings.optionsCaption = null;
}
ko.bindingHandlers.options.update(element, valueAccessor, allBindingsAccessor);
}
};
```
After updating to Knockout 3.0, allBindings became read-only, so any changes are simply ignored. Any ideas how this could be solved in KO 3.0? We really have a lot of such auto-selects and don't want to copy-paste some computed code into all views. So we want a single option/extensibility point. Unfortunately, as far as I can see, the options binding is rather monolithic.
|
Probably some different approaches that you could take here. Here is one thought (feel free to use a better name than `optionsPlus`):
```
ko.bindingHandlers.optionsPlus = {
preprocess: function(value, name, addBindingCallback) {
//add optionsCaption to the bindings against a "caption" sub-observable
addBindingCallback("optionsCaption", value + ".caption");
//just return the original value to allow this binding to remain as-is
return value;
},
init: function(element, valueAccessor) {
var options = valueAccessor();
//create an observable to track the caption
if (options && !ko.isObservable(options.caption)) {
options.caption = ko.observable();
}
//call the real options binding, return to control descendant bindings
return ko.bindingHandlers.options.init.apply(this, arguments);
},
update: function(element, valueAccessor, allBindings) {
var options = valueAccessor(),
value = ko.unwrap(options);
//set the caption observable based on the length of data
options.caption(value.length === 1 ? null : allBindings.get("defaultCaption"));
//call the original options update
ko.bindingHandlers.options.update.apply(this, arguments);
}
};
```
You would use it like:
```
<select data-bind="optionsPlus: choices, defaultCaption: 'choose one...'"></select>
```
It creates a `caption` observable off of your observableArray/array and updates the caption initially and whenever the options are updated (if using an observableArray).
Sample here: <http://jsfiddle.net/rniemeyer/jZ2FC/>
|
How is make\_heap in C++ implemented to have complexity of 3N?
I wonder what the algorithm of make_heap in C++ is such that the complexity is 3*N? The only way I can think of, making a heap by inserting elements, has complexity O(N log N). Thanks a lot!
|
You represent the heap as an array. The two elements below the `i`'th element are at positions `2i+1` and `2i+2`. If the array has `n` elements then, starting from the end, take each element, and let it "fall" to the right place in the heap. This is `O(n)` to run.
Why? Well for `n/2` of the elements there are no children. For `n/4` there is a subtree of height 1. For `n/8` there is a subtree of height 2. For `n/16` a subtree of height 3. And so on. So we get the series `n/2^2 + 2n/2^3 + 3n/2^4 + ... = (n/2)(1 * (1/2 + 1/4 + 1/8 + ...) + (1/2) * (1/2 + 1/4 + 1/8 + ...) + (1/4) * (1/2 + 1/4 + 1/8 + ...) + ...) = (n/2) * (1 * 1 + (1/2) * 1 + (1/4) * 1 + ...) = (n/2) * 2 = n`. Or, formatted maybe more readably to see the geometric series that are being summed:
```
n/2^2 + 2n/2^3 + 3n/2^4 + ...
= (n/2^2 + n/2^3 + n/2^4 + ...)
+ (n/2^3 + n/2^4 + ...)
+ (n/2^4 + ...)
+ ...
= n/2^2 (1 + 1/2 + 1/2^2 + ...)
+ n/2^3 (1 + 1/2 + 1/2^2 + ...)
+ n/2^4 (1 + 1/2 + 1/2^2 + ...)
+ ...
= n/2^2 * 2
+ n/2^3 * 2
+ n/2^4 * 2
+ ...
= n/2 + n/2^2 + n/2^3 + ...
= n(1/2 + 1/4 + 1/8 + ...)
= n
```
And the trick we used repeatedly is that we can sum the geometric series with
```
1 + 1/2 + 1/4 + 1/8 + ...
= (1 + 1/2 + 1/4 + 1/8 + ...) (1 - 1/2)/(1 - 1/2)
= (1 * (1 - 1/2)
+ 1/2 * (1 - 1/2)
+ 1/4 * (1 - 1/2)
+ 1/8 * (1 - 1/2)
+ ...) / (1 - 1/2)
= (1 - 1/2
+ 1/2 - 1/4
+ 1/4 - 1/8
+ 1/8 - 1/16
+ ...) / (1 - 1/2)
= 1 / (1 - 1/2)
= 1 / (1/2)
= 2
```
So the total number of "see if I need to fall one more, and if so which way do I fall?" comparisons comes to `n`. But you get round-off from discretization, so you always come out to fewer than `n` sets of swaps to figure out, each of which requires at most 3 comparisons. (Compare the root to each child to see if it needs to fall, then the children to each other if the root was larger than both children.)
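For concreteness, here is a minimal Python sketch of the bottom-up construction described above (the function names are my own, not from any particular standard library implementation):
```
def sift_down(a, i, n):
    """Let a[i] 'fall' to its place in the max-heap a[0:n]."""
    while True:
        left, right = 2 * i + 1, 2 * i + 2
        largest = i
        if left < n and a[left] > a[largest]:
            largest = left
        if right < n and a[right] > a[largest]:
            largest = right
        if largest == i:            # nothing below is bigger: stop falling
            return
        a[i], a[largest] = a[largest], a[i]
        i = largest

def build_heap(a):
    """O(n) heap construction: start at the last non-leaf, work backwards."""
    for i in range(len(a) // 2 - 1, -1, -1):
        sift_down(a, i, len(a))
```
Each `sift_down` call does exactly the "compare and fall" work counted in the series above, which is what yields the linear bound.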
|
Using plyr to join two massive dataframes on two columns
I have a very large dataframe that I need to join to another dataframe on two columns. I've been using merge to accomplish it, but R runs out of memory the larger the tables get. Is there a similar solution using dplyr or plyr? I hear they require substantially less memory. I know how to use the join function in plyr generally; what I am struggling with is joining by two columns. The merge syntax I've been using is below:
```
Correlation_Table <- merge(Correlation_Table, inter, by.x = c(1,2), by.y = c(1,2), all.x = TRUE, all.y = TRUE)
```
So for example if I have the following two dataframes:
```
> head(df1)
x y z a
1 1 2 429.57410 43.746670
2 2 3 717.98184 524.288886
3 3 4 601.66938 640.245469
4 4 5 87.41476 318.964765
5 5 6 586.22234 196.759991
6 6 7 619.82194 3.308136
> head(df2)
b c d
1 5 8 152.2855
2 6 9 191.5406
3 7 10 197.0520
4 8 11 175.4209
5 9 12 157.6239
6 10 13 136.3286
```
Where columns x and y of df1 are dimensions, while columns b and c of df2 are also dimensions, and the other columns are measures. My goal here is to create a new dataframe of all three measures where records of df1.x and df1.y match df2.b and df2.c.
Is this possible using plyr?
|
You can try
```
library(dplyr)
res1 <- full_join(df1, df2, by=c('x'='b', 'y'='c'))
```
According to `?full_join`
>
> by: a character vector of variables to join by. If ‘NULL’, the
> default, ‘join’ will do a natural join, using all variables
> with common names across the two tables. A message lists the
> variables so that you can check they're right. To join by different variables on x and y use a named vector.
> For example, ‘by = c("a" = "b")’ will match ‘x.a’ to ‘y.b’.
>
>
>
and compare the results with
```
res2 <- merge(df1, df2, by.x = c(1,2), by.y = c(1,2),
all.x = TRUE, all.y = TRUE)
```
NOTE: The order of rows will be different
|
What is the difference between p:nth-child(2) and p:nth-of-type(2)?
What is the difference between `p:nth-child(2)` and `p:nth-of-type(2)`?
As per [W3Schools CSS Selector Reference](https://www.w3schools.com/cssref/css_selectors.asp):
- `p:nth-child(2)`: Selects every `<p>` element that is the second child of its parent.
- `p:nth-of-type(2)`: Selects every `<p>` element that is the second `<p>` element of its parent.
The difference seems to be *child of its parent* versus `<p>` *element of its parent*.
If we are already mentioning the element type as `<p>` in both cases, and the keyword **parent** establishes a *parent-child* relation, what can the difference be?
|
This question may remind you of [What is the difference between :first-child and :first-of-type?](https://stackoverflow.com/questions/24657555/what-is-the-difference-between-first-child-and-first-of-type) — and in fact, a lot of parallels *can* be drawn between the two. Where this question greatly differs from the other is the arbitrary integer argument X, as in `:nth-child(X)` and `:nth-of-type(X)`. They're similar in principle to their "first" and "last" counterparts, but the potentially matching elements vary *greatly* based on what's actually in the page.
But first, some theory. Remember that [simple selectors are independent conditions](https://stackoverflow.com/questions/5545649/can-i-combine-nth-child-or-nth-of-type-with-an-arbitrary-selector/5546296#5546296). They remain independent even when combined into compound selectors. That means that the `p` neither is influenced by, nor influences, how `:nth-child()` or `:nth-of-type()` matches. Combining them this way simply means that elements must match all of their conditions *simultaneously* in order to match.
Here's where things get interesting. This independent matching means I can get pretty creative in how I express compound (and complex) selectors in terms of plain English, *without changing the meaning of the selectors*. In fact, I can do so right now in a way that makes the difference between `:nth-child(2)` and `:nth-of-type(2)` seem so significant that the pseudo-classes might as well be completely unrelated to each other (except for the "siblings" part anyway):
- `p:nth-child(2)`: Select the second child among its siblings if and only if it is a `p` element.
- `p:nth-of-type(2)`: Select the second `p` element among its siblings.
All of a sudden, they sound really different! And this is where a bit of explanation helps.
Any element may only have a single child element matching `:nth-child(X)` for any integer X at a time. This is why I've chosen to emphasize "the second child" by mentioning it first. In addition, this child element will only match `p:nth-child(X)` if it happens to be of type `p` (remember that "type" refers to the tagname). This is very much in line with `:first-child` and `:last-child` (and, similarly, `p:first-child` and `p:last-child`).
There's two aspects to `:nth-of-type(X)` on the other hand:
1. Because the "type" in `:nth-of-type()` is the same concept as the "type" in a type selector, this family of pseudo-classes is *designed to be used in conjunction with type selectors* (even though they still operate independently). This is why `p:nth-of-type(2)` can be expressed as succinctly as "Select the second `p` element among its siblings." It just works!
2. However, unlike `:first-of-type` and `:last-of-type`, the X requires that there actually be that many child elements of the same type within their parent element. For example, if there's only one `p` element within its parent, `p:nth-of-type(2)` will match nothing within that parent, even though that `p` element is guaranteed to match `p:first-of-type` and `p:last-of-type` (as well as, by extension, `p:only-of-type`).
An illustration:
```
<div class="parent">
<p>Paragraph</p>
<p>Paragraph</p> <!-- [1] p:nth-child(2), p:nth-of-type(2) -->
<p>Paragraph</p>
<footer>Footer</footer>
</div>
<div class="parent">
<header>Header</header>
<p>Paragraph</p> <!-- [2] p:nth-child(2) -->
<p>Paragraph</p> <!-- [3] p:nth-of-type(2) -->
<footer>Footer</footer>
</div>
<div class="parent">
<header>Header</header>
<figure>Figure 1</figure>
<p>Paragraph</p> <!-- [4] -->
<footer>Footer</footer>
</div>
<div class="parent">
<header>Header</header>
<p>Paragraph</p> <!-- [2] p:nth-child(2) -->
<figure>Figure 1</figure>
<hr>
<figure>Figure 2</figure> <!-- [5] .parent > :nth-of-type(2) -->
<p>Paragraph</p> <!-- [5] .parent > :nth-of-type(2) -->
<p>Paragraph</p>
<footer>Footer</footer>
</div>
```
What's selected, what's not, and why?
1. **Selected by both `p:nth-child(2)` and `p:nth-of-type(2)`**
The first two children of this element are both `p` elements, allowing this element to match both pseudo-classes simultaneously for the same integer argument X, because all of these independent conditions are true:
- it is the second child of its parent;
- it is a `p` element; and
- it is the second `p` element within its parent.
2. **Selected by `p:nth-child(2)` only**
This second child is a `p` element, so it does match `p:nth-child(2)`.
But it's *the first `p` element* (the first child is a `header`), so it does not match `p:nth-of-type(2)`.
3. **Selected by `p:nth-of-type(2)` only**
This `p` element is the second `p` element after the one above, but it's the third child, allowing it to match `p:nth-of-type(2)` but not `p:nth-child(2)`. Remember, again, that a parent element can only have one child element matching `:nth-child(X)` for a specific X at a time — the previous `p` is already taking up the `:nth-child(2)` slot in the context of this particular parent element.
4. **Not selected**
This `p` element is the only one in its parent, and it's not its second child. Therefore it matches neither `:nth-child(2)` nor `:nth-of-type(2)` (not even when not qualified by a type selector; see below).
5. **Selected by `.parent > :nth-of-type(2)`**
This element is the second of its type within its parent. Like `:first-of-type` and `:last-of-type`, leaving out the type selector allows the pseudo-class to potentially match more than one element within the same parent. Unlike them, *how many* it actually matches depends on how many of each element type there actually are.
Here, there are two `figure` elements and three `p` elements, allowing `:nth-of-type(2)` to match a `figure` and a `p`. But there's only one `header`, one `hr`, and one `footer`, so it won't match elements of any of those types.
In conclusion, `:nth-child()` and `:nth-of-type()`, with an integer argument X (i.e. not in the form An+B with a coefficient A of n), function pretty similarly to `:first-child`/`:last-child` and `:first-of-type`/`:last-of-type`, with the major difference being that the argument, along with the page itself, influences how many different elements may be matched with `:nth-of-type()`.
Of course, there's a whole lot more to `:nth-child()` and `:nth-of-type()` than just a simple integer argument, but needless to say the details and possibilities thereof are outside the scope of this question.
|
Build Dictionary with LINQ
Let's say we have a variable 'data' which is a list of Ids and ChildIds:
```
var data = new List<Data>
{
new()
{
Id = 1,
ChildIds = new List<int> {123, 234, 345}
},
new()
{
Id = 1,
ChildIds = new List<int> {123, 234, 345}
},
new()
{
Id = 2,
ChildIds = new List<int> {678, 789}
},
};
```
I would like to have a dictionary of ChildIds and the related Ids. If a ChildId is already in the dictionary, its entry should be overwritten with the new Id.
Currently I have this code:
```
var dict = new Dictionary<int, int>();
foreach (var dataItem in data)
{
foreach (var child in dataItem.ChildIds)
{
dict[child] = dataItem.Id;
}
}
```
This works fine, but I don't like the fact that I am using two loops. I would prefer to use LINQ's ToDictionary to build up the dictionary in a functional way.
What is the best way to build up the dictionary by using Linq?
Why? I prefer functional code over mutating a state. Besides that, I was just curious how to build up the dictionary by using Linq ;-)
|
In this case your foreach approach is both readable and efficient. So even though I'm a fan of LINQ, I would use that. The loop has the bonus that you can debug it easily or add logging if necessary (for example, for invalid Ids).
However, if you want to use LINQ i would probably use [`SelectMany`](https://learn.microsoft.com/en-us/dotnet/api/system.linq.enumerable.selectmany?view=net-5.0) and [`ToLookup`](https://learn.microsoft.com/en-us/dotnet/api/system.linq.enumerable.tolookup?view=net-5.0). The former is used to flatten child collections like this `ChildIds` and the latter is used to create a collection which is very similar to your dictionary. But one difference is that it allows duplicate keys, you get multiple values in that case:
```
ILookup<int, int> idLookup = data
.SelectMany(d => d.ChildIds.Select(c => (Id:d.Id, ChildId:c)))
.ToLookup(x => x.ChildId, x => x.Id);
```
Now you already have everything you need, since a lookup can be used like a dictionary with the same lookup performance. If you want to create that dictionary anyway, you can use:
```
Dictionary<int, int> dict = idLookup.ToDictionary(x => x.Key, x => x.First());
```
If you want to override duplicates with the new Id, as mentioned, simply use `Last()`.
.NET-Fiddle: <https://dotnetfiddle.net/mUBZPi>
|
How do I pass a data attribute in vue js select option
I need to pass a select option's data attribute to a function on change. I can pass the value fine, but how can I pass the 'test' attribute below? `e.target.test` returns undefined.
```
<select v-model="prog.programme_id" @change="updateTest">
<option v-for="programme in programmes" :key="programme.id" :value="programme.programme_id" :test="programme.newval">{{ programme.title }} )</option>
</select>
```
method:
```
updateTest(e){
console.log(e.target.value)
},
```
|
`event.target` refers to the SELECT, not the options... you have to select the selected option and get its attribute:
```
event.target.options[event.target.options.selectedIndex].getAttribute('test')
```
here is the full component example
```
<template>
<div>
<select v-model="val" @change="updateTest($event)" style="width: 100px" test="Test">
<option v-for="programme in list" :key="programme.id" :value="programme.id" :test="programme.name">{{ programme.value }} )</option>
</select>
</div>
</template>
<script>
export default {
name: 'HelloWorld',
data() {
return {
val: null,
list: [
{id: 1, name: "John", value: "A"},
{id: 2, name: "Jack", value: "B"},
]
}
},
methods: {
updateTest(event) {
console.log(event.target.options[event.target.options.selectedIndex].getAttribute('test'))
}
}
}
</script>
<style scoped>
</style>
```
|
When should a social security number be used as a database primary key?
Our DBA says that because the social security number (SSN) is unique, that we should use it as the primary key in a table.
While I do not agree with the DBA (and have already mentioned some of the good parts found in [this answer](https://stackoverflow.com/a/621891/2657515)), are there any situations where he could possibly be correct?
|
SSN is not a unique identifier for people. Whether it should be the PK in some table depends on what the rows in the table mean. (See also [sqlvogel's answer](https://stackoverflow.com/a/45423641/3404097).)
>
> 6.1 percent of Americans have at least two SSNs associated with their name. More than 100,000 Americans have five or more SSNs associated with their name. [...] More than 15 percent of SSNs are associated with two or more people. More than 140,000 SSNs are associated with five or more people. Significantly, more than 27,000 SSNs are associated with 10 or more people.
>
> --[idanalytics.com](http://www.idanalytics.com/blog/press-releases/20-million-americans-multiple-social-security-numbers-associated-name/)
>
>
>
See also Wikipedia [Social Security number](https://en.wikipedia.org/wiki/Social_Security_number).
|
'\_Thr = \_Other.\_Thr;' break point
With most errors I often have some idea of what direction I need to head in, but here I have no idea.
So let's start with the breakpoint itself:
[](https://i.stack.imgur.com/azdgW.png)
There are relatively few places where any interactions are made using the thread library; here are all of them:
The inclusion:
```
#include <thread>
```
The usage:
```
void evolve(double* bestN, double* bestP, double* bestM, double* bestQ, double* bestWinConstant) {
std::thread threadArray[threadAmount];
for (int i = 0; i < 100000; i++) {
for (int t = 0; t < threadAmount; t++) {
if (gameArrayInUse[t] == 0) {
copyArray(gameArray[t], startArray);
gameArrayInUse[t] = 1;
threadArray[t] = std::thread(playGames, i, t);
std::cout << "------------New Thread Spawned, No: " << t << std::endl;
break;
}
if (t == threadAmount - 1) {
t = -1;
}
}
}
for (int i = 0; i < threadAmount; i++)
{
std::cout << "JOIN THREADS--------------------------------------------------" << std::endl;
threadArray[i].join();
}
}
```
The error as seen in the console is (should be noted this changes every time):
```
Start of evolution on magnitude:1
------------New Thread Spawned, No: 0Completed:
0.000000%
------------New Thread Spawned, No: 1
Compl
```
Due to my lacking knowledge about this I apologise if you find the content I provide about my code lacking in certain areas, I would implore you to inform me via a comment so I can amend it.
|
The "breakpoint" is actually triggered by `_STD terminate()`, one line above. (Especially when debugging native code, I find that it's a healthy habit to look at the line above the one that seems to have caused an error when it looks inexplicable.) Per the condition that wraps it, it's triggered when you `move` a thread on top of an existing thread that is running.
C++ does not allow you to overwrite running thread objects. Before running each iteration of your `for(i)` loop, you should ensure that every thread created in your `for(t)` loop has terminated. You can do that by calling [`thread::join`](http://www.cplusplus.com/reference/thread/thread/join/). Alternatively, you can call [`thread::detach`](http://www.cplusplus.com/reference/thread/thread/detach/) to let the thread run on its own without an object representing it, but you won't have any way to check for its completion.
You must also ensure that every thread has completed before the function returns, or else you'll get a similar error when the running `thread` objects are destroyed.
|
Changing refresh rate for specific monitor
I have 3 monitors, one of which is a CRT, and I need to bump up its refresh rate to 75/80 Hz to get rid of the flicker; I've used it with my Raspberry Pi and it fully supports up to 100 Hz. When I use `xrandr` it tries to change my 1st monitor, but I want my 2nd one changed.
`xrandr` confirms that I can do this,
```
HDMI1 connected 1280x1024+2646+0 (normal left inverted right x axis y axis) 338mm x 270mm
1280x1024 60.02*+ 75.02
1920x1080 60.00 59.94
1152x864 75.00
1280x720 60.00 59.94
1024x768 75.08 60.00
800x600 75.00 60.32
720x480 60.00 59.94
640x480 75.00 60.00 59.94
720x400 70.08
```
I have Intel HD Graphics 4400. Ubuntu 16.04.
|
**Method 1**
With the following command
```
xrandr --output HDMI1 --rate 75
```
Where the string after `--output` is the name of the monitor and the number after `--rate` is the new refresh rate
Or, if you want to make sure the resolution is correct too in the same command
```
xrandr --output HDMI1 --mode 1280x1024 --rate 75
```
See `man xrandr` and the following link to learn more
<https://wiki.ubuntu.com/X/Config/Resolution>
**Method 2**
You should have a file **monitors.xml** (~/.config/monitors.xml)
Edit the file with your favorite text editor
```
vim ~/.config/monitors.xml
```
You will see monitors available, and you should recognize them by name.
Here's some example information you will see about a monitor
```
<vendor>AUO</vendor>
<product>0x38ed</product>
<serial>0x00000000</serial>
<width>1920</width>
<height>1080</height>
<rate>60</rate>
<x>0</x>
<y>0</y>
<rotation>normal</rotation>
<reflect_x>no</reflect_x>
<reflect_y>no</reflect_y>
<primary>yes</primary>
```
There's the line
```
<rate>60</rate>
```
Replace the current refresh rate with a new one. For example
```
<rate>75</rate>
```
Save the file and reboot, and the new refresh rate will take effect.
**NOTE:** Make sure that the monitor's resolution supports the refresh rate you are going to change to
|
Reverse array values while keeping keys
Here is an array I have:
```
$a = array('a' => 'a1', 'b' => 'a2', 'c' => 'a3', 'd' => 'a4', 'e' => 'a5');
```
What I would like to do is reverse the values of the array while keeping the keys intact, in other words it should look like this:
```
$a = array('a' => 'a5', 'b' => 'a4', 'c' => 'a3', 'd' => 'a2', 'e' => 'a1');
```
How should I go about it?
P.S. I tried using `array_reverse()` but it didn't seem to work
|
Some step-by-step processing using native PHP functions (this can be compressed with fewer variables):
```
$a = array('a' => 'a1', 'b' => 'a2', 'c' => 'a3', 'd' => 'a4', 'e' => 'a5');
$k = array_keys($a);
$v = array_values($a);
$rv = array_reverse($v);
$b = array_combine($k, $rv);
var_dump($b);
```
Result:
```
array(5) {
'a' =>
string(2) "a5"
'b' =>
string(2) "a4"
'c' =>
string(2) "a3"
'd' =>
string(2) "a2"
'e' =>
string(2) "a1"
}
```
|
How to create a bash alias for the command "cd ~1"
In bash, I use the "pushd ." command to save the current directory on the stack.
After issuing this command in a couple of different directories, I have multiple directories saved on the stack, which I can see by issuing the command "dirs".
For example, the output of the "dirs" command in my current bash session is given below:
```
0 ~/eclipse/src
1 ~/eclipse
2 ~/parboil/src
```
Now, to switch to the 0th directory, I issue the command "cd ~0".
I want to create a bash alias command or a function for this command.
Something like "xya 0", which will switch to the 0th directory on the stack.
I wrote the following function to achieve this:
```
xya(){
    cd ~$1
}
```
where `$1` in the above function is the first argument passed to the function "xya".
But I am getting the following error:
**-bash: cd: ~1: No such file or directory**
Can you please tell me what is going wrong here?
|
Generally, bash parsing happens in the following order:
- brace expansion
- tilde expansion
- parameter, variable, arithmetic expansion; command substitution (same phase, left-to-right)
- word splitting
- pathname expansion
Thus, by the time your parameter is expanded, tilde expansion is already finished and will not take place again, without doing something explicit like use of `eval`.
---
If you [know the risks](http://mywiki.wooledge.org/BashFAQ/048) and are willing to accept them, use `eval` to force parsing to restart at the beginning after the expansion of `$1` is complete. The below tries to mitigate the damage should something that isn't `eval`-safe be passed as an argument:
```
xya() {
local cmd
printf -v cmd 'cd ~%q' "$1"
eval "$cmd"
}
```
...or, less cautiously (which is to say that the below trusts your arguments to be `eval`-safe):
```
xya() {
eval "cd ~$1"
}
```
|
How does the method passed into an angularjs event-directive work?
I'm new to AngularJS and I'm trying to write some directives which will work on events like 'blur'. But I'm really confused about what happens behind the scenes of an event directive. How does it work? There are numerous directives like 'ngClick' or 'ngHover', and we pass the method we want to execute on that event to these directives. Like this:
```
<div ng-click="doSomethingOnClick()"></div>
```
and we define the 'doSomethingOnClick()' method in our controller. Now I just want to know how the 'ngClick' directive executes this method when the event happens.
If you can explain with the code, that will be great.
Thanks in advance.
|
Here's the definition of ng-click pulled from the 1.1.5 source. I also added comments throughout the code to explain each line, so far as I understand it:
```
/**
* @ngdoc directive
* @name ng.directive:ngClick
*
* @description
* The ngClick allows you to specify custom behavior when
* element is clicked.
*
* @element ANY
* @param {expression} ngClick {@link guide/expression Expression} to evaluate upon
* click. (Event object is available as `$event`)
*
* @example
<doc:example>
<doc:source>
<button ng-click="count = count + 1" ng-init="count=0">
Increment
</button>
count: {{count}}
</doc:source>
<doc:scenario>
it('should check ng-click', function() {
expect(binding('count')).toBe('0');
element('.doc-example-live :button').click();
expect(binding('count')).toBe('1');
});
</doc:scenario>
</doc:example>
*/
/*
* A directive that allows creation of custom onclick handlers that are defined as angular
* expressions and are compiled and executed within the current scope.
*
* Events that are handled via these handler are always configured not to propagate further.
*/
//make an object
var ngEventDirectives = {};
//for each string in the list separated by spaces
forEach(
'click dblclick mousedown mouseup mouseover mouseout mousemove mouseenter mouseleave'.split(' '),
//create a function which creates the directive and is called for each element in the list above
function(name) {
//directiveNormalize does things to strip the -data prefix and deal with camel casing conversion
var directiveName = directiveNormalize('ng-' + name);
//setting a property on the ngEventDirectives equal to a new [] which contains the dependency injection values and finally the function that will return the directive definion object (or in this case the link function) $parse service is being used
ngEventDirectives[directiveName] = ['$parse', function($parse) {
//link function to call for each element
return function(scope, element, attr) {
//$parse the value passed in the quotes for this attribute ng-click="something()" then fn = something()
//my guess is parse does some magic... will investigate this soon
var fn = $parse(attr[directiveName]);
//Setup the regular event listener using bind as an abstraction for addEventListener/attachEvent
element.bind(lowercase(name), function(event) {
//running the function
scope.$apply(function() {
fn(scope, {$event:event});
});
});
};
}];
}
);
```
|
How can I read/write app.config settings at runtime without using user settings?
I'm looking for a way to store application or machine level settings that can be written to at runtime using [Application Settings](http://msdn.microsoft.com/en-us/library/k4s6c3a0.aspx). User settings allow read/write but application settings do not. I have been using user settings for saving settings like this at runtime but this has really proven to be impractical for the following reasons:
- All users of the machine need to share settings.
- In support calls (especially in crisis situations) it is difficult to explain to users/employees where to find and modify these settings manually (appdata is a hidden folder among other things).
- New versions of the app need to use previous settings (user settings seem to get blown away with new versions).
- It is common for our employees to copy the application to a new folder which also resets the user settings.
Our company machines are only used by one user anyway so user specific settings are not generally needed.
Otherwise I really like using application settings and would like to continue to use them if possible. It would be ideal if the settings could reside in the **same folder as the EXE** (like good ol' ini files once did).
**NOTE:** This is a WPF application and not an ASP.net web app so no web.config.
|
Well, I haven't yet wanted to change application settings at runtime (that's what I use user settings for), but what I have been able to do is write application settings at install time. I imagine that a similar approach might work at runtime. You could try it out since there don't seem to be any other proposed solutions ATM.
```
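// Note: exePath is assumed to already contain the application's folder path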
exePath = Path.Combine( exePath, "MyApp.exe" );
Configuration config = ConfigurationManager.OpenExeConfiguration( exePath );
var setting = config.AppSettings.Settings[SettingKey];
if (setting != null)
{
setting.Value = newValue;
}
else
{
config.AppSettings.Settings.Add( SettingKey, newValue);
}
config.Save();
```
Hope that helps!
|
Handle Volley error
I want to handle errors and show some message in `onErrorResponse`.
Below is my code:
```
String url = MainActivity.strHostUrl+"api/delete_picture";
jobjDeleteImage = new JsonObjectRequest(Request.Method.POST, url, jobj, new Response.Listener<JSONObject>() {
@Override
public void onResponse(JSONObject response) {
Log.e("Image response", response.toString());
}
}, new Response.ErrorListener() {
@Override
public void onErrorResponse(VolleyError error) {
Log.e("Volly Error", error.toString());
NetworkResponse networkResponse = error.networkResponse;
if (networkResponse != null) {
Log.e("Status code", String.valueOf(networkResponse.statusCode));
}
}
});
```
I want to handle `com.android.volley.TimeoutError` and also some other error codes like 404, 503, etc., and show a `Toast` message here.
|
The networkResponse is null because in a TimeoutError no data is received from the server -- hence the timeout. Instead, you need generic client-side strings to display when one of these events occurs. You can check the VolleyError's type using instanceof to differentiate between error types, since you have no network response to work with -- for example:
```
@Override
public void onErrorResponse(VolleyError error) {
if (error instanceof TimeoutError || error instanceof NoConnectionError) {
Toast.makeText(context,
context.getString(R.string.error_network_timeout),
Toast.LENGTH_LONG).show();
} else if (error instanceof AuthFailureError) {
//TODO
} else if (error instanceof ServerError) {
//TODO
} else if (error instanceof NetworkError) {
//TODO
} else if (error instanceof ParseError) {
//TODO
}
}
```
|
Proper setup for load balanced web that needs sticky sessions?
I have found many Q&As on this subject, but I still haven't figured out the proper configuration for our servers. The background is this: we have two web servers behind a load balancer and must make sure that users don't lose their sessions.
- Web servers are IIS7/ASP.NET 4.
- We currently cannot set up a separate session state server and must therefore use sticky sessions on the LB.
As far as I understand, the following must be assured:
- Set same machinekey on both web servers.
- Use precompiled sites so that the assemblies get the same naming on both machines.
- We must base sticky sessions on either IP number or a cookie (the latter is preferred)
Is it necessary to have precompiled sites? (I know it is faster, but we lose some flexibility.)
Since we have sticky sessions, is it correct that having the same machine key only matters in those cases when a user's session times out and he/she therefore ends up on the other server (which means a postback containing view state might not be valid unless both servers have the same machine key)?
|
You are correct on all points - sticky sessions on the LB ensure that the same web server is hit on subsequent requests, and hence the correct in-proc session state is available. However, it's imperative that the LB stickiness time-out matches (or exceeds) the time-out value of the ASP.NET session. For example, if the LB is using a cookie for stickiness, then the cookie expiration time should be longer than that of the ASP.NET session.
The same-machine-key argument is useful for cases where post-backs go to other servers for whatever reason. The client-side state (such as view-state and event validation, perhaps the authentication ticket) gets encrypted with the machine key, so having the same key ensures that any of the servers can serve the post-back. On a side note, in all web farm scenarios it makes 100% sense to have the exact same S/W environment (and possibly H/W environment) on all web servers to avoid any surprises.
Pre-compilation of web sites is needed to avoid serialization conflicts - type names have to be the same when serializing/deserializing. So you cannot have serialized types from dynamically generated assemblies, and pre-compilation avoids that. IMO, this is more likely to affect view-state storage than session state (because your session state is in any case not shared and will not be available on the second server). Lastly, if you are not using web sites but rather web application projects, then this point becomes more or less irrelevant, because code files are compiled while building the project anyway. Only pages (markup) would be dynamically compiled, and the chance of serializable types appearing in markup is almost zero (and sounds like a bad idea anyway).
|
Libgdx storing game items
I understand the basics of JSON, but what I'm more curious about is how I would use it to achieve what I need to get done.
Let me explain: I want to store all my game ITEMS in a JSON file, so it would hold the item name "WEAPON OF WHATEVER" and then its stats such as `buyPrice`, `sellPrice`, `minDamage`, `maxDamage`, etc...
How would one go about doing something like this in JSON?
|
I've used JSON files to store reference data for my game, and the good news is that libGDX [comes with a JSON parser](https://github.com/libgdx/libgdx/wiki/Reading-%26-writing-JSON) that you can use in a few lines of code as shown below. First, you want to define a class that can be serialized:
```
public class Item {
private String name;
private String description;
private String image;
private float baseValue;
private int baseQuantity;
private float rarityIndex;
private Array<String> tags = new Array<>();
public Item() { }
// More Constructors/Getters/Setters/Helper Methods, etc.
}
```
Then you need to create a `Json` object to read a file (or write to a file):
```
Json json = new Json();
types = json.fromJson(Array.class, Item.class, Gdx.files.internal(FILE_NAME) );
```
This is actual, working code from my game. A few things to point out:
- **types** is of type `Array<Item>` using libGDX's `Array` class. This means we're really telling libGDX to interpret it as an Array object instead of a proper array, but I really like libGDX's data structures, so this wasn't an issue for me.
- Not all Java data structures are supported when using GWT (in particular, when using
Maps you want to use a libGDX implementation under `com.badlogic.gdx.utils`). This is because the nature of GWT puts certain restrictions on reflection.
- Note that when we call `Json#fromJson()` we have to pass `Array.class` and `Item.class` so that it knows to return an `Array<Item>` object.
Now how do we structure the JSON? The easiest way to figure this out is to create a small test program that generates a few objects and then writes them to a file or to stdout:
```
System.out.println(json.prettyPrint(types));
```
This will work for simple structures (and is probably all you need), but it might be helpful to look at the JSON and understand how it's structured. Going back to my example earlier, the JSON file looks like the following:
```
[
{
"class": "tech.otter.merchant.data.Item",
"name": "Bindookian Spices",
"description": "Sweet, yet spicy.",
"image": "goods0",
"baseValue": 150,
"baseQuantity": 1000,
"rarityIndex": 0.25,
"tags": [
"category-food",
"category-luxury"
]
},
{
"class": "tech.otter.merchant.data.Item",
"name": "Italiluminum Rods",
"description": "Shiny.",
"image": "goods1",
"baseValue": 400,
"baseQuantity": 1000,
"rarityIndex": 0.25,
"tags": [
"category-valuable",
"category-material"
]
}
// More items in the real file
]
```
You'll see that this is a pretty straightforward mapping from the JSON to our class. A few things of note:
- Note how the file begins with the square brackets []- this tells us that the data is being stored as a JSON array, even though we're translating it into an Array class in our code.
- I based this off of the libGDX output, so inside each Item structure there is a "class" tag. I don't believe that this is strictly necessary, but it doesn't hurt when you have nested data structures like this.
- Notice how "tags" got stored as an array, and libGDX doesn't need to know that it is an Array object.
In summary, the libGDX library comes with its own JSON parser (which can be [further customized](https://github.com/libgdx/libgdx/wiki/Reading-%26-writing-JSON#customizing-serialization) if you have special serialization needs), but the tradeoff is that you have to use the libGDX data structures in certain cases (particularly when using GWT).
|
What's the R equivalent of SQL's LIKE 'description%' statement?
Not sure how else to ask this but, I want to search for a term within several string elements. Here's what my code looks like (but wrong):
```
inplay = vector(length=nrow(des))
for (ii in 1:nrow(des)) {
if (des[ii] = 'In play%')
inplay[ii] = 1
else inplay[ii] = 0
}
```
des is a vector that stores strings such as "Swinging Strike", "In play (run(s))", "In play (out(s) recorded)" and etc. What I want inplay to store is a 1s and 0s vector corresponding with the des vector, with the 1s in inplay indicating that the des value had "In play%" in it and 0s otherwise.
I believe the 3rd line is incorrect, because all this does is return a vector of 0s with a 1 in the last element.
Thanks in advance!
|
The R analog to SQL's LIKE is just R's ordinary indexing syntax.
The LIKE operator selects data rows from a table by matching string values in a specified column against a user-supplied pattern:
```
> # create a data frame having a character column
> clrs = c("blue", "black", "brown", "beige", "berry", "bronze", "blue-green", "blueberry")
> dfx = data.frame(Velocity=sample(100, 8), Colors=clrs)
> dfx
Velocity Colors
1 90 blue
2 94 black
3 71 brown
4 36 beige
5 75 berry
6 2 bronze
7 89 blue-green
8 93 blueberry
> # create a pattern to use (the same as you would do when using the LIKE operator)
> ptn = '^be.*?' # gets beige and berry but not blueberry
> # execute a pattern-matching function on your data to create an index vector
> ndx = grep(ptn, dfx$Colors, perl=T)
> # use this index vector to extract the rows you want from the data frome:
> selected_rows = dfx[ndx,]
> selected_rows
Velocity Colors
4 36 beige
5 75 berry
```
In SQL, that would be:
```
SELECT * FROM dfx WHERE Colors LIKE ptn
```
|
RESTful-Flask parsing JSON Arrays with parse\_args()
This's my code:
```
parser = reqparse.RequestParser(bundle_errors=True)
parser.add_argument('list', type=list, location="json")
class Example(Resource):
def post(self):
#json_data = request.get_json(force=True)
json_data = parser.parse_args()
return json_data
```
If I post a JSON object like this:
```
{
"list" : ["ele1", "ele2", "ele3"]
}
```
`parser.parse_args()` parses it to this python list:
```
'list': ['e','l','e','1']
```
`request.get_json()` works, but I would really like to have the validation of the reqparser...
How can I get the `parser.parse_args()` working properly with JSON arrays?
(I get this error: `TypeError("'int' object is not iterable",) is not JSON serializable`, if the JSON array contains integers: `'list': [1, 2, 3]`)
|
chuong nguyen is right in his comment to your question. With an example:
```
def post(self):
parser = reqparse.RequestParser()
parser.add_argument('options', action='append')
parser = parser.parse_args()
options = parser['options']
response = {'options': options}
return jsonify(response)
```
With this, if you request:
```
curl -H 'Content-type: application/json' -X POST -d '{"options": ["option_1", "option_2"]}' http://127.0.0.1:8080/my-endpoint
```
The response would be:
```
{
"options": [
"option_1",
"option_2"
]
}
```
|
What does "ABC" class do?
I wrote a class that inherits from another class:
```
class ValueSum(SubQuery):
output = IntegerField()
```
And *pycharm* is showing the following warning:
>
> Class ValueSum must implement all abstract methods
>
>
>
Then I hit `alt+enter` to add `ABC` to the superclass, and my warning is gone. I have several questions:
- Should I always do this when writing a sub-class?
- What is the difference between manually implementing all the methods
vs just using `ABC`?
- Does `ABC` add something to my code?
|
`SubQuery` is an abstract base class (per the [`abc` module](https://docs.python.org/3/library/abc.html)) with one or more abstract methods that you did not override. By adding `ABC` to the list of base classes, you defined `ValueSum` itself to be an abstract base class. That means you aren't forced to override the methods, but it also means you cannot instantiate `ValueSum` itself.
PyCharm is warning you ahead of time that you need to implement the abstract methods inherited from `SubQuery`; if you don't, you would get an error from Python when you actually tried to instantiate `ValueSum`.
---
As to what inheriting from `ABC` *does*, the answer is... not much. It's a convenience for setting the metaclass. The following are equivalent:
```
class Foo(metaclass=abc.ABCMeta):
...
```
and
```
class Foo(abc.ABC):
...
```
The metaclass modifies `__new__` so that every attempt to create an instance of your class checks that the class has implemented all methods decorated with `@abstractmethod` in a parent class.
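To see the enforcement in action, here is a minimal sketch (the names `Base` and `as_sql` are placeholders of my own, not the actual API of the parent class from the question):
```
from abc import ABC, abstractmethod

class Base(ABC):
    @abstractmethod
    def as_sql(self):
        ...

class Complete(Base):
    def as_sql(self):
        return "SELECT 1"

class Incomplete(Base):   # does not override the abstract method
    pass

Complete()    # fine
Incomplete()  # TypeError: Can't instantiate abstract class Incomplete ...
```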
|
erratic results for numpy/scipy eigendecompositions
I am finding that scipy.linalg.eig sometimes gives inconsistent results. But not every time.
```
>>> import numpy as np
>>> import scipy.linalg as lin
>>> modmat=np.random.random((150,150))
>>> modmat=modmat+modmat.T # the data i am interested in is described by real symmetric matrices
>>> d,v=lin.eig(modmat)
>>> dx=d.copy()
>>> vx=v.copy()
>>> d,v=lin.eig(modmat)
>>> np.all(d==dx)
False
>>> np.all(v==vx)
False
>>> e,w=lin.eigh(modmat)
>>> ex=e.copy()
>>> wx=w.copy()
>>> e,w=lin.eigh(modmat)
>>> np.all(e==ex)
True
>>> e,w=lin.eigh(modmat)
>>> np.all(e==ex)
False
```
While I am not the greatest linear algebra wizard, I do understand that the eigendecomposition is inherently subject to weird rounding errors, but I don't understand why repeating the computation would result in a different value. My results, and their reproducibility, keep varying.
What exactly is the nature of the problem -- well, sometimes the results are acceptably different, and sometimes they aren't. Here are some examples:
```
>>> d[1]
(9.8986888573772465+0j)
>>> dx[1]
(9.8986888573772092+0j)
```
The difference above of ~3e-13 does not seem like an enormously big deal. Instead, the real problem (at least for my present project) is that some of the eigenvalues cannot seem to agree on the proper sign.
```
>>> np.all(np.sign(d)==np.sign(dx))
False
>>> np.nonzero(np.sign(d)!=np.sign(dx))
(array([ 38, 39, 40, 41, 42, 45, 46, 47, 79, 80, 81, 82, 83,
84, 109, 112]),)
>>> d[38]
(-6.4011617320002525+0j)
>>> dx[38]
(6.1888785138080209+0j)
```
Similar code in MATLAB does not seem to have this problem.
|
The eigenvalue decompositions satisfy A V = V Lambda, which is all that is guaranteed --- for instance, the order of the eigenvalues is not.
Answer to the second part of your question:
Modern compilers/linear algebra libraries produce/contain code that does different things
depending on whether the data is aligned in memory on (e.g.) 16-byte boundaries. This affects rounding error in computations, as floating point operations are done in a different order. Small changes in rounding error can then affect things such as ordering of the eigenvalues if the algorithm (here, LAPACK/xGEEV) does not guarantee numerical stability in this respect.
(If your code is sensitive to things like this, it is incorrect! Running it on a different platform or with a different library version would lead to a similar problem.)
The results usually are quasi-deterministic --- for instance you get one of 2 possible results, depending on whether the array happens to be aligned in memory or not. If you are curious about alignment, check `A.__array_interface__['data'][0] % 16`.
See <http://www.nccs.nasa.gov/images/FloatingPoint_consistency.pdf> for more
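To illustrate both points, here is a sketch along the lines of the answer (the alignment check is the one quoted above, and `eigh` is the symmetric routine the question already uses):
```
import numpy as np
import scipy.linalg as lin

a = np.random.random((150, 150))
a = a + a.T  # real symmetric, as in the question

# whether the buffer happens to be 16-byte aligned can change which
# SIMD code path the library takes, and hence the rounding behaviour
print(a.__array_interface__['data'][0] % 16)

# for symmetric matrices, eigh returns real eigenvalues in ascending
# order, which avoids the arbitrary ordering of the general solver
e, w = lin.eigh(a)
```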
|
Allow customer to change order status in WooCommerce
In WooCommerce, when an order has "processing" status, I would like the "My Account" page to display an action button allowing the customer to confirm that the order has arrived, by changing the order status to complete.
I have seen [Allowing customer to change status of the order via email](https://stackoverflow.com/questions/60101968/allowing-customer-to-change-status-of-the-order-via-email) related question code (without answers), that doesn't really help to achieve my goal.
Is it possible for the customer to confirm that the order has arrived by changing the order status to "completed"?
|
You can use `woocommerce_my_account_my_orders_actions` to add a custom action button to "My account" orders section, for orders with "processing" status (and also to view order).
Then using `template_redirect` hook, it is possible for the customer to change the status of one of its processing orders, displaying a success notice.
The code:
```
// The button Url and the label
function customer_order_confirm_args( $order_id ) {
return array(
'url' => wp_nonce_url( add_query_arg( 'complete_order', $order_id ) , 'wc_complete_order' ),
'name' => __( 'Approve order', 'woocommerce' )
);
}
// Add a custom action button to processing orders (My account > Orders)
add_filter( 'woocommerce_my_account_my_orders_actions', 'complete_action_button_my_accout_orders', 50, 2 );
function complete_action_button_my_accout_orders( $actions, $order ) {
if ( $order->has_status( 'processing' ) ) {
$actions['order_confirmed'] = customer_order_confirm_args( $order->get_id() );
}
return $actions;
}
// Add a custom button to processing orders (My account > View order)
add_action( 'woocommerce_order_details_after_order_table', 'complete_action_button_my_accout_order_view' );
function complete_action_button_my_accout_order_view( $order ){
// Avoiding displaying buttons on email notification
if( is_wc_endpoint_url( 'view-order' ) ) {
$data = customer_order_confirm_args( $order->get_id() );
echo '<div style="margin:16px 0 24px;">
<a class="button" href="'.$data['url'].'">'.$data['name'].'</a>
</div>';
}
}
// Change order status and display a message
add_action( 'template_redirect', 'action_complete_order_status' );
function action_complete_order_status( $query ) {
if ( ( is_wc_endpoint_url( 'orders' )
|| is_wc_endpoint_url( 'view-order' ) )
&& isset( $_GET['complete_order'] )
&& $_GET['complete_order'] > 1
&& isset($_GET['_wpnonce'])
&& wp_verify_nonce($_GET['_wpnonce'], 'wc_complete_order') )
{
$order = wc_get_order( absint($_GET['complete_order']) );
if ( is_a($order, 'WC_Order') ) {
// Change order status to "completed"
$order->update_status( 'completed', __('Approved by the customer', 'woocommerce') ) ;
// Add a notice (optional)
wc_add_notice( sprintf( __( 'Order #%s has been approved', 'woocommerce' ), $order->get_id() ) );
// Remove query args
wp_redirect( esc_url( remove_query_arg( array( 'complete_order', '_wpnonce' ) ) ) );
exit();
}
}
}
```
Code goes in functions.php file of your active child theme (or active theme). Tested and works.
Related *(old answer)*: [Allow customer to change the order status in WooCommerce My account](https://stackoverflow.com/questions/56066431/allow-customer-to-change-the-order-status-in-woocommerce-my-account/56071548#56071548)
|
Android - App Update Manifest Version Code and Name
**My Android Manifest file:**
My existing Version Code = 1, and my existing Version Number = 1.
I want to update my app, and I know that I have to change my Version Number, e.g. to 1.1, but what does the **Version Code** mean?
- Should it stay at 1?
- Or do I have to change it as well? Maybe to 2? Or anything else?
Thanks for your help!
|
Version Code is important. It indicates a release number: 1 = first release, 2 = second release, etc. of your APK.
Version Number is a display version that shows major and minor release versions.
>
> **android:versionCode** — An integer value that represents the version of
> the application code, relative to other versions. The value is an
> integer so that other applications can programmatically evaluate it,
> for example to check an upgrade or downgrade relationship. You can set
> the value to any integer you want, however you should make sure that
> each successive release of your application uses a greater value. The
> system does not enforce this behavior, but increasing the value with
> successive releases is normative. Typically, you would release the
> first version of your application with versionCode set to 1, then
> monotonically increase the value with each release, regardless whether
> the release constitutes a major or minor release. This means that the
> android:versionCode value does not necessarily have a strong
> resemblance to the application release version that is visible to the
> user (see android:versionName, below). Applications and publishing
> services should not display this version value to users.
>
>
> **android:versionName** — A string value that represents the release
> version of the application code, as it should be shown to users. The
> value is a string so that you can describe the application version as
> a .. string, or as any other type of absolute or
> relative version identifier. As with android:versionCode, the system
> does not use this value for any internal purpose, other than to enable
> applications to display it to users. Publishing services may also
> extract the android:versionName value for display to users.
>
>
>
Source:
<http://developer.android.com/tools/publishing/versioning.html>
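For illustration, a minor update would typically bump both values together. Here is a minimal manifest sketch (the package name and values are just an example):
```
<manifest xmlns:android="http://schemas.android.com/apk/res/android"
    package="com.example.myapp"
    android:versionCode="2"
    android:versionName="1.1">
    <!-- versionCode: internal integer, must only ever increase -->
    <!-- versionName: free-form string shown to users -->
</manifest>
```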
|
Include html in another html file
I have a html "head" template and a navigation template that I want to include in all my other html files for my site.
I found this post:
[Include another HTML file in a HTML file](https://stackoverflow.com/questions/8988855/include-another-html-file-in-a-html-file)
And my question is... what if it's the header that I want to include?
So for example, I have the following file structure:
```
/var/www/includes/templates/header.html
navigation.html
```
header.html might look like this:
```
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<meta http-equiv="X-UA-Compatible" content="IE=edge">
<meta name="viewport" content="width=device-width, initial-scale=1">
<meta name="description" content="">
<meta name="author" content="">
<title>Test Portal</title>
</head>
```
In a case like this, can I still follow the example in the other post where they create a div and populate the div via jquery?
|
**Method 1:**
I think the best way to include HTML content or a file in another HTML file is using jQuery.
You can simply include jQuery.js and load the HTML file using `$("#DivContent").load("yourFile.html");`
For example
```
<html>
<head>
<script src="jquery.js"></script>
<script>
$(function(){
$("#DivContent").load("another_file.html");
});
</script>
</head>
<body>
<div id="DivContent"></div>
</body>
</html>
```
**Method 2:**
There is no native HTML tag for including a file, but there are some third-party methods available, like this:
```
<!DOCTYPE html>
<html>
<script src="http://www.w3schools.com/lib/w3data.js"></script>
<body>
<div w3-include-html="content.html"></div>
<script>
w3IncludeHTML();
</script>
</body>
</html>
```
**Method 3:**
Some people also use server-side includes (SSI):
```
<!--#include virtual="a.html" -->
```
|
Nice solution for hiding all visible DIV elements on the page, then restoring only the ones that were visible before
I'm trying to make a jQuery script that will hide all the visible elements on the page,
and then after someone hits a button, make all the elements that were hidden appear again.
I know I can use the `.is(':visible')` selector on every single divider and store the ones that are visible, then use `.hide();` on all of those. Finally, I know I can then use `.show();` on them again when someone clicks my button.
But I was wondering if there is a nicer way to do this.
I imagine hiding all elements in one sweep won't be a big problem (possibly something like `$(document).hide()` will do it?)
But most importantly how do I store all the elements that were hidden in a nice way so I can restore them again?
|
Before you hide the elements, apply a class to all of them so you can identify them later:
```
// hide everything visible, except the button clicked
$('button#hideVisible').click(function() {
$(':visible').each(function() {
$(this).addClass("unhideable");
$(this).hide();
});
$(this).show();
});
```
Alternatively, you don't really need to use jQuery `each` in this specific case. You can simplify the jQuery function by combining the `:visible` selector with the [jQuery not selector](http://api.jquery.com/not/) to exclude the button using the `this` value:
```
// much simpler version of applying an unhideable class and hiding visible elements
$('button#hideVisible').click(function() {
$(':visible').not(this).addClass("unhideable");
$(':visible').not(this).hide();
});
```
**UPDATE:**
However, while the above two solutions are great for scenarios where you don't need user intervention to unhide elements (as in an automated script), they hide the button's parent elements, which means the button itself ends up hidden even though we never hid it explicitly. If you require the user to click the button again in order to unhide the elements, the above two solutions are limited.
Thus, the next evolution of this function is to ensure that we exclude the parents using [jQuery's parentsUntil](http://api.jquery.com/parentsUntil/) method:
```
$('button#hideVisible').click(function() {
// if we're in the hide state, unhide elements and return
if( $('button').hasClass('ancestors') == true ) {
$(this).html("Hide");
$('.unhideable').show();
$('.unhideable, .ancestors').removeClass("unhideable").removeClass("ancestors");
return;
}
// hide all children of the button. While this might not make sense for a button
// it's helpful to use this on clickable DIV's that may have descendants!
$('#hideVisible').find(':visible').addClass("unhideable");
$('#hideVisible').find(':visible').hide();
// put a class on button ancestors so we can exclude them from the hide actions
$('#hideVisible').parentsUntil("html").addClass("ancestors");
$('html').parentsUntil("html").addClass("ancestors");
// let's not forget to include the button as well
$(this).addClass("ancestors");
// make sure all other elements on the page that are not ancestors and
// not descendants of button end up marked as unhideable too!
$(':visible').not('.ancestors').addClass("unhideable");
// nice debug output to give you an idea of what's going on under the hood
// when the hide operation is finally called
$(':visible').not('.ancestors, html').each(function() {
console.info($(this).get());
});
// hide the remaining elements on the page that aren't ancestors,
// and include the html element in the exception list as for some reason,
// we can't add class attributes to it
$(':visible').not('.ancestors, html').hide();
// change the button text to "Show"
$(this).html("Show");
});
```
**Unhiding all unhideable elements:**
Next, when you want to unhide them, simply call:
```
$('.unhideable').show();
```
Finally, if needed, you can clean up after yourself by calling:
```
$('.unhideable, .ancestors').removeClass("unhideable").removeClass("ancestors");
```
**DEMO:**
See this [jsfiddle](http://jsfiddle.net/sK2dR/) for a demonstration of this functionality.
|
jQuery .append() not appending to textarea after text edited
Take the following page:
```
<html>
<head>
<script src="https://ajax.googleapis.com/ajax/libs/jquery/1.4.4/jquery.min.js" type="text/javascript"/>
</head>
<body>
<div class="hashtag">#one</div>
<div class="hashtag">#two</div>
<form accept-charset="UTF-8" action="/home/index" method="post">
<textarea id="text-box"/>
<input type="submit" value ="ok" id="go" />
</form>
<script type="text/javascript">
$(document).ready(function() {
$(".hashtag").click(function() {
var txt = $.trim($(this).text());
$("#text-box").append(txt);
});
});
</script>
</body>
</html>
```
The behavior I would expect, and that I want to achieve is that when I click on one of the *divs* with class `hashtag` their content ("#one" and "#two" respectively) would be appended at the end of the text in *textarea* `text-box`.
This does happen when I click on the hash tags just after the page loads. However when I then also start editing the text in `text-box` manually and then go back to clicking on any of the hashtags they don't get appended on Firefox. On Chrome the most bizarre thing is happening - all the text I type manually gets replaced with the new hashtag and disappears.
I probably am doing something very wrong here, so I would appreciate if someone can point out my mistake here, and how to fix that.
Thanks.
|
2 things.
First, `<textarea/>` is not a valid tag. `<textarea>` tags must be fully closed with a full `</textarea>` closing tag.
Second, `$(textarea).append(txt)` doesn't work like you think. When a page is loaded, the text nodes inside the textarea become the initial value of that form field. After that, the text nodes and the value can be disconnected: as you type in the field, the value changes, but the text nodes inside it in the DOM do not. Then, when you change the text nodes with append(), the browser erases the value because it knows the text nodes inside the tag have changed.
So you want to set the value, you don't want to append. Use jQuery's val() method for this.
```
$(document).ready(function(){
$(".hashtag").click(function(){
var txt = $.trim($(this).text());
var box = $("#text-box");
box.val(box.val() + txt);
});
});
```
Working example:
<http://jsfiddle.net/Hhptn/>
|
navbar not going hamburger - bootstrap / Flask
My navbar does not display correctly for small screen sizes: <https://www.paulahuevo.com/>
I use Bootstrap Material Design with Flask and have the following code for the nav-bar:
```
<!-- Navbar -->
<nav class="navbar navbar-expand-lg navbar-dark purple">
<div class='container'>
<div class="collapse navbar-collapse" id="navbarNavDropdown">
<ul class="navbar-nav">
{% for href, id, caption in navigation_bar %}
<li{% if id == active_page %} class="active"{% endif %}>
<a class="nav-link" href="{{ href|e }}">{{ caption|e }}</a>
</li>
{% endfor %}
</ul>
</div>
</div>
</nav>
```
I've never worked with Bootstrap before; what am I missing?
|
You are missing some of the navbar components.
For example, you are missing the `nav-item` class on the `li`s.
Also, you need the navbar-toggler.
Check out this template as a working example:
```
<link rel="stylesheet" href="https://maxcdn.bootstrapcdn.com/bootstrap/4.0.0-beta.3/css/bootstrap.min.css" integrity="sha384-Zug+QiDoJOrZ5t4lssLdxGhVrurbmBWopoEl+M6BdEfwnCJZtKxi1KgxUyJq13dy" crossorigin="anonymous">
<nav class="navbar navbar-expand-lg navbar-light bg-light">
<a class="navbar-brand" href="#">Navbar</a>
<button class="navbar-toggler" type="button" data-toggle="collapse" data-target="#navbarSupportedContent" aria-controls="navbarSupportedContent" aria-expanded="false" aria-label="Toggle navigation">
<span class="navbar-toggler-icon"></span>
</button>
<div class="collapse navbar-collapse" id="navbarSupportedContent">
<ul class="navbar-nav mr-auto">
<li class="nav-item active">
<a class="nav-link" href="#">Home <span class="sr-only">(current)</span></a>
</li>
<li class="nav-item">
<a class="nav-link" href="#">Link</a>
</li>
<li class="nav-item dropdown">
<a class="nav-link dropdown-toggle" href="#" id="navbarDropdown" role="button" data-toggle="dropdown" aria-haspopup="true" aria-expanded="false">
Dropdown
</a>
<div class="dropdown-menu" aria-labelledby="navbarDropdown">
<a class="dropdown-item" href="#">Action</a>
<a class="dropdown-item" href="#">Another action</a>
<div class="dropdown-divider"></div>
<a class="dropdown-item" href="#">Something else here</a>
</div>
</li>
<li class="nav-item">
<a class="nav-link disabled" href="#">Disabled</a>
</li>
</ul>
<form class="form-inline my-2 my-lg-0">
<input class="form-control mr-sm-2" type="search" placeholder="Search" aria-label="Search">
<button class="btn btn-outline-success my-2 my-sm-0" type="submit">Search</button>
</form>
</div>
</nav>
<script src="https://code.jquery.com/jquery-3.2.1.slim.min.js" integrity="sha384-KJ3o2DKtIkvYIK3UENzmM7KCkRr/rE9/Qpg6aAZGJwFDMVNA/GpGFF93hXpG5KkN" crossorigin="anonymous"></script>
<script src="https://cdnjs.cloudflare.com/ajax/libs/popper.js/1.12.9/umd/popper.min.js" integrity="sha384-ApNbgh9B+Y1QKtv3Rn7W3mgPxhU9K/ScQsAP7hUibX39j7fakFPskvXusvfa0b4Q" crossorigin="anonymous"></script>
<script src="https://maxcdn.bootstrapcdn.com/bootstrap/4.0.0-beta.3/js/bootstrap.min.js" integrity="sha384-a5N7Y/aK3qNeh15eJKGWxsqtnX/wWdSZSKp+81YjTmS15nvnvxKHuzaWwXHDli+4" crossorigin="anonymous"></script>
```
|
Fuzzy smart number parsing in Python
I wish to parse *decimal* numbers regardless of their format, which is unknown. Language of the original text is unknown and may vary. In addition, the source string can contain some extra text before or after, like currency or units.
I'm using the following:
```
# NOTE: Do not use, this algorithm is buggy. See below.
def extractnumber(value):
    if (isinstance(value, int)): return value
    if (isinstance(value, float)): return value

    result = re.sub(r'&#\d+', '', value)
    result = re.sub(r'[^0-9\,\.]', '', result)
    if (len(result) == 0): return None

    numPoints = result.count('.')
    numCommas = result.count(',')
    result = result.replace(",", ".")

    if ((numPoints > 0 and numCommas > 0) or (numPoints == 1) or (numCommas == 1)):
        decimalPart = result.split(".")[-1]
        integerPart = "".join( result.split(".")[0:-1] )
    else:
        integerPart = result.replace(".", "")

    result = int(integerPart) + (float(decimalPart) / pow(10, len(decimalPart) ))
    return result
```
This kind of works...
```
>>> extractnumber("2")
2
>>> extractnumber("2.3")
2.3
>>> extractnumber("2,35")
2.35
>>> extractnumber("-2 000,5")
-2000.5
>>> extractnumber("EUR 1.000,74 €")
1000.74
>>> extractnumber("20,5 20,8") # Testing failure...
ValueError: invalid literal for int() with base 10: '205 208'
>>> extractnumber("20.345.32.231,50") # Returns false positive
2034532231.5
```
So my method seems very fragile to me, and returns lots of false positives.
Is there any library or smart function that can handle this? Ideally `20.345.32.231,50` should not pass, but numbers in other languages like `1.200,50` or `1 200'50` would be extracted, regardless of the amount of other text and characters (including newlines) around.
(Updated implementation according to accepted answer:
<https://github.com/jjmontesl/cubetl/blob/master/cubetl/text/functions.py#L91>)
|
You can do this with a suitably fancy regular expression. Here's my best attempt at one. I use named capturing groups, as with a pattern this complex, numeric ones would be much more confusing to use in backreferences.
First, the regexp pattern:
```
_pattern = r"""(?x) # enable verbose mode (which ignores whitespace and comments)
^ # start of the input
[^\d+-\.]* # prefixed junk
(?P<number> # capturing group for the whole number
(?P<sign>[+-])? # sign group (optional)
(?P<integer_part> # capturing group for the integer part
\d{1,3} # leading digits in an int with a thousands separator
(?P<sep> # capturing group for the thousands separator
[ ,.] # the allowed separator characters
)
\d{3} # exactly three digits after the separator
(?: # non-capturing group
(?P=sep) # the same separator again (a backreference)
\d{3} # exactly three more digits
)* # repeated 0 or more times
| # or
\d+ # simple integer (just digits with no separator)
)? # integer part is optional, to allow numbers like ".5"
(?P<decimal_part> # capturing group for the decimal part of the number
(?P<point> # capturing group for the decimal point
(?(sep) # conditional pattern, only tested if sep matched
(?! # a negative lookahead
(?P=sep) # backreference to the separator
)
)
[.,] # the accepted decimal point characters
)
\d+ # one or more digits after the decimal point
)? # the whole decimal part is optional
)
[^\d]* # suffixed junk
$ # end of the input
"""
```
And here's a function to use it:
```
def parse_number(text):
match = re.match(_pattern, text)
if match is None or not (match.group("integer_part") or
match.group("decimal_part")): # failed to match
return None # consider raising an exception instead
num_str = match.group("number") # get all of the number, without the junk
sep = match.group("sep")
if sep:
num_str = num_str.replace(sep, "") # remove thousands separators
if match.group("decimal_part"):
point = match.group("point")
if point != ".":
num_str = num_str.replace(point, ".") # regularize the decimal point
return float(num_str)
return int(num_str)
```
Some numeric strings with exactly one comma or period and exactly three digits following it (like `"1,234"` and `"1.234"`) are ambiguous. This code will parse both of them as integers with a thousands separator (`1234`), rather than floating point values (`1.234`), regardless of the actual separator character used. It's possible you could handle this with a special case, if you want a different outcome for those numbers (e.g. if you'd prefer to make a float out of `1.234`).
Some test output:
```
>>> test_cases = ["2", "2.3", "2,35", "-2 000,5", "EUR 1.000,74 €",
"20,5 20,8", "20.345.32.231,50", "1.234"]
>>> for s in test_cases:
print("{!r:20}: {}".format(s, parse_number(s)))
'2' : 2
'2.3' : 2.3
'2,35' : 2.35
'-2 000,5' : -2000.5
'EUR 1.000,74 €' : 1000.74
'20,5 20,8' : None
'20.345.32.231,50' : None
'1.234' : 1234
```
|
How can I convert connection data lines to block of schemes using Perl?
I'm looking for a way to convert signal connections into a simple scheme or graph.
Let's say I have 2 components with 2 lines/signals around them:
```
component A:
input - S1
output - S2
component B:
input - S2
output - S1
```
This will be the input data file, and the output will be a scheme that shows it as 2 blocks with connecting lines around them, or an illustrative graph.
I'm wondering if an implementation of that exists in Perl's world.
|
It sounds like you want something like the [graphviz](http://www.graphviz.org/) graph generator.
It's written in C, but there is a Perl interface: [GraphViz](http://search.cpan.org/perldoc?GraphViz).
Example:
```
use GraphViz;
use File::Slurp qw(write_file);
my $g = GraphViz->new;
$g->add_node('componentA');
$g->add_node('componentB');
$g->add_edge('componentB' => 'componentA', label => 'S1');
$g->add_edge('componentA' => 'componentB', label => 'S2');
write_file('out.png', $g->as_png);
```
You could load your input data and keep track of component connections via a hash on the signal number, then call `add_edge` for each one.
Output:
[graphviz output http://img704.imageshack.us/img704/2624/outd.png](http://img704.imageshack.us/img704/2624/outd.png)
(labels are optional).
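For completeness, here is a rough sketch of that bookkeeping step, assuming exactly the input format shown in the question (the file name and regexes are illustrative):
```
use GraphViz;
use File::Slurp qw(read_file write_file);

my (%signal, %component, $current);
for my $line (read_file('connections.txt')) {
    if    ($line =~ /^component (\w+):/)      { $current = $1; $component{$1} = 1; }
    elsif ($line =~ /(input|output) - (\w+)/) { $signal{$2}{$1} = $current; }
}

my $g = GraphViz->new;
$g->add_node($_) for sort keys %component;
for my $s (sort keys %signal) {
    # edge goes from the component that outputs the signal to the one reading it
    $g->add_edge($signal{$s}{output} => $signal{$s}{input}, label => $s);
}
write_file('out.png', $g->as_png);
```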
|
What does import unquote(\_\_MODULE\_\_) do in a using callback?
In the following code:
```
defmodule ModuleToBeUsed do
defmacro __using__(_) do
quote do
import unquote(__MODULE__)
end
end
end
defmodule ModuleUsing do
use ModuleToBeUsed
end
```
What does the line `import unquote(__MODULE__)` do? `__MODULE__` refers to `ModuleUsing`. Is it importing it into itself?
|
Elixir (as well as erlang) has *two* stages of “execution.” The code is being “executed” during a compilation stage to produce the beam; afterwards the code is executing on the “execution” stage (I know this sounds a bit cumbersome, but the code below would help to understand what’s going on.)
Consider the following code:
```
$ cat /tmp/test.ex
defmodule ModuleToBeUsed do
defmacro __using__(_) do
IO.puts __MODULE__ # COMPILATION STAGE
quote do # needed to prevent execution on compilation stage
import unquote(__MODULE__)
def test, do: IO.puts "I am test" # EXECUTION STAGE
end
end
end
defmodule ModuleUsing do
use ModuleToBeUsed
def test_of_test, do: test() # I can call `test` here!
end
ModuleUsing.test_of_test()
```
The code above will output
```
Elixir.ModuleToBeUsed # from compilation stage
I am test # on execution stage
```
Remove the last actual call to `ModuleUsing.test_of_test()` and you’ll still see the first output (with module name.)
---
Now turning back to your question. `quote do: import unquote(__MODULE__)` inside a macro would compile to `import` (as is) `ModuleToBeUsed` (due to `unquote`, it would be expanded *on compilation stage*.) On execution stage that would force `ModuleUsing` to import `ModuleToBeUsed`. making a call to `test()` possible without a namespace (otherwise one should be calling it as `ModuleToBeUsed.test()`.
|
Testing performance of queries in mysql
I am trying to setup a script that would test performance of queries on a development mysql server. Here are more details:
- I have root access
- I am the only user accessing the server
- Mostly interested in InnoDB performance
- The queries I am optimizing are mostly search queries (`SELECT ... LIKE '%xy%'`)
What I want to do is to create reliable testing environment for measuring the speed of a single query, free from dependencies on other variables.
Till now I have been using [SQL\_NO\_CACHE](http://dev.mysql.com/doc/refman/5.1/en/select.html), but sometimes the results of such tests also show caching behaviour - taking much longer to execute on the first run and taking less time on subsequent runs.
If someone can explain this behaviour in full detail I might stick to using `SQL_NO_CACHE`; I do believe that it might be due to file system cache and/or caching of indexes used to execute the query, as [this](http://www.mysqlperformanceblog.com/2007/09/12/query-profiling-with-mysql-bypassing-caches/) post explains. It is not clear to me when Buffer Pool and Key Buffer get invalidated or how they might interfere with testing.
So, short of restarting mysql server, how would you recommend to setup an environment that would be reliable in determining if one query performs better then the other?
|
Assuming that you cannot optimize the LIKE operation itself, you should try to optimize the base query around it, minimizing the number of rows that have to be checked.
Some things that might be useful for that:
Check the `rows` column in the output of `EXPLAIN SELECT ...`; it shows how many rows the optimizer estimates it must examine. Then enable query profiling:
```
mysql> set profiling=1;
mysql> select sql_no_cache * from mytable;
...
mysql> show profile;
+--------------------+----------+
| Status | Duration |
+--------------------+----------+
| starting | 0.000063 |
| Opening tables | 0.000009 |
| System lock | 0.000002 |
| Table lock | 0.000005 |
| init | 0.000012 |
| optimizing | 0.000002 |
| statistics | 0.000007 |
| preparing | 0.000005 |
| executing | 0.000001 |
| Sending data | 0.001309 |
| end | 0.000003 |
| query end | 0.000001 |
| freeing items | 0.000016 |
| logging slow query | 0.000001 |
| cleaning up | 0.000001 |
+--------------------+----------+
15 rows in set (0.00 sec)
```
Then,
```
mysql> FLUSH STATUS;
mysql> select sql_no_cache * from mytable;
...
mysql> SHOW SESSION STATUS LIKE 'Select%';
+------------------------+-------+
| Variable_name | Value |
+------------------------+-------+
| Select_full_join | 0 |
| Select_full_range_join | 0 |
| Select_range | 0 |
| Select_range_check | 0 |
| Select_scan | 1 |
+------------------------+-------+
5 rows in set (0.00 sec)
```
And another interesting value is `last_query_cost`, which shows how expensive the optimizer estimated the query (the value is the number of random page reads):
```
mysql> SHOW STATUS LIKE 'last_query_cost';
+-----------------+-------------+
| Variable_name | Value |
+-----------------+-------------+
| Last_query_cost | 2635.399000 |
+-----------------+-------------+
1 row in set (0.00 sec)
```
MySQL documentation is your friend.
|
Bootstrap dropdown RTL alignment
Recently, in the latest release of Bootstrap (3.2.0), I realized that we cannot align `dropdowns` with `dir="rtl"` or `dir="ltr"`, so we have to do it manually, as their [official blog says](http://blog.getbootstrap.com/2014/01/30/bootstrap-3-1-0-released/) (this feature was added in version 3.1.0):
>
> Dropdowns now have their own alignment classes for easier customization
>
>
>
What is that class and how can I make a dropdown list right to left?
|
Twitter bootstrap's new dropdown alignment is quite different than what you are after. It only changes the position of absolutely positioned dropdown menu. I.e. it won't make the dropdown appear in RTL (Right To Left) mode.
Before v3.1.0 [`.pull-right`](https://github.com/twbs/bootstrap/blob/v3.1.0/less/dropdowns.less#L51) had been used to move the dropdown to the right side of its containing block. However as of v3.1.0 it became *deprecated* in favor of [`.dropdown-menu-right`](https://github.com/twbs/bootstrap/blob/v3.1.0/less/dropdowns.less#L135)/[`.dropdown-menu-left`](https://github.com/twbs/bootstrap/blob/v3.1.0/less/dropdowns.less#L145):
>
> **[Deprecated .pull-right alignment](http://getbootstrap.com/components/#dropdowns-alignment)**
>
>
> As of v3.1.0, we've deprecated `.pull-right` on dropdown menus. To
> right-align a menu, use `.dropdown-menu-right`. Right-aligned nav
> components in the navbar use a mixin version of this class to
> automatically align the menu. To override it, use
> `.dropdown-menu-left`.
>
>
>
But it doesn't give the RTL effect as mentioned before.
# RTL Mode
>
> What is that class and how can I make a dropdown list right to left?
>
>
>
In order to achieve that you could give `direction: rtl;` to the `.dropdown` element and also override the default `text-align: left` of `.dropdown-menu` with `text-align: right`1.
Besides, you have to move the absolutely positioned dropdown menu to the right via `.dropdown-menu-right` as well. Hence you'll end up with something like this:
**[Example Here](http://jsbin.com/nafole/3/edit)**
```
.rtl { direction: rtl; }
.rtl .dropdown-menu-right { text-align: right; }
```
```
<div class="dropdown rtl">
<button class="btn btn-default dropdown-toggle" type="button" id="dropdownMenu1" data-toggle="dropdown">
Dropdown
<span class="caret"></span>
</button>
<ul class="dropdown-menu dropdown-menu-right" role="menu" aria-labelledby="dLabel">
<li role="presentation"><a role="menuitem" tabindex="-1" href="#">Action</a></li>
<li role="presentation"><a role="menuitem" tabindex="-1" href="#">Another action</a></li>
<li role="presentation"><a role="menuitem" tabindex="-1" href="#">Something else here</a></li>
<li role="presentation" class="divider"></li>
<li role="presentation"><a role="menuitem" tabindex="-1" href="#">Separated link</a></li>
</ul>
</div>
```
---
1 I strongly recommend to use an additional class name in order not to change the Twitter Bootstrap's default styling of `.dropdown`.
|
how to make switch execute 2 cases
How can I make a `case` statement execute 2 cases?
I tried the following code, but it executes only the first case:
```
#!/bin/sh
action="titi"
case "$action" in
toto|titi)
echo "1_$action"
;;
tata|titi)
echo "2_$action"
;;
esac
```
|
The `case` statement in bash executes the commands in the `COMMAND-LIST` for the first match only.
However, bash version 4 or later introduced the `;;&` and `;&` terminators. The `;;&` operator is like `;;`, except the case statement doesn't terminate after executing the associated list - Bash just continues testing the next pattern as though the previous pattern didn't match. Using these terminators, a case statement can be configured to test against all patterns, or to share code between blocks, for example.
**Reference:** Excerpt taken from <http://wiki.bash-hackers.org/syntax/ccmd/case>
So if you have `bash v 4 or later` this would give your desired result:
```
#!/bin/sh
action="titi"
case "$action" in
toto|titi)
echo "1_$action"
;;&
tata|titi)
echo "2_$action"
;;
esac
```
|
Should I use mixins or an utility class?
I have a Vue.js project that use one method in multiple files, so I create an utility class to write this method there, something like:
```
export class Util{
doSomething(){
return 'something'
}
}
```
But I know that I can do it using mixins, like:
```
export const myMixin = {
methods: {
doSomething(){
return 'something'
}
}
}
```
Should I use mixin or an utility class?
When should I use one of them?
|
This is a great question. Unfortunately, there is no precise answer but I'll provide some guidelines based on my own experience working with a large Vue codebase.
**Mixins**
Mixins are ideal for situations where you have a collection of interdependent non-side-effect-free code that you'd like to share between components.
In my case, I have an `input` mixin that defines `props`, some `data` (unique ids for input and error elements), and methods for emitting events like `blur`. It's ~60 lines of code that I'd otherwise have to retype for each of nine different components.
The benefits of a mixin are similar to that of an inherited class in traditional OOP. I.e. code reuse and complexity encapsulation.
The primary disadvantage of a mixin, is that it can make your code harder to read. Imagine you come back to work on the `AppTextArea` component six months later, and it isn't obvious how and why certain things work or where they are defined... then you remember it uses a mixin, and then you have to dive into the mixin code... In other words it makes for implicit, rather than explicit implementations.
**Shared Functions**
Shared functions are ideal for situations where you can reuse units of side-effect-free code in your application.
In my case, I have a `date` library with a `formatBySlash` function which takes a `Date` object and returns something like `"5/12/2018"`. I've added it as a global filter, so I can do things like `{{ post.createdAt | formatBySlash }}` in my templates. Additionally I can import the function and use it directly in a method or computed property.
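As a sketch of that setup (Vue 2 filter registration; the function body here is a simplified assumption of what `formatBySlash` might do):
```
// date.js - a shared, side-effect-free function
export function formatBySlash(date) {
  // e.g. new Date(2018, 4, 12) -> "5/12/2018"
  return `${date.getMonth() + 1}/${date.getDate()}/${date.getFullYear()}`
}

// main.js - register it as a global filter so templates can use it
import Vue from 'vue'
import { formatBySlash } from './date'

Vue.filter('formatBySlash', formatBySlash)
```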
Shared functions are flexible, easy to test, and make your code more explicit.
---
In summary, I'd generally recommend writing a shared function unless your use case really requires that it be a mixin.
|
aggregate methods treat missing values (NA) differently
Here's a simple data frame with a missing value:
```
M = data.frame( Name = c('name', 'name'), Col1 = c(NA, 1) , Col2 = c(1, 1))
# Name Col1 Col2
# 1 name NA 1
# 2 name 1 1
```
When I use `aggregate` to `sum` variables by group ('Name') using the `formula` method:
`aggregate(. ~ Name, M, FUN = sum, na.rm = TRUE)`
the result is:
```
# RowName Col1 Col2
# name 1 1
```
So the entire first row, which have an `NA`, is ignored. But if use the "non-`formula`" specification:
`aggregate(M[, 2:3], by = list(M$Name), FUN = sum, na.rm = TRUE)`
the result is:
```
# Group.1 Col1 Col2
# name 1 2
```
Here only the (1,1) entry is ignored.
This caused a major debugging headache in one of my code, since I thought these two calls were equivalent. Is there a good reason why the `formula` entry method is treated differently?
|
Good question, but in my opinion, this shouldn't have caused a *major* debugging headache because it is documented quite clearly in multiple places in the manual page for `aggregate`.
First, in the usage section:
```
## S3 method for class 'formula'
aggregate(formula, data, FUN, ...,
subset, na.action = na.omit)
```
Later, in the description:
>
> `na.action`: a function which indicates what should happen when the data contain NA values. The default is to ignore missing values in the given variables.
>
>
>
---
I can't answer *why* the formula mode was written differently---that's something the function authors would have to answer---but using the above information, you can probably use the following:
```
aggregate(.~Name, M, FUN=sum, na.rm=TRUE, na.action=NULL)
# Name Col1 Col2
# 1 name 1 2
```
|
will office365 Enterprise office 2013 update itself to office 2016
Hi guys, I have 150 devices with Office 365.
A Click-to-Run Office 365 installation was done on a single device and an image was based on it.
The users are set to receive updates automatically.
They are currently using Office 2013.
But I've always thought this only delivered updates for the current Office version, not a full new release.
Will they also get the entire Office 2016 unless I create a GPO right now to stop it?
I have multiple plugins and honestly don't know if they work with Office 2016, so I'm very much interested that it is NOT
released to my users.
|
**From [this link](https://technet.microsoft.com/en-us/library/mt422981.aspx)**:
- You can use the 2013 version of Office until September 2016. At that point updates will end (including security patches). So, consider September 2016 your deadline for upgrading.
- If you did an "unmanaged" installation of Office 2013 from portal.office.com, it will be automatically upgraded to the 2016 version in February 2016 (unless you set Group Policy to inhibit this, see the comments below). Users can also go to the Office portal and download/install the new version on their own (currently only for people opted into [First Release](https://support.office.com/en-us/article/Office-365-release-options-3B3ADFA4-1777-4FF0-B606-FB8732101F47)).
- If you deployed Office 2013 with the [Click-to-Run Deployment Tool](https://technet.microsoft.com/en-us/library/jj219423.aspx), you have more control of when the switchover happens. In this case, administrator involvement is required for the switchover, and it must be completed before support ends September 2016.
**If you are using the Deployment Tool:**
If you used the Deployment Tool for 2013, [there is a new version of the tool for 2016](http://www.microsoft.com/en-us/download/details.aspx?id=49117). It is similar to the previous version, and uses the same XML configuration (there are some new options, see below). You **must** switch to the new version to deploy 2016.
Note the new version of the tool is for 2016 **only**. If you plan to support both versions (e.g. keep using 2013 while you do test deployments of 2016), you will need to keep the deployment tools and installation files for both versions side by side on your Application Deployment file share.
The new deployment tool also has support for a "fast" track for updates (First Release) and a "slow" track for updates (Current Branch for Business). [You specify this in the XML configuration and you can modify it after installation via Group Policy (or the underlying Registry setting).](https://technet.microsoft.com/library/mt455210.aspx)
**Disclaimer:**
All of the above info only applies to Office 365 customers that are licensed for ProPlus (e.g. the E3 plan), and installed via Click-to-Run. If your Office desktop install is not tied to an Office 365 subscription (e.g. Volume License or Retail), things are different.
**Something to keep in mind:**
If you use Group Policy or logon/startup scripts to apply standard Office settings for your users, note that you will need to have Office 2016 versions of your policies prepared for deployment. Microsoft [just released the ADMX templates for Office 2016](http://www.microsoft.com/en-us/download/details.aspx?id=49030). They also released [a script to copy your settings from a previous version](https://blogs.office.com/2015/08/19/introducing-the-office-it-pro-deployment-script-project/).
|
"Strong Consistency" vs. "Read-after-write Consistency"
Why does AWS says "strong consistency" in DynamoDB and "read-after-write consistency" for S3? Do both of them mean the same thing?
|
The two terms essentially mean the same thing, in the sense that read-after-write is one type of strong consistency.
The noteworthy difference is that DynamoDB's strong consistency includes read-after-update and read-after-delete, as well as read-after-write. S3 only offers read-after-write... so we could say read-after-write is a subset of strong consistency.
In S3, everything is eventually consistent with one exception: if you create an object *and* you have not previously tried to fetch that object (such as to check whether the object already existed before creating it) *then* fetching that object after creating it will always return the object you created. That's the read-after-write consistency in S3, and it's always available in the circumstance described -- you don't have to ask S3 for a strongly-consistent read-after-write on a new object, because it's always provided.
Any other operation in S3 does not have that consistency guarantee. Examples:
- fetch a nonexistent object, get a 404, then create it, then immediately try to fetch it again. You might get it, but you might continue to get 404 for a short time.
- create an object in S3, then fetch a listing of objects in the bucket. The new object may not immediately appear in the list.
- download an object, delete it, then try downloading it again. You might succeed for a short time. You will eventually get a 404.
- download an object, then overwrite it. Download it again. You might get the new object, or you may get the old one for a short time. You will not get a corrupt or partial object, but you may get the old or the new.
All of these are aspects of the [S3 Consistency Model](https://docs.aws.amazon.com/AmazonS3/latest/dev/Introduction.html#ConsistencyModel) which are the result of optimizations for performance.
DynamoDB is also optimized for performance, and as a result, it *defaults* to eventual (not strong) consistency, for the same reasons... but you can specify strongly-consistent reads in DynamoDB if you need them. These come with caveats:
>
> - A strongly consistent read might not be available if there is a network delay or outage. In this case, DynamoDB may return a server error (HTTP 500).
> - Strongly consistent reads may have higher latency than eventually consistent reads.
> - Strongly consistent reads are not supported on global secondary indexes.
> - Strongly consistent reads use more throughput capacity than eventually consistent reads
>
>
> <https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/HowItWorks.ReadConsistency.html>
>
>
>
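For example, with the boto3 client you opt in per request (table and key names here are illustrative):
```
import boto3

dynamodb = boto3.client("dynamodb")

# Reads are eventually consistent by default; ConsistentRead=True requests
# a strongly consistent read, subject to the caveats quoted above.
response = dynamodb.get_item(
    TableName="Orders",
    Key={"OrderId": {"S": "12345"}},
    ConsistentRead=True,
)
```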
|
Gaussian Marginal Likelihood
I am currently reading the Lawrence 2005 paper "Probabilistic non-linear Principal Component Analysis with Gaussian Process Latent Variable Models" available [here](http://www.jmlr.org/papers/volume6/lawrence05a/lawrence05a.pdf). However I am failing to see how the integral in Equation 2 is evaluated in closed form.
The integral in question is a marginalisation over latent variables $\mathbf{x}$
$p(\mathbf{y}\_{n} | \mathbf{W}, \beta) = \int p(\mathbf{y}\_{n} | \mathbf{x}\_{n}, \mathbf{W}, \beta) p(\mathbf{x}\_{n}) d\mathbf{x}\_{n}$
where the prior over $\mathbf{x}\_{i}$ is given by
$p(\mathbf{x}\_{n}) = \mathcal{N}(\mathbf{x}\_{n} | \mathbf{0}, \mathbf{I})$
The author of the paper provides the closed form solution to this integration as the following
$p(\mathbf{y}\_{n} | \mathbf{W}, \beta) = \mathcal{N}(\mathbf{y}\_{n} | \mathbf{0}, \mathbf{W}\mathbf{W}^{T} + \beta^{-1} \mathbf{I})$
However it is not clear to me how this result is derived, any insight would be valued.
In general how does one tackle integrals of this form? i.e. computing marginals of continuous distributions? It seems that a lot of arcane matrix and distribution identities are used.
I know that in the case of a Gaussian likelihood, a variable $\mathbf{x}\_{n}$ may be marginalised by simply dropping it from the mean vector and covariance matrix. But what about the case of a distribution multiplied by one of its conjugate priors?
|
We want to solve
$$
\int \mathcal N(y | Wx, \beta^{-1} I) \mathcal N(x | 0, I) dx
$$
$$
= \frac{\beta^{D/2}}{(2\pi)^{D/2}} \cdot \frac{1}{(2\pi)^{q/2}}\int \exp \left(-\frac \beta 2 || y - Wx ||^2 - \frac 12 x^T x\right) dx
$$
$$
\propto e^{-\frac \beta 2 y^T y}\int \exp \left(-\frac 12 \left[x^T(\beta W^T W
+ I)x - 2 \beta y^T W x\right]\right) dx.
$$
Let $C = \beta W^T W + I$ and $u^T = \beta y^T W$, and note that $C$ is positive definite. Then by completing the square we have that this last result is equal to
$$
e^{-\frac \beta 2 y^T y}\int \exp \left(-\frac 12 \left[ (x - C^{-1}u)^T C (x -
C^{-1}u) - u^T C^{-1} u \right]\right) dx
$$
$$
\propto e^{-\frac 12 \left(\beta y^T y - u^T C^{-1} u\right)} = \exp \left(-\frac 12 y^T\left(\beta I - \beta^2 WC^{-1}W^T \right)y\right)
$$
so this tells us that $\Sigma^{-1} = \beta I - \beta^2 WC^{-1}W^T$ so now we need to decide if
$$
\Sigma = \left(\beta I - \beta^2 W \left(\beta W^T W + I\right)^{-1}W^T\right)^{-1} \stackrel ?= WW^T + \beta^{-1} I.
$$
Recall the [Woodbury matrix identity](https://en.wikipedia.org/wiki/Woodbury_matrix_identity):
$$
A^{-1} - A^{-1} U (B^{-1} + VA^{-1}U)^{-1}V A^{-1} = (A + UBV)^{-1}.
$$
Take $A = \frac 1\beta I$, $B = I$, $U = W$, and $V = W^T$. Then
$$
A^{-1} - A^{-1} U (B^{-1} + VA^{-1}U)^{-1}V A^{-1} = \beta I - \beta^2 W (I + \beta W^T W)^{-1}W^T = \Sigma^{-1} = \left(\frac 1 \beta I + W W^T\right)^{-1}
$$
$$
\implies \Sigma = \frac 1 \beta I + W W^T
$$
as desired.
This means that we have found that the kernel of the distribution is
$$
\exp \left(-\frac 12 y^T\left(WW^T + \beta^{-1} I\right)^{-1}y\right)
$$
which corresponds to $\mathcal N(y | 0, WW^T + \beta^{-1} I)$.
---
As a general strategy, integrals involving multiple normal pdfs usually require completing the square so that you can factor out some things and are left with a normal density that integrates to 1. While the Woodbury identity looks really ugly at first, it is so useful that very quickly you'll recognize it so this is just practice.
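If you want to sanity-check the Woodbury step numerically, here is a quick sketch with arbitrary small matrices (purely a verification, not part of the derivation):
```python
import numpy as np

rng = np.random.default_rng(0)
D, q, beta = 4, 2, 2.5
W = rng.standard_normal((D, q))

C = beta * W.T @ W + np.eye(q)
precision = beta * np.eye(D) - beta**2 * W @ np.linalg.inv(C) @ W.T
sigma = W @ W.T + np.eye(D) / beta

# Should print True: inv(beta*I - beta^2 W C^{-1} W^T) == W W^T + I/beta
print(np.allclose(np.linalg.inv(precision), sigma))
```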
|
Where do things go when I ‘print’ them from my Django app?
I have a Django app on a Linux server. In one of the views, some form of `print` command is executed, and some string gets printed. How can I find out what the printed string was? Is there some log in which these things are kept?
|
The output should be in the terminal where Django was started. (If you didn't start it directly, I don't believe there's a way to read it.)
As linkedlinked pointed out, it's best not to use `print`, because it can cause exceptions! But that's not the only reason: there are modules (like [logging](http://docs.python.org/library/logging.html)) made for such purposes, and they have a lot more options.
[This site](http://simonwillison.net/2008/May/22/debugging/) (even when it's from 2008) confirm my statements:
>
> If you want to know what’s going on inside a view, the quickest way is to drop in a print statement. The **development server** outputs any print statements directly to the terminal; it’s the server-side alternative to a JavaScript alert().
>
>
> If you want to be a bit **more sophisticated** with your logging, it’s worth turning to Python’s logging module (part of the standard library). You can configure it in your settings.py: *here he describes what to do (see the site)*
>
>
>
For debugging-purposes you could also enable the [debug-mode](http://docs.djangoproject.com/en/dev/ref/settings/#debug) or use the [django-debug-toolbar](http://github.com/robhudson/django-debug-toolbar).
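A minimal sketch of the logging approach inside a view (the logger name and view are illustrative):
```
import logging

from django.http import HttpResponse

logger = logging.getLogger(__name__)

def my_view(request):
    # Goes to whatever handler is configured in settings.py
    # (the development server prints to the terminal by default)
    logger.debug("my_view called with GET params: %s", request.GET)
    return HttpResponse("ok")
```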
Hope it helps! :)
|
Slow/bad performance on Chrome with large amount of html
When testing all the browsers with TinyMCE 4, Chrome is horribly slow.
(I tried removing all the plugins from TinyMCE but it makes no difference.)
Chrome takes about 20-25 seconds to render some HTML in TinyMCE that contains a few large-ish (500kb) embedded base64 images.
Internet Explorer and Firefox take about 1 second and Edge is instant.
(Edge is pretty damn fast!)
All plugins have been disabled on Chrome.
I have looked at the timeline under F12 and nothing is holding it up. According to the F12 timeline, everything was processed in 800ms - Yet it still takes 20 seconds to show up?
The delay is only when TinyMCE contains embedded base64 images.
Has anyone experienced similar behavior?
**Update:**
I have noticed that even when I open large documents in Chrome, its loads a lot slower than the other browsers. You can actually watch as the scrolling bar gets smaller and smaller as Chrome loads the document.
In other browsers, such as Edge, the whole page is loaded instantly.
|
The problem is that a) Chrome tries to render the superlong data URI inside `<textarea>` as plain text before initializing TinyMCE editor and b) it became superslow in Chrome 49 which switched to a supposedly more correct complex text rendering. However even before that, a few megabyte data URI (linked in [crbug.com/945203](https://crbug.com/945203)) would take 20 seconds to open in Chrome 48 and older as well as in current Firefox.
So if your workflow allows, you should simplify the HTML inside the textarea and instead set it via direct DOM manipulation. Like removing src attributes from the images and instead setting them via `src` property in JS would make the initialization almost instantaneous:
```
<textarea>
<img id=img1 title="SampleJPGImage_5mbmb.jpg" src="" alt="" width="700" height="933">
</textarea>
```
```
tinymce.init({
selector: 'textarea',
init_instance_callback(e) {
e.contentDocument.getElementById('img1').src = 'data:image/jpeg;base64,............';
},
});
```
Alternatively you can hide the textarea entirely via an inline `hidden` attribute which should be set in the html itself so Chrome sees it while parsing the file:
```
<textarea hidden>
<img title="SampleJPGImage_5mbmb.jpg" src="data:image/jpeg;base64,...........">
</textarea>
```
Note, you may have to apply more workarounds as these solutions were confirmed to work only on the standard TinyMCE init as shown above - that is without the plethora of plugins that you load in your test case linked in crbug above.
|
C++: calling non-member functions with the same syntax of member ones
One thing I'd like to do in C++ is to call non-member functions with the same syntax you call member functions:
```
class A { };
void f( A & this ) { /* ... */ }
// ...
A a;
a.f(); // this is the same as f(a);
```
Of course this could only work as long as
- `f` is not virtual (since it cannot appear in `A`'s virtual table).
- `f` doesn't need to access `A`'s non-public members.
- `f` doesn't conflict with a function declared in `A` (`A::f`).
I'd like such a syntax because in my opinion it would be quite comfortable and would push good habits:
1. calling `str.strip()` on a `std::string` (where `strip` is a function defined by the user) would sound a lot better than calling `strip( str );`.
2. most of the time (always?) classes provide some member functions which don't need to be members (i.e.: are not virtual and don't use non-public members). This breaks encapsulation, but is the most practical thing to do (due to point 1).
My question here is: what do you think of such feature? Do you think it would be something nice, or something that would introduce more issues than the ones it aims to solve? Could it make sense to propose such a feature to the next standard (the one after C++0x)?
---
Of course this is just a brief description of the idea; it is not complete: we'd probably need to explicitly mark a function with a special keyword to let it work like this, and there are many other details to settle.
|
re your #2: Actually, making functions *non-members* often ***increases encapsulation***, [as Scott Meyers observed](http://www.ddj.com/cpp/184401197) more than a decade ago.
Anyway, what you describe sounds much like C#'s extension methods to me. They are good to soothe the minds of those that are frightened when they see totally free functions. `:)` Once you've braved the STL (which, BTW, is not the standard library, but only that part of it which comes from the original STL) where almost all functions are so free that they aren't even real functions, but function templates, you will no longer need those.
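For example, the kind of non-member `strip` mentioned in the question needs nothing but `std::string`'s public interface (a sketch of one possible implementation):
```
#include <string>

// A free non-member function: called as strip(str) rather than str.strip(),
// and it cannot touch std::string's private members.
std::string strip(const std::string& s) {
    const std::string::size_type first = s.find_first_not_of(" \t\r\n");
    if (first == std::string::npos) return std::string();
    const std::string::size_type last = s.find_last_not_of(" \t\r\n");
    return s.substr(first, last - first + 1);
}
```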
In short: **Don't try to bend a language to adapt to your mindset**. Instead **enhance your mindset to embrace the language's philosophy.** You will emerge a bit taller from doing so.
|
How do Python properties work?
I've been successfully using Python properties, but I don't see how they could work. If I dereference a property outside of a class, I just get an object of type `property`:
```
@property
def hello(): return "Hello, world!"
hello # <property object at 0x9870a8>
```
But if I put a property in a class, the behavior is very different:
```
class Foo(object):
@property
def hello(self): return "Hello, world!"
Foo().hello # 'Hello, world!'
```
I've noticed that unbound `Foo.hello` is still the `property` object, so class instantiation must be doing the magic, but what magic is that?
|
As others have noted, they use a language feature called descriptors.
The reason that the actual property object is returned when you access it via a class `Foo.hello` lies in how the property implements the `__get__(self, instance, owner)` special method:
- If a descriptor is accessed on an *instance*, then that instance is passed as the appropriate argument, and `owner` is the *class* of that instance.
- When it is accessed through the class, then `instance` is None and only `owner` is passed. The `property` object recognizes this and returns `self`.
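A stripped-down sketch of that dispatch (ignoring setters, deleters, and docstrings):
```
class MyProperty(object):
    def __init__(self, fget):
        self.fget = fget

    def __get__(self, instance, owner):
        if instance is None:          # accessed on the class itself
            return self
        return self.fget(instance)    # accessed on an instance

class Foo(object):
    @MyProperty
    def hello(self):
        return "Hello, world!"

Foo().hello  # 'Hello, world!'
Foo.hello    # <__main__.MyProperty object at 0x...>
```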
---
Besides the [Descriptors howto](http://docs.python.org/howto/descriptor.html), see also the documentation on [Implementing Descriptors](http://docs.python.org/2/reference/datamodel.html#descriptors) and [Invoking Descriptors](http://docs.python.org/2/reference/datamodel.html#invoking-descriptors) in the Language Guide.
|
Node.js buf.toString vs String.fromCharCode
I'm attempting to display the character `í` from `0xed` (237).
`String.fromCharCode` yields the correct result:
```
String.fromCharCode(0xed); // 'í'
```
However, when using a `Buffer`:
```
var buf = new Buffer(1);
buf.writeUInt8(0xed,0); // <Buffer ed>
buf.toString('utf8'); // '?', same as buf.toString()
buf.toString('binary'); // 'í'
```
Using 'binary' with `Buffer.toString` is to be [deprecated](http://nodejs.org/api/buffer.html#buffer_buffer) so I want to avoid this.
Second, I can also expect incoming data to be multibyte (i.e. UTF-8), e.g.:
```
String.fromCharCode(0x0512); // Ԓ - correct
var buf = new Buffer(2);
buf.writeUInt16LE(0x0512,0); // <Buffer 12 05>, [0x0512 & 0xff, 0x0512 >> 8]
buf.toString('utf8'); // Ԓ - correct
buf.toString('binary'); // Ô
```
Note that both examples are inconsistent.
SO, what am I missing? What am I assuming that I shouldn't? Is `String.fromCharCode` magical?
|
Seems you might be assuming that `String`s and `Buffer`s use the same bit-length and encoding.
JavaScript `String`s are [16-bit, UTF-16 sequences](http://es5.github.io/#x4.3.16) while Node's `Buffer`s are 8-bit sequences.
UTF-8 is also a variable byte-length encoding, with code points consuming [between 1 and 4 bytes](https://en.wikipedia.org/wiki/UTF-8#Description). The UTF-8 encoding of `í`, for example, takes 2 bytes:
```
> new Buffer('í', 'utf8')
<Buffer c3 ad>
```
And, on its own, `0xed` is not a valid byte in UTF-8 encoding, thus the `?` representing an "unknown character." It is, however, a valid UTF-16 code for use with `String.fromCharCode()`.
Also, the output you suggest for the 2nd example doesn't seem correct.
```
var buf = new Buffer(2);
buf.writeUInt16LE(0x0512, 0);
console.log(buf.toString('utf8')); // "\u0012\u0005"
```
You can detour with `String.fromCharCode()` to see the UTF-8 encoding.
```
var buf = new Buffer(String.fromCharCode(0x0512), 'utf8');
console.log(buf); // <Buffer d4 92>
```
|
NestJS Alphabetize Endpoints in SwaggerUI
[This SO answer](https://stackoverflow.com/questions/24951268/sort-api-methods-in-swagger-ui) shows that SwaggerUi will sort endpoints alphabetically if it is passed `apisSorter : "alpha"` when instantiated. In NestJS the config options are passed in the `SwaggerModule.createDocument`. I cannot see where in the config [eg here](https://github.com/nestjs/swagger/blob/master/lib/document-builder.ts) I can pass this.
|
You can pass it as the fourth parameter to the `SwaggerModule.setup` method like so:
```
const document = SwaggerModule.createDocument(app, options);
SwaggerModule.setup('docs', app, document, {
swaggerOptions: {
tagsSorter: 'alpha',
operationsSorter: 'alpha',
},
});
```
`swaggerOptions` is untyped, which is why you just have to know what you're passing. Found the answer [in the discord server](https://discord.com/channels/520622812742811698/606125276845441034/755263731557990460) so hopefully that link doesn't expire.
|
Vuex getter not updating
I have the below getter:
```
withEarmarks: state => {
var count = 0;
for (let l of state.laptops) {
if (l.earmarks.length > 0) {
count++;
}
}
return count;
}
```
And in a component, this computed property derived from that getter:
```
withEarmarks() { return this.$store.getters.withEarmarks; },
```
The value returned is correct, until I change an element within the laptops array, and then the getter doesn't update.
|
In your case `state.laptops` is an array, and you are manipulating its elements by index (`state.laptops[index]`). Vue is unable to react to mutations made to array elements by index. The documentation provides 2 workarounds for this:
```
// 1. use purpose built vue method:
Vue.set(state.laptops, index, laptop)
// 2. splice the value in at an index:
state.laptops.splice(index, 1, laptop)
```
*Although it is documented, I'm thinking a giant neon glowing sign on that page that says "You will waste hours of productivity if you don't know this" would be a nice addition*.
You can read more about this "caveat" here: <https://v2.vuejs.org/v2/guide/list.html#Caveats>
|
Performance implications of BeginInvoke
I've inherited code where BeginInvoke is called from the main thread (not a background thread, which is usually the pattern). I am trying to understand what it actually does in this scenario.
Does the method being called via BeginInvoke get in line with the messages that come down to the window? The docs say `asynchronously`, so that is my assumption.
How does the framework prioritize when to kick off the method called by BeginInvoke?
Edit: The code looks like this:
```
System.Action<bool> finalizeUI = delegate(bool open)
{
try
{
// do somewhat time consuming stuff
}
finally
{
Cursor.Current = Cursors.Default;
}
};
Cursor.Current = Cursors.WaitCursor;
BeginInvoke(finalizeUI, true);
```
This is happening in the Form\_Load event.
|
# edit
Now that we see the code, it's clear that this is just a way to move some initialization out of Form\_Load but still have it happen before the user can interact with the form.
The call to `BeginInvoke` is inside Form\_load, and is not called on another object, so this is a call to Form.BeginInvoke. So what's happening is this.
1. Form\_Load passes a delegate to Form.BeginInvoke, this puts a message in the form's message queue that is *ahead* of all user input messages. It sets the cursor to a wait cursor.
2. Form\_Load returns, and the rest of form initialization is allowed to complete, the form most likely becomes visible at this point.
3. Once the code falls into the message pump, the first thing is sees in the queue is the delegate, so it runs that.
4. as the delegate completes, it changes the cursor back to the normal cursor, and returns
5. *profit!*
original post below
---
It depends on the object that you call BeginInvoke on. If the object is derived from `Control` then [Control.BeginInvoke](http://msdn.microsoft.com/en-us/library/system.windows.forms.control.begininvoke.aspx) will run on the thread that created the control. See JaredPar's answer.
But there is another pattern for the use of BeginInvoke. if the object is a delegate, then BeginInvoke runs the callback on a separate thread, one that may be created specifically for that purpose.
```
public class Foo
{
...
public Object Bar(object arg)
{
// this function will run on a separate thread.
}
}
...
// this delegate is used to Invoke Bar on Foo in separate thread, this must
// take the same arguments and return the same value as the Bar method of Foo
public delegate object FooBarCaller (object arg);
...
// call this on the main thread to invoke Foo.Bar on a background thread
//
public IAsyncResult BeginFooBar(AsyncCallback callback, object arg)
{
Foo foo = new Foo();
FooBarCaller caller = new FooBarCaller (foo.Bar);
return caller.BeginInvoke (arg);
}
```
This pattern is one reason that BeginInvoke is called from the main thread rather than from a background thread.
|
Is there a generic method to iterate and print a values in an unknown collection?
Let's say, I have a Print method like this:
```
private static void Print(IEnumerable items)
{
// Print logic here
}
```
I want to pass a collection class to this Print method, which should print all the fields like a table. For example, my input collection can be "Persons" or "Orders" or "Cars" etc.
If I pass the "Cars" collection to the Print method, it should print the list of "Car" details such as: Make, Color, Price, Class etc.
I won't know the type of the collection until run-time. I tried and achieved a solution using `TypeDescriptors` and `PropertyDescriptorCollection`. But, I don't feel that is a good solution. Is there any other way to achieve this using expressions or generics?
|
You could implement Print like this:
```
static void Print<T>(IEnumerable<T> items)
{
var props = typeof(T).GetProperties();
foreach (var prop in props)
{
Console.Write("{0}\t", prop.Name);
}
Console.WriteLine();
foreach (var item in items)
{
foreach (var prop in props)
{
Console.Write("{0}\t", prop.GetValue(item, null));
}
Console.WriteLine();
}
}
```
It first loops over the properties of the class to print a header row of property names, then iterates over the items and, for each item, prints the values of those properties.
I would argue that you should use generics here (as opposed to suggestions in other answers); you want the items in the collection to be of a single type so that you can print table headers.
For table formatting you can check the answers to [this question](https://stackoverflow.com/questions/856845/how-to-best-way-to-draw-table-in-console-app-c).
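For example, a minimal usage sketch (the `Car` type here is hypothetical, purely to illustrate the call):

```
// assumes using System.Collections.Generic;
class Car
{
    public string Make { get; set; }
    public string Color { get; set; }
    public decimal Price { get; set; }
}

var cars = new List<Car>
{
    new Car { Make = "Ford", Color = "Red", Price = 25000m },
    new Car { Make = "Audi", Color = "Black", Price = 40000m }
};
Print(cars); // T is inferred as Car: prints Make, Color, Price as headers, then one row per item
```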
|
How to make a method call wait for an animation to finish
I want to make a notification window with animated text. A notification is sent by a button click and the animation starts playing. My problem is that when I click the button again before the previous animation is done, two animations run at once. How do I make each call of "sendMessage()" wait for the previous one to finish? If it matters: unlike in my MRE, there are multiple nodes in my program that call the sendMessage() method, so I want some kind of queue of messages. Here is my MRE:
```
public class AnimationTest extends Application {
private final Label messageLabel = new Label();
@Override
public void start(Stage stage) throws IOException {
VBox vBox = new VBox();
vBox.setAlignment(Pos.CENTER);
Scene scene = new Scene(vBox, 320, 240);
vBox.getChildren().add(messageLabel);
Button button = new Button();
button.setOnAction(event -> sendMessage("Some animated text."));
vBox.getChildren().add(button);
stage.setScene(scene);
stage.show();
}
private void sendMessage(String message) {
final IntegerProperty i = new SimpleIntegerProperty(0);
Timeline timeline = new Timeline();
KeyFrame keyFrame = new KeyFrame(
Duration.millis(40),
event -> {
if (i.get() > message.length()) {
timeline.stop();
} else {
messageLabel.setText(message.substring(0, i.get()));
i.set(i.get() + 1);
}
});
timeline.getKeyFrames().add(keyFrame);
timeline.setCycleCount(Animation.INDEFINITE);
timeline.play();
}
public static void main(String[] args) {
launch();
}
}
```
|
For the specific example you posted, the easiest approach is to disable the button immediately prior to starting the animation, and enable it again when the animation stops. Here is one way to do this:
```
public class AnimationTest extends Application {
private final Label messageLabel = new Label();
@Override
public void start(Stage stage) {
VBox vBox = new VBox();
vBox.setAlignment(Pos.CENTER);
Scene scene = new Scene(vBox, 320, 240);
vBox.getChildren().add(messageLabel);
Button button = new Button();
button.setOnAction(event -> {
Animation animation = sendMessage("Some animated text.");
button.disableProperty().bind(Bindings.equal(animation.statusProperty(), Animation.Status.RUNNING));
});
vBox.getChildren().add(button);
stage.setScene(scene);
stage.show();
}
private Animation sendMessage(String message) {
final IntegerProperty i = new SimpleIntegerProperty(0);
Timeline timeline = new Timeline();
KeyFrame keyFrame = new KeyFrame(
Duration.millis(40),
event -> {
if (i.get() > message.length()) {
timeline.stop();
} else {
messageLabel.setText(message.substring(0, i.get()));
i.set(i.get() + 1);
}
});
timeline.getKeyFrames().add(keyFrame);
timeline.setCycleCount(Animation.INDEFINITE);
timeline.play();
return timeline ;
}
public static void main(String[] args) {
launch();
}
}
```
If you want to allow these messages to accumulate in a queue, and a new animation to start when the old one finishes, you need to keep a queue of the messages and a reference to a current animation that's running (if there is one). You can poll the queue from an `AnimationTimer` and start a new animation when a new message appears, if there is no current animation running.
I'd recommend thinking about whether this is the approach you want to take; there's no guarantee here that your messages won't arrive faster than they can be animated, in which case the queue will grow indefinitely. However, here is an implementation, assuming you can otherwise ensure that this is not the case:
```
public class AnimationTest extends Application {
private final Label messageLabel = new Label();
private final Queue<String> messages = new LinkedList<>();
private Animation currentAnimation = null ;
@Override
public void start(Stage stage) {
VBox vBox = new VBox();
vBox.setAlignment(Pos.CENTER);
Scene scene = new Scene(vBox, 320, 240);
vBox.getChildren().add(messageLabel);
Button button = new Button();
button.setOnAction(event -> messages.add("Some animated text."));
AnimationTimer timer = new AnimationTimer() {
@Override
public void handle(long l) {
if (currentAnimation == null || currentAnimation.getStatus() == Animation.Status.STOPPED) {
String message = messages.poll();
if (message != null) {
currentAnimation = sendMessage(message);
currentAnimation.play();
}
}
}
};
timer.start();
vBox.getChildren().add(button);
stage.setScene(scene);
stage.show();
}
private Animation sendMessage(String message) {
final IntegerProperty i = new SimpleIntegerProperty(0);
Timeline timeline = new Timeline();
KeyFrame keyFrame = new KeyFrame(
Duration.millis(40),
event -> {
if (i.get() > message.length()) {
timeline.stop();
} else {
messageLabel.setText(message.substring(0, i.get()));
i.set(i.get() + 1);
}
});
timeline.getKeyFrames().add(keyFrame);
timeline.setCycleCount(Animation.INDEFINITE);
return timeline ;
}
public static void main(String[] args) {
launch();
}
}
```
Note there are no threading considerations here. The `handle()` method is invoked on the FX Application Thread, so the only requirement is that the messages are placed in the queue on the same thread. This happens in this example because the button's event handler is invoked on that thread. If your messages are coming from a background thread, you should ensure they are added to the queue on the FX Application Thread, either by using `Platform.runLater(...)` or (preferably) by using the JavaFX Concurrency API (i.e. by retrieving the messages in a `Task` or `Service` and adding them to the queue in an `onSucceeded` handler).
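For example, here is a hedged sketch of that last suggestion, where `messageSource` is a hypothetical `BlockingQueue<String>` fed by your background producer (imports from `javafx.concurrent` and `java.util.concurrent` are assumed):

```
BlockingQueue<String> messageSource = new LinkedBlockingQueue<>(); // hypothetical feed
Service<String> messageService = new Service<String>() {
    @Override
    protected Task<String> createTask() {
        return new Task<String>() {
            @Override
            protected String call() throws Exception {
                return messageSource.take(); // blocks on a background thread until a message arrives
            }
        };
    }
};
messageService.setOnSucceeded(e -> {
    messages.add(messageService.getValue()); // runs on the FX Application Thread
    messageService.restart();                // go back to waiting for the next message
});
messageService.start();
```

This keeps the blocking wait off the FX Application Thread while still adding to `messages` on it, so the `AnimationTimer` loop above needs no changes.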
|
awk or sed, move first column to end?
I have a text file separated by spaces, as follows. I need to re-arrange the columns so that the first column moves to the end of each line.
I have an idea of how this could be done using `cut -d' ' -f1`, but I'm wondering if there is an easy way with `awk` or `sed`.
Text File:
```
™️ trade mark
ℹ️ information
↔️..↙️ left-right arrow..down-left arrow
↩️..↪️ right arrow curving left..left arrow curving right
⌚..⌛ watch..hourglass done
⌨️ keyboard
⏏️ eject button
⏩..⏳ fast-forward button..hourglass not done
⏸️..⏺️ pause button..record button
Ⓜ️ circled M
▪️..▫️ black small square..white small square
▶️ play button
◀️ reverse button
```
*I want the symbol list to follow the description instead.*
|
Use `sed`
```
sed 's/\([^ ]*\) *\(.*\)/\2 \1/' infile
```
This `\([^ ]*\)` matches everything up to the first space character.
The parentheses `\(...\)` create a capture group, which is referenced as `\1`.
The `\(.*\)` matches everything after the first group and is referenced as `\2`.
The ` *` between the two groups in `\(...\) *\(...\)` matches the spaces separating them and, being outside any group, is dropped from the output; you could use `\s*` (with GNU `sed`) or `[[:space:]]*` (standardly) to match any whitespace characters instead of just the ASCII space.
The replacement then prints the matched *group 2* followed by *group 1*, with a space between them.
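If you prefer `awk`, a roughly equivalent one-liner (a sketch; it assumes, as in your sample, that the symbol field itself never contains a space):

```
awk '{f = $1; sub(/^[^ ]+ +/, ""); print $0, f}' infile
```

Here `f` saves the first field, `sub()` strips that field and the spaces after it from the line, and `print` appends the saved field at the end.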
|
How to draw rectangle outside of the plot frame in Matplotlib
I want to generate the subfigure's title in the style of the figure below:
[](https://i.stack.imgur.com/lmTue.png)
A gray box should sit behind the title, which is at the top of the scatter plot.
Here is the code I have tried:
```
import random
import matplotlib.patches as patches
from matplotlib import pyplot

x = random.sample(range(50), 50)
y= random.sample(range(50), 50)
fig = pyplot.figure()
ax = pyplot.subplot(111)
ax.scatter(x,y,label='a')
ax.set_aspect('equal')
ax.set_xlim(0,60)
ax.set_ylim(0,60)
ax.plot([0,60], [0, 60], color='k', linestyle='-', linewidth=1.25)
ax.add_patch(patches.Rectangle((0,60),60, 10,facecolor='silver',linewidth = 0))
TITLE = ax.text(26,61, r'$\mathregular{Title}$',fontsize = 14,zorder = 5,color = 'k')
```
The result show like:
[](https://i.stack.imgur.com/iA55S.png)
>
> The rectangle that should serve as the title's background box is not shown in my result
>
>
>
Any advice or a better solution is appreciated!
|
I think a better way is to use the **clip\_on=False** option for [Rectangle](https://matplotlib.org/devdocs/api/_as_gen/matplotlib.patches.Rectangle.html):
```
import random
import matplotlib.pyplot as pyplot
x = random.sample(range(50), 50)
y= random.sample(range(50), 50)
fig = pyplot.figure()
ax = pyplot.subplot(111)
ax.scatter(x,y,label='a')
ax.set_aspect('equal')
ax.set_xlim(0,60)
ax.set_ylim(0,60)
ax.plot([0,60], [0, 60], color='k', linestyle='-', linewidth=1.25)
ax.add_patch(pyplot.Rectangle((0,60),60, 10,facecolor='silver',
clip_on=False,linewidth = 0))
TITLE = ax.text(26,61, r'$\mathregular{Title}$',fontsize = 14,zorder = 5,
color = 'k')
pyplot.show()
```
This yields a rectangle drawn outside of the axes, without having to resort to extra spacing tricks:
[](https://i.stack.imgur.com/CUYFM.png)
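As an aside, if all you need is a title with a gray background, passing a `bbox` to `set_title` may be simpler. Note, though, that this only shades the region directly behind the text rather than a full-width band, so the `clip_on` approach above stays closer to your target figure. A hedged sketch (the `pad` value is a guess to adjust to taste):

```
ax.set_title(r'$\mathregular{Title}$', fontsize=14,
             bbox=dict(facecolor='silver', edgecolor='none', pad=6))
```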
|
openssl\_decrypt tag value
I'm using the openssl\_encrypt / openssl\_decrypt methods on my website, but I'm having some trouble with the $tag option
```
openssl_encrypt ( $data, $method, $key, $options, $iv, $tag )
openssl_decrypt ( $data, $method, $key, $options, $iv, $tag )
```
From <http://php.net/manual/en/function.openssl-encrypt.php>, the definition of tag is: "The authentication tag passed by reference when using AEAD cipher mode (GCM or CCM)." But I didn't understand it.
I tried it in my code:
```
$data = "text to be encrypted";
$cipher = "aes-128-gcm";
$key = "0123456789abcdefghijklmnob123456";
$option = 0;
$ivlen = openssl_cipher_iv_length($cipher);
$iv = openssl_random_pseudo_bytes($ivlen);
if (in_array($cipher, openssl_get_cipher_methods())){
$encryptedData = openssl_encrypt($data,$cipher,$key,$option,$iv,$tag);
echo $encryptedData;
$decryptedData = openssl_decrypt($encryptedData,$cipher,$key,$option,$iv,$tag);
echo $decryptedData;
}
```
I got this result:
```
encrypted text: Vlx/yKkPhg0DpD0YKvnFKRiCh/I=
decrypted text: text to be encrypted
```
which is correct. But if I directly decrypt the encrypted text this way:
```
$data = "text to be encrypted";
$cipher = "aes-128-gcm";
$key = "0123456789abcdefghijklmnob123456";
$option = 0;
$ivlen = openssl_cipher_iv_length($cipher);
$iv = openssl_random_pseudo_bytes($ivlen);
if (in_array($cipher, openssl_get_cipher_methods())){
$encryptedData = "Vlx/yKkPhg0DpD0YKvnFKRiCh/I=";
$decryptedData = openssl_decrypt($encryptedData,$cipher,$key,$option,$iv,$tag);
echo $decryptedData;
}
```
I'm getting:
```
Notice: Undefined variable: tag
```
Could someone explain why this is happening and what the value of $tag should be? Thanks.
|
The tag that PHP is complaining about is an essential aspect of AES in GCM mode of operation. In this mode, not only is the AES block cipher applied, but an authentication tag is calculated as well. It is an array of bytes representing a [MAC (Message Authentication Code)](https://en.wikipedia.org/wiki/Message_authentication_code) that is used to verify the integrity of the data when decrypting; that same tag needs to be provided to perform the verification. See the [Wikipedia page about Galois/Counter Mode](https://en.wikipedia.org/wiki/Galois/Counter_Mode) for more details.
So in order to successfully decrypt that ciphertext, you need to capture the `$tag` variable resulting from the `openssl_encrypt()` invocation and feed it into the `openssl_decrypt()` invocation. You did not do that, hence the complaint about the missing tag. Note that the tag typically contains non-printable characters, so it is more convenient to store it in a base64-encoded format.
In addition to the `$tag` variable, you should also provide the same value for the `$iv` variable to the `openssl_decrypt()` method as you used in the `openssl_encrypt()` invocation. Again, base64 encoding makes that easier.
A quick test below demonstrates all this, where I first modified your script to print more of the values and then used the script provided below to decrypt:
```
$ php test1.php
iv base64-ed: vBKbi8c6vCyvWonV
plaintext: text to be encrypted
ciphertext base64-ed: z28spOd3UEDmj+3a8n/WK11ls7w=
GCM tag base64-ed: OIAggQCGUbPgmPN6lFjQ8g==
$ php test2.php
decrypted ciphertext: text to be encrypted
```
where the code for `test2.php` is the following:
```
$cipher = "aes-128-gcm";
$key = "0123456789abcdefghijklmnob123456";
$option = 0;
$iv = base64_decode("vBKbi8c6vCyvWonV");
if (in_array($cipher, openssl_get_cipher_methods())){
$encryptedData = "z28spOd3UEDmj+3a8n/WK11ls7w=";
$tag = base64_decode("OIAggQCGUbPgmPN6lFjQ8g==");
$decryptedData = openssl_decrypt($encryptedData,$cipher,$key,$option,$iv,$tag);
echo("decrypted ciphertext: ".$decryptedData."\n");
}
```
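For reference, a sketch of what the modified encryption script (`test1.php`) could look like; it simply base64-encodes the IV and tag so they can be carried over to the decryption side:

```
$data = "text to be encrypted";
$cipher = "aes-128-gcm";
$key = "0123456789abcdefghijklmnob123456";
$option = 0;
$ivlen = openssl_cipher_iv_length($cipher);
$iv = openssl_random_pseudo_bytes($ivlen);
$encryptedData = openssl_encrypt($data, $cipher, $key, $option, $iv, $tag);
echo("iv base64-ed: " . base64_encode($iv) . "\n");
echo("plaintext: " . $data . "\n");
echo("ciphertext base64-ed: " . $encryptedData . "\n");
echo("GCM tag base64-ed: " . base64_encode($tag) . "\n");
```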
|
How do I post non-ASCII characters using httplib when content-type is "application/xml"
I've implemented a Pivotal Tracker API module in Python 2.7. The [Pivotal Tracker API](https://www.pivotaltracker.com/help/api?version=v3) expects POST data to be an XML document and "application/xml" to be the content type.
My code uses urlib/httplib to post the document as shown:
```
request = urllib2.Request(self.url, xml_request.toxml('utf-8') if xml_request else None, self.headers)
obj = parse_xml(self.opener.open(request))
```
This yields an exception when the XML text contains non-ASCII characters:
```
File "/usr/lib/python2.7/httplib.py", line 951, in endheaders
self._send_output(message_body)
File "/usr/lib/python2.7/httplib.py", line 809, in _send_output
msg += message_body
exceptions.UnicodeDecodeError: 'ascii' codec can't decode byte 0xc5 in position 89: ordinal not in range(128)
```
As near as I can see, httplib.\_send\_output is creating an ASCII string for the message payload, presumably because it expects the data to be URL encoded (application/x-www-form-urlencoded). It works fine with application/xml as long as only ASCII characters are used.
Is there a straightforward way to post application/xml data containing non-ASCII characters, or am I going to have to jump through hoops (e.g. using Twisted and a custom producer for the POST payload)?
|
You're mixing Unicode and bytestrings.
```
>>> msg = u'abc' # Unicode string
>>> message_body = b'\xc5' # bytestring
>>> msg += message_body
Traceback (most recent call last):
File "<input>", line 1, in <module>
UnicodeDecodeError: 'ascii' codec can't decode byte 0xc5 in position 0: ordinal \
not in range(128)
```
To fix it, make sure that the `self.headers` content is properly encoded, i.e., all keys and values in the `headers` should be bytestrings:
```
self.headers = dict((k.encode('ascii') if isinstance(k, unicode) else k,
v.encode('ascii') if isinstance(v, unicode) else v)
for k,v in self.headers.items())
```
Note: the character encoding of the headers has nothing to do with the character encoding of the body, i.e., the xml text can be encoded independently (it is just an octet stream from the http message's point of view).
The same goes for `self.url`: if it has the `unicode` type, convert it to a bytestring (using the 'ascii' character encoding).
---
[HTTP message consists of a start-line, "headers", an empty line and possibly a message-body](https://www.rfc-editor.org/rfc/rfc7230#section-3) so `self.headers` is used for headers, `self.url` is used for start-line (http method goes here) and probably for `Host` http header (if client is http/1.1), XML text goes to message body (as binary blob).
It is always safe to use ASCII encoding for **`self.url`** (IDNA can be used for non-ascii domain names—the result is also ASCII).
Here's what [rfc 7230 says about http **headers** character encoding](https://www.rfc-editor.org/rfc/rfc7230#page-26):
>
> Historically, HTTP has allowed field content with text in the
> ISO-8859-1 charset [ISO-8859-1], supporting other charsets only
> through use of [RFC2047] encoding. In practice, most HTTP header
> field values use only a subset of the US-ASCII charset [USASCII].
> Newly defined header fields SHOULD limit their field values to
> US-ASCII octets. A recipient SHOULD treat other octets in field
> content (obs-text) as opaque data.
>
>
>
To convert XML to a bytestring, see the [`application/xml` encoding considerations](https://www.rfc-editor.org/rfc/rfc7303#section-3):
>
> The use of UTF-8, without a BOM, is RECOMMENDED for all XML MIME entities.
>
>
>
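Putting this together for the code in the question, a minimal sketch (Python 2.7; the attribute names are taken from the question, and the exact Content-Type value is an assumption):

```
body = xml_request.toxml('utf-8')  # bytestring: UTF-8 encoded XML
headers = {'Content-Type': 'application/xml; charset=utf-8'}  # ASCII-only bytestrings
url = self.url.encode('ascii') if isinstance(self.url, unicode) else self.url
request = urllib2.Request(url, body, headers)
obj = parse_xml(self.opener.open(request))
```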
|
How do I connect a LXC container to an IP alias?
I currently have 3 IP addresses going to the same server. The /etc/network/interfaces file on the host is as follows:
```
auto eth0
iface eth0 inet static
address XXX.XXX.132.107
gateway XXX.XXX.132.1
netmask 255.255.255.0
auto eth0:0
iface eth0:0 inet static
address XXX.XXX.130.21
gateway XXX.XXX.130.1
netmask 255.255.255.0
auto eth0:1
iface eth0:1 inet static
address XXX.XXX.132.244
gateway XXX.XXX.132.1
netmask 255.255.255.0
auto lo
iface lo inet loopback
```
I would like the host to be accessible from XXX.XXX.132.107, one LXC container to be accessible from XXX.XXX.130.21, and another LXC container accessible from XXX.XXX.132.244. I have tried a few bridging setups, but have been unsuccessful. Has anybody done this before? Is it even possible? Thank you!
|
I know of 2 ways to do what you would like.
1. Network bridging
2. IPTables Nat
We'll start out with IPTables NAT, since your ifconfig output already has IP aliases set up.
**Typical Host server**
My 'ifconfig' output shows 'eth0' as the main interface with 2 IP aliases set up, along with the LXC-generated bridge interface.
```
# ifconfig
eth0 Link encap:Ethernet HWaddr 08:00:27:d9:66:ac
inet addr:172.16.10.71 Bcast:172.16.10.255 Mask:255.255.255.0
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:7 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:0 (0.0 B) TX bytes:578 (578.0 B)
eth0:1 Link encap:Ethernet HWaddr 08:00:27:d9:66:ac
inet addr:172.16.10.72 Bcast:172.16.10.255 Mask:255.255.255.0
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
eth0:2 Link encap:Ethernet HWaddr 08:00:27:d9:66:ac
inet addr:172.16.10.73 Bcast:172.16.10.255 Mask:255.255.255.0
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
lxcbr0 Link encap:Ethernet HWaddr de:45:c9:13:2b:74
inet addr:10.0.3.1 Bcast:10.0.3.255 Mask:255.255.255.0
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:6 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:0 (0.0 B) TX bytes:508 (508.0 B)
```
The below command shows our 2 LXC containers and their IP addresses.
```
# lxc-ls -f
NAME STATE IPV4 IPV6 AUTOSTART
-------------------------------------------
test1 RUNNING 10.0.3.247 - NO
test2 RUNNING 10.0.3.124 - NO
```
Doing an 'ifconfig' will show your 2 new interfaces created for your LXC Containers. See
below for mine.
```
# ifconfig
veth05DUGY Link encap:Ethernet HWaddr fe:4c:2c:df:1d:c3
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:39 errors:0 dropped:0 overruns:0 frame:0
TX packets:40 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:3706 (3.7 KB) TX bytes:3822 (3.8 KB)
vethTUTFID Link encap:Ethernet HWaddr fe:58:4b:19:25:3e
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:42 errors:0 dropped:0 overruns:0 frame:0
TX packets:57 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:3956 (3.9 KB) TX bytes:5580 (5.5 KB)
```
The output below shows them as part of the bridge.
```
# brctl show lxcbr0
bridge name bridge id STP enabled interfaces
lxcbr0 8000.fe4c2cdf1dc3 no veth05DUGY
vethTUTFID
```
So now the actual work: we will be using IPTables to do the forwarding. Below is the default setup, before our additions:
```
# iptables -nL -t nat
Chain PREROUTING (policy ACCEPT)
target prot opt source destination
Chain INPUT (policy ACCEPT)
target prot opt source destination
Chain OUTPUT (policy ACCEPT)
target prot opt source destination
Chain POSTROUTING (policy ACCEPT)
target prot opt source destination
MASQUERADE all -- 10.0.3.0/24 !10.0.3.0/24
```
So here we go; do the following.
```
# iptables -t nat -A PREROUTING -d 172.16.10.72 -j DNAT --to-destination 10.0.3.247
# iptables -t nat -A PREROUTING -d 172.16.10.73 -j DNAT --to-destination 10.0.3.124
```
The 2 commands above add IPTables rules to forward all IP traffic destined for the eth0:\* IPs to the respective IPs of the LXC containers.
You should see the below when verifying.
```
# iptables -nL -t nat
Chain PREROUTING (policy ACCEPT)
target prot opt source destination
DNAT all -- 0.0.0.0/0 172.16.10.72 to:10.0.3.247
DNAT all -- 0.0.0.0/0 172.16.10.73 to:10.0.3.124
```
So at this point you now have those IPs forwarded to the containers. To make this persistent you can create an /etc/iptables.rules file and, in your /etc/network/interfaces file, add a "post-up" line so that 'iptables-restore' restores those rules at boot, e.g. 'post-up iptables-restore < /etc/iptables.rules' added under the iface line in /etc/network/interfaces.
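For example, a minimal sketch of that persistence setup, reusing the first stanza of the host's interfaces file from the question (saving the rules with 'iptables-save' is an assumption about how you capture them):

```
# one-time step, run after adding the rules:
# iptables-save > /etc/iptables.rules

auto eth0
iface eth0 inet static
    address XXX.XXX.132.107
    gateway XXX.XXX.132.1
    netmask 255.255.255.0
    post-up iptables-restore < /etc/iptables.rules
```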
Below is an example of network bridging. You need to remove your IP Aliases for the below to work. See example output below for what you should start out with.
**Host server**
```
$ ifconfig
eth0 Link encap:Ethernet HWaddr 08:00:27:d9:66:ac
inet addr:172.16.10.71 Bcast:172.16.10.255 Mask:255.255.255.0
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:7 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:0 (0.0 B) TX bytes:578 (578.0 B)
lxcbr0 Link encap:Ethernet HWaddr de:45:c9:13:2b:74
inet addr:10.0.3.1 Bcast:10.0.3.255 Mask:255.255.255.0
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:6 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:0 (0.0 B) TX bytes:508 (508.0 B)
```
We won't be using the lxcbr0 interface in this case.
Create a bridge interface for the LXC containers to use.
The commands below create a 'br0' bridge interface. You will need to add the eth0 interface to the bridge; see that command farther down.
\*\* BE WARNED \*\* Following the below commands will immediately break the remote connection to the server and make it unreachable over the network. These instructions assume a local console connection.
```
# brctl addbr br0
# ip link set br0 up
# brctl addif br0 eth0
# brctl show br0
bridge name bridge id STP enabled interfaces
br0 8000.080027d966ac no eth0
```
So the above commands add 'eth0' to the br0 bridge and show that it is there. Next we need to move the IP address from eth0 to br0.
```
# ip addr del 172.16.10.71/24 dev eth0
# ip addr add 172.16.10.71/24 dev br0
```
You should now have something similar to the below.
```
# ifconfig
br0 Link encap:Ethernet HWaddr 08:00:27:d9:66:ac
inet addr:172.16.10.71 Bcast:172.16.10.255 Mask:255.255.255.0
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:77 errors:0 dropped:0 overruns:0 frame:0
TX packets:8 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:6281 (6.2 KB) TX bytes:648 (648.0 B)
eth0 Link encap:Ethernet HWaddr 08:00:27:d9:66:ac
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:87 errors:0 dropped:0 overruns:0 frame:0
TX packets:16 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:8183 (8.1 KB) TX bytes:1296 (1.2 KB)
```
Next we need to edit the LXC configuration files for your 2 containers.
If your system uses the default layout, you should have the following.
```
ls -l /var/lib/lxc
total 12
drwxr-xr-x 3 root root 4096 Aug 10 11:23 test1
drwxr-xr-x 3 root root 4096 Aug 10 11:34 test2
```
The above output should show both of your LXC containers. Under each directory is a file named 'config' that we need to edit.
```
# vi /var/lib/lxc/test1/config
```
Replace the line that says 'lxc.network.link = lxcbr0' with 'lxc.network.link = br0'. Do this for both containers.
Next you need to edit both containers' /etc/network/interfaces files and add the real IP address on eth0 in both.
So in my examples.
I would put the 172.16.10.72 IP in test1's configuration file, '/var/lib/lxc/test1/rootfs/etc/network/interfaces'. This updates the file from the host machine without being inside the container yet; you can of course boot up the container and edit /etc/network/interfaces from inside instead. Either way works.
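For example, test1's interfaces file might look like the sketch below; the gateway address is an assumption, so substitute your subnet's real gateway:

```
auto eth0
iface eth0 inet static
    address 172.16.10.72
    netmask 255.255.255.0
    gateway 172.16.10.1
```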
If you need any clarification or additional help just add a comment asking for help.
-Frank
|