Virtual function implemented in base class not being found by compiler
I've got a situation where it seems like the compiler isn't finding the base class definition/implementation of a virtual function with the same name as another member function.
```
struct One {};
struct Two {};

struct Base
{
    virtual void func( One & );
    virtual void func( Two & ) = 0;
};

struct Derived : public Base
{
    virtual void func( Two & );
};

void Base::func( One & )
{}

void Derived::func( Two & )
{}

// elsewhere
void this_fails_to_compile()
{
    One one;
    Derived d;
    d.func( one );
}
```
I'm using Visual C++ 2008. The error message is:
> error C2664: 'Derived::func' : cannot convert parameter 1 from 'One' to 'Two &'
I would have thought that type-based dispatch would work and call the defined base class function. If I add a `Derived::func( One & )`, it does compile and gets called correctly, but in my situation that version of the function can be handled in the base class, and derived classes usually don't need to implement it themselves. I'm currently working around it by putting a differently named, non-virtual function in the base class that forwards the call to the function causing the problem:
```
// not virtual, although I don't think that matters
void Base::work_around( One & one )
{
    func( one );
}
```
That works but is obviously less than ideal.
What inheritance and/or name-hiding rule am I missing here?
|
You are hiding the method in the derived class. The simplest solution is to add a using declaration to the derived class.
```
struct Derived : public Base
{
    using Base::func;
    virtual void func( Two & );
};
```
The issue is that when the compiler tries to look up the `func` identifier in the call `d.func(one)`, it searches from `Derived` upwards, but it stops in the first context where it finds the `func` identifier, which in this case is `Derived::func`. No further lookup is performed, so the compiler sees only `Derived::func( Two & )`.
By adding the `using Base::func;` declaration, when the compiler sees the `Derived` definition it brings all of the `Base::func` declarations into scope, and it will find that there is a `Base::func( One & )` that was not overridden in `Derived`.
Note also that if you were calling through a reference to `Base`, the compiler would find both overloads of `func` and would dispatch each call appropriately to the final overrider.
```
Derived d;
Base & b = d;
b.func( one ); // ok even without the 'using Base::func;' declaration
```
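As a cross-language aside (my illustration, not part of the original answer): Python's attribute lookup behaves the same way in spirit — it stops at the first class in the MRO that defines the name, so a derived-class method completely shadows the base one. Since Python has no overloading, there is no `using` equivalent; you reach the base version explicitly instead:

```python
class Base:
    def func(self, arg):
        return "Base.func"

class Derived(Base):
    def func(self, arg):
        # lookup for d.func stops here; Base.func is shadowed
        return "Derived.func"

d = Derived()
print(d.func(1))        # Derived.func
print(Base.func(d, 1))  # Base.func -- explicit, like a qualified call
```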
|
python multiprocessing lock issue
I want to add a list of dicts together with python multiprocessing module.
Here is a simplified version of my code:
```
#!/usr/bin/python2.7
# -*- coding: utf-8 -*-
import multiprocessing
import functools
import time

def merge(lock, d1, d2):
    time.sleep(5) # some time consuming stuffs
    with lock:
        for key in d2.keys():
            if d1.has_key(key):
                d1[key] += d2[key]
            else:
                d1[key] = d2[key]

l = [{ x % 10 : x } for x in range(10000)]
lock = multiprocessing.Lock()
d = multiprocessing.Manager().dict()
partial_merge = functools.partial(merge, d1 = d, lock = lock)

pool_size = multiprocessing.cpu_count()
pool = multiprocessing.Pool(processes = pool_size)
pool.map(partial_merge, l)
pool.close()
pool.join()

print d
```
1. I get this error when running this script. How shall I resolve this?
`RuntimeError: Lock objects should only be shared between processes through inheritance`
2. Is the `lock` in the `merge` function needed here, or will Python take care of it?
3. I think what `map` is supposed to do is map something from one list to another list, not dump everything from one list into a single object. So is there a more elegant way to do such things?
|
The following should run cross-platform (i.e. on Windows, too) in both Python 2 and 3. It uses a process pool initializer to set the manager dict as a global in each child process.
FYI:
- Using a lock is unnecessary with a manager dict.
- The number of processes in a `Pool` defaults to the CPU count.
- If you're not interested in the result, you can use [`apply_async`](https://docs.python.org/3/library/multiprocessing.html#multiprocessing.pool.Pool.apply_async) instead of `map`.
```
import multiprocessing
import time

def merge(d2):
    time.sleep(1) # some time consuming stuffs
    for key in d2.keys():
        if key in d1:
            d1[key] += d2[key]
        else:
            d1[key] = d2[key]

def init(d):
    global d1
    d1 = d

if __name__ == '__main__':
    d1 = multiprocessing.Manager().dict()
    pool = multiprocessing.Pool(initializer=init, initargs=(d1, ))
    l = [{ x % 5 : x } for x in range(10)]
    for item in l:
        pool.apply_async(merge, (item,))
    pool.close()
    pool.join()
    print(l)
    print(d1)
```
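On question 3: if the workers don't actually need to share state, a common alternative (my sketch, not part of the original answer) is to let `map` return partial results and merge them in the parent, for example with `collections.Counter`, which sums values key-wise. That removes the need for a Manager or a lock entirely:

```python
import multiprocessing
from collections import Counter

def to_counter(d):
    # each worker just converts its dict; no shared state involved
    return Counter(d)

if __name__ == '__main__':
    l = [{x % 5: x} for x in range(10)]
    with multiprocessing.Pool() as pool:
        partials = pool.map(to_counter, l)
    # Counter addition sums values per key, exactly what merge() did
    total = sum(partials, Counter())
    print(dict(total))
```

The merge then happens once, in the parent process, instead of being serialized through a Manager proxy on every update.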
|
How can I rename files by padding numbers anywhere within the filename?
I have a directory on a Deepin (Debian-based) system which contains video files, something like this:
```
SL Benfica - Match 1 vs FC Porto.mp4
SL Benfica - Match 2 vs FC Porto.mp4
...
SL Benfica - Match 20 vs FC Porto.mp4
...
SL Benfica - Match 100 vs FC Porto.mp4
SL Benfica - Match 101 vs FC Porto.mp4
```
I would like to add zeros before the numbers in the middle so that they are sorted as in
```
SL Benfica - Match 001 vs FC Porto.mp4
SL Benfica - Match 002 vs FC Porto.mp4
...
SL Benfica - Match 020 vs FC Porto.mp4
...
SL Benfica - Match 100 vs FC Porto.mp4
SL Benfica - Match 101 vs FC Porto.mp4
```
I'm looking for a general command that does not depend on specific patterns: it should just find numbers like 5 and 20 and change them to 005 and 020 anywhere in the filename, even at the beginning or end.
|
You can use `perl-rename` (which should be installable with `sudo apt install rename` on Debian-based systems). Using these files as input:
```
$ ls -1
'anotherFile.m4a'
'file 1 with 12 many 100 numbers 3.mp4'
'SL Benfica - Match 101 vs FC Porto.mp4'
'SL Benfica - Match 20 vs FC Porto.mp4'
'SL Benfica - Match 2 vs FC Porto.mp4'
```
You can run:
```
rename 's/(\d+)(?=.*\.)/sprintf("%03d",$1)/eg' *
```
Which will rename them to:
```
'anotherFile.m4a'
'file 001 with 012 many 100 numbers 003.mp4'
'SL Benfica - Match 002 vs FC Porto.mp4'
'SL Benfica - Match 020 vs FC Porto.mp4'
'SL Benfica - Match 101 vs FC Porto.mp4'
```
Note how the `100` was left unchanged, and the rest were padded. Also note how the 4 in `.m4a` and `mp4` was not affected.
***IMPORTANT***: run the command with `-n` first to see what will happen before actually renaming the files:
```
rename -n 's/(\d+)(?=.*\.)/sprintf("%03d",$1)/eg' *
```
The regular expression looks for stretches of one or more digits (`\d+`) that come before at least one `.` (the lookahead `(?=.*\.)`). This is to avoid changing a number found in the extension. However, that will not work if your file names have no extension. If you have files like that, use this instead to just pad all numbers:
```
rename 's/(\d+)/sprintf("%03d",$1)/eg' *
```
The `s/old/new/` is the substitution operator, which replaces `old` with `new`. Here, because `(\d+)` is in parentheses, whatever is matched is *captured* and becomes available as `$1` on the right-hand side of the substitution. We therefore replace the digits with themselves zero-padded (`sprintf("%03d", $number)` prints `$number` padded with 0s until its length is 3). Finally, the flag `e` lets us use expressions (here, `sprintf`) and `g` makes the substitution global, for all matches in the input line (file name).
Note that if you have a number that is already zero-padded to more than 3 digits, this will trim it to 3. So `file 00000001.mp3` will become `file 001.mp3`.
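The same regex logic can also be scripted in Python's `re.sub`, for anyone who prefers not to rely on `perl-rename` (my sketch, not part of the original answer; it mirrors the lookahead and the `%03d` trimming behaviour described above):

```python
import re

def pad_numbers(name, width=3):
    # pad every digit run that is followed by a later '.', mirroring
    # the perl lookahead (?=.*\.); int() reproduces %03d's trimming
    return re.sub(r'\d+(?=.*\.)',
                  lambda m: '%0*d' % (width, int(m.group())), name)

print(pad_numbers('SL Benfica - Match 2 vs FC Porto.mp4'))
# SL Benfica - Match 002 vs FC Porto.mp4
```

Combined with `os.rename` over `os.listdir`, this gives the same result as the `perl-rename` one-liner.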
|
Why is WSL extremely slow when compared with native Windows NPM/Yarn processing?
I am working with WSL a lot lately because I need some native UNIX tools (and emulators aren't good enough). I noticed that the speed difference when working with NPM/Yarn is incredible.
I conducted a simple test that confirmed my feelings. The test was running `npx create-react-app my-test-app` and the WSL result was `Done in 287.56s.` while GitBash finished with `Done in 10.46s.`.
This is not the whole picture, because the perceived time was higher in both cases, but even based on that - there is a big issue somewhere. I just don't know where. The project I'm working on uses tens of libraries and changing even one of them takes minutes instead of seconds.
Is this something that I can fix? If so - where to look for clues?
Additional info:
- my processor: Processor AMD Ryzen 7 5800H with Radeon Graphics, 3201 Mhz, 8 Core(s), 16 Logical Processors
- I'm running Windows 11 with all the latest updates to both the system and the WSL. The chosen system is Ubuntu 20.04
- I've seen some questions that are somewhat similar like ['npm install' extremely slow on Windows](https://stackoverflow.com/questions/29395211/npm-install-extremely-slow-on-windows), but they don't touch WSL at all (and my pure Windows NPM works fast).
- the issue is not limited to NPM, it's also for Yarn
- another problem that I'm getting is that file watching is not happening (I need to restart the server with every change). In some applications I don't get any errors, sometimes I get the following:
```
Watchpack Error (initial scan): Error: EACCES: permission denied, lstat '/mnt/c/DumpStack.log.tmp'
Watchpack Error (initial scan): Error: EACCES: permission denied, lstat '/mnt/c/hiberfil.sys'
Watchpack Error (initial scan): Error: EACCES: permission denied, lstat '/mnt/c/pagefile.sys'
Watchpack Error (initial scan): Error: EACCES: permission denied, lstat '/mnt/c/swapfile.sys'
```
- `npm start` in an empty (freshly initialized) `create-react-app` takes ages to render something in the browser in WSL and when executed from GitBash - I can see stuff in 2-4 seconds
- it's possible that it's purely a WSL problem, but it hurts the most when using NPM/Yarn
|
Since you mention executing the same files (with proper performance) from within Git Bash, I'm going to make an assumption here. Correct me if I'm wrong on this, and I'll delete the answer and look for another possibility.
This would be explained (and expected) if your files are stored on `/mnt/c` (a.k.a. `C:`, or `/C` under Git Bash) or any other Windows drive, as they would likely need to be in order to be accessed by Git Bash.
WSL2 uses the 9P protocol to access Windows drives, and it is currently *(See Footnote)* known to be *very* slow when compared to:
- Native NTFS (obviously)
- The ext4 filesystem on the virtual disk used by WSL2
- And even the performance of WSL1 with Windows drives
I've seen a `git clone` of a large repo (the WSL2 Linux kernel Github) take 8 minutes on WSL2 on a Windows drive, but only seconds on the root filesystem.
Two possibilities:
- If possible (and it is for most Node projects), convert your WSL to version 1 with `wsl --set-version <distroname> 1`. I always recommend making a backup with `wsl --export` first.
And since you are making a backup anyway, you may as well just create a *copy* of the instance by `wsl --import`ing your backup as `--version 1` (as the last argument). WSL1 and WSL2 both have their uses, and you may find it helpful to keep both around.
See [this answer](https://stackoverflow.com/a/66067679/11810933) for more details on the exact syntax.
- Or just move the project over to somewhere under the WSL root, such as `/home/username/src/`.
---
*Footnote*:
There may be some hope for improvement in this area based on recent developments. Patches for 9P have been released upstream which are reported to provide a significant performance boost. See [this Github thread comment](https://github.com/microsoft/WSL/discussions/9412#discussioncomment-5173052) (and the parent thread) for more information.
|
How to constrain one type parameter by another
Is there a way to constrain one type parameter to be derived from another?
```
type Foo<'T, 'U when 'U :> 'T> =
    member x.Bar() : 'T = upcast Unchecked.defaultof<'U>
```
This code produces the following errors:
> Error 1 Invalid constraint: the type used for the constraint is sealed, which means the constraint could only be satisfied by at most one solution
>
> Error 2 This type parameter has been used in a way that constrains it to always be ''T'
>
> Error 3 The static coercion from type 'T to 'T0 involves an indeterminate type based on information prior to this program point. Static coercions are not allowed on some types. Further type annotations are needed.
>
> Warning 4 This construct causes code to be less generic than indicated by the type annotations. The type variable 'U has been constrained to be type ''T'.
|
No :(. This is one of the most unfortunate limitations of F# at the moment (in my opinion). See the [Solving Subtype Constraints](http://research.microsoft.com/en-us/um/cambridge/projects/fsharp/manual/spec.html#_Toc270597644) section of the spec, which states that
> New constraints of the form type :> 'b are solved again as type = 'b.
This is really a shame since otherwise we could work around F#'s lack of generic variance:
```
let cvt<'a,'b when 'a :> 'b> (s:seq<'a>) : seq<'b> = // doesn't compile
    s |> box |> unbox
```
|
_ = Task.Run vs async void | Task.Run vs Async Sub
*In a console application. I need to load some long running code (network-stuff, REST calls) off my main thread. I want to pass it to a background thread and don't block the calling thread. I will invoke events in that method to handle it's result.*
Is there any difference beetween doing this,
```
public async Task MainThreadAsync() {
    _ = Task.Run(async () => await DoSomethingAsync());
    // Continue with other stuff and don't care about DoSomethingAsync()
}

private async Task DoSomethingAsync() {
    // Doing long running stuff
}
```
or doing this?
```
public async Task MainThreadAsync() {
    DoSomethingAsync();
    // Continue with other stuff and don't care about DoSomethingAsync()
}

private async void DoSomethingAsync() {
    // Doing long running stuff
}
```
---
VB.Net:
```
Public Async Function MainThreadAsync() As Task
    Task.Run(Async Function() As Task
                 Await DoSomethingAsync()
             End Function)
    ' Continue with other stuff and don't care about DoSomethingAsync()
End Function

Private Async Function DoSomethingAsync() As Task
    ' Doing long running stuff
End Function
```
vs
```
Public Async Function MainThreadAsync() As Task
    DoSomethingAsync()
    ' Continue with other stuff and don't care about DoSomethingAsync()
End Function

Private Async Sub DoSomethingAsync()
    ' Doing long running stuff
End Sub
```
Or is there even a better way?
Also, is there a difference beetween c# and vb.net in this regard?
|
Firstly: *do not use* `async void`. I realize that it expresses the *semantics* of what you want, but there are some framework internals that *actively explode* if they encounter it (it is a long, uninteresting story), so: don't get into that practice.
Let's pretend that we have:
```
private async Task DoSomething() {...}
```
in both cases, for that reason.
---
The main *difference* here is that *from the caller's perspective* there is no *guarantee* that `DoSomething` won't run synchronously. So in the case:
```
public async Task MainThread() {
    _ = DoSomething(); // note use of discard here, because we're not awaiting it
}
```
`DoSomething` will run on the main thread *at least* as far as the first `await` - specifically, the first **incomplete** `await`. The good news is: you can just add:
```
await Task.Yield();
```
as the first line in `DoSomething()` and it is *guaranteed* to return immediately to the caller (because `Task.Yield` is always incomplete, essentially), avoiding having to go via `Task.Run`. Internally, `Task.Yield()` does something *very similar* to `Task.Run()`, but it can skip a few unnecessary pieces.
Putting that all together - if it was me, I would have:
```
public async Task MainThread() {
    _ = DoSomething();
    // Continue with other stuff and don't care about DoSomething()
}

private async Task DoSomething() {
    await Task.Yield();
    // Doing long running stuff
}
```
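For comparison only (this is my sketch, not from the original answer): Python's asyncio has the same idiom, where `await asyncio.sleep(0)` plays the role of `Task.Yield()` — it forces the coroutine to hand control back to the caller before the long-running work starts:

```python
import asyncio

events = []

async def do_something():
    await asyncio.sleep(0)   # like Task.Yield(): give control back immediately
    events.append("background work")

async def main_thread():
    asyncio.ensure_future(do_something())  # fire-and-forget, like the discard
    events.append("caller continues")
    await asyncio.sleep(0.01)  # give the background task a chance to finish

asyncio.run(main_thread())
print(events)
```

The caller's entry lands in `events` first, showing that the call returned before the background work ran.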
|
C# Dotnet core - Issues with implementing generic DBContext in constructor
I am struggling with this coming back from a long layoff.
I asked a [question](https://stackoverflow.com/questions/55875835/configuring-dbcontext-in-the-constructor-of-my-base-repository-class) regarding the configuring of a DBContext in my generic base repository. Only after a user has logged in can I then construct a connection string so I cannot register a service in startup.cs - I have to use a constructor argument to instantiate my DBContext.
I got this [answer](https://stackoverflow.com/a/55876702/5750758) which I thought would address the problem however I am getting an error in the following factory class:
```
public class ContextFactory<T> : IContextFactory<T> where T : DbContext
{
    public T CreateDbContext(string connectionString)
    {
        var optionsBuilder = new DbContextOptionsBuilder<T>();
        optionsBuilder.UseSqlServer(connectionString);
        return new T(optionsBuilder.Options);
    }
}
```
The error is on the line `return new T(optionsBuilder.Options);` and is:
> Cannot create an instance of the variable type 'T' because it does not have the new() constraint
|
Even if you add the `new()` constraint, you will end up with the following error:
> 'T': cannot provide arguments when creating an instance of a variable type.
You were given invalid code.
> The new constraint specifies that any type argument in a generic class declaration must have a public parameterless constructor. To use the new constraint, the type cannot be abstract.
Reference [new constraint (C# Reference)](https://learn.microsoft.com/en-us/dotnet/csharp/language-reference/keywords/new-constraint)
Another option to consider could be to use [Activator.CreateInstance (Type, Object[])](https://learn.microsoft.com/en-us/dotnet/api/system.activator.createinstance?view=netstandard-2.0#System_Activator_CreateInstance_System_Type_System_Object___).
Given
```
public interface IContextFactory<TContext> where TContext : DbContext {
    TContext Create(string connectionString);
}
```
You would implement it as follows
```
public class ContextFactory<TContext> : IContextFactory<TContext>
    where TContext : DbContext {
    public TContext Create(string connectionString) {
        var optionsBuilder = new DbContextOptionsBuilder<TContext>();
        optionsBuilder.UseSqlServer(connectionString);
        return (TContext)Activator.CreateInstance(typeof(TContext), optionsBuilder.Options);
    }
}
```
This could be refactored further to separate concerns
```
public class ContextFactory<TContext> : IContextFactory<TContext>
    where TContext : DbContext {
    public TContext Create(DbContextOptions<TContext> options) {
        return (TContext)Activator.CreateInstance(typeof(TContext), options);
    }
}
```
so that building the options becomes the responsibility of the code where the factory is used.
```
var connection = @"....";
var optionsBuilder = new DbContextOptionsBuilder<BloggingContext>();
optionsBuilder.UseSqlServer(connection);

// Assuming factory is `IContextFactory<BloggingContext>`
using (var context = factory.Create(optionsBuilder.Options))
{
    // do stuff
}
```
**EDIT**
The factory can be registered as open generics in `ConfigureServices` method
```
services.AddSingleton(typeof(IContextFactory<>), typeof(ContextFactory<>));
```
|
AirPlay messes up localhost
Since the last OS X update (Yosemite), my localhost server is full of error messages from AirPlay (though I am not using it). Each time it's the same:
```
[31/Oct/2014 05:40:42] code 400, message Bad request version ('RTSP/1.0')
[31/Oct/2014 05:40:42] "GET /info?txtAirPlay&txtRAOP RTSP/1.0" 400 -
```
It's just annoying to have the server full of error messages, so if anyone has a clue how to fix that or how to remove AirPlay, I would be very thankful :)
|
I think I found the answer: on a [Cisco discovery forum](https://supportforums.cisco.com/discussion/11831886/appletv-bonjour-discovery) they listed an `nmap` output that revealed the Yosemite discoveryd port ranges. It turns out Apple is using port 5000:
```
PORT      STATE SERVICE    VERSION
3689/tcp  open  daap       Apple iTunes DAAP 11.0.1d1
5000/tcp  open  rtsp       Apple AirTunes rtspd 160.10 (Apple TV)
7000/tcp  open  http       Apple AirPlay httpd
7100/tcp  open  http       Apple AirPlay httpd
62078/tcp open  tcpwrapped
5353/udp  open  mdns       DNS-based service discovery
```
Port 5000 happens to be the default Flask port; just change your running port to anything other than 5000 and this problem should disappear. The Flask extension <https://github.com/miguelgrinberg/Flask-Runner> can make your life much easier than hard-coding the port in the run command.
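To confirm this is your problem before changing anything, you can check from Python whether something (AirPlay, in this case) is already listening on port 5000. This is a generic sketch using only the standard library:

```python
import socket

def port_in_use(port, host='127.0.0.1'):
    """Return True if something is already listening on (host, port)."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        return s.connect_ex((host, port)) == 0

if port_in_use(5000):
    print('Port 5000 is taken (AirPlay?); run your dev server on another port.')
```

If it reports the port as taken, start Flask on a different port (for example `app.run(port=5001)`).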
|
Save html locally from Devtools
I am using [Emmet Livestyle](http://livestyle.emmet.io/) with Chrome and Sublime text editor. I am editing my files locally and can update my **css** and **js** from the devtools so that those changes are reflected at the same time in my local files too. But how can I update my **html** code too?
|
We cannot persist HTML edits back. Because, well, you aren't editing HTML. Here is what is going on...
The server sends Chrome the page (HTML). This is taken, tokenized, and then a DOM (Document Object Model) is constructed out of it. The page itself is then discarded and only the DOM is kept in memory. That is what you are seeing in the Elements panel: the full DOM as it is, made to look like HTML markup for ease of use.
The HTML sent down from the server can be generated by PHP, Ruby, C, hand-coded, whatever. There is no way for DevTools to know. So there is no way for us to transfer source edits in the DOM back automatically. CSS and JS can both take advantage of sourcemaps, so DevTools knows where to send things back. It isn't 100%: for example, if you are using a variable, you aren't editing the variable itself but changing the computed value (at least in CSS). But it is good enough for the majority of uses. With HTML generated by backend languages, these kinds of kinks become even more complex and unmanageable, and would end up providing a poor experience for developers.
The best thing you can do is set up [workspaces](https://developers.google.com/web/tools/setup/setup-workflow?hl=en) and move to editing your source in DevTools. However, this has the pitfall that custom extensions are not supported at this time, so you get a very generic writing experience going this route.
|
Why {}.toString can check data type?
Can anyone explain a bit about why this code can check data types? It does not make sense to me; I cannot understand what the code does behind the scenes. Thanks in advance!
```
var toClass = {}.toString // {} is for what? toString is a method?
alert(toClass); // alerting it, I get a function => function toString() {[native code]}
alert( toClass.call( [1,2] ) )
alert( toClass.call( new Date ) )
```
|
```
var toClass = {}.toString;
```
is equivalent to
```
var toClass = Object.prototype.toString;
```
because
```
{}
```
*as an expression* (`Object` initialiser) is equivalent to
```
new Object()
```
where `Object` is “the standard built-in constructor with that name” ([ECMAScript Language Specification, 5.1 Edition, section 11.1.5](http://www.ecma-international.org/ecma-262/5.1/#sec-11.1.5); and earlier Editions).
So `{}` stands in for a reference to a new `Object` instance. `Object` instances by default inherit properties from the object initially referred to by `Object.prototype` through the prototype chain ([section 15.2.5](http://www.ecma-international.org/ecma-262/5.1/#sec-15.2.5)), including the built-in property `Object.prototype.toString`. The property value is initially a reference to a `Function` instance, i.e. the property is a method that you can call ([section 15.2.4.2](http://www.ecma-international.org/ecma-262/5.1/#sec-15.2.4.2)).
`alert` is actually `window.alert` (and should be written so). `window` is a property of the ECMAScript global object; the object referred to by that property is a host object in the scope chain which has the `alert` method. Neither of those is specified in ECMAScript, but they are [provided by some host environments](https://developer.mozilla.org/en-US/docs/DOM/window.alert) (usually by browsers) as allowed by the Specification ([section 15.1](http://www.ecma-international.org/ecma-262/5.1/#sec-15.1)).
Because it is designed to display alert messages, the `alert` host method displays the *string representation* of its first argument. For objects, that includes calling their `toString` or `valueOf` method, whichever is supported first ([section 9.8](http://www.ecma-international.org/ecma-262/5.1/#sec-9.8)). `Function` instances, such as that referred to by `{}.toString`, inherit themselves a `toString` method that returns the implementation-dependent representation of the function ([section 15.3.4.2](http://www.ecma-international.org/ecma-262/5.1/#sec-15.3.4.2)).
There are no classes, though, and the ECMAScript concept of `[[Class]]` is somewhat different from that of "data type" (see `typeof` and `instanceof`). ECMAScript languages use *prototype-based* inheritance ([section 4.2.1](http://www.ecma-international.org/ecma-262/5.1/#sec-4.2.1)).
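As a loose analogy only (my sketch, not part of the answer): Python's closest counterpart to calling `Object.prototype.toString.call(x)` for type inspection is looking at `type(x)`:

```python
import datetime

def to_class(x):
    # roughly what {}.toString.call(x) reports in JS: the object's class name
    return '[object %s]' % type(x).__name__

print(to_class([1, 2]))                     # [object list]
print(to_class(datetime.date(2000, 1, 1)))  # [object date]
```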
|
Setting environment variables in OS X for GUI applications
How does one set up environment variables in Mac OS X such that they are available for GUI applications without using [~/.MacOSX/environment.plist](http://developer.apple.com/library/mac/#documentation/MacOSX/Conceptual/BPRuntimeConfig/Articles/EnvironmentVars.html) or *Login Hooks* (since these are [deprecated](http://developer.apple.com/library/mac/#documentation/MacOSX/Conceptual/BPSystemStartup/Chapters/CustomLogin.html))?
|
The solution uses the functionality of `launchctl`, combined with a *Launch Agent* to mimic the login hooks of old. For other solutions using the store of `launchd`, see [this comparison](https://apple.stackexchange.com/a/64917/30415).
The launch agent used here is located in */Library/LaunchAgents/*:
```
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <key>Label</key>
    <string>user.conf.launchd</string>
    <key>Program</key>
    <string>/Users/Shared/conflaunchd.sh</string>
    <key>ProgramArguments</key>
    <array>
        <string>~/.conf.launchd</string>
    </array>
    <key>EnableGlobbing</key>
    <true/>
    <key>RunAtLoad</key>
    <true/>
    <key>LimitLoadToSessionType</key>
    <array>
        <string>Aqua</string>
        <string>StandardIO</string>
    </array>
</dict>
</plist>
```
One important thing is the *RunAtLoad* key so that the launch agent is executed at the earliest time possible.
The real work is done in the shell script */Users/Shared/conflaunchd.sh*, which reads *~/.conf.launchd* and feeds it to `launchctl`:
```
#! /bin/bash

#filename="$1"
filename="$HOME/.conf.launchd"

if [ ! -r "$filename" ]; then
    exit
fi

eval $(/usr/libexec/path_helper -s)

while read line; do
    # skip lines that only contain whitespace or a comment
    if [ ! -n "$line" -o `expr "$line" : '#'` -gt 0 ]; then continue; fi
    eval launchctl $line
done <"$filename"

exit 0
```
Notice the call to `path_helper` to get **PATH** set up correctly. Finally, *~/.conf.launchd* looks like this:
```
setenv PATH ~/Applications:"${PATH}"
setenv TEXINPUTS .:~/Documents/texmf//:
setenv BIBINPUTS .:~/Documents/texmf/bibtex//:
setenv BSTINPUTS .:~/Documents/texmf/bibtex//:
# Locale
setenv LANG en_US.UTF-8
```
These are `launchctl` commands, see its manpage for further information. Works fine for me (I should mention that I'm still a Snow Leopard guy), GUI applications such as *texstudio* and *TeXShop* can see my own tex tree. Things that can be improved:
1. The shell script has a `#filename="$1"` in it. This is not accidental: the file name should be fed to the script by the launch agent as an argument, but that doesn't work.
2. As mentioned [here](https://www.heise.de/artikel-archiv/ct/2012/19/154_Mac-OS-X-Automatische-Firewall-Blockade) (German and behind a paywall!), it is possible to put the script in the launch agent itself.
3. I am not sure how secure this solution is, as it uses `eval` with user provided strings.
4. I think to remember that the definition of MANPATH using this method didn't work well, but I'm not sure.
It should be mentioned that Apple intended a somewhat similar approach by putting stuff in *~/launchd.conf*, but it is currently unsupported as of this date and OS (see the manpage of `launchd.conf`). I guess that things like globbing would not work as they do in this proposal. And of course one can put these files anywhere else, except the launch agent, which must reside in */Library/LaunchAgents/* or *~/Library/LaunchAgents/*.
Finally, I should mention the sources I used as information on Launch Agents:
[1](http://developer.apple.com/library/mac/#documentation/MacOSX/Conceptual/BPSystemStartup/Chapters/CreatingLaunchdJobs.html), [2](http://developer.apple.com/library/mac/#technotes/tn2083/_index.html), [3](http://www.mactech.com/articles/mactech/Vol.25/25.10/2510MacEnterprise-SnowLeopard-launchdandLunch/index.html), [4](http://web.archive.org/web/20110707092748/http://www.afp548.com/article.php?story=20050620071558293).
*Update*: This does not work in version 10.8 at the moment. Workarounds on a per application basis are described [here](https://apple.stackexchange.com/a/65880/30415) and [here](https://apple.stackexchange.com/a/51737/30415).
|
Inserting arbitrary HTML into a DocumentFragment
I know that [adding `innerHTML` to document fragments](http://lists.w3.org/Archives/Public/public-webapps/2011OctDec/0663) has been recently discussed, and will hopefully see inclusion in the DOM Standard. But, what is the workaround you're supposed to use in the meantime?
That is, take
```
var html = '<div>x</div><span>y</span>';
var frag = document.createDocumentFragment();
```
I want both the `div` and the `span` inside of `frag`, with an easy one-liner.
Bonus points for no loops. jQuery is allowed, but I've already tried `$(html).appendTo(frag)`; `frag` is still empty afterward.
|
Here is a way in [modern browsers](http://caniuse.com/template) without looping:
```
var temp = document.createElement('template');
temp.innerHTML = '<div>x</div><span>y</span>';
var frag = temp.content;
```
or, as a re-usable function:
```
function fragmentFromString(strHTML) {
    var temp = document.createElement('template');
    temp.innerHTML = strHTML;
    return temp.content;
}
```
UPDATE:
I found a simpler way to use Pete's main idea, which adds IE11 to the mix:
```
function fragmentFromString(strHTML) {
    return document.createRange().createContextualFragment(strHTML);
}
```
Browser coverage is better than with the `<template>` method, and it tested OK in IE11, Chrome, and Firefox.
A live test/demo is available at <http://pagedemos.com/str2fragment/>
|
DownloadManager.ACTION_DOWNLOAD_COMPLETE broadcast receiver receiving same download id more than once with different download statuses in Android
I am using Android DownloadManger System Service for downloading some files in following way
```
dwnId = mgr.enqueue(new DownloadManager.Request(serveruri)
        .setAllowedNetworkTypes(DownloadManager.Request.NETWORK_WIFI |
                                DownloadManager.Request.NETWORK_MOBILE)
        .setAllowedOverRoaming(false)
        .setTitle(getAlbumName())
        .setDescription(getTrackName())
        .setDestinationUri(deviceUri)
        .setShowRunningNotification(true));
```
where `mgr` is Download Manager instance, `dwnId` is unique ID returned. I am also registering for `ACTION_DOWNLOAD_COMPLETE`
```
registerReceiver(onDownloadComplete, new IntentFilter(DownloadManager.ACTION_DOWNLOAD_COMPLETE));
```
and in the `onDownloadComplete` BroadcastReceiver's `onReceive()` method I am getting the download id like this:
```
Long dwnId = intent.getLongExtra(DownloadManager.EXTRA_DOWNLOAD_ID, 0);
```
After that I am querying Download Manager for Download status
```
Cursor c = downloadManager.query(new DownloadManager.Query().setFilterById(dwnId));
c.getInt(c.getColumnIndex(DownloadManager.COLUMN_STATUS));
```
for the `DownloadManager.STATUS_*` constants.
The problem is that I am receiving the same dwnId twice (i.e. the onReceive method is called twice): once with DownloadManager.STATUS_SUCCESSFUL and once with DownloadManager.STATUS_FAILED, for the same dwnId. I issue requests to download some 10 files at a time, but the download manager notification on the device shows a count of 12 or 13. I think the download manager has some problem downloading certain files and resumes or automatically restarts them, which would explain the difference between the number of files I requested and the actual number in the download queue, and why I get the complete action for the same download id twice. If this is true, how do I prevent it? If I am wrong, what might be the reason for the count difference, and why is the broadcast receiver receiving the same download id twice? Can anybody please let me know?
Thanks In Advance...
|
This is a reported bug see: <http://code.google.com/p/android/issues/detail?id=18462>
The workaround I found is to verify whether the download was a success; if not, ditch the intent, or re-queue the file if it was never downloaded...
Lost a couple of hours figuring that one :(
\*\* Edit: adding code example \*\*
```
/**
 * Check if download was valid, see issue
 * http://code.google.com/p/android/issues/detail?id=18462
 * @param downloadId the id returned by DownloadManager.enqueue()
 * @return true if the download completed successfully
 */
private boolean validDownload(long downloadId) {
    Log.d(TAG, "Checking download status for id: " + downloadId);
    // Verify if the download is a success
    Cursor c = dMgr.query(new DownloadManager.Query().setFilterById(downloadId));
    if (c.moveToFirst()) {
        int status = c.getInt(c.getColumnIndex(DownloadManager.COLUMN_STATUS));
        if (status == DownloadManager.STATUS_SUCCESSFUL) {
            return true; // Download is valid, celebrate
        } else {
            int reason = c.getInt(c.getColumnIndex(DownloadManager.COLUMN_REASON));
            Log.d(TAG, "Download not correct, status [" + status + "] reason [" + reason + "]");
            return false;
        }
    }
    return false;
}
```
For complete code see : <https://github.com/flegare/JAV387_LaboWidget/blob/master/src/com/mobidroid/widgetfact/service/FactService.java>
|
date command --iso-8601 option
This [answer](https://unix.stackexchange.com/a/164773/89609) and comments mention `--rfc-3339` and a "hidden" `--iso-8601` option that I have used for a long time and now seems to be undocumented.
When did that option documentation get removed from the `--help` text?
Will the option go away anytime soon?
|
The option was introduced in the coreutils `date` (which is probably what you have) in 1999 (Apr. 8).
The documentation was removed in [2005](http://git.sv.gnu.org/gitweb/?p=coreutils.git;a=commitdiff;h=2c29ba03) without much explanation in the commit.
In [2011](http://git.sv.gnu.org/gitweb/?p=coreutils.git;a=commitdiff;h=2f1384b7), the help for --iso-8601 was reintroduced with the following explanation:
```
We deprecated and undocumented the --iso-8601 (-I) option mostly
because date could not parse that particular format. Now that
it can, it's time to restore the documentation.
* src/date.c (usage): Document it.
* doc/coreutils.texi (Options for date): Reinstate documentation.
Reported by Hubert Depesz Lubaczewski in http://bugs.gnu.org/7444.
```
It looks like the help was taken out in version 5.90 and put back in version 8.15 (it is not in my 8.13), and the comment above suggests that it is now back to stay and not likely to disappear any time soon.
In version 8.31 (as provided by Solus, July 2020) the man page descriptions for the two options are:
```
-I[FMT], --iso-8601[=FMT]
       output date/time in ISO 8601 format. FMT='date' for date only (the default), 'hours', 'minutes',
       'seconds', or 'ns' for date and time to the indicated precision. Example: 2006-08-14T02:34:56-06:00

--rfc-3339=FMT
       output date/time in RFC 3339 format. FMT='date', 'seconds', or 'ns' for date and time to the
       indicated precision. Example: 2006-08-14 02:34:56-06:00
```
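At seconds precision the only visible difference between the two formats is the `T` separator. For example, with GNU date and a pinned UTC timestamp (so the output is reproducible):

```shell
# Same instant rendered by both options; only the date/time separator differs
date -u -d '2006-08-14 02:34:56' --iso-8601=seconds   # 2006-08-14T02:34:56+00:00
date -u -d '2006-08-14 02:34:56' --rfc-3339=seconds   # 2006-08-14 02:34:56+00:00
```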
|
Is short tag good practice in PHP?
If I enable the `short_open_tag` setting, then I can use the form below
```
<?=$variableName ?>
```
Instead of
```
<?php echo $variableName ?>
```
I have below questions:
1. Is it good practice?
2. Is there any server dependency?
3. Do all open-source projects and frameworks support this?
|
Short tags `<? doSomething(); ?>` are considered to be a bad practice because they are not XML compliant... whether you care about that or not is another issue.
Short echoes `<?= $myString ?>` are not a bad practice; they're just not the best. PHP is a templating engine, however ***much*** better engines are available ([Twig](http://twig.sensiolabs.org/), [Mustache](https://mustache.github.io/), [Smarty](http://www.smarty.net/), etc). Most frameworks include their own templating engine, so short tags don't need to be used.
Up to and including PHP 5.3, these types of tags were considered to be the same thing. Since PHP 5.4, however, they've been separated out and the short echo is allowed without `short_open_tag` being turned on. Since PHP 5.3 is no longer supported, the only concern is if you're being forced to use an unsupported version, which obviously has its own implications. :)
|
NoClassDefFoundError of com/fasterxml/jackson/databind/ObjectMapper with Maven
This is a similar question as the one [here](https://stackoverflow.com/q/26068128/3697757), which is unfortunately unresolved yet.
>
> If you want to debug the code, here is the [GitHub repo](https://github.com/hackjutsu/StackOverflow-Maven-Jackson/tree/master).
>
>
>
I got the following `NoClassDefFoundError` for `ObjectMapper`, though I have added the related dependencies to the Maven `pom.xml`.
```
Exception in thread "main" java.lang.NoClassDefFoundError: com/fasterxml/jackson/databind/ObjectMapper
at demo.DemoMain.main(DemoMain.java:10)
Caused by: java.lang.ClassNotFoundException: com.fasterxml.jackson.databind.ObjectMapper
at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:331)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
... 1 more
```
Here is the source code `DemoMain.java`
```
package demo;
import com.fasterxml.jackson.databind.ObjectMapper;
public class DemoMain {
public static void main(String[] args) {
System.out.println("Start");
ObjectMapper mapper = new ObjectMapper();
System.out.println("End");
}
}
```
This is my `pom.xml`
```
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
<modelVersion>4.0.0</modelVersion>
<groupId>com.example</groupId>
<artifactId>Demo</artifactId>
<version>1.0-SNAPSHOT</version>
<dependencies>
<dependency>
<groupId>com.fasterxml.jackson.core</groupId>
<artifactId>jackson-annotations</artifactId>
<version>2.8.3</version>
</dependency>
<dependency>
<groupId>com.fasterxml.jackson.core</groupId>
<artifactId>jackson-core</artifactId>
<version>2.8.3</version>
</dependency>
<dependency>
<groupId>com.fasterxml.jackson.core</groupId>
<artifactId>jackson-databind</artifactId>
<version>2.8.3</version>
</dependency>
</dependencies>
<build>
<plugins>
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-jar-plugin</artifactId>
<version>3.0.0</version>
<configuration>
<archive>
<index>true</index>
<manifest>
<addClasspath>true</addClasspath>
<mainClass>demo.DemoMain</mainClass>
</manifest>
</archive>
</configuration>
</plugin>
</plugins>
</build>
```
I compile and run the app by
```
mvn clean install
java -jar target/Demo-1.0-SNAPSHOT.jar
```
|
As I answered [here](https://stackoverflow.com/a/43827141/3697757)
---
The default Maven jar plugin doesn't build a fat jar with the dependencies included.
To build a jar bundled with its dependencies so that we can execute it with `java -jar`, we can use [maven-assembly-plugin](http://maven.apache.org/plugins/maven-assembly-plugin/usage.html), which packages the jar with the name `xxx-jar-with-dependencies.jar`.
Here is a sample `pom.xml`
```
<build>
<plugins>
<plugin>
<artifactId>maven-assembly-plugin</artifactId>
<version>3.0.0</version>
<configuration>
<descriptorRefs>
<descriptorRef>jar-with-dependencies</descriptorRef>
</descriptorRefs>
<archive>
<manifest>
<mainClass>com.example.yourMain</mainClass>
</manifest>
</archive>
</configuration>
<executions>
<execution>
<id>make-assembly</id> <!-- this is used for inheritance merges -->
<phase>package</phase> <!-- bind to the packaging phase -->
<goals>
<goal>single</goal>
</goals>
</execution>
</executions>
</plugin>
</plugins>
</build>
```
Now you should be able to run your jar with
```
java -jar xxx-jar-with-dependencies.jar
```
|
Is it possible to have a optparse-applicative option with several parameters?
I just found out that my carefully crafted parser fails to parse any string I throw at it:
```
roi :: Parser (Maybe ROI)
roi = optional $ option (ROI <$> auto <*> auto <*> auto <*> auto)
$ long "roi" <> metavar "ROI" <> help "Only process selected region of interest"
```
where `ROI = ROI Int Int Int Int`
If that is important, it is nested in a higher parser
```
options :: Parser Opts
options = Opts <$> input <*> output <*> roi <*> startT <*> endT
```
where `Opts` is an appropriate ADT.
Now I assumed that the `roi` parser will parse expressions such as `--roi 1 2 3 4` but it fails with `Invalid argument '128'` and giving me usage message.
`--roi 1` instead parses but returns `Just (ROI 1 1 1 1)`
Is there a way to make this work?
|
I don't think options are supposed to consume multiple arguments. At least I'm not sure how you'd go about implementing that. I'd suggest simply moving away from that idea and putting your ROI options into a single argument, using syntax like `--roi 1,2,3,4`.
You'd simply have to implement a custom reader for that, here's an example of how you could do that:
```
module Main where
import Options.Applicative
data ROI = ROI Int Int Int Int
deriving Show
-- a hand-rolled version of Data.List.Split.splitOn; prefer the library one
splitOn :: Eq a => a -> [a] -> [[a]]
splitOn sep (x:xs) | sep==x = [] : splitOn sep xs
| otherwise = let (xs':xss) = splitOn sep xs in (x:xs'):xss
splitOn _ [] = [[]]
roiReader :: ReadM ROI
roiReader = do
o <- str
-- no error checking, don't actually do this
let [a,b,c,d] = map read $ splitOn ',' o
return $ ROI a b c d
roiParser :: Parser ROI
roiParser = option roiReader (long "roi")
main :: IO ()
main = execParser opts >>= print where
opts = info (helper <*> roiParser) fullDesc
```
|
Null Hypothesis when sample is already true
As a student I always learned a "rule" for working through my statistics problems: if the sample value already satisfies my null hypothesis, then I'm doing things wrong and the hypotheses should be the other way around. It worked most of the time in my exams.
Is there any reason or theory to backup this idea?
### Example:
X is the mean of the sample.
Sample returns X=24
The following would be "wrong" according to my rule, because the null hypothesis is already true in the sample:
Ho: X ≥ 22
Ha: X < 22
Then I would quickly turn things around to:
Ho: X ≤ 22
Ha: X > 22
Any idea why this works and what's the theory behind it?
I used it in my exams and always worked.
|
**There's no supporting theory, because this doesn't work except by chance alone.**
Your choice of null and alternative hypotheses depends only on what you're trying to prove, not on the sample value. Suppose you have some new drug that's supposed to lower cholesterol, and you want to test if it works. The null hypothesis is that the drug increases cholesterol or has no effect, while the alternative is that the drug actually does lower cholesterol. To claim that the drug does lower cholesterol, you need a significant amount of data to support that, typically enough to have an alpha of 0.05 - only 1 time in 20 will you say the drug has the intended effect when it actually does not.
If you flip your hypotheses arbitrarily, you're implicitly assuming that the drug works as intended, and requiring significant evidence to show that it does not. But that's not how proof works - a new claim requires positive evidence, you don't just assume it's true and then look for evidence it's false (which you could always fail to find just by collecting little evidence). Failing to show that the drug doesn't work as intended might just be a failure of your sample size, but is not alone evidence that it *does* work as expected. If you collect only 3 patients' worth of data, you won't have enough evidence to reject the possibility that the drug works as intended, but that's not meaningful at all. You should view that data as showing that you don't have evidence to conclude that the drug works, not that you've failed to rule out the drug's efficacy.
As you can see, none of this line of reasoning requires you to know anything at all about the cholesterol values actually observed in the population. *How you test* whether the drug works should not depend on whether the drug *actually does work* or not. The only reason this worked in your exams is because you got lucky defining the null and alternative hypotheses as your instructor intended (you had a 50/50 chance each time). The sample value has absolutely no role in defining the directionality of the hypothesis test.
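To make the point concrete, here is a small sketch (with made-up numbers) of a one-sided, one-sample t-test for the cholesterol example. Note that the direction of the alternative is fixed by the claim before any data is seen:

```python
import math
import statistics

def t_statistic(sample, mu0):
    """One-sample t statistic for testing H0: mu = mu0."""
    n = len(sample)
    mean = statistics.mean(sample)
    sd = statistics.stdev(sample)
    return (mean - mu0) / (sd / math.sqrt(n))

# Changes in cholesterol after treatment (hypothetical data).
# The claim "the drug lowers cholesterol" fixes Ha: mean change < 0,
# so H0 is "mean change >= 0" -- chosen before looking at the sample.
changes = [-12, -5, 3, -8, -15, -2, -9, 1]
t = t_statistic(changes, 0)
# Evidence for Ha only if t is sufficiently negative (compare against
# the critical value for n - 1 = 7 degrees of freedom).
print(round(t, 2))
```

Whether `t` lands in the rejection region or not, the hypotheses themselves never flip.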
|
Does BLE device generates new LTK, CSRK, and IRK every time it bonds with new device?
I have a conceptual question, for BLE experts, regarding the keys generated and exchanged when bonding occurs between two BLE devices. I might be wrong or my question might be naive, so please bear with me.
Consider the following example, let's call it *Case-1*.
Let's say we have a peripheral device (P1) and a central device (C1).
P1 sends advertisements to connect to a nearby device. C1 initiates the connection and both devices start the pairing procedure, in which they exchange their I/O capabilities, pairing method, and some keys. Eventually, once the bonding is complete, both devices have exchanged `LTK`, `IRK`, and `CSRK` (for encrypting the connection, resolving random addresses, and resolving signatures, respectively), along with `EDIV` and `RAND`. Now both P1 and C1 can communicate using these keys for their respective purposes.
I have the following question:
Q1. The connection is terminated between P1 and C1. Later, when both P1 and C1 connect again, will the two devices use the same `LTK`, `IRK`, and `CSRK` keys that they used in *Case-1*?
Q2. Let's say a new central (C2) comes into the picture. P1 is no longer connected to C1. P1 now wants to connect (with bonding) with C2. Will the P1 use the same `LTK`, `EDIV`, `RAND`, `IRK`, and `CSRK` that it had used(generated) earlier to connect with C1 in *Case-1*?
Q3. Do the BLE devices use different keys (`LTK`, `EDIV`, `RAND`, `IRK`, and `CSRK`) with every *new* device they connect with?
Q4. If I take the keys (`LTK`, `EDIV`, `RAND`, `IRK`, and `CSRK`) stored in the C1 and store them in C2, can P1 connect to C2 using the same keys? Is it possible to make this work or it is incorrect logically and from the security point of view?
It would be a lifesaver if someone could clarify these points. Thanks
*PS: I am consulting core-spec v5.3 and some online resources for my reading.*
|
The IRK is always constant (the same IRK is shared in every pairing attempt) and is used to create random resolvable addresses. Devices that receive an IRK from a remote device can then figure out if a random resolvable address belongs to a particular IRK.
All other keys are unique for every bond.
The CSRK must in particular be unique for every bond since there is a counter associated with it, incremented on every packet write to avoid replay attacks. This would not work security wise if two bonds have the same CSRK, since an attacker could then replay packets from bond 1 when impersonating bond 2 (assuming the packet counter is less for bond 2).
The LTK, EDIV and RAND are used to derive a session key for encryption the next time a connection between the same pair of devices connect.
Thus the answer to Q1 is that the keys are reused for every new connection attempt between the same pair of devices, as long as the bond exists.
If you in particular copy the IRK from one central to another, then yes a connection between P1 and C2 can be made and all the other keys can be used during this connection, if C2 uses a random resolvable address. This is because P1 believes it talks to C1 due to the random resolvable address. After all, on the Link Layer it's impossible to tell C2 and C1 apart if random resolvable addresses are used with the same IRK.
If random resolvable addresses are not used, then it's usually not possible to copy a set of keys between devices. This is because the keys are associated with the Bluetooth device address (public or static random). If a central using a different address connects to a peripheral, the peripheral will look up bonding keys in its database to find out there are no LTK, EDIV, RAND, CSRK stored for this address. In some implementations though for encryption setup, the peripheral will look up the LTK based on EDIV and RAND instead of the address. In this case copying of keys will work, at least to establish encryption. The CSRK is always looked up from the address though. Note that for LE Secure Connections, EDIV and RAND are always 0 so in this case the LTK is always looked up from the address.
Note that the use of CSRK is not widespread at all. I have not seen any implementation use it in a real product. Since only the "Write Without Response" command can be used with CSRK and the fact that a counter must be stored persistently, normal authenticated encryption that covers every kind of packet is usually a better idea, since the few encryption setup packets can usually be tolerated.
|
PHP Simple XML remove all Children
I want to remove all children from my XML file before I fill it up again (or create an update, but that seemed a lot harder). So what I did is
```
$file = "data.xml";
$xml=simplexml_load_file($file);
$teller=0;
foreach( $entries as $entry ) {
foreach ($xml->xpath('//concerts') as $desc) {
if($teller == 0)
{
$lol=$desc->children();
unset($lol);
}
$concert = $desc->addChild( 'concert' );
$concert->addChild( 'artist', array_shift( $entry ) );
$concert->addChild( 'location', array_shift( $entry ) );
$concert->addChild( 'date', array_shift( $entry ) );
$teller++;
}
}
file_put_contents($file, $xml->asXML());
```
But this doesn't remove anything. Any ideas on what I did wrong?
|
Here is one possible solution ([online demo](http://eval.in/30377)):
```
$xml = <<< XML
<?xml version='1.0' encoding='utf-8'?>
<concerts>
<concert>
<artist></artist>
<date></date>
</concert>
</concerts>
XML;
$concerts = simplexml_load_string($xml);
foreach ($concerts->xpath('/*/concert/*') as $child)
{
unset($child[0]);
}
echo $concerts->asXML();
```
Marking this CW because how to delete elements is given in my supplied closevote and this answer only expands on this. And this now has been edited showing [the *self-reference* method to delete a SimpleXML element node](https://stackoverflow.com/a/16062633/367456) as outlined in an answer of the question [*"Remove a child with a specific attribute, in SimpleXML for PHP"*](https://stackoverflow.com/q/262351/367456) which is also a possible duplicate.
|
Scheme early "short circuit return"?
I'm trying to find out how I can do an "early return" in a scheme procedure *without* using a top-level `if` or `cond` like construct.
```
(define (win b)
  (let* ((test (first (first b)))
         (result (every (lambda (i) (= (list-ref (list-ref b i) i) test))
                        (enumerate (length b)))))
    (when (and (not (= test 0)) result) test))
  0)
```
For example, in the code above, I want `win` to return `test` if the `when` condition is met, otherwise return 0. However, what happens is that the procedure will *always* return 0, regardless of the result of the `when` condition.
The reason I am structuring my code this way is because in this procedure I need to do numerous complex checks (multiple blocks similar to the `let*` in the example) and putting everything in a big `cond` would be very unwieldy.
|
Here is how to use call/cc to build `return` yourself.
```
(define (example x)
(call/cc (lambda (return)
(when (< x 0) (return #f))
; more code, including possible more calls to return
0)))
```
Some Schemes define a macro called let/cc that lets you drop some of the noise of the lambda:
```
(define (example x)
(let/cc return
(when (< x 0) (return #f))
0))
```
Of course if your Scheme doesn't, let/cc is trivial to write.
---
This works because call/cc saves the point at which it was called as a continuation. It passes that continuation to its function argument. When the function calls that continuation, Scheme abandons whatever call stack it had built up so far and continues from the end of the call/cc call. Of course if the function never calls the continuation, then it just returns normally.
Continuations don't get truly mind-bending until you start returning them from that function, or maybe storing them in a global data structure and calling them later. Otherwise, they're just like any other language's structured-goto statements (while/for/break/return/continue/exceptions/conditions).
---
I don't know what your complete code looks like, but it might be better to go with the cond and to factor out the complex checks into separate functions. Needing `return` and `let*` is usually a symptom of overly imperative code. However, the call/cc method should get your code working for now.
|
Spark csv to dataframe skip first row
I am loading a CSV into a DataFrame using:
```
sqlContext.read.format("com.databricks.spark.csv").option("header", "true").
option("delimiter", ",").load("file.csv")
```
but my input file contains date in the first row and header from second row.
example
```
20160612
id,name,age
1,abc,12
2,bcd,33
```
How can I skip this first row while converting the CSV to a DataFrame?
|
Here are several options that I can think of, since the Databricks CSV module doesn't seem to provide a skip-line option:
**Option one**: Add a "#" character in front of the first line, and the line will automatically be considered a comment and ignored by the Databricks CSV module;
**Option two**: Create your customized schema and specify the `mode` option as `DROPMALFORMED`, which will drop the first line since it contains fewer tokens than expected in the customSchema:
```
import org.apache.spark.sql.types.{StructType, StructField, StringType, IntegerType};
val customSchema = StructType(Array(StructField("id", IntegerType, true),
StructField("name", StringType, true),
StructField("age", IntegerType, true)))
val df = sqlContext.read.format("com.databricks.spark.csv").
option("header", "true").
option("mode", "DROPMALFORMED").
schema(customSchema).load("test.txt")
df.show
```
>
> 16/06/12 21:24:05 WARN CsvRelation$: Number format exception. Dropping
> malformed line: id,name,age
>
>
>
```
+---+----+---+
| id|name|age|
+---+----+---+
| 1| abc| 12|
| 2| bcd| 33|
+---+----+---+
```
Note the warning message above, which says it dropped the malformed line.
**Option three**: Write your own parser to drop any line that doesn't split into exactly three fields:
```
val file = sc.textFile("pathToYourCsvFile")
val df = file.map(line => line.split(",")).
filter(lines => lines.length == 3 && lines(0)!= "id").
map(row => (row(0), row(1), row(2))).
toDF("id", "name", "age")
df.show
+---+----+---+
| id|name|age|
+---+----+---+
| 1| abc| 12|
| 2| bcd| 33|
+---+----+---+
```
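The filtering logic in option three is independent of Spark. As a quick sanity check, the same "keep rows with exactly three fields and skip the header" rule can be sketched in plain Python:

```python
# Lines as they would appear in the CSV file from the question
raw = ["20160612", "id,name,age", "1,abc,12", "2,bcd,33"]

rows = [line.split(",") for line in raw]
# Keep only well-formed data rows: exactly three fields and not the header
data = [r for r in rows if len(r) == 3 and r[0] != "id"]
print(data)  # [['1', 'abc', '12'], ['2', 'bcd', '33']]
```

The date line is dropped because it has a single field, and the header is dropped by the `r[0] != "id"` check, exactly as in the Scala version.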
|
How to obtain screen size from xaml?
I'm using WPF with C# to design a GUI, and I want to get the screen size (the values of Width and Height) from XAML code.
I know how to get them from C# code:
```
Width = System.Windows.Forms.Screen.PrimaryScreen.Bounds.Width;
Height = System.Windows.Forms.Screen.PrimaryScreen.Bounds.Height;
```
But I don't know how to get them from `XAML` code.
|
This will work. You can read more [here](http://msdn.microsoft.com/en-us/library/system.windows.systemparameters.aspx) about SystemParameters
```
<Window x:Class="WpfApplication1.MainWindow"
xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
Title="MainWindow" Height="350" Width="525">
<StackPanel>
<TextBlock Text="{Binding Source={x:Static SystemParameters.FullPrimaryScreenHeight}}" />
<TextBlock Text="{Binding Source={x:Static SystemParameters.FullPrimaryScreenWidth}}" />
<TextBlock Text="{Binding Source={x:Static SystemParameters.PrimaryScreenHeight}}" />
<TextBlock Text="{Binding Source={x:Static SystemParameters.PrimaryScreenWidth}}" />
</StackPanel>
</Window>
```
|
Cloning a Lua table in Lua C API
There are heaps of examples of how to clone a Lua table in Lua, however I wasn't able to find any example of how to do it with the native Lua C API. I tried to do it by hand twice, but ended up with a real (although working) mess.
Does anyone have any tips or links on how to elegantly do a shallow copy of a Lua table in the C API?
|
What you need to do is define the Lua function, and then break it down into the associated API calls.
```
shallow_copy = function(tab)
local retval = {}
for k, v in pairs(tab) do
retval[k] = v
end
return retval
end
```
So we're going to need to take the index of a table on the stack and the lua\_State.
```
void shallow_copy(lua_State* L, int index) {
/*Create a new table on the stack.*/
lua_newtable(L);
/*Now we need to iterate through the table.
Going to steal the Lua API's example of this.*/
lua_pushnil(L);
while(lua_next(L, index) != 0) {
/*Need to duplicate the key, as we need to set it
(one pop) and keep it for lua_next (the next pop). Stack looks like table, k, v.*/
lua_pushvalue(L, -2);
/*Now the stack looks like table, k, v, k.
But now the key is on top. Settable expects the value to be on top. So we
need to do a swaparooney.*/
lua_insert(L, -2);
/*Now we just set them. Stack looks like table,k,k,v, so the table is at -4*/
lua_settable(L, -4);
/*Now the key and value were set in the table, and we popped off, so we have
table, k on the stack- which is just what lua_next wants, as it wants to find
the next key on top. So we're good.*/
}
}
```
Now our copied table sits on the top of the stack.
Christ, the Lua API sucks.
|
Matlab calculate reflection of Vector
I have to calculate the specular highlights (Phong) of an image. The normal vector and the light vector are given. Now I have to calculate the light reflection: is there an efficient Matlab function to flip the light vector over the normal vector to get the reflected light vector?
Ispec = ks \* I \* (r · v)^p
Where:
`l` is the light vector
`n` is the normal vector of surface
`r` is the reflection vector
`v` is the vector from reflection point to viewer
`p` is the shininess
|
I would solve this mathematically:
Let `N` be the normal vector.
Let `V` be the light vector.
Let `O` be the reflected vector.
1. `O` is in the same plane as `N`,`V`
2. The cosine of the angle between `V` and `N` is the same as the cosine of the angle between `V` and `O` (With a minus sign).
3. `O` has the same length as `V`
This yields 3 equations:
1. dot(O, cross(N,V)) = 0
2. dot(N,V)/ norm(N) / norm(V) = - dot(N,O) / norm(N) / norm(O)
3. norm(O) = norm(V)
After manipulating these equations, you will reach a 3x3 system of equations. All that is left is to solve it.
---
**Edit** My colleague has just told me of an easier way:
`V` can be separated into 2 parts, `V = Vp + Vn`
1. `Vp` - parallel to `N`
2. `Vn` - perpendicular to `N`
`O` has the same parallel part `Vp`, but exactly the opposite `Vn`
Thus, `O = Vp - Vn`, but `V = Vp + Vn` and then `O = V - 2 * Vn`
Where `Vn = dot(V,N) * N` (Assuming that `N` has norm of 1)
So the final answer is:
```
function O = FindReflected(V,N)
N = N / norm(N);
O = V - 2 * dot(V,N) * N;
end
```
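The same formula is easy to check numerically. Here is an illustrative sketch (plain Python rather than Matlab) confirming that the component along the normal flips while the length is preserved:

```python
def reflect(v, n):
    # O = V - 2 * dot(V, N) * N, with N normalized first
    norm = sum(c * c for c in n) ** 0.5
    n = [c / norm for c in n]
    d = sum(vc * nc for vc, nc in zip(v, n))
    return [vc - 2 * d * nc for vc, nc in zip(v, n)]

v = [1.0, -2.0, 0.5]
o = reflect(v, [0.0, 1.0, 0.0])
print(o)  # [1.0, 2.0, 0.5] -- only the component along N changed sign
```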
**Edit 2**
I've just found a much better explanation on `Math.stackexchange`:
<https://math.stackexchange.com/questions/13261/how-to-get-a-reflection-vector>
|
ASP.NET - Redirect to Error Page if Roles Authorization Fails
I am using MVC 3 with Forms Authentication. On my controller or methods, I am doing the following:
```
[Authorize (Roles = "developer")]
```
In this situation, I want to check if the user is logged in and if not, return them to the login page. However, if the 'IsInRole' check for that user returns false, I want them to go to a different view that says something like 'Not authorized'.
What is the best way to accomplish something like this? I was hoping to avoid creating a new Authorization attribute so I didn't have to refactor every Authorize attribute in my entire application, but if that is what is required, I will go that route.
|
A custom authorize attribute overriding the [HandleUnauthorizedRequest](http://msdn.microsoft.com/en-us/library/system.web.mvc.authorizeattribute.handleunauthorizedrequest.aspx) method could do the job:
```
public class MyAuthorizeAttribute : AuthorizeAttribute
{
protected override void HandleUnauthorizedRequest(AuthorizationContext filterContext)
{
if (!filterContext.HttpContext.User.Identity.IsAuthenticated)
{
// The user is not authenticated
base.HandleUnauthorizedRequest(filterContext);
}
else if (!this.Roles.Split(',').Any(filterContext.HttpContext.User.IsInRole))
{
// The user is not in any of the listed roles =>
// show the unauthorized view
filterContext.Result = new ViewResult
{
ViewName = "~/Views/Shared/Unauthorized.cshtml"
};
}
else
{
base.HandleUnauthorizedRequest(filterContext);
}
}
}
```
and then:
```
[MyAuthorize(Roles = "developer")]
public ActionResult Develop()
{
...
}
```
|
How Can I Create A Glow Effect on an Imported FBX Object in R3F?
I am currently playing around with a react-three-fiber project and I'm looking to create a glow effect around individual meshes in a scene (that are imported from Blender), similar to what is done here: <https://100gecs.com/>
I have tried experimenting with Bloom Postprocessing but I have found that
a) It slows down performance significantly
b) It applies to all meshes and I cannot target just one
Does anyone have advice for other techniques for this effect? Or if Bloom is the only way to go about it, are there methods for targeting a single mesh for the effect?
I'm very new to Three.js so I'm unsure where else to look for help on this specific issue! Let me know if I can clarify anything further.
|
>
> a) It slows down performance significantly
>
>
>
Well, the unreal bloom pass is not a cheap post processing pass since it requires multiple renderings to produce a single frame.
>
> b) It applies to all meshes and I cannot target just one
>
>
>
There is an official three.js example that shows how you can selectively apply bloom to objects in your scene.
<https://threejs.org/examples/#webgl_postprocessing_unreal_bloom_selective>
There are also other techniques to produce a fake bloom/glow effect which do not require the above bloom pass. You should find information about this topic right here:
[three.js outer glow for sphere object?](https://stackoverflow.com/questions/16269815/three-js-outer-glow-for-sphere-object)
|
Switch Between Storyboards Using Swift
I have a storyboard that is getting too large in my Xcode project and is slowing down my computer. How do I use a button in a view of the current storyboard to advance, programmatically (or manually using storyboarding), to a view in a new storyboard?
Semi-new with Xcode so the simpler the better. Thanks!
|
You can do it programmatically this way:
**Swift 3+**
```
let storyboard = UIStoryboard(name: "StoryboardName", bundle: nil)
let vc = storyboard.instantiateViewController(withIdentifier: "ViewControllerID") as UIViewController
present(vc, animated: true, completion: nil)
```
**Older**
```
let storyboard = UIStoryboard(name: "myStoryboardName", bundle: nil)
let vc = storyboard.instantiateViewControllerWithIdentifier("nextViewController") as UIViewController
presentViewController(vc, animated: true, completion: nil)
```
In short, you can do it like:
```
presentViewController( UIStoryboard(name: "myStoryboardName", bundle: nil).instantiateViewControllerWithIdentifier("nextViewController") as UIViewController, animated: true, completion: nil)
```
And don't forget to give an ID to your nextViewController.
For more info, refer to [THIS](http://sketchytech.blogspot.in/2012/11/instantiate-view-controller-using.html).
|
Firebase Cloud Firestore query not finding my document
Here's a picture of my data:
[](https://i.stack.imgur.com/ZNJQq.png)
I'm trying to get that document. This works:
```
var docRef = db.collection('users').doc('jPDKwHyrFNXNTFI5qgOY');
docRef.get().then(function(doc) {
if (doc.exists) {
console.log("Document data:", doc.data());
} else {
console.log("No such document!");
}
}).catch(function(error) {
console.log("Error getting document:", error);
});
```
It returns:
[](https://i.stack.imgur.com/3yaIR.png)
I.e., if I know the document's key I can get the document.
This doesn't work:
```
db.collection('users').where('uid', '==', 'bcmrZDO0X5N6kB38MqhUJZ11OzA3')
    .get().then(function(querySnapshot) {
        if (querySnapshot.exists) {
            console.log(querySnapshot.data);
        } else {
            console.log("No such document!");
        }
    })
    .catch(function(error) {
        console.log("Error getting document: ", error);
    });
```
It just returns `No such document!` What's wrong with my query?
|
The difference in your two requests is that in the first case you are retrieving one document which gives you a [`DocumentSnapshot`](https://firebase.google.com/docs/reference/js/firebase.firestore.DocumentSnapshot) which has the `exists` property and the `data()` method.
In the second case you perform a query, which gives you a [`QuerySnapshot`](https://firebase.google.com/docs/reference/js/firebase.firestore.QuerySnapshot) that has to be handled differently from a `DocumentSnapshot`. Instead of a single document you get a list/collection of documents. You can check whether data has been retrieved using the `empty` or `size` properties, and then iterate over the results with the `forEach` method or the `docs` array:
```
db.collection('users').where('uid', '==', 'bcmrZDO0X5N6kB38MqhUJZ11OzA3')
    .get().then(function(querySnapshot) {
        if (querySnapshot.size > 0) {
            // Contents of first document
            console.log(querySnapshot.docs[0].data());
        } else {
            console.log("No such document!");
        }
    })
    .catch(function(error) {
        console.log("Error getting document: ", error);
    });
```
|
ESLint rule error for import
I am getting the below error by eslint.
I have added `ecmaFeatures: { "modules": true }` in the `.eslintrc` file as well.
[](https://i.stack.imgur.com/WOc4H.png)
|
Because you're getting that message, it looks like you've upgraded to ESLint 2.0, which is great! I can see two changes that you'll make to your configuration, though if anything else comes up, it's probably covered under the [2.0 migration guide](http://eslint.org/docs/user-guide/migrating-to-2.0.0):
1. In 2.0, `"ecmaFeatures": { "modules": true }` has become `"parserOptions": { "sourceType": "module" }`.
2. We replaced [space-after-keywords](http://eslint.org/docs/rules/space-after-keywords) with a new rule, [keyword-spacing](http://eslint.org/docs/rules/keyword-spacing), which was introduced in one of the 2.0 betas. If you were using `"space-after-keywords": 2`, you can change that to `"keyword-spacing": 2` now.
Putting that all together, your `.eslintrc` for ESLint 2.0 should include something like this:
```
{
  "parserOptions": {
    "sourceType": "module"
  },
  "env": {
    "es6": true
  },
  "rules": {
    "keyword-spacing": 2
  }
}
```
|
Promise - is it possible to force cancel a promise
I use ES6 Promises to manage all of my network data retrieval and there are some situations where I need to force cancel them.
Basically the scenario is that I have a type-ahead search on the UI where the request is delegated to the backend, which has to carry out the search based on the partial input. While this network request (#1) may take a little bit of time, the user continues to type, which eventually triggers another backend call (#2).
Here #2 naturally takes precedence over #1 so I would like to cancel the Promise wrapping request #1. I already have a cache of all Promises in the data layer so I can theoretically retrieve it as I am attempting to submit a Promise for #2.
But how do I cancel Promise #1 once I retrieve it from the cache?
Could anyone suggest an approach?
|
## In modern JavaScript - no
Promises have settled (hah) and it appears like it will never be possible to cancel a (pending) promise.
Instead, there is a cross-platform (Node, Browsers etc) cancellation primitive as part of WHATWG (a standards body that also builds HTML) called `AbortController`. You can use it to cancel *functions* that return promises rather than promises themselves:
```
// Take a signal parameter in the function that needs cancellation
async function somethingIWantToCancel({ signal } = {}) {
    // either pass it directly to APIs that support it
    // (fetch and most Node APIs do)
    const response = await fetch('.../', { signal });
    // return await response.json();
    // or if the API does not already support it -
    // manually adapt your code to support signals:
    const onAbort = (e) => {
        // run any code relating to aborting here
    };
    signal.addEventListener('abort', onAbort, { once: true });
    // and be sure to clean it up when the action you are performing
    // is finished to avoid a leak
    // ... sometime later ...
    signal.removeEventListener('abort', onAbort);
}

// Usage
const ac = new AbortController();
setTimeout(() => ac.abort(), 1000); // give it a 1s timeout
try {
    await somethingIWantToCancel({ signal: ac.signal });
} catch (e) {
    if (e.name === 'AbortError') {
        // deal with cancellation in caller, or ignore
    } else {
        throw e; // don't swallow errors :)
    }
}
```
---
## No. We can't do that yet.
ES6 promises do not support cancellation *yet*. It's on its way, and its design is something a lot of people worked really hard on. *Sound* cancellation semantics are hard to get right and this is work in progress. There are interesting debates on the "fetch" repo, on esdiscuss and on several other repos on GH but I'd just be patient if I were you.
### But, but, but.. cancellation is really important!
It is, the reality of the matter is cancellation is *really* an important scenario in client-side programming. The cases you describe like aborting web requests are important and they're everywhere.
### So... the language screwed me!
Yeah, sorry about that. Promises had to get in first before further things were specified - so they went in without some useful stuff like `.finally` and `.cancel` - it's on its way though, to the spec through the DOM. Cancellation is *not* an afterthought; it's just a time constraint and a more iterative approach to API design.
### So what can I do?
You have several alternatives:
- Use a third party library like [bluebird](https://github.com/petkaantonov/bluebird), which can move a lot faster than the spec and thus has cancellation as well as a bunch of other goodies - this is what large companies like WhatsApp do.
- Pass a cancellation *token*.
Using a third party library is pretty obvious. As for a token, you can make your method take a function in and then call it, as such:
```
function getWithCancel(url, token) { // the token is for cancellation
    var xhr = new XMLHttpRequest;
    xhr.open("GET", url);
    return new Promise(function(resolve, reject) {
        xhr.onload = function() { resolve(xhr.responseText); };
        token.cancel = function() { // SPECIFY CANCELLATION
            xhr.abort(); // abort request
            reject(new Error("Cancelled")); // reject the promise
        };
        xhr.onerror = reject;
    });
}
```
Which would let you do:
```
var token = {};
var promise = getWithCancel("/someUrl", token);
// later we want to abort the promise:
token.cancel();
```
### Your actual use case - `last`
This isn't too hard with the token approach:
```
function last(fn) {
    var lastToken = { cancel: function(){} }; // start with no op
    return function() {
        lastToken.cancel();
        var args = Array.prototype.slice.call(arguments);
        args.push(lastToken);
        return fn.apply(this, args);
    };
}
```
Which would let you do:
```
var synced = last(getWithCancel);
synced("/url1?q=a"); // this will get canceled
synced("/url1?q=ab"); // this will get canceled too
synced("/url1?q=abc"); // this will get canceled too
synced("/url1?q=abcd").then(function() {
// only this will run
});
```
And no, libraries like Bacon and Rx don't "shine" here because they're observable libraries; they just have the same advantage user-level promise libraries have by not being spec bound. I guess we'll wait and see in ES2016 whether observables go native. They *are* nifty for typeahead though.
|
R: add letter to sequence in dataframe
I have a dataframe like the one below and I want to add 4 letters (CATG) in front of the sequences. Is that possible? Thank you all!
```
test<-as.data.frame("ACTCAATAAACATAGCT",
"TTTGACAGTATTGTTTG",
"CTTTTCAAGAGTGATGG",
"GCGACTCCCATCAGTGA",
"GCCAGCCACACATCAGG",
"TTTATTTAAGAGGAAGA",
"GTCTGAAGAATTGTTCA",
"ATTCTGATACTAATATA",
"CTCCACGTCCACCCCAA",
"GGGAAGTCTGCCCTGCT")
```
This should be the result!
```
CATGACTCAATAAACATAGCT
CATGTTTGACAGTATTGTTTG
CATGCTTTTCAAGAGTGATGG
CATGGCGACTCCCATCAGTGA
CATGGCCAGCCACACATCAGG
CATGTTTATTTAAGAGGAAGA
CATGGTCTGAAGAATTGTTCA
CATGATTCTGATACTAATATA
CATGCTCCACGTCCACCCCAA
CATGGGGAAGTCTGCCCTGCT
```
|
```
test<-data.frame(V1=c("ACTCAATAAACATAGCT",
"TTTGACAGTATTGTTTG",
"CTTTTCAAGAGTGATGG",
"GCGACTCCCATCAGTGA",
"GCCAGCCACACATCAGG",
"TTTATTTAAGAGGAAGA",
"GTCTGAAGAATTGTTCA",
"ATTCTGATACTAATATA",
"CTCCACGTCCACCCCAA",
"GGGAAGTCTGCCCTGCT"))
> test$V1<-paste0('CATG',test$V1)
> test
V1
1 CATGACTCAATAAACATAGCT
2 CATGTTTGACAGTATTGTTTG
3 CATGCTTTTCAAGAGTGATGG
4 CATGGCGACTCCCATCAGTGA
5 CATGGCCAGCCACACATCAGG
6 CATGTTTATTTAAGAGGAAGA
7 CATGGTCTGAAGAATTGTTCA
8 CATGATTCTGATACTAATATA
9 CATGCTCCACGTCCACCCCAA
10 CATGGGGAAGTCTGCCCTGCT
```
|
Class that tracks data of all its active instantiations?
I have a class `Foo` with its instances having a "balance" attribute. I'm designing it in such a way that `Foo` can track all the balances of its active instances. By active I mean instances that are currently assigned to a declared variable, or part of a list that is a declared variable.
```
a = Foo(50) # Track this
b = [ Foo(20) for _ in range(5) ] # Track this
Foo(20) # Not assigned to any variable. Do not track this.
```
Another feature of `Foo` is that is has an overloaded "add" operator, where you can add two `Foo`'s balances together or add to a `Foo`'s balance by adding it with an int or float.
**Example:**
```
x = Foo(200)
x = x + 50
y = x + Foo(30)
```
Here is my code so far:
```
from typing import List


class Foo:
    foo_active_instances: List = []

    def __init__(self, balance: float = 0):
        Foo.foo_active_instances.append(self)
        self.local_balance: float = balance

    @property
    def balance(self):
        """
        The balance of only this instance.
        """
        return self.local_balance

    def __add__(self, addend):
        """
        Overloading the add operator
        so we can add Foo instances together.
        We can also add more to a Foo's balance
        by just passing a float/int
        """
        if isinstance(addend, Foo):
            return Foo(self.local_balance + addend.local_balance)
        elif isinstance(addend, float | int):
            return Foo(self.local_balance + addend)

    @classmethod
    @property
    def global_balance(cls):
        """
        Sum up balance of all active Foo instances.
        """
        return sum([instance.balance for instance in Foo.foo_active_instances])
```
But my code has several issues. One problem is when I try to add a balance to an already existing instance, like:
```
x = Foo(200)
x = x + 50 # Problem: This instantiates another Foo with 200 balance.
y = Foo(100)
# Expected result is 350, because 250 + 100 = 350.
# Result is 550
# even though we just added 50 to x.
print(Foo.global_balance)
```
Another problem is replacing a `Foo` instance with `None` doesn't remove it from `Foo.foo_active_instances`.
```
k = Foo(125)
k = None
# Expected global balance is 0,
# but the balance of the now non-existing Foo still persists
# So result is 125.
print(Foo.global_balance)
```
I tried to make an internal method that loops through `foo_active_instances` and counts how many references an instance has. The method then pops the instance from `foo_active_instance` if it doesn't have enough. This is very inefficient because it's a loop and it's called each time a `Foo` instance is made and when the add operator is used.
How do I rethink my approach? Is there a design pattern just for this problem? I'm all out of ideas.
|
The `weakref` module is perfect for this design pattern. Instead of making `foo_active_instances` a `list`, you can make it a `weakref.WeakSet`. This way, when a `Foo` object's reference count falls to zero (e.g., because it wasn't bound to a variable), it will be automatically removed from the set.
```
import weakref


class Foo:
    foo_active_instances = weakref.WeakSet()

    def __init__(self, balance: float = 0) -> None:
        Foo.foo_active_instances.add(self)
        ...
```
In order to add `Foo` objects to a set, you'll have to make them hashable. Maybe something like
```
class Foo:
    ...

    def __hash__(self) -> int:
        return hash(self.local_balance)
```
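For illustration, here is a minimal runnable sketch combining the ideas above. Two assumptions of mine, not part of the original code: `global_balance` is written as a plain classmethod (stacking `@classmethod` and `@property` no longer works in recent Python versions), and no custom `__hash__` is defined, since plain classes are already hashable by identity unless `__eq__` is overridden.

```python
import gc
import weakref


class Foo:
    # Instances are held weakly: once the last strong reference to a Foo
    # disappears, it drops out of this set automatically.
    foo_active_instances = weakref.WeakSet()

    def __init__(self, balance: float = 0):
        self.local_balance = balance
        Foo.foo_active_instances.add(self)

    def __add__(self, addend):
        if isinstance(addend, Foo):
            return Foo(self.local_balance + addend.local_balance)
        return Foo(self.local_balance + addend)

    @classmethod
    def global_balance(cls):
        # Sum over whatever instances are still alive.
        return sum(inst.local_balance for inst in cls.foo_active_instances)


x = Foo(200)
x = x + 50    # rebinding drops the old Foo(200); only the new Foo(250) survives
Foo(125)      # never bound to a name, so it becomes garbage...
gc.collect()  # ...CPython frees it immediately; collect() makes it deterministic
assert Foo.global_balance() == 250
```

This resolves both problems in the question: rebinding or `None`-ing a variable releases the old instance, and the `WeakSet` forgets it without any manual bookkeeping loop.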
|
Cost saving alternatives to Oracle Tuxedo
What are cost saving alternatives to Oracle Tuxedo middleware platform to whom is possible to migrate existing code base?
|
There are a few open source alternatives like [RedHat Narayana](http://narayana.io/) [BlackTie module](https://github.com/jbosstm/narayana/tree/master/blacktie/blacktie) and [Mavimax](https://www.mavimax.com/products/endurox) [Enduro/X](https://github.com/endurox-dev/endurox). I have checked the source code for both and it seems that Enduro/X is the most complete; it has FML and VIEW buffers, which are usually the preferred "protocol buffers" for Tuxedo apps.
BlackTie has its own buffers like NBF, but their API is quite different from FML, so migration might be painful.
Also, BlackTie comes with Java integration, so some might benefit from that, like mixing new Java apps with "old" C. But this means you have to boot a whole JBoss instance for your migrated app. On the other hand, Enduro/X has its own C application server, which seems to have quite a small footprint.
Regarding the licensing:
- BlackTie comes with the LGPL 2.1 license, so as long as you do not change RedHat's code you can use it for closed source/open source solutions.
- Enduro/X uses GPL 2.0, which means that for in-house developments or open source projects you may use it without acquiring a license, but if you distribute a closed source solution you need to acquire a license. However, looking at their [license page](https://www.mavimax.com/products/prices-and-licensing), there are plenty of options; the closest license (per operating system instance vs Tuxedo's per CPU) is almost 10 times cheaper than mentioned in the [Oracle Global Price list](http://www.oracle.com/us/corporate/pricing/price-lists/index.html).
There are other, non-open-source competitors, but I cannot find any public pricing info, so you may need to ask them directly:
- [Hitachi OpenTP1](http://www.hitachi.com/products/it/software/documents/opentp1.pdf)
- [Bull LiberTp](https://support.bull.com/ols/product/applications/liber/)
There are some attempted XATMI implementations on GitHub (try to search "tpcall" or "tpadvertise" for example), but those seem to be very limited implementations.
|
Do lazy variables exist in Clojure?
I have a bit of computation that is somewhat expensive (starting a database), and I only want to create the database if I actually am going to use it. I am looking for a reference variable (or just a plain variable, if that is possible) that would only evaluate its value in the event that it is used (or dereferenced). Something conceptually like the following.
```
(def v (lazy-var (fn [] (do (println "REALLY EXPENSIVE FUNCTION") true))))
```
and in the future, when I either just use var v or call @v, it prints out "REALLY EXPENSIVE FUNCTION", and from then on v has a value of true. The important thing here is that the fn is not evaluated until the variable is (de)referenced. When needed, the function is evaluated once and only once to calculate the value of the variable. Is this possible in Clojure?
|
[`delay`](http://clojuredocs.org/clojure_core/clojure.core/delay) would be perfect for this application:
>
> **`delay`**`- (delay & body)`
>
>
> Takes a body of expressions and yields a Delay object that will invoke the body only the first time it is forced (with `force` or `deref`/`@`), and will cache the result and return it on all subsequent `force` calls.
>
>
>
Place the code to construct the database handle within the body of a `delay` invocation, stored as a Var. Then dereference this Var whenever you need to use the DB handle — on the first dereference the body will be run, and on subsequent dereferences the cached handle will be returned.
```
(def db (delay (println "DB stuff") x))
(select @db ...) ; "DB stuff" printed, x returned
(insert @db ...) ; x returned (cached)
```
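For readers coming from Python, roughly the same once-only, cached-on-first-use semantics can be sketched with a memoized zero-argument function. This is only an analogy of mine, not part of the original Clojure answer:

```python
from functools import cache


@cache                  # the body runs at most once; the result is cached
def v():
    print("REALLY EXPENSIVE FUNCTION")
    return True


# Nothing has been printed yet -- like an unforced delay.
assert v() is True      # first call: prints, computes, caches
assert v() is True      # second call: returns the cached value silently
```

Calling `v()` plays the role of dereferencing the delay; `functools.cache` guarantees the expensive body is evaluated only on the first call.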
|
Do sessions really violate RESTfulness?
Is using sessions in a RESTful API really violating RESTfulness? I have seen many opinions going either direction, but I'm not convinced that sessions are *RESTless*. From my point of view:
- authentication is not prohibited for RESTfulness (otherwise there'd be little use in RESTful services)
- authentication is done by sending an authentication token in the request, usually the header
- this authentication token needs to be obtained somehow and may be revoked, in which case it needs to be renewed
- the authentication token needs to be validated by the server (otherwise it wouldn't be authentication)
So how do sessions violate this?
- client-side, sessions are realized using cookies
- cookies are simply an extra HTTP header
- a session cookie can be obtained and revoked at any time
- session cookies can have an infinite life time if need be
- the session id (authentication token) is validated server-side
As such, to the client, a session cookie is exactly the same as any other HTTP header based authentication mechanism, except that it uses the `Cookie` header instead of the `Authorization` or some other proprietary header. If there was no session attached to the cookie value server-side, why would that make a difference? The server side implementation does not need to concern the client as long as the server *behaves* RESTful. As such, cookies by themselves should not make an API *RESTless*, and sessions are simply cookies to the client.
Are my assumptions wrong? What makes session cookies *RESTless*?
|
First, let's define some terms:
- RESTful:
>
> One can characterise applications conforming to the REST constraints
> described in this section as "RESTful".[15] If a service violates any
> of the required constraints, it cannot be considered RESTful.
>
>
>
according to [wikipedia](http://en.wikipedia.org/wiki/Representational_state_transfer).
- stateless constraint:
>
> We next add a constraint to the client-server interaction:
> communication must be stateless in nature, as in the
> client-stateless-server (CSS) style of Section 3.4.3 (Figure 5-3),
> such that each request from client to server must contain all of the
> information necessary to understand the request, and cannot take
> advantage of any stored context on the server. Session state is
> therefore kept entirely on the client.
>
>
>
according to the [Fielding dissertation](http://www.ics.uci.edu/~fielding/pubs/dissertation/rest_arch_style.htm#sec_5_1_3).
So server side sessions violate the stateless constraint of REST, and so RESTfulness either.
>
> As such, to the client, a session cookie is exactly the same as any
> other HTTP header based authentication mechanism, except that it uses
> the Cookie header instead of the Authorization or some other
> proprietary header.
>
>
>
With session cookies you store the client state on the server, so your requests have a context. Let's try to add a load balancer and another service instance to your system. In this case you have to share the sessions between the service instances. Such a system is hard to maintain and extend, so it scales badly...
In my opinion there is nothing wrong with cookies. The cookie technology is a client-side storage mechanism in which the stored data is automatically attached to Cookie headers on every request. I don't know of a REST constraint that has a problem with that kind of technology. So there is no problem with the technology itself; the problem is with its usage. [Fielding wrote a sub-section](http://www.ics.uci.edu/~fielding/pubs/dissertation/evaluation.htm#sec_6_3_4_2) about why he thinks HTTP cookies are bad.
>
> From my point of view:
>
>
> - authentication is not prohibited for RESTfulness (otherwise there'd be little use in RESTful services)
> - authentication is done by sending an authentication token in the request, usually the header
> - this authentication token needs to be obtained somehow and may be revoked, in which case it needs to be renewed
> - the authentication token needs to be validated by the server (otherwise it wouldn't be authentication)
>
>
>
Your point of view was pretty solid. The only problem was with the concept of creating an authentication token on the server. You don't need that part. What you need is to store the username and password on the client and send them with every request. You don't need more than HTTP basic auth and an encrypted connection to do this:

- *Figure 1. - Stateless authentication by trusted clients*
You probably need an in-memory auth cache on the server side to make things faster, since you have to authenticate every request.
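To make "the client sends the full credentials with every request" concrete, here is a small sketch using only the Python standard library; the endpoint URL and credentials are hypothetical:

```python
import base64
import urllib.request


def basic_auth_header(username: str, password: str) -> dict:
    # HTTP Basic auth: "Basic " + base64(username ":" password)
    token = base64.b64encode(f"{username}:{password}".encode("utf-8")).decode("ascii")
    return {"Authorization": f"Basic {token}"}


# Every request carries the full credentials; the server keeps no session state.
headers = basic_auth_header("alice", "s3cret")
req = urllib.request.Request("https://api.example.com/me", headers=headers)
assert req.get_header("Authorization").startswith("Basic ")
```

Because the header is self-contained, any service instance behind a load balancer can authenticate the request without shared session storage.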
Now this works pretty well with trusted clients written by you, but what about 3rd party clients? They cannot have the username and password and all the permissions of the users. So you have to store separately what permissions a 3rd party client can have for a specific user. The client developers can register their 3rd party clients and get a unique API key, and the users can allow 3rd party clients to access some part of their permissions, like reading the name and email address, or listing their friends, etc... After allowing a 3rd party client, the server will generate an access token. This access token can be used by the 3rd party client to access the permissions granted by the user, like so:

- *Figure 2. - Stateless authentication by 3rd party clients*
So the 3rd party client can get the access token from a trusted client (or directly from the user). After that it can send a valid request with the API key and access token. This is the most basic 3rd party auth mechanism. You can read more about the implementation details in the documentation of every 3rd party auth system, e.g. OAuth. Of course this can be more complex and more secure; for example, you can sign the details of every single request on the server side and send the signature along with the request, and so on... The actual solution depends on your application's needs.
|
Regular expression in regards to question mark "lazy" mode
I understand the `?` mark here means "lazy".
My question essentially is `[0-9]{2}?` vs `[0-9]{2}`
Are they the same?
If so, why are we writing the former expression? Isn't lazy mode more expensive performance-wise?
If not, can you tell the difference?
|
**There is *not* a difference between `[0-9]{2}` and `[0-9]{2}?`.**
The difference between greedy matching and lazy matching (the addition of a `?`) has to do with backtracking. Regular expression engines are built to *match* text (from left to right). Therefore it is logical that when you ask an expression to match a range of character(s), it matches as many as possible.
---
Assume we have the string `acac123`.
If we use a greedy match of `[a-z]+c` (`+` standing for 1+ repetitions or `{1,}`):
- `[a-z]+` would match `acac` and fail at `1`
- then we would try to match the `c`, but fail at `1`
- now we start backtracking, and successfully match `aca` and `c`
If we make this lazy (`[a-z]+?c`), we will get both a different response (in *this* case) and be more efficient:
- `[a-z]+?` would match `a`, but stop because it sees the next character matches the rest of the expression `c`
- the `c` would then match, successfully matching `a` and `c` (with no backtracking)
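The two walkthroughs above can be verified directly; here is a quick sketch using Python's `re` module (the behaviour is the same in most backtracking regex engines):

```python
import re

s = "acac123"

# Greedy: [a-z]+ first grabs "acac", fails to match the trailing "c" at "1",
# then backtracks one character so the final "c" can match -> "acac".
assert re.match(r"[a-z]+c", s).group() == "acac"

# Lazy: [a-z]+? grabs just "a", sees the next character already satisfies
# the trailing "c" -> "ac", with no backtracking.
assert re.match(r"[a-z]+?c", s).group() == "ac"

# And the original question: {2} and {2}? match identically.
assert re.match(r"[0-9]{2}", "1234").group() == "12"
assert re.match(r"[0-9]{2}?", "1234").group() == "12"
```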
---
Now you can see that there will be *no* difference between `X{#}` and `X{#}?`, because `{#}` is not a range and even a greedy match will not experience any backtracking. Lazy matches are often used with `*` (0+ repetitions or `{0,}`) or `+`, but can also be used with ranges `{m,n}` (where `n` is optional).
This is essential when you want to match the least amount of characters possible and you will often see `.*?` in an expression when you want to fill up some space (`foo.*?bar` on a string `foo bar filler text bar`). However, many times a lazy match is an example of bad/inefficient regex. Many people will do something like `foo:"(.*?)"` to match everything within double quotes, when you can avoid a lazy match by writing your expression like `foo:"([^"]+)"` and match anything *but* `"`s.
---
Final note: `?` typically means "optional", i.e. match `{0,1}` times. `?` will only make a match lazy if you use it on a range (`{m,n}`, `*`, `+`, or another `?`). This means `X?` will not make `X` lazy (since we already said `{#}?` is pointless); instead it will be optional. However, you can do a lazy "optional" match: `[0-9]??` will lazily match 0-1 times.
|
Sending an HTTP Post using Javascript triggered event
I'm pretty new to javascript and am working on an embedded system which decodes video over IP.
I have written a small app for setting up and changing channels using javascript and included a key handler for remote controls and an event handler so I can take some action or present a message if video stops or the network goes down, but now I also want to set up an automatic HTTP POST that gets sent when I change channel to include some data about the device and the url currently being played.
This is a small embedded hardware device running busybox, so I can't use Ajax or add any other normal web technologies, I just need to use Javascript to send a HTTP POST triggered by events I am monitoring, so my first goal is to be able to press a button and send that POST message then work out when to trigger it later.
Anyone familiar with doing such things that can give me a quick overview of how to send a post to a known listening device/location and include data in it?
Many thanks
|
This is easy if your Javascript engine supports XMLHttpRequest (XHR), which is ubiquitous on the web. Google it or see [this page](https://developer.mozilla.org/en-US/docs/DOM/XMLHttpRequest) for details. I've provided a code snippet below. Read it carefully, particularly the comments on "async" being true and closures in response handlers. Also, this code is super lightweight as far as Javascript goes and I would expect it would work fine on just about any contemporary hardware footprint.
```
var url = "http://www.google.com/";
var method = "POST";
var postData = "Some data";
// You REALLY want shouldBeAsync = true.
// Otherwise, it'll block ALL execution waiting for server response.
var shouldBeAsync = true;
var request = new XMLHttpRequest();
// Before we send anything, we first have to say what we will do when the
// server responds. This seems backwards (say how we'll respond before we send
// the request? huh?), but that's how Javascript works.
// This function attached to the XMLHttpRequest "onload" property specifies how
// the HTTP response will be handled.
request.onload = function () {
    // Because of javascript's fabulous closure concept, the XMLHttpRequest "request"
    // object declared above is available in this function even though this function
    // executes long after the request is sent and long after this function is
    // instantiated. This fact is CRUCIAL to the workings of XHR in ordinary
    // applications.

    // You can get all kinds of information about the HTTP response.
    var status = request.status; // HTTP response status, e.g., 200 for "200 OK"
    var data = request.responseText; // Returned data, e.g., an HTML document.
};
request.open(method, url, shouldBeAsync);
request.setRequestHeader("Content-Type", "application/json;charset=UTF-8");
// Or... request.setRequestHeader("Content-Type", "text/plain;charset=UTF-8");
// Or... whatever
// Actually sends the request to the server.
request.send(postData);
```
|
Merge two Lists in C# and merge objects with the same id into one list item
I have already thought about how I'm going to solve this by rolling my own solution, but I wondered if .NET already has the functionality for what I'm trying to acheive - if so, I'd rather use something built-in.
Suppose I have two instances of a `Widget` object, let's call them `PartA` and `PartB`. The information from each has been garnered from two different web services, but both have matching IDs.
```
PartA
{
ID: 19,
name: "Percy",
taste: "",
colour: "Blue",
shape: "",
same_same: "but different"
}
PartB
{
ID: 19,
name: "",
taste: "Sweet",
colour: "",
shape: "Hexagon",
same_same: "but not the same"
}
```
I want to merge these to create the following:
```
Result
{
ID: 19,
name: "Percy",
taste: "Sweet",
colour: "Blue",
shape: "Hexagon",
same_same: "but different"
}
```
Notice how the value for `same_same` differs between each, but we consider PartA the master, so the result retains the value `but different`.
Now to complicate matters:
Suppose we have two lists:
```
List<Widget> PartA = getPartA();
List<Widget> PartB = getPartB();
```
Now here's some pseudocode describing what I want to do:
```
List<Widget> Result = PartA.MergeWith(PartB).MergeObjectsOn(Widget.ID).toList();
```
|
You could write your own extension method(s), something like this:
```
static class Extensions
{
    public static IEnumerable<T> MergeWith<T>(this IEnumerable<T> source, IEnumerable<T> other) where T : ICanMerge
    {
        var otherItems = other.ToDictionary(x => x.Key);
        foreach (var item in source)
        {
            yield return (T)item.MergeWith(otherItems[item.Key]);
        }
    }

    public static string AsNullIfEmpty(this string s)
    {
        if (string.IsNullOrEmpty(s))
            return null;
        else
            return s;
    }
}
```
Where `ICanMerge` is like:
```
public interface ICanMerge
{
    object Key { get; }
    ICanMerge MergeWith(ICanMerge other);
}
```
Implemented e.g. like:
```
public class Widget : ICanMerge
{
    object ICanMerge.Key { get { return this.ID; } }
    int ID { get; set; }
    string taste { get; set; }

    public ICanMerge MergeWith(ICanMerge other)
    {
        var merged = new Widget();
        var otherWidget = (Widget)other;
        merged.taste = this.taste.AsNullIfEmpty() ?? otherWidget.taste;
        //...
        return merged;
    }
}
```
Then it's as simple as `PartA.MergeWith(PartB).ToList()`.
|
ASP.NET Core: Complex Model with comma separated values list
Our request models are growing according to the growing complexity of our APIs and we decided to use complex types instead of using simple types for the parameters of the actions.
One typical type is `IEnumerable` for comma-separated values, like `items=1,2,3,5...` and we solved the issue of converting from string to IEnumerable using the workaround provided in <https://www.strathweb.com/2017/07/customizing-query-string-parameter-binding-in-asp-net-core-mvc/> where the key point is implementing the `IActionModelConvention` interface to identify the parameters marked with a specific attribute `[CommaSeparated]`.
Everything worked fine until we moved the simple parameters into a single complex parameter, now we are unable to inspect the complex parameters in the `IActionModelConvention` implementation. The same happens using `IParameterModelConvention`. Please, see the code below:
this works fine:
```
public async Task<IActionResult> GetByIds(
    [FromRoute]int day,
    [BindRequired][FromQuery][CommaSeparated]IEnumerable<int> ids,
    [FromQuery]string order)
{
    // do something
}
```
while this variant does not work
```
public class GetByIdsRequest
{
    [FromRoute(Name = "day")]
    public int Day { get; set; }

    [BindRequired]
    [FromQuery(Name = "ids")]
    [CommaSeparated]
    public IEnumerable<int> Ids { get; set; }

    [FromQuery(Name = "order")]
    public string Order { get; set; }
}

public async Task<IActionResult> GetByIds(GetByIdsRequest request)
{
    // do something
}
```
the `IActionModelConvention` implementation is very simple:
```
public void Apply(ActionModel action)
{
    SeparatedQueryStringAttribute attribute = null;
    for (int i = 0; i < action.Parameters.Count; i++)
    {
        var parameter = action.Parameters[i];
        var commaSeparatedAttr = parameter.Attributes.OfType<CommaSeparatedAttribute>().FirstOrDefault();
        if (commaSeparatedAttr != null)
        {
            if (attribute == null)
            {
                attribute = new SeparatedQueryStringAttribute(",", commaSeparatedAttr.RemoveDuplicatedValues);
                parameter.Action.Filters.Add(attribute);
            }
            attribute.AddKey(parameter.ParameterName);
        }
    }
}
```
As you can see, the code is inspecting the parameters marked with `CommaSeparatedAttribute`...but it doesn't work with complex parameters like the one used in my second variant.
Note: I added some minor changes to the original code provided in the post mentioned above, like enabling the `CommaSeparatedAttribute` to be used not only for parameters but also for properties, but it still doesn't work
|
Based on itminus's answer I could work out my final solution. The trick was - as itminus pointed out - in the IActionModelConvention implementation. Please, see my implementation which considers other aspects like nested models and also the real name assigned to each property:
```
public void Apply(ActionModel action)
{
SeparatedQueryStringAttribute attribute = null;
for (int i = 0; i < action.Parameters.Count; i++)
{
var parameter = action.Parameters[i];
var commaSeparatedAttr = parameter.Attributes.OfType<CommaSeparatedAttribute>().FirstOrDefault();
if (commaSeparatedAttr != null)
{
if (attribute == null)
{
attribute = new SeparatedQueryStringAttribute(",", commaSeparatedAttr.RemoveDuplicatedValues);
parameter.Action.Filters.Add(attribute);
}
attribute.AddKey(parameter.ParameterName);
}
else
{
// here the trick to evaluate nested models
var props = parameter.ParameterInfo.ParameterType.GetProperties();
if (props.Length > 0)
{
// start the recursive call
EvaluateProperties(parameter, attribute, props);
}
}
}
}
```
the EvaluateProperties method:
```
private void EvaluateProperties(ParameterModel parameter, SeparatedQueryStringAttribute attribute, PropertyInfo[] properties)
{
for (int i = 0; i < properties.Length; i++)
{
var prop = properties[i];
var commaSeparatedAttr = prop.GetCustomAttributes(true).OfType<CommaSeparatedAttribute>().FirstOrDefault();
if (commaSeparatedAttr != null)
{
if (attribute == null)
{
attribute = new SeparatedQueryStringAttribute(",", commaSeparatedAttr.RemoveDuplicatedValues);
parameter.Action.Filters.Add(attribute);
}
// get the binding attribute that implements the model name provider
var nameProvider = prop.GetCustomAttributes(true).OfType<IModelNameProvider>().FirstOrDefault(a => !IsNullOrWhiteSpace(a.Name));
attribute.AddKey(nameProvider?.Name ?? prop.Name);
}
else
{
// nested properties
var props = prop.PropertyType.GetProperties();
if (props.Length > 0)
{
EvaluateProperties(parameter, attribute, props);
}
}
}
}
```
I also changed the definition of the comma separated attribute
```
[AttributeUsage(AttributeTargets.Property | AttributeTargets.Parameter, Inherited = true, AllowMultiple = false)]
public class CommaSeparatedAttribute : Attribute
{
public CommaSeparatedAttribute()
: this(true)
{ }
/// <summary>
/// ctor
/// </summary>
/// <param name="removeDuplicatedValues">remove duplicated values</param>
public CommaSeparatedAttribute(bool removeDuplicatedValues)
{
RemoveDuplicatedValues = removeDuplicatedValues;
}
/// <summary>
/// remove duplicated values???
/// </summary>
public bool RemoveDuplicatedValues { get; set; }
}
```
There are other moving parts I changed too... but these are basically the most important ones. Now, we can use models like this:
```
public class GetByIdsRequest
{
[FromRoute(Name = "day")]
public int Day { get; set; }
[BindRequired]
[FromQuery(Name = "ids")]
[CommaSeparated]
public IEnumerable<int> Ids { get; set; }
[FromQuery(Name = "include")]
[CommaSeparated]
public IEnumerable<IncludingOption> Include { get; set; }
[FromQuery(Name = "order")]
public string Order { get; set; }
[BindProperty(Name = "")]
public NestedModel NestedModel { get; set; }
}
public class NestedModel
{
[FromQuery(Name = "extra-include")]
[CommaSeparated]
public IEnumerable<IncludingOption> ExtraInclude { get; set; }
[FromQuery(Name = "extra-ids")]
[CommaSeparated]
public IEnumerable<long> ExtraIds { get; set; }
}
// the controller's action
public async Task<IActionResult> GetByIds(GetByIdsRequest request)
{
// do something
}
```
For a request like this one (not exactly the same as the one defined above but very similar):
<http://.../vessels/algo/days/20190101/20190202/hours/1/2?page=2&size=12&filter=eq(a,b)&order=by(asc(a))&include=all,none&ids=12,34,45&extra-include=all,none&extra-ids=12,34,45>
[](https://i.stack.imgur.com/MVDYb.jpg)
If anyone needs the full code, please, let me know. Again, thanks to itminus for his valuable help
|
How to change the wallpaper of all clients using puppet?
I have set up puppet (Central Management Server). Can anyone tell me how to change wallpapers of all clients from this puppet server?
|
To set the wallpaper image from command line (without puppet), you can use something like this:
```
gsettings set org.gnome.desktop.background picture-uri "file:///path/to/file.jpg"
```
which obviously needs to be run as the user you're changing the background for.
In terms of puppet, I believe you would be able to upload the file to the controlled machines using `file` resource:
```
file { "/usr/share/backgrounds/warty-final-ubuntu.png":
source => "puppet://server/modules/module_name/background.jpg"
}
```
then, to run a command, there's `exec` directive:
```
define set_bg($name) {
exec {"set bg for $name":
command => "/usr/bin/gsettings set org.gnome.desktop.background picture-uri file:///usr/share/backgrounds/warty-final-ubuntu.png",
user => "$name",
}
}
```
which you can execute for each of your users:
```
user { "joe":
ensure => "present",
uid => "1005",
comment => "Joe",
home => "/home/joe",
shell => "/bin/bash",
managehome => "true"
}
user { "ted":
ensure => "present",
uid => "1006",
comment => "Ted",
home => "/home/ted",
shell => "/bin/bash",
managehome => "true"
}
set_bg { "joe": name=>"joe" }
set_bg { "ted": name=>"ted" }
```
Also, you may want to restrict user's choice of backgrounds only to the one you're setting with Puppet. For that, you need to modify `/usr/share/gnome-background-properties/ubuntu-wallpapers.xml` (obviously, using Puppet). The file itself would look like:
```
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE wallpapers SYSTEM "gnome-wp-list.dtd">
<wallpapers>
<wallpaper>
<name>Common Background</name>
<filename>/usr/share/backgrounds/warty-final-ubuntu.png</filename>
<options>zoom</options>
<pcolor>#000000</pcolor>
<scolor>#000000</scolor>
<shade_type>solid</shade_type>
</wallpaper>
</wallpapers>
```
The rule to upload it would look like:
```
file { "/usr/share/gnome-background-properties/ubuntu-wallpapers.xml":
source => "puppet://server/modules/module_name/backgrounds.xml",
}
```
Also, note that the default Ubuntu background is in the file `/usr/share/backgrounds/warty-final-ubuntu.png` - I'm finding that replacing this file gives more predictable results than creating another one (i.e. gsettings is unable to change the background for new users who never logged in, for example). This also changes the background of the login screen etc., which I suppose is a good thing.
|
how to position Jquery validate() error message along the side of my text input
As you'll see in my code, I've been using the errorPlacement method. I first tried using an if/else statement inside the method and the error message would at least show, but only underneath the input, not alongside it like I want. Since I have 6 inputs to test, I'm using the switch statement, which won't work for me at all. I have some CSS applied to the divs that the inputs are in; I don't know if this is interfering with where the message goes or what.
```
HTML
<div id ="column1">
<label>Name of Show</label><input type="text" name="performance" id="performance" />
<label>Name of location</label> <input type="text" name="location" id="location"/>
<label>Date</label> <input type="text" id="date" name="date"/>
</div>
<div id="column2">
<label>Time</label><input type="text" id="time" name="time"/>
<label># of Volunteers Needed</label><input type="text" id="guests" name="guests"/>
<label>Cover</label><input type="text" id="price" name="price"/>
</div>
JS
//this is just the error placement block, the other validate() code is working fine and omitted
errorPlacement: function(error, element){
switch(element)
{
case element.attr("name") === 'performance':
error.insertAfter( $("#performance") );
break;
case element.attr("name") === 'location':
error.insertAfter( $("#location") );
break;
case element.attr("name") === 'date':
error.insertAfter( $("#date") );
break;
case element.attr("name") === 'time':
error.insertAfter( $("#time") );
break;
case element.attr("name") === 'guests':
error.insertAfter( $("#guests") );
break;
case element.attr("name") === 'price':
error.insertAfter( $("#price") );
break;
default:
//nothing
}
},
```
|
Your code:
```
errorPlacement: function (error, element) {
switch (element) {
case element.attr("name") === 'performance':
error.insertAfter($("#performance"));
break;
case element.attr("name") === 'location':
error.insertAfter($("#location"));
break;
case element.attr("name") === 'date':
error.insertAfter($("#date"));
break;
case element.attr("name") === 'time':
error.insertAfter($("#time"));
break;
case element.attr("name") === 'guests':
error.insertAfter($("#guests"));
break;
case element.attr("name") === 'price':
error.insertAfter($("#price"));
break;
default:
//nothing
}
},
```
**Your code is unnecessarily repetitive**, to say the least, and a bit redundant.
`error.insertAfter(element)` is already the default and it generically applies to **all** fields automatically. There is no need to over-ride this unless you want to do something else.
This is the default `errorPlacement` function:
```
errorPlacement: function(error, element) {
error.insertAfter(element); // <- the default
},
```
**`error`** - the `label` object containing the error message.
**`element`** - the input object with the error.
As you can see, you do not need to specify `errorPlacement` at all if you simply want each error messages to be inserted after each element.
See this jsFiddle which shows the default error message placement:
<http://jsfiddle.net/85xwh/>
You can "un-comment" the `errorPlacement` section and see that the behavior is exactly identical:
<http://jsfiddle.net/85xwh/1/>
If you're not getting the expected message placement, it's either because your CSS is moving it, your HTML layout is blocking or wrapping it, or some combination of both. You have not shown enough code for me to duplicate anything meaningful.
However, if you want to try to apply some CSS to only the error messages, use something like this, assuming you're using the default error container, `<label>`, and the default error class, `.error`...
```
label.error {
/* my custom css */
}
```
|
SQL-Server Merge Statement with .NET DataTable
**Background:**
I'm trying to synchronize order information between Oracle and SQL-Server databases and allow users to be able to track any changes between what has been replicated via an ASP.NET web page. Users will be able to select which line items they want to be able to work on via this page and the remainder will be excluded going forward. Users will also have the option to change which items have been excluded at any time until the process has been finalized.
The initial load of this order information replicates (from Oracle to SQL-Server) all of the line items and presents them for assignment to the user. Any subsequent load of the order will compare replicated data to the original data. I'm using the following statement to handle the replication and reconciliation based on user selections:
```
CREATE TABLE #tempSpecOrderReplication
(
Ord_L_ID INT,
Order_Number INT,
Ord_ID INT,
Item_Number VARCHAR(50),
Quantity DECIMAL(18, 2),
UOM VARCHAR(50),
Price DECIMAL(18, 2),
Is_Spec_Order BIT
);
INSERT INTO #tempSpecOrderReplication
([Ord_L_ID], [Ord_ID], [Quantity], [UOM], [Price],
[Is_Spec_Order], [Item_Number], [Order_Number])
VALUES (...); --These values come from the .NET DataTable
MERGE Spec_Order_Replication WITH (HOLDLOCK) AS target
USING (SELECT * FROM #tempSpecOrderReplication) AS source
ON (target.Ord_L_ID = source.Ord_L_ID)
WHEN MATCHED THEN
UPDATE SET Is_Spec_Order = source.Is_Spec_Order
WHEN NOT MATCHED THEN
INSERT
(Ord_L_ID, Order_Number, Ord_ID, Item_Number,
Quantity, UOM, Price, Is_Spec_Order)
VALUES
(source.Ord_L_ID, source.Order_Number, source.Ord_ID, source.Item_Number,
source.Quantity, source.UOM, source.Price, source.Is_Spec_Order);
DROP TABLE #tempSpecOrderReplication
```
**Questions:**
1. Is there any way I can improve the existing SQL statement?
2. Is there a better way that I can go about this?
I'm using SQL-Server version 2008 R2.
Please ask if you need any further information.
|
# Well done.
I honestly cannot find anything bad to say about your SQL code. Your capitalization of keywords and indentation are consistent. Your query is properly explicit, there is no guessing work that the SQL engine would have to make. You clean up after your operation by dropping your `#tempSpecOrderReplication`.
The only thing that looked unusual, though not bad, is that you used square brackets in this clause but nowhere else. I would personally prefer it to be consistent throughout, if I had to maintain it. But it's just a nitpick.
```
INSERT INTO #tempSpecOrderReplication
([Ord_L_ID], [Ord_ID], [Quantity], [UOM], [Price],
[Is_Spec_Order], [Item_Number], [Order_Number])
VALUES (...); --These values come from the .NET DataTable
```
|
Specifying font for many Text-elements in Qt QML
I have a widget specified through a QML file. This widget contains a top level `Rectangle` which contains two `Columns`. Each of these `Columns` contains many `Text`-elements. This QML widget is wrapped in a subclass of `QDeclarativeView` in C++.
I want to specify the font for each of these `Text`-elements. Today I do this by specifying top-level properties:
```
property string fontfamily: "Arial"
property bool fontbold: false
property bool fontitalic: false
property int fontpixelsize: 11
property string fontcolor: "White"
```
and bind each `Text`-elements to these properties:
```
Text
{
color: fontcolor
font.family: fontfamily
font.bold: fontbold
font.italic: fontitalic
font.pixelSize: fontpixelsize
...
}
```
This isn't very elegant and new fields needs to be added every time I need support for something new (e.g. underlined fonts). I have not been able to declare a property of type `font` and bind to this instead (widget is blank and qmlviewer warns about "expected type after property").
Is there a better way to specify a font for all `Text`-elements?
Note! I'm handwriting the QML files.
|
Another possibility is to write a new QML component that inherits from `Text` and sets some properties by default:
**StyledText.qml**
```
import QtQuick 1.0
Text {
// set default values
color: "blue"
font.family: "Arial"
font.bold: true
font.italic: true
font.pixelSize: 12
}
```
**main.qml**
```
import QtQuick 1.0
Rectangle {
Row {
spacing: 10
Column {
StyledText {
text: "Foo1"
}
StyledText {
text: "Bar1"
}
StyledText {
text: "Baz1"
}
}
Column {
StyledText {
text: "Foo2"
}
StyledText {
text: "Bar2"
}
StyledText {
text: "Baz2"
}
}
}
}
```
|
Jenkins server --httpListenAddress=127.0.0.1 not working
recently I installed Jenkins server, and wanted to hide it behind Nginx proxy.
My Nginx proxy works fine, and I read that I should restrict Jenkins to `127.0.0.1:8080`; therefore, I edited the config file `/etc/default/jenkins` and put in the following line:
```
JENKINS_ARGS="--webroot=/var/cache/$NAME/war --httpPort=8080 --httpListenAddress=127.0.0.1"
```
After restarting jenkins, I still have access to Jenkins on port `8080`
Environment:
Ubuntu 20.04
OpenJDK 11
Jenkins 2.332.1
Netstat output:
```
sudo netstat -plnt
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 0.0.0.0:8080 0.0.0.0:* LISTEN 2313/java
tcp 0 0 0.0.0.0:80 0.0.0.0:* LISTEN 970/nginx: master p
tcp 0 0 127.0.0.53:53 0.0.0.0:* LISTEN 708/systemd-resolve
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 946/sshd: /usr/sbin
tcp 0 0 127.0.0.1:631 0.0.0.0:* LISTEN 757/cupsd
tcp6 0 0 :::80 :::* LISTEN 970/nginx: master p
tcp6 0 0 :::22 :::* LISTEN 946/sshd: /usr/sbin
tcp6 0 0 ::1:631 :::* LISTEN 757/cupsd
```
P.S. I tried on EC2/Amazom linux 2, same issue
|
As of Jenkins version 2.332.1, which you indicated you are running, Jenkins made the switch from running as a service using classic SysV init scripts over to fully integrating with systemd on Linux distributions that support it, which includes Ubuntu 20.04. I don't see any signs that the systemd unit file for Jenkins ever parses `/etc/default/jenkins`, meaning those settings are only parsed by the SysV init script, which would explain why your configuration had no effect there.
As you found, setting the environment variable in `/lib/systemd/system/jenkins.service` indeed works, but your instinct is absolutely correct that it is not best practice to directly edit the unit file managed by the packaging system. As with most things in Linux, the `/etc` directory is where administrators are meant to put their configuration files, and `/lib` and `/usr/lib` are reserved for the package manager, so luckily systemd is no exception to this and provides a mechanism for such changes.
Systemd has the concept of "drop-in" directories where you can place ".conf" files with partial systemd unit configurations whose directives will override those in the main unit file. From the [systemd.unit man page](https://www.freedesktop.org/software/systemd/man/systemd.unit.html):
>
> Along with a unit file `foo.service`, a "drop-in" directory `foo.service.d/` may exist. All files with the suffix ".conf" from this directory will be merged in the alphanumeric order and parsed after the main unit file itself has been parsed. This is useful to alter or add configuration settings for a unit, without having to modify unit files. Each drop-in file must contain appropriate section headers.
>
>
>
##### Here's how I set up Jenkins 2.332.1 on Ubuntu 20.04 using a systemd drop-in override to bind the listener to 127.0.0.1:
Verify Jenkins is running and listening on all addresses/interfaces:
```
$ sudo ss -tlnp | grep 8080
LISTEN 0 50 *:8080 *:* users:(("java",pid=2688,fd=116))
```
Create a systemd drop-in directory for Jenkins:
```
$ sudo mkdir /etc/systemd/system/jenkins.service.d
```
Create an override file using your favorite editor. You can name it whatever you want as long as it has a `.conf` extension. Personally, I prefer something descriptive and to begin with a number so that I can control the lexicographic order in which the files are parsed, should I ever end up with multiple override files. Given that, I created a file `/etc/systemd/system/jenkins.service.d/50-listen-address-override.conf` with the following content:
```
[Service]
Environment="JENKINS_LISTEN_ADDRESS=127.0.0.1"
```
Now, all we have to do is tell systemd that we made some changes we want it to reparse:
```
$ sudo systemctl daemon-reload
```
And we can restart Jenkins to give it its new config:
```
$ sudo systemctl restart jenkins
```
If we verify our work, we can now see that Jenkins is only bound to 127.0.0.1:
```
$ sudo ss -tlnp | grep 8080
LISTEN 0 50 [::ffff:127.0.0.1]:8080 *:* users:(("java",pid=31636,fd=116))
```
For what it's worth, you can also use the command `systemctl edit jenkins` to create the override, and systemd will create the drop-in directory and override file automatically for you and drop you into your default editor to write the file contents, however it does not give you the freedom to choose your own name for the override file, giving it instead a generic name of `override.conf`.
|
Java subclass method overriding
I'm pretty sure I have the inheritance overriding right, but I was hoping for confirmation:
```
class C1 {
public int relation( C1 MyC2 ){
//Do stuff
}
}
class C2 extends C1{
public int relation( C2 MyC2){
//Do stuff
return super.relation((C1) MyC2);
}
}
```
My understanding is that C2#relation() is not actually overriding C1#relation, and thus the following statements are all legal:
```
MyC1_1.relation(MyC1_2); //Calls C1#relation()
MyC2_1.relation(MyC1_1); //Calls C1#relation()
MyC2_1.relation(MyC2_2); //Calls C2#relation()
```
While the following is erroneous
```
MyC1_1.relation(MyC2_1);
```
If I'm right, then great. Otherwise, I'd love to know why...
|
In order to override a method, it must have a signature which is a *subsignature* of the overridden method's signature. As per Java Language Specification:
>
> The signature of a method m1 is a subsignature for the signature of a method
> m2 if either:
>
>
> - m2 has the same signature as m1, or
> - the signature of m1 is the same as the erasure of the signature of m2.
>
>
>
This basically means that methods must have the same name and arguments (and also taking into account type erasure).
Therefore in your case `relation(C1 c1)` can't be overridden by `relation(C2 c2)`. This means that class C1 has only the first method, and C2 has both. That's why your assumptions are absolutely correct, except for the last one (`MyC1_1.relation(MyC2_1)`), which is a correct statement too. C2 is a subclass of C1 and therefore C2 can be used wherever C1 can.
|
When a CSS selector matches N elements, how to get only the one at a certain index?
Here's where I'm at:
```
// Running this CSS selector:
$(".contact-form form")
// Returns this:
// [form, form]
```
I want to be able to get the first of these forms only, or perhaps the second only.
I've tried to use some CSS pseudoselectors but nothing has worked:
```
$(".contact-form form:first-of-type")
// [form, form]
$(".contact-form form:nth-of-type(1)")
// [form, form]
```
I've also tried using these pseudoselectors on the `.contact-form` instead of the `form`:
```
$(".contact-form")
// [div.modal-inner.contact-form, div.modal-inner.contact-form]
$(".contact-form:first-of-type")
// [div.modal-inner.contact-form, div.modal-inner.contact-form]
$(".contact-form:nth-of-type(1)")
// [div.modal-inner.contact-form, div.modal-inner.contact-form]
```
I understand that I can use array indexing to get the first or second but I am hoping there is a way to do it using just the CSS selector. This is because I have an existing API that accepts input for the CSS selector, but doesn't process other jQuery instructions.
|
You could use the [`.get()`](https://api.jquery.com/get/) method, as documented:
>
> Retrieve one of the elements matched by the jQuery object.
>
>
>
```
$(".contact-form form").get(0)
```
This gives you the DOM element. If instead you want a jQuery object for the same, then use [`.eq()`](https://api.jquery.com/eq/):
>
> Reduce the set of matched elements to the one at the specified index.
>
>
>
```
$(".contact-form form").eq(0)
```
So this is virtually equivalent to:
```
$($(".contact-form form").get(0))
```
There is no pure CSS solution for this, but jQuery has [its own extension of selectors](https://api.jquery.com/category/selectors/) and supports things like [`:eq(n)`](https://api.jquery.com/eq-selector/), [`:gt(n)`](https://api.jquery.com/gt-selector/), [`:lt(n)`](https://api.jquery.com/lt-selector/), [`:first`](https://api.jquery.com/first-selector/), [`:last`](https://api.jquery.com/last-selector/), [`:even`](https://api.jquery.com/even-selector/), [`:odd`](https://api.jquery.com/odd-selector/), ...
|
Calculating gravity in XNA
Alright, so I'm trying to create my own Physics engine for a 3D XNA game, and I'm having troubles calculating how much I should move my object by for gravity.
XNA's game timer occurs every 16 milliseconds, so after some calculations, and using 9.81 m/s² as the gravitational acceleration, you can see that you should increase the velocity of the object that has gravity by:
0.15696 m/s per 16 ms
- basically every update call should increase the object's downward velocity by 0.15696 m/s
The question is: how do I convert 0.15696 meters into pixels? Obviously if I just use a 1:1 relationship the object will only move 9.81 pixels/second, which does not really simulate gravity :P Does anyone have a good idea on how I can determine how many pixels I should move the object by?
Thanks for all the help!
|
Although 2d game dev is more pixel-centric, 3d game dev doesn't concern itself with pixels for this kind of stuff, but rather units. Unit size is completely arbitrary.
Typically, you have a velocity vector whose magnitude is equivalent to the meters(or feet or ??) per second that your object is going. That way the position is updated each frame by adding this velocity \* the elapsed time since last frame (the 16.6666 ms). The acceleration from gravity is added to the velocity vector in the same way:
```
Vector3 gravity = Vector3.Down * 9.81f;
//in the update loop
float elapsed = (float)gameTime.ElapsedGameTime.TotalSeconds;//0.01666667 (or 16.67 ms) @ 60FPS
velocity += gravity * elapsed;
position += velocity * elapsed;
```
In this example, I've arbitrarily established that 1 xna unit == 1 meter.
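As a sanity check on the arithmetic, the update loop above is just semi-implicit Euler integration. Here is a throwaway sketch, written in Python purely for brevity (the scheme is language-independent, and the starting values are made up):

```python
GRAVITY = -9.81      # m/s^2 along the vertical (down) axis
DT = 1.0 / 60.0      # frame time: ~16.67 ms at 60 FPS

velocity = 0.0       # m/s
position = 100.0     # meters above the ground; 1 world unit == 1 meter

for _ in range(60):              # simulate one second of free fall
    velocity += GRAVITY * DT     # acceleration integrates into velocity
    position += velocity * DT    # velocity integrates into position

fallen = 100.0 - position
# Discrete result is ~4.99 m, slightly above the analytic g/2 ~= 4.905 m,
# because this scheme applies each frame's new velocity for the whole frame.
print(round(fallen, 2))
```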
|
jQuery Datatable Sorted Row
**Note: I will be hiding the data column and hence I have to use jQuery Datatable API.**
For my jQuery DataTable, each row has a button beside it.
The purpose of the button is to retrieve the column data.
The column data will be hidden.
For my button click event, this is my code.
```
$('#Table').on('click', '.Button', function () {
var tr = $(this).closest("tr");
var rowindex = tr.index();
//Get row based on index
var rowData = $("#Table").DataTable().row(rowindex).data();
var data = rowData.Data;
});
```
This code is working, however there is one problem.
It is not able to retrieve the data of the sorted column.
For example, before Sorting,
Row 1 - Index 0 Data - A
Row 2 - Index 1 Data - B
After sorting,
Row 2 - Index 0 Data - B
Row 1 - Index 1 Data - A
Clicked on Data B row button,
Data Gotten: A
Hopefully I have explained my problem clear enough. Thanks!
Updated Fiddle: <https://jsfiddle.net/mt4zrm4b/3/>
|
You need to pass in your selector `tr` as the `rowSelector` parameter for [`row()`](https://datatables.net/reference/api/row()).
>
> DOM elements can be given as a row selector to select a row in the DataTables API from that DOM element. This can be useful for getting data from a row, or performing other row based operations, when you have only the DOM node for reference, for example in an event handler.
>
>
>
The reason is that when you sort, the row indexes that DataTables originally assigned don't get updated. It's recommended to select the row via your `tr` instead, like this:
```
$('#Table').on('click', '.Button', function() {
var tr = $(this).closest("tr");
// Get row based on tr instead.
var rowData = $("#Table").DataTable().row(tr).data();
var data = rowData.Data;
alert(data);
});
```
See this updated [fiddle](https://jsfiddle.net/mt4zrm4b/7/) for an example.
|
How to get username from AWS Cognito - Swift
Q & A Style: See Answer Below
How Can I get the username from a user logged in with Cognito?
I've done this and my user is logged in, now what?
```
AWSAuthUIViewController.presentViewController(
with: self.navigationController!,
configuration: config, completionHandler: { (provider: AWSSignInProvider, error: Error?) in
if error == nil {
//get parameters
} else {
print(error as Any)
}
})
```
|
Prerequisites:
- App registered with MobileHub
- Cognito Setup in MobileHub
- Mobilehub integrated with Swift Project using AWS SDK
If you're like me, you did this with little to no difficulty and now you're stuck trying to get the username and other parameters from the logged in user. There are a lot of answers, but thus far, I haven't stumbled upon one that gets you all the way there.
I was able to piece this together from various sources:
```
func getUsername() {
//to check if user is logged in with Cognito... not sure if this is necessary
let identityManager = AWSIdentityManager.default()
let identityProvider = identityManager.credentialsProvider.identityProvider.identityProviderName
if identityProvider == "cognito-identity.amazonaws.com" {
print("************LOGGED IN WITH COGNITO************")
let serviceConfiguration = AWSServiceConfiguration(region: .USWest2, credentialsProvider: nil)
let userPoolConfiguration = AWSCognitoIdentityUserPoolConfiguration(clientId: "YourClientID", clientSecret: "YourSecretKey", poolId: "YourPoolID")
AWSCognitoIdentityUserPool.register(with: serviceConfiguration, userPoolConfiguration: userPoolConfiguration, forKey: "YourPoolName (typically formatted as YourAppName_userpool_MOBILEHUB_12345678)")
let pool = AWSCognitoIdentityUserPool(forKey: "YourPoolName")
// the following line doesn't seem to be necessary and isn't used so I've commented it out, but it is included in official documentation
// let credentialsProvider = AWSCognitoCredentialsProvider(regionType: .USWest2, identityPoolId: "YourPoolID", identityProviderManager:pool)
if let username = pool.currentUser()?.username {
print("Username Retrieved Successfully: \(username)")
} else {
print("Error getting username from current user - attempt to get user")
let user = pool.getUser()
let username = user.username
print("Username: \(username)")
}
}
}
```
To get your ClientID, Secret Key, and PoolID, check your `awsconfiguration.json`
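For reference, the values typically live in a section of `awsconfiguration.json` shaped roughly like this (all values are placeholders, and the exact key names can vary between SDK versions):

```
{
  "CognitoUserPool": {
    "Default": {
      "PoolId": "us-west-2_XXXXXXXXX",
      "AppClientId": "YourClientID",
      "AppClientSecret": "YourSecretKey",
      "Region": "us-west-2"
    }
  }
}
```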
To get your PoolName, login to MobileHub, and in your project's backend, go to User Sign in, click Email and Password, then click Edit in Cognito. The following page will have your Pool Name as "YourAppName\_userpool\_MOBILEHUB\_12345678"
Edit: To get all of the attributes as well:
```
if let userFromPool = pool.currentUser() {
userFromPool.getDetails().continueOnSuccessWith(block: { (task) -> Any? in
DispatchQueue.main.async {
if let error = task.error as NSError? {
print("Error getting user attributes from Cognito: \(error)")
} else {
let response = task.result
if let userAttributes = response?.userAttributes {
print("user attributes found: \(userAttributes)")
for attribute in userAttributes {
if attribute.name == "email" {
if let email = attribute.value {
print("User Email: \(email)")
}
}
}
}
}
}
return nil
})
}
```
|
Java Thread Pool with a Bounded Queue
I'm using `java.util.concurrent`'s [`Executors`](http://download.oracle.com/javase/1.5.0/docs/api/java/util/concurrent/Executors.html) class to create a fixed thread pool for running request handlers for a web server:
```
static ExecutorService newFixedThreadPool(int nThreads)
```
and the description is:
>
> Creates a thread pool that reuses a fixed set of threads operating off a shared **unbounded** queue.
>
>
>
However, I am looking for thread pool implementation which will do the exact same thing, except with a **bounded** queue. Is there such an implementation? Or do I need to implement my own wrapper for the fixed thread pool?
|
What you want to do is create your own `ExecutorService`, probably using [ThreadPoolExecutor](http://download.oracle.com/javase/1.5.0/docs/api/java/util/concurrent/ThreadPoolExecutor.html). ThreadPoolExecutor has a constructor which takes a BlockingQueue; to get a bounded queue you can use, for example, an [ArrayBlockingQueue](http://download.oracle.com/javase/1.5.0/docs/api/java/util/concurrent/ArrayBlockingQueue.html) constructed with the desired capacity. You can also include a [RejectedExecutionHandler](http://download.oracle.com/javase/1.5.0/docs/api/java/util/concurrent/RejectedExecutionHandler.html) to determine what to do when your queue is full, or hang on to a reference to the blocking queue and use its offer methods.
Here's a mini example:
```
BlockingQueue<Runnable> linkedBlockingDeque = new LinkedBlockingDeque<Runnable>(100);
ExecutorService executorService = new ThreadPoolExecutor(1, 10, 30,
TimeUnit.SECONDS, linkedBlockingDeque,
new ThreadPoolExecutor.CallerRunsPolicy());
```
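To see the bounding behavior end to end, here is a small self-contained sketch (pool and queue sizes are arbitrary): with a single worker thread and a capacity-2 `ArrayBlockingQueue`, any task submitted while the queue is full is executed on the submitting thread by `CallerRunsPolicy` instead of being rejected, so nothing is lost:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class BoundedPoolDemo {

    // Submits 4 trivial tasks to a pool whose queue can only hold 2 of them.
    // Returns the number of tasks that actually executed.
    static int runDemo() throws InterruptedException {
        BlockingQueue<Runnable> queue = new ArrayBlockingQueue<>(2);
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                1, 1, 0L, TimeUnit.SECONDS, queue,
                new ThreadPoolExecutor.CallerRunsPolicy());

        AtomicInteger ran = new AtomicInteger();
        for (int i = 0; i < 4; i++) {
            // If the queue is full at this point, CallerRunsPolicy runs the
            // task right here on the submitting thread instead of dropping it.
            pool.execute(ran::incrementAndGet);
        }
        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.SECONDS);
        return ran.get();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("tasks executed: " + runDemo()); // tasks executed: 4
    }
}
```

Swap in `ThreadPoolExecutor.AbortPolicy` (the default) if you would rather get a `RejectedExecutionException` when the bound is hit.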
|
How to make a deb pkg for my pygtk app?
I wrote a pygtk app and want to make a deb pkg for it.
```
$ tree WebPad/
WebPad/
|-- jspad
| |-- __init__.py
| |-- main.py
| |-- t.py
|-- pixmaps
| |-- c.png
| |-- run.png
| `-- webpad.png
|-- README
|-- run.py
`-- templates
`-- webpad.tpl
3 directories, 19 files
```
Do I need to change the directory layout?
|
First of all, you need to create a `setup.py` file using [distutils](http://docs.python.org/distutils/index.html) in the root directory of your project. It should contain text similar to the following:
```
#!/usr/bin/env python
from distutils.core import setup
from glob import glob
setup(name = "WebPad",
version="0.1",
author="znetor",
packages=["jspad"],
data_files=[('share/webpad/pixmaps', glob("pixmaps/*")),
            ('share/webpad/templates', ['templates/webpad.tpl'])],
scripts=['run.py'])
```
And you should also create a MANIFEST.in file:
```
recursive-include pixmaps *
recursive-include templates *
```
Once you've done that, you can run various commands on the terminal to distribute your project:
```
chmod +x setup.py
sudo ./setup.py install # installs your project to /usr/local
./setup.py sdist # creates a source distribution in dist/
```
The last command is the one we're interested in. Once you've got a source distribution with a distutils setup.py script, you can then follow the [Python packaging guide for Ubuntu](https://wiki.ubuntu.com/PackagingGuide/Python). Basically, it involves creating a `debian/` directory in the root of your project with various bits of information and running `debuild`.
I wrote [a tutorial](http://ubuntuforums.org/showthread.php?t=1002909) on how to do this a while ago, some of it is not best practise, but it will help you understand a few concepts.
|
function to returning reference of iterator of object
I would like to write a function returning a reference to a map entry in order to update the value of the entry. However, it fails at the compile stage:
```
#include <map>
#include <iostream>
using namespace std;
pair<int, string>& map_find(map<int,string>& m, int k){
return *(m.find(k));
}
int main()
{
map<int,string> m;
// insert an entry
m.insert(make_pair<int,string>(128490,"def"));
// search entry by key
auto e = map_find(m,128490);
cout << e.first << e.second<<endl;;
e.second = "abc"; // Update value
// verify if the entry is updated
e = map_find(m,128490);
cout << e.first << e.second<<endl;;
return 0;
}
```
```
main.cpp: In function ‘std::pair<int, std::__cxx11::basic_string<char> >& map_find(std::map<int, std::__cxx11::basic_string<char> >&, int)’:
main.cpp:15:12: error: invalid initialization of reference of type ‘std::pair<int, std::__cxx11::basic_string<char> >&’ from expression of type ‘std::pair<const int, std::__cxx11::basic_string<char> >’
return *(m.find(k));
^~~~~~~~~~~~
```
|
The value type of `std::map<Key, Value>` is `std::pair<const Key, Value>`, not `std::pair<Key, Value>`. There are also useful member type aliases.
```
#include <map>
#include <iostream>
using MyMap = std::map<int, std::string>;
MyMap::reference map_find(MyMap& m, int k){
return *(m.find(k));
}
int main()
{
MyMap m;
m.emplace(128490,"def");
auto e = map_find(m,128490);
std::cout << e.first << e.second << std::endl;
// Does not update the string in the map, because e is a copy.
e.second = "abc";
e = map_find(m,128490);
std::cout << e.first << e.second << std::endl;
return 0;
}
```
|
Deadlock involving SELECT FOR UPDATE
I have transaction with several queries. First, a select rows with `FOR UPDATE` lock:
```
SELECT f.source_id FROM files AS f WHERE
f.component_id = $1 AND
f.archived_at IS NULL
FOR UPDATE
```
Next, there is an update query:
```
UPDATE files AS f SET archived_at = NOW()
WHERE
hw_component_id = $1 AND
f.source_id = ANY($2::text[])
```
And then there is an insert:
```
INSERT INTO files AS f (
source_id,
...
)
VALUES (..)
ON CONFLICT (component_id, source_id) DO UPDATE
SET archived_at = null,
is_valid = excluded.is_valid
```
I have two application instances and sometimes I see deadlock errors in PostgreSQL log:
```
ERROR: deadlock detected
DETAIL: Process 3992939 waits for ShareLock on transaction 230221362; blocked by process 4108096.
Process 4108096 waits for ShareLock on transaction 230221365; blocked by process 3992939.
Process 3992939: SELECT f.source_id FROM files AS f WHERE f.component_id = $1 AND f.archived_at IS NULL FOR UPDATE
Process 4108096: INSERT INTO files AS f (source_id, ...) VALUES (..) ON CONFLICT (component_id, source_id) DO UPDATE SET archived_at = null, is_valid = excluded.is_valid
CONTEXT: while locking tuple (41116,185) in relation \"files\"
```
I assume that it may be caused by `ON CONFLICT DO UPDATE` statement, which may update rows which are not locked by previous `SELECT FOR UPDATE`
But I can't understand how can `SELECT ... FOR UPDATE` query cause deadlock if it is the first query in transaction. There is not queries before it.
Can `SELECT ... FOR UPDATE` statement lock several rows and then wait for other rows in condition to be unlocked?
|
`SELECT FOR UPDATE` is no safeguard against deadlocks. It just locks rows. Locks are acquired along the way, in the order instructed by `ORDER BY`, or in arbitrary order in the absence of `ORDER BY`. The best defense against deadlocks is to lock rows in consistent order across the whole transaction - and doing likewise in all concurrent transactions. Or, as [the manual puts it](https://www.postgresql.org/docs/current/explicit-locking.html#LOCKING-DEADLOCKS):
>
> The best defense against deadlocks is generally to avoid them by being
> certain that all applications using a database acquire locks on
> multiple objects in a consistent order.
>
>
>
Else, this can happen (*row1*, *row2*, ... are rows numbered according to the virtual consistent order):
```
T1: SELECT FOR UPDATE ... -- lock row2, row3
T2: SELECT FOR UPDATE ... -- lock row4, wait for T1 to release row2
T1: INSERT ... ON CONFLICT ... -- wait for T2 to release lock on row4
--> deadlock
```
Adding **`ORDER BY`** to your `SELECT... FOR UPDATE` *may* already avoid your deadlocks. (It would avoid the one demonstrated above.) Or this happens and you have to do more:
```
T1: SELECT FOR UPDATE ... -- lock row2, row3
T2: SELECT FOR UPDATE ... -- lock row1, wait for T1 to release row2
T1: INSERT ... ON CONFLICT ... -- wait for T2 to release lock on row1
--> deadlock
```
Everything within the transaction must happen in consistent order to be absolutely sure.
Also, your `UPDATE` does not seem to be in line with the `SELECT FOR UPDATE`. `component_id` <> `hw_component_id`. Typo?
Also, `f.archived_at IS NULL` does not guarantee that the later `SET archived_at = NOW()` only affects these rows. You would have to add `f.archived_at IS NULL` to the `UPDATE`'s `WHERE` clause to be in line. (Seems like a good idea in any case?)
>
> I assume that it may be caused by `ON CONFLICT DO UPDATE` statement,
> which may update rows which are not locked by previous `SELECT FOR UPDATE`.
>
>
>
As long as the UPSERT (`ON CONFLICT DO UPDATE`) sticks to the consistent order, that wouldn't be a problem. But that may be hard or impossible to enforce.
>
> Can `SELECT ... FOR UPDATE` statement lock several rows and then wait for other rows in condition to be unlocked?
>
>
>
Yes, as explained above, locks are acquired along the way. It can have to stop and wait half way through.
If all that still can't resolve your deadlocks, the slow and sure method is to use [Serializable Isolation Level](https://www.postgresql.org/docs/current/transaction-iso.html#XACT-SERIALIZABLE). Then you have to be prepared for serialization failures and retry the transaction in this case. Considerably more expensive overall.
### `NOWAIT`
Or it might be enough to add `NOWAIT`:
```
SELECT FROM files
WHERE component_id = $1
AND archived_at IS NULL
ORDER BY id -- whatever you use for consistent, deterministic order
FOR UPDATE NOWAIT;
```
[The manual:](https://www.postgresql.org/docs/current/sql-select.html#SQL-FOR-UPDATE-SHARE)
>
> With `NOWAIT`, the statement reports an error, rather than waiting, if a selected row cannot be locked immediately.
>
>
>
You may even skip the `ORDER BY` clause with `NOWAIT` if you cannot establish consistent order with the UPSERT anyway.
Then you have to catch that error and retry the transaction. Similar to catching serialization failures, but much cheaper - and less reliable. For example, multiple transactions can still interlock with their UPSERT alone. But it gets less and less likely.
|
PHP Explode between an integer and letter
```
array (
[0] => 3 / 4 Bananas
[1] => 1 / 7 Apples
[2] => 3 / 3 Kiwis
)
```
Is it possible to, say, iterate through this list and `explode` between the first letter and first integer found, so I could separate the text from the set of numbers and end up with something like:
```
array (
[0] => Bananas
[1] => Apples
[2] => Kiwis
)
```
I have no idea how you would specify this as the delimiter. Is it even possible?
```
foreach ($fruit_array as $line) {
$var = explode("??", $line);
}
```
Edit: updated example. exploding by a space wouldn't work. see above example.
|
You could use `preg_match` instead of `explode`:
```
$fruit_array = array("3 / 4 Bananas", "1 / 7 Apples", "3 / 3 Kiwis");
$result = array();
foreach ($fruit_array as $line) {
preg_match("/\d[^A-Za-z]+([A-Za-z\s]+)/", $line, $match);
array_push($result, $match[1]);
}
```
It will almost literally match your expression, that is, a digit `\d`, followed by one or more non-letters `[^A-Za-z]+`, followed by one or more letters or whitespace (to account for multiple words) `[A-Za-z\s]+`. This final part, between parentheses, will be *captured* in the first capture group, i.e., `$match[1]`.
Here's a **[DEMO](http://ideone.com/FZJdt)**.
|
ChartJS Doughnut Charts Gradient Fill
So I tried to make a gradient fill for the ChartJS doughnut chart, but this only works horizontally, not in a circle.
This is the code that I'm using:
```
var ctx = document.getElementById("chart-area").getContext("2d");
var gradient1 = ctx.createLinearGradient(0, 0, 0, 175);
gradient1.addColorStop(0.0, '#ACE1DB');
gradient1.addColorStop(1.0, '#7FBDB9');
var gradient2 = ctx.createLinearGradient(0, 0, 400, 400);
gradient2.addColorStop(0, '#B5D57B');
gradient2.addColorStop(1, '#98AF6E');
var gradient3 = ctx.createLinearGradient(0, 0, 0, 175);
gradient3.addColorStop(0, '#E36392');
gradient3.addColorStop(1, '#FE92BD');
var gradient4 = ctx.createLinearGradient(0, 0, 0, 175);
gradient4.addColorStop(1, '#FAD35E');
gradient4.addColorStop(0, '#F4AD4F');
/* ADD DATA TO THE DOUGHNUT CHART */
var doughnutData = [
{
value: 80,
color: gradient1,
highlight: "#E6E6E6",
label: "NUTRIENTS"
},
{
value: 20,
color:"#E6F1EE"
},
{
value:50,
color: gradient2,
highlight: "#E6E6E6",
label: "PROTEINE"
},
{
value: 50,
color:"#E6F1EE"
},
{
value: 75,
color: gradient3,
highlight: "#E6E6E6",
label: "FETTE"
},
{
value:25,
color:"#E6F1EE"
},
{
value: 77,
color: gradient4,
highlight: "#E6E6E6",
label: "CARBS"
},
{
value: 23,
color:"#E6F1EE"
},
];
```
Is it possible to implement the gradient on a radius, as seen on this design?

Thanks!
|
ChartJS will not (properly) use gradient fill colors when drawing a linear gradient on non-linear paths like your donut chart. A linear gradient does not curve.
**Possibility #1 -- use a radial gradient**
You might experiment with a radial gradient and see if the results meets your design needs.
**Possibility #2 -- use a gradient stroke (a DIY project)**
Also, canvas's stroke will curve around a circle.
If you want to "roll-your-own" gradient donut chart, here's example code and a Demo that uses a gradient strokeStyle on a circular path (see my previous answer here: [Angle gradient in canvas](https://stackoverflow.com/questions/22223950/angle-gradient-in-canvas/22231473#22231473)):

```
var canvas=document.getElementById("canvas");
var ctx=canvas.getContext("2d");
function drawMultiRadiantCircle(xc, yc, r, radientColors) {
var partLength = (2 * Math.PI) / radientColors.length;
var start = 0;
var gradient = null;
var startColor = null,
endColor = null;
for (var i = 0; i < radientColors.length; i++) {
startColor = radientColors[i];
endColor = radientColors[(i + 1) % radientColors.length];
// x start / end of the next arc to draw
var xStart = xc + Math.cos(start) * r;
var xEnd = xc + Math.cos(start + partLength) * r;
// y start / end of the next arc to draw
var yStart = yc + Math.sin(start) * r;
var yEnd = yc + Math.sin(start + partLength) * r;
ctx.beginPath();
gradient = ctx.createLinearGradient(xStart, yStart, xEnd, yEnd);
gradient.addColorStop(0, startColor);
gradient.addColorStop(1.0, endColor);
ctx.strokeStyle = gradient;
ctx.arc(xc, yc, r, start, start + partLength);
ctx.lineWidth = 30;
ctx.stroke();
ctx.closePath();
start += partLength;
}
}
var someColors = [];
someColors.push('#0F0');
someColors.push('#0FF');
someColors.push('#F00');
someColors.push('#FF0');
someColors.push('#F0F');
drawMultiRadiantCircle(150, 150, 120, someColors);
```
```
body{ background-color: ivory; }
#canvas{border:1px solid red;}
```
```
<canvas id="canvas" width=300 height=300></canvas>
```
|
Long to wide data. Pandas
I'm trying to take my dataframe from a long format, in which I have a column with a categorical variable, to a wide format, in which each category has its own price column. Currently, my data looks like this:
```
date-time date vendor payment_type price
03-10-15 10:00:00 03-10-15 A1 1 50
03-10-15 10:00:00 03-10-15 A1 2 60
03-10-15 10:00:00 03-11-15 A1 1 45
03-10-15 10:00:00 03-11-15 A1 2 70
03-10-15 10:00:00 03-12-15 B1 1 40
03-10-15 10:00:00 03-12-15 B1 2 45
03-10-15 10:00:00 03-10-15 C1 1 60
03-10-15 10:00:00 03-10-15 C1 1 65
```
My goal is to have a column for every vendor's price and for each payment type and one row per day. When there are multiple values per day, I want to use the maximum value. The end result should look something like this.
```
Date A1_Pay1 A2_Pay2 ... C1_Pay1 C1_Pay2
03-10-15 50 60 ... 65 NaN
03-11-15 45 70 ... NaN NaN
03-12-15 NaN NaN ... NaN NaN
```
I tried using unstack and pivot, but I either wasn't getting what I was going for, or was getting an error about Date not being a unique index.
Any ideas?
|
You can use [`pivot_table`](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.pivot_table.html):
```
#convert column payment_type to string
df['payment_type'] = df['payment_type'].astype(str)
df = pd.pivot_table(df, index='date', columns=['vendor', 'payment_type'], aggfunc=max)
#remove top level of multiindex
df.columns = df.columns.droplevel(0)
#reset multicolumns
df.columns = ['_Pay'.join(col).strip() for col in df.columns.values]
print df
A1_Pay1 A1_Pay2 B1_Pay1 B1_Pay2 C1_Pay1
date
2015-03-10 50 60 NaN NaN 65
2015-03-11 45 70 NaN NaN NaN
2015-03-12 NaN NaN 40 45 NaN
```
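For reference, the whole transformation can be reproduced end to end. This is a hedged sketch that rebuilds the frame from the question's sample data and passes `values='price'`, which makes the `droplevel` step unnecessary:

```python
import pandas as pd

# Sample data taken from the question (column names assumed).
df = pd.DataFrame({
    'date': ['03-10-15', '03-10-15', '03-11-15', '03-11-15',
             '03-12-15', '03-12-15', '03-10-15', '03-10-15'],
    'vendor': ['A1', 'A1', 'A1', 'A1', 'B1', 'B1', 'C1', 'C1'],
    'payment_type': [1, 2, 1, 2, 1, 2, 1, 1],
    'price': [50, 60, 45, 70, 40, 45, 60, 65],
})

# Stringify payment_type so it joins cleanly into the column names.
df['payment_type'] = df['payment_type'].astype(str)
wide = pd.pivot_table(df, index='date',
                      columns=['vendor', 'payment_type'],
                      values='price', aggfunc='max')
# Flatten the (vendor, payment_type) MultiIndex into e.g. 'A1_Pay1'.
wide.columns = ['_Pay'.join(col) for col in wide.columns]
print(wide)
```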
EDIT:
If you need other statistics, you can add them as list to `aggfunc`:
```
#convert column payment_type to string
df['payment_type'] = df['payment_type'].astype(str)
df = pd.pivot_table(df, index='date', columns=['vendor', 'payment_type'],
aggfunc=[np.mean, np.max, np.median])
print df
mean amax median \
price price price
vendor A1 B1 C1 A1 B1 C1 A1 B1
payment_type 1 2 1 2 1 1 2 1 2 1 1 2 1 2
date
2015-03-10 50 60 NaN NaN 62.5 50 60 NaN NaN 65 50 60 NaN NaN
2015-03-11 45 70 NaN NaN NaN 45 70 NaN NaN NaN 45 70 NaN NaN
2015-03-12 NaN NaN 40 45 NaN NaN NaN 40 45 NaN NaN NaN 40 45
vendor C1
payment_type 1
date
2015-03-10 62.5
2015-03-11 NaN
2015-03-12 NaN
#remove top level of multiindex
df.columns = df.columns.droplevel(1)
#reset multicolumns
df.columns = ['_Pay'.join(col).strip() for col in df.columns.values]
```
```
print df
mean_PayA1_Pay1 mean_PayA1_Pay2 mean_PayB1_Pay1 \
date
2015-03-10 50 60 NaN
2015-03-11 45 70 NaN
2015-03-12 NaN NaN 40
mean_PayB1_Pay2 mean_PayC1_Pay1 amax_PayA1_Pay1 \
date
2015-03-10 NaN 62.5 50
2015-03-11 NaN NaN 45
2015-03-12 45 NaN NaN
amax_PayA1_Pay2 amax_PayB1_Pay1 amax_PayB1_Pay2 \
date
2015-03-10 60 NaN NaN
2015-03-11 70 NaN NaN
2015-03-12 NaN 40 45
amax_PayC1_Pay1 median_PayA1_Pay1 median_PayA1_Pay2 \
date
2015-03-10 65 50 60
2015-03-11 NaN 45 70
2015-03-12 NaN NaN NaN
median_PayB1_Pay1 median_PayB1_Pay2 median_PayC1_Pay1
date
2015-03-10 NaN NaN 62.5
2015-03-11 NaN NaN NaN
2015-03-12 40 45 NaN
```
|
How to change the value of variable with gdb
I have a file named exploit.c inside which:
```
#include <stdbool.h>
#include <stdio.h>
const char y1 = 'a';
const char y2 = 'b';
const char y3 = 'x';
const char y4 = 'y';
const char y5 = 'i';
const char y6 = 'j';
char x1 = 'f' ^ 'a';
char x2 = 'l' ^ 'b';
char x3 = 'a' ^ 'x';
char x4 = 'g' ^ 'y';
char x5 = 'y' ^ 'i';
char x6 = '-' ^ 'j';
int main() {
bool c = false;
if(c) { printf("The flag is: %c%c%c%c%c%c%c%c%c%c%c\n", x1 ^ y1, x2 ^ y2, x3 ^ y3, x4 ^
y4, x4 ^ y4, x5 ^ y5, x6 ^ y6, x1 ^ y1, x2 ^ y2, x3 ^ y3, x4 ^ y4); }
return 0;
}
```
I know I can print out the flag by changing the value of c. But I want to do it by assembling/executing. How can I print out the flag without changing the value of the boolean in the source, but with gcc and gdb?
|
You can do it via `gdb`. The first thing you have to know is that `c` is a local variable, which means it will be placed on the stack in an unoptimized build; an optimized build might remove the variable and the dead branch entirely.
At this point you have two options: you can either modify the memory location where the variable is (so somewhere on the stack) or you can use [gdb assignment](https://sourceware.org/gdb/current/onlinedocs/gdb/Assignment.html#Assignment).
To set the memory you can use something like `set *((char *) address_of_c) = 1`. (`bool` and `char` have the same size on most architectures, and bool uses 0 or non-0 as false/true.) You could probably just use `bool*`.
With gdb assignment the same memory zone will be modified, but that depends on GDB being able to use debug symbols (created by `gcc -g`) to find the name and location of a local variable within this function's stack frame. That's nice if it's available: you don't have to find the address of your variable manually.
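Concretely, a session might look like the following (a hedged sketch: the exact number of `next` steps depends on the compiler's line info, and `-O0` keeps `c` and the branch from being optimized away):

```
$ gcc -g -O0 exploit.c -o exploit
$ gdb -q ./exploit
(gdb) break main
(gdb) run
(gdb) next              # executes 'bool c = false;'
(gdb) set var c = 1     # flip the flag in its stack slot
(gdb) continue          # the if-branch now runs and prints the flag
```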
|
Capitalize the first word of a sentence (regex, gsub, gregexpr)
Suppose I have the following text:
```
txt <- as.character("this is just a test! i'm not sure if this is O.K. or if it will work? who knows. regex is sorta new to me.. There are certain cases that I may not figure out?? sad! ^_^")
```
I want to capitalize the first alphabetical character of a sentence.
I figured out the regular expression to match as: `^|[[:alnum:]]+[[:alnum:]]+[.!?]+[[:space:]]*[[:space:]]+[[:alnum:]]`
A call to `gregexpr` returns:
```
> gregexpr("^|[[:alnum:]]+[[:alnum:]]+[.!?]+[[:space:]]*[[:space:]]+[[:alnum:]]", txt)
[[1]]
[1] 1 16 65 75 104 156
attr(,"match.length")
[1] 0 7 7 8 7 8
attr(,"useBytes")
[1] TRUE
```
Which are the correct substring indices that match.
However, how do I implement this to properly capitalize the characters I need? I'm assuming I have to `strsplit` and then... ?
|
It appears that your `regex` did not work for your example, so I stole one from [this question](https://stackoverflow.com/questions/5553410/regular-expression-match-a-sentence).
```
txt <- as.character("this is just a test! i'm not sure if this is O.K. or if it will work? who knows. regex is sorta new to me.. There are certain cases that I may not figure out?? sad! ^_^")
print(txt)
gsub("([^.!?\\s])([^.!?]*(?:[.!?](?!['\"]?\\s|$)[^.!?]*)*[.!?]?['\"]?)(?=\\s|$)", "\\U\\1\\E\\2", txt, perl=T, useBytes = F)
```
|
Is it safe to remove the .oh-my-zsh directory?
At one point in time, I tried oh-my-zsh but it caused a lot of issues, so I'm back on bash. I'm trying to clean up some files and noticed there is an oh-my-zsh folder. The GitHub instructions tell me to run uninstall oh my zsh, but I don't see that script in my folder.
Is it safe to remove `.oh-my-zsh` folder?
|
`.oh-my-zsh` isn't used by anything but oh-my-zsh. If you use bash, you can just remove it.
The [instructions](https://github.com/robbyrussell/oh-my-zsh) tell you to run the command `uninstall_oh_my_zsh`. This is a function that you can invoke from zsh running oh-my-zsh. If you aren't running oh-my-zsh, you can run `tools/uninstall.sh`, but all it does is:
- remove `~/.oh-my-zsh`, which you were going to do anyway;
- switch your login shell to bash, which you've already done;
- restore your old `~/.zshrc`, which you didn't have if you never used zsh without oh-my-zsh.
You could also use zsh without oh-my-zsh.
|
Numpy, BLAS and CUBLAS
Numpy can be "linked/compiled" against different BLAS implementations (MKL, ACML, ATLAS, GotoBlas, etc). That's not always straightforward to configure but it is possible.
Is it also possible to "link/compile" numpy against NVIDIA's CUBLAS implementation?
I couldn't find any resources on the web, and before I spend too much time trying it I wanted to make sure that it is possible at all.
|
In a word: no, you can't do that.
There is a rather good scikit which provides access to CUBLAS from scipy called [`scikits.cuda`](https://github.com/lebedov/scikits.cuda/) which is built on top of [PyCUDA](http://mathema.tician.de/software/pycuda/). PyCUDA provides a `numpy.ndarray` like class which seamlessly allows manipulation of numpy arrays in GPU memory with CUDA. So you can use CUBLAS and CUDA with numpy, but you can't just link against CUBLAS and expect it to work.
There is also a commercial library that provides numpy and cublas like functionality and which has a Python interface or bindings, but I will leave it to one of their shills to fill you in on that.
|
Create a mapping operation so each input element produces 1 or more output elements?
Recently I am trying to figure out how to do some programming in Haskell.
I'm trying to do some simple operations. Right now I'm stuck with an operation like in this example:
```
input = [1,2,3,4]
output = [1,2,2,3,3,3,4,4,4,4]
```
That is, for each element `x` in `input`, produce `x` elements of `x` in `output`. So, for element `1` in input, append `[1]` to output. Then, for element `2` in input, append elements `[2,2]` to output. Then, for element `3`, append `[3,3,3]`, etc. The algorithm should work only on standard numbers.
I know it's very easy, and it's trivial to perform it in "normal" imperative programming, but as Haskell's functions are stateless, I'm having a problem in how to approach this.
Could anyone please give me some hint how can an absolute Haskell beginner cope with this?
|
You've just discovered monads!
Here's the general idea of what you're doing:
*For each `a`-element in the input (which is a container-type `M a`, here `[a]`), you specify an entire new container `M b`. But as a final result, you want just a single "flat" container `M b`.*
Well, let's take a look at the definition of the `Monad` type class:
```
class (Applicative m) => Monad m where
return :: a -> m a
(>>=) :: m a -> (a -> m b) -> m b
```
which is exactly what you need. And lists are an instance of `Monad`, so you can write
```
replicates :: [Int] -> [Int]
replicates l = l >>= \n -> replicate n n
```
Alternatively, this can be written
```
replicates l = do
n <- l
replicate n n
```
---
It might be interesting to know that the, perhaps easier to understand, list comprehension
```
replicates l = [ n | n <- l, _ <- [1..n] ]
```
as suggested by chi, is actually just syntactic sugar for another monad expression:
```
[ n | n <- l, _ <- [1..n] ] ≡ l >>= \n -> [1..n] >>= \_ -> return n
```
... or at least it used to be in some old version of GHC; I think it now uses a more optimised implementation of list comprehensions. You can still turn on that de-sugaring variant with the `-XMonadComprehensions` flag.
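As a self-contained check of the definition above:

```haskell
replicates :: [Int] -> [Int]
replicates l = l >>= \n -> replicate n n

main :: IO ()
main = print (replicates [1, 2, 3, 4])
-- [1,2,2,3,3,3,4,4,4,4]
```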
|
NodeJS WebSocket Handshake Silently Failing?
I'm trying to write a very simple websocket server in nodejs, and I'm running into an issue. On the browser, the WebSocket.onclose function is the only one that is fired (onopen, onmessage, and onerror are not). I've tested in Chrome7 and FireFox4. Here's my server code:
```
var http = require('http'),
net = require('net'),
crypto = require('crypto');
var server = http.createServer(function (req, res) {
console.log(req);
});
server.on('connection', function (stream) {
stream.setEncoding('utf8');
stream.setTimeout(0);
stream.setNoDelay(true);
stream.on('data', function (data) {
var sec1_regex = /Sec-WebSocket-Key1:(.*)/g;
var sec2_regex = /Sec-WebSocket-Key2:(.*)/g;
var origin_regex = /Origin: (.*)/g;
var protocol_regex = /Sec-WebSocket-Protocol: (.*)/g;
console.log(stream);
console.log("****Incoming****\r\n" + data);
var key1 = sec1_regex.exec(data)[1];
var num1 = parseInt(key1.match(/\d/g).join(''))/(key1.match(/\s/g).length - 1);
console.log("num1: " + num1);
var key2 = sec2_regex.exec(data)[1];
var num2 = parseInt(key2.match(/\d/g).join(''))/(key2.match(/\s/g).length - 1);
console.log("num2: " + num2);
var lastbytes = data.slice(-8);
var origin = origin_regex.exec(data)[1];
var md5 = crypto.createHash('md5');
md5.update(String.fromCharCode(num1 >> 24 & 0xFF, num1 >> 16 & 0xFF, num1 >> 8 & 0xFF, num1 & 0xFF));
md5.update(String.fromCharCode(num2 >> 24 & 0xFF, num2 >> 16 & 0xFF, num2 >> 8 & 0xFF, num2 & 0xFF));
md5.update(lastbytes);
var response = "HTTP/1.1 101 WebSocket Protocol Handshake\r\nUpgrade: WebSocket\r\nConnection: Upgrade\r\nSec-WebSocket-Origin: "
+ origin + "\r\nSec-WebSocket-Location: ws://127.0.0.1:8124/\r\n" +
md5.digest('binary');
stream.write(response, 'binary');
console.log("****Outgoing****\r\n" + response);
});
});
server.listen(8124, '127.0.0.1');
```
And my client code:
```
function connect() {
if (window.WebSocket) {
try {
ws = new WebSocket('ws://127.0.0.1:8124');
ws.onopen = function () {
alert("open");
};
ws.onclose = function() {
alert("close");
};
ws.onerror = function(err) {
alert("err!");
};
ws.onmessage = function() {
alert('message');
};
} catch (ex) {
alert(ex);
}
}
}
```
|
OK, there are a couple of things wrong here. The reason why only the `onclose` handler is fired is that the browser does not receive a valid handshake and therefore terminates the connection.
1. You always send `ws://127.0.0.1:8124/` as the location. The location should exactly match whatever the browser sends in the request; in this case it would most likely be `localhost:8124`, so you should return `ws://localhost:8124/` in such a case.
2. You're missing another `\r\n` after the response headers, so you're in fact not sending any body.
3. There seems to be something wrong with your calculation of the hash value; I'm still trying to figure out what, though.
For a working(and pretty small) implementation see here:
<http://github.com/BonsaiDen/NodeGame-Shooter/blob/master/server/ws.js>
|
Why does a Gaussian Process need to have a PSD kernel? Can I use a non-PSD kernel?
Is there an absolute need to use a PSD kernel for Gaussian processes (and maybe SVMs?)
For example, if I used a Minkowski distance with 0 < p < 1, the function is not convex, and thus I would assume that the matrix K would not be positive semi-definite.
In Gaussian Process regression, we would need to invert the covariance matrix to do inference. What would it mean to have negative eigenvalues in terms of this inference?
Is there any corrections that can be done so I can use a Minkowski distance function 0 < p < 1?
|
Say that $X \sim \mathcal{GP}(m(\cdot), k(\cdot, \cdot))$.
If $k$ is not a PSD kernel, then there is some set of $n$ points $\{ t\_i \}\_{i=1}^n$ and corresponding weights $\alpha\_i \in \mathbb R$ such that
$$\sum\_{i=1}^n \sum\_{j=1}^n \alpha\_i k(t\_i, t\_j) \alpha\_j < 0.$$
Now, consider the joint distribution of $\big(X(t\_i) \big)$. By the GP assumption, $$\operatorname{Cov}(X(t\_i), X(t\_j)) = k(t\_i, t\_j).$$
But then
$$
\operatorname{Var}\left( \sum\_{i=1}^n \alpha\_i X(t\_i) \right)
= \sum\_{i=1}^n \sum\_{j=1}^n \alpha\_i \operatorname{Cov}(X(t\_i), X(t\_j)) \alpha\_j
< 0
,$$
which is nonsensical. So a GP with a non-PSD kernel isn't a valid random process. How much that matters to your application depends on what you're doing with it, but the probabilistic foundations are definitely shot.
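To make the quadratic-form argument concrete, a small numeric sketch (the matrix is invented for illustration; it is symmetric but not PSD):

```python
import numpy as np

# Illustration only: a symmetric matrix that cannot be a covariance matrix.
K = np.array([[1.0, 2.0],
              [2.0, 1.0]])
print(np.linalg.eigvalsh(K))   # eigenvalues -1 and 3 -> not PSD

# The weights alpha realizing the negative quadratic form:
alpha = np.array([1.0, -1.0])
quad = alpha @ K @ alpha       # plays the role of Var(sum_i alpha_i X(t_i))
print(quad)                    # -2.0, a "negative variance"
```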
If you're just running an SVM or doing ridge regression (formally equivalent to GP regression), then the Hilbert space foundations are also definitely shot, but it's possible to define what you're doing in terms of a [Krein space](https://en.wikipedia.org/wiki/Indefinite_inner_product_space). There was a bit of work in the mid-to-late-aughts on this, but I think it was mostly abandoned because the theoretical motivation wasn't super satisfying and neither were the empirical results; I can dig out some of these papers if you want.
Another option is to "patch" your kernel to the closest (in some sense) PSD kernel on the particular points you consider. The following paper studied that; I have also used these techniques, but wouldn't generally recommend them if you can avoid it, because it adds a lot of headaches.
>
> Chen, Garcia, Gupta, Rahimi, and Cazzanti. Similarity-based Classification: Concepts and Algorithms. JMLR 2009 ([pdf](http://www.jmlr.org/papers/volume10/chen09a/chen09a.pdf)).
>
>
>
|
Why int a = 5 - '0'; is possible in java?
```
public static void main( String... args){
int a = 5 - '0';
System.out.println(a); //-43
Integer b = 5 - '0';
System.out.println(b); //-43
System.out.println(Integer.valueOf(a)); //-43
System.out.println(String.valueOf(b)); //-43
}
```
So I have two questions for this code.
- Why is `int a = 5 - '0';` possible? `5` is an int, which is OK, but next to it is a character, so why isn't an error thrown?
- Is the result based on the ASCII values? If so, how is the result calculated?
I know the ASCII value `43` is `+`, but will it convert `'0'` to its ASCII value and then do the operation?
|
[Doc](https://docs.oracle.com/javase/tutorial/java/nutsandbolts/datatypes.html):
char: The char data type is a single 16-bit Unicode character. It has a minimum value of '\u0000' (or 0) and a maximum value of '\uffff' (or 65,535 inclusive).
In this case it is ~~casted~~ converted to int, that is why it works.
**Correction**
This is called [Widening Primitive Conversion](https://docs.oracle.com/javase/specs/jls/se8/html/jls-5.html#jls-5.1.2). Thanks @Andreas!
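A quick way to see the widening at work (a minimal sketch):

```java
public class Main {
    public static void main(String[] args) {
        // A char widens to its code point (here, the ASCII value) before arithmetic.
        int zero = '0';
        System.out.println(zero);       // 48
        System.out.println(5 - '0');    // 5 - 48 = -43
        // The same widening is why c - '0' converts a digit char to its value:
        System.out.println('7' - '0');  // 7
    }
}
```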
+1 curiosity
```
public static void main(String[] args) {
int a = 5 - '0';
}
```
I compiled this to bytecode:
```
public static main([Ljava/lang/String;)V
L0
LINENUMBER 24 L0
BIPUSH -43
ISTORE 1
L1
LINENUMBER 25 L1
RETURN
L2
LOCALVARIABLE args [Ljava/lang/String; L0 L2 0
LOCALVARIABLE a I L1 L2 1
MAXSTACK = 1
MAXLOCALS = 2
}
```
Notice the line with `BIPUSH -43` which means this value is calculated buildtime not runtime!
|
jsp autocomplete="off" not working in Chrome or Firefox
I've made a modification to the following, to prevent a users username appearing in the login box.
```
<div class="row clearfix">
<label for="j_username">Username:</label>
<input tabindex="1" type="text" name="j_username" id="j_userName" class="text" value='<c:if test="${param.login_error == 'authFailure'}">${SPRING_SECURITY_LAST_USERNAME}</c:if>' />
<p class="forgot-password">
<a tabindex="5" href="forgot-username-password.htm">Forgot your username or password?</a></p>
</div>
<input tabindex="1" type="text" name="j_username" id="j_userName" **autocomplete="off"** class="text" value='<c:if test="${param.login_error == 'authFailure'}">${SPRING_SECURITY_LAST_USERNAME}</c:if>' />
```
I thought this would prevent usernames being saved, do you know why Chrome and FF seem to ignore this?
cheers,
Ben
|
I think the problem lies in the higher-level auto-form-filling behavior that browsers implement to assist ease of use. Even when specifying `autocomplete="off"`, Google Chrome will still fill in these passwords, but in a predictable manner that we can use to *break* this functionality, thus achieving the behavior we're looking for.
Note: One solid reason for disabling autocomplete of passwords is for CSRF prevention as outlined [here](https://www.owasp.org/index.php/Cross-Site_Request_Forgery_(CSRF)_Prevention_Cheat_Sheet#Client.2FUser_Prevention).
Consider the following:
```
<form action="login.asp" method="post" class="form-horizontal" name="login-form" role="form" autocomplete="off">
<input type="hidden" name="<%=STATIC_CSRF_TOKEN_NAME%>" value="<%=AntiXssInstance.HtmlAttributeEncode(GetExternalCSRFToken())%>" />
<input type="email" class="form-control" id="email" name="email" placeholder="Email address">
<input type="password" class="form-control" id="password" name="password" placeholder="Password">
<button type="submit" class="btn btn-success">Login</button>
</form>
```
Now compare it with this:
```
<form action="login.asp" method="post" class="form-horizontal" name="login-form" role="form" autocomplete="off">
<input type="hidden" name="<%=CSRF_TOKEN_NAME%>" value="<%=GetExternalCSRFToken()%>" />
<input type="email" class="form-control" id="email" name="email" placeholder="Email address">
<input type="password" style="display: none;" id="password-breaker" />
<input type="password" class="form-control" id="password" name="password" placeholder="Password">
<button type="submit" class="btn btn-success">Login</button>
</form>
```
Notice how I'm doubling up on password fields, but the one that contains the value I actually want to process will be properly named. The predictable manner in which browsers like Google Chrome "override" the `autocomplete="off"` attribute is to *search* for the `<input type="password" />` that comes immediately after the login ID field for which the browser has stored the user's password. This is why this attribute doesn't seem to be working in certain browsers, including Chrome.
However, by using this approach I've outlined, it essentially tricks the browser into pre-filling that `#password-breaker` field (which we're not going to use) with the user's password and leaves the actual `#password` field blank.
I haven't tested this in every possible browser and version as of yet, but I find it now to be a reasonable "cross-browser" compatibility workaround and complementary step to solely relying on the non-working `autocomplete` attribute.
|
Using fork() in C
I'm writing a program that uses the cpu power to process some information. The program depends on the CPU cores. If there are 2 cores, the program will fork() twice to create 2 instances of the work and return the results.
```
#define CORES 4
void worker(int id)
{
// blablabla work here
printf("worker %d\n",id);
printf("[%d] I'm child of %d\n",getpid(),getppid());
}
int main (int argc, const char * argv[])
{
int pid;
for (int i=0; i<CORES; i++)
{
pid = fork();
if (pid == 0) // if child
{
worker(i);
exit(0);
}
else if (pid>0)
{
printf("[%d] Big father here!\n",getpid());
}
else
{
printf("--- Fork problem ---");
}
}
return 0;
}
```
My questions:
1. What can I do so the program only terminates when ALL the child processes are done processing the required information? (I think they're becoming orphans.)
2. How do I measure the time from when the first process starts working until the last process terminates?
|
Use [`wait()`](http://linux.die.net/man/2/wait) to wait for children to terminate:
```
int status;
pid_t pid;
while ((pid = wait(&status)) != -1) {
// pid just terminated
}
// all children terminated
```
See [`man 2 wait`](http://linux.die.net/man/2/wait).
For measuring the time, see [`gettimeofday()`](http://linux.die.net/man/2/gettimeofday):
```
struct timeval tv = {0};
gettimeofday(&tv, NULL);
```
`struct timeval`:
```
struct timeval {
time_t tv_sec; /* seconds */
suseconds_t tv_usec; /* microseconds */
};
```
|
Can Haskell's monads be thought of as using and returning a hidden state parameter?
I don't understand the exact algebra and theory behind Haskell's monads. However, when I think about functional programming in general I get the impression that state would be modelled by taking an initial state and generating a copy of it to represent the next state. This is like when one list is appended to another; neither list gets modified, but a third list is created and returned.
Is it therefore valid to think of monadic operations as implicitly taking an initial state object as a parameter and implicitly returning a final state object? These state objects would be hidden so that the programmer doesn't have to worry about them, and to control how they get accessed. So, the programmer would not try to copy the object representing the IO stream as it was ten minutes ago.
In other words, if we have this code:
```
main = do
putStrLn "Enter your name:"
name <- getLine
putStrLn ( "Hello " ++ name )
```
...is it OK to think of the IO monad and the "do" syntax as representing this style of code?
```
putStrLn :: IOState -> String -> IOState
getLine :: IOState -> (IOState, String)
main :: IOState -> IOState
-- main returns an IOState we can call "state3"
main state0 = putStrLn state2 ("Hello " ++ name)
where (state2, name) = getLine state1
state1 = putStrLn state0 "Enter your name:"
```
|
No, that's not what monads in general do. However, your analogy is in fact **exactly correct** with regards to the data type `State s a`, which happens to be **a** monad. `State` is defined like this:
```
newtype State s a = State { runState :: s -> (a, s) }
```
...where the type variable `s` is the state value and `a` is the "regular" value that you use. So a value in "the State monad" is just a function from an initial state, to a return value and final state. The monadic style, as applied to `State`, does nothing more than automatically thread a state value through a sequence of functions.
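To make that threading concrete, here is a self-contained sketch (my own illustration, not part of the original answer) of the `State` type defined above, together with `get`/`put` helpers and a small counter:

```haskell
newtype State s a = State { runState :: s -> (a, s) }

instance Functor (State s) where
  fmap f (State g) = State $ \s -> let (a, s') = g s in (f a, s')

instance Applicative (State s) where
  pure a = State $ \s -> (a, s)
  State f <*> State g = State $ \s ->
    let (h, s')  = f s
        (a, s'') = g s'
    in (h a, s'')

instance Monad (State s) where
  State g >>= f = State $ \s ->
    let (a, s') = g s
    in runState (f a) s'

get :: State s s
get = State $ \s -> (s, s)

put :: s -> State s ()
put s' = State $ \_ -> ((), s')

-- A counter: return the old value, bump the state.
tick :: State Int Int
tick = get >>= \n -> put (n + 1) >> return n

-- runState (tick >> tick >> tick) 0  ==  (2, 3)
-- i.e. the last value returned, paired with the final state
```

Every `>>=` does exactly what the hand-written `state0`/`state1`/`state2` plumbing in the question does, just automatically.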
The `ST` monad is superficially similar, but uses magic to allow computations with *real* side-effects, but only such that the side effects can't be observed from outside particular uses of `ST`.
The `IO` monad is essentially an `ST` monad set to "more magic", with side effects that touch the outside world and only a single point where `IO` computations are run, namely the entry point for the entire program. Still, on some conceptual level, you can still think of it as threading a "state" value through functions the way regular `State` does.
However, **other monads** don't necessarily have anything whatsoever to do with threading state, or sequencing functions, or whatnot. The operations needed for something to be a monad are *incredibly* general and abstract. For instance, using `Maybe` or `Either` as monads lets you use functions that might return errors, with the monadic style handling escaping from the computation when an error occurs the same way that `State` threads a state value. Using lists as a monad gives you *nondeterminism*, letting you simultaneously apply functions to multiple inputs and see all possible outputs, with the monadic style automatically applying the function to each argument and collecting all the outputs.
|
Is it possible to subclass DataFrame in Pyspark?
The documentation for Pyspark shows DataFrames being constructed from `sqlContext`, `sqlContext.read()`, and a variety of other methods.
(See <https://spark.apache.org/docs/1.6.2/api/python/pyspark.sql.html>)
Is it possible to subclass Dataframe and instantiate it independently? I would like to add methods and functionality to the base DataFrame class.
|
It really depends on your goals.
- Technically speaking it is possible. `pyspark.sql.DataFrame` is just a plain Python class. You can extend it or monkey-patch it if you need to.
```
from pyspark.sql import DataFrame
class DataFrameWithZipWithIndex(DataFrame):
def __init__(self, df):
super(self.__class__, self).__init__(df._jdf, df.sql_ctx)
def zipWithIndex(self):
return (self.rdd
.zipWithIndex()
.map(lambda row: (row[1], ) + row[0])
.toDF(["_idx"] + self.columns))
```
Example usage:
```
df = sc.parallelize([("a", 1)]).toDF(["foo", "bar"])
with_zipwithindex = DataFrameWithZipWithIndex(df)
isinstance(with_zipwithindex, DataFrame)
```
```
True
```
```
with_zipwithindex.zipWithIndex().show()
```
```
+----+---+---+
|_idx|foo|bar|
+----+---+---+
| 0| a| 1|
+----+---+---+
```
- Practically speaking you won't be able to do much here. `DataFrame` is a thin wrapper around a JVM object and doesn't do much beyond providing docstrings, converting arguments to the form required natively, calling JVM methods, and wrapping the results using Python adapters if necessary.
With plain Python code you won't be able to even go near `DataFrame` / `Dataset` internals or modify their core behavior. If you're looking for a standalone, Python-only Spark `DataFrame` implementation, it is not possible.
|
How to make a custom button display a menu
I have a custom button in WPF that I want to use as my main menu button for style reasons. I am trying to create a menu similar to that of [Microsoft Office 2007](http://office.microsoft.com/en-us/word-help/what-happened-to-the-file-menu-HA010120473.aspx). I've added menu items to the button, but a menu does not open when I click on the button. In fact, I get a compiler error that says that `Content` has been set too many times. I think this may have to do with my image. What other code do I need to add or change in my button to make it actually open the menu when clicked, error free?
This is what I have under my button right now:
```
<!--- MAIN MENU BUTTON -->
<Button Width="50"
Height="50" HorizontalAlignment="Left" Margin="12,12,0,0" VerticalAlignment="Top">
<!--- MAIN MENU BUTTON IMAGE -->
<Button.Content>
<StackPanel Orientation="Horizontal">
<Image
Source="..."
Width="40"
Height="40" />
</StackPanel>
</Button.Content>
<!-- MENU COMMAND -->
<Menu>
<MenuItem x:Name="MainMenu">
<MenuItem Header="New" />
<MenuItem Header="Open" />
<MenuItem Header="Exit" Click="Exit_Click" />
</MenuItem>
</Menu>
</Button>
```
I've read examples that talk about binding, but I am not really sure how to do that in this specific instance.
Thank you.
|
You can use the *RoutedEvent* `Button.Click`, to show `ContextMenu`:
```
<Button Name="MainButton" Content="Button with ContextMenu" Width="150" Height="30">
<Button.ContextMenu>
<ContextMenu x:Name="MainContextMenu" PlacementRectangle="{Binding RelativeSource={RelativeSource Self}}">
<MenuItem Header="Main">
<MenuItem Header="Find" />
<MenuItem Header="Add" />
<MenuItem Header="View" />
<MenuItem Header="Edit" />
</MenuItem>
</ContextMenu>
</Button.ContextMenu>
<Button.Triggers>
<EventTrigger SourceName="MainButton" RoutedEvent="Button.Click">
<BeginStoryboard>
<Storyboard>
<ObjectAnimationUsingKeyFrames Storyboard.TargetName="MainContextMenu" Storyboard.TargetProperty="(ContextMenu.IsOpen)">
<DiscreteObjectKeyFrame KeyTime="0:0:0">
<DiscreteObjectKeyFrame.Value>
<sys:Boolean>True</sys:Boolean>
</DiscreteObjectKeyFrame.Value>
</DiscreteObjectKeyFrame>
</ObjectAnimationUsingKeyFrames>
</Storyboard>
</BeginStoryboard>
</EventTrigger>
</Button.Triggers>
</Button>
```
`Output`

**`Note:`** Add `sys` namespace like that:
```
xmlns:sys="clr-namespace:System;assembly=mscorlib"
```
to your `Window`:
```
<Window x:Class="ShowContextMenu.MainWindow"
xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
xmlns:sys="clr-namespace:System;assembly=mscorlib"
Title="MainWindow" Height="350" Width="525" .../>
```
|
Text color relative to image behind it
I have a strong feeling that this isn't possible - but that's why I'm here.
I have a site which has a full screen slideshow on the homepage and a header with a transparent background and dark text/content. The problem is that on darker images you cannot see the content as it is dark too.
Obviously I could give the div a background but this isn't what the design calls for.
Is there any way I can color the text relative to the background, possibly an 'invert' color effect or something similar?
Thanks in advance
|
1. store the information in the file name (easiest) - e.g. image1_light.png, imagex_dark.jpg and look at the _light or _dark suffix. OR
2. [How to pick good contrast RGB colors programmatically?](https://stackoverflow.com/questions/7260989/how-to-pick-good-contrast-rgb-colors-programmatically)
3. canvas/html5 [Using JavaScript or jQuery, how can I get the RGB color where ever the mouse is moving specially in <img> or <div> elements](https://stackoverflow.com/questions/7985722/using-javascript-or-jquery-how-can-i-get-the-rgb-color-where-ever-the-mouse-is)
The demo to look at is <http://www.script-tutorials.com/demos/158/index.html> where you need to determine the position of the text and see what the main color is under the text.
`$(".navigation").position()` is {top: 30, left: 381}
`$(".navigation").width())` is 214
`$(".navigation").height())` is 68
I would think the code would be something like this
[WORKING EXAMPLE](http://jsfiddle.net/mplungjan/jwL5S/)
```
var ctx,canvas;
$(function(){ // on page load, this should likely be on image transition
// creating canvas object in memory.
// Since I do not append it anywhere it is not rendered
canvas = $("<canvas/>")[0]; // a jQuery object on its own does not work
ctx = canvas.getContext('2d');
var image = new Image(); // create an image object in memory
image.onload = function () {
// resize
canvas.width=this.width;
canvas.height=this.height;
// render the image on the canvas
ctx.drawImage(image, 0, 0);
var nav = $(".navigation"); // get the navigation container
// find a pixel about in the middle where the navigation would be
var pos = {left: nav.position().left+(nav.width()/2), top: nav.position().top+(nav.height()/2) }
var pixel = ctx.getImageData(pos.left,pos.top, 1, 1).data;
canvas = null; // no longer need this canvas
var invertedPixelColor = "rgba("+(255-pixel[0])+", "+(255-pixel[1])+", "+(255-pixel[2])+", "+1+")"; // invert it, ignoring the alpha channel
nav.css("color",invertedPixelColor); // set the nav text to inverted color
// here you could save the colour and reuse it
// if the user navigates to the same image
}
image.src = $("body").css('background-image').replace(/^url\(["']?/, '').replace(/["']?\)$/, ''); // load the image (stripping the css url("...") wrapper), triggering the calc
});
```
|
Trying to download html page via url in JAVA. Getting some weird symbols instead
So I am trying to download this page <http://www.csfd.cz/film/895-28-dni-pote/prehled/>.
I am using this code:
```
URL url = new URL("http://www.csfd.cz/film/895-28-dni-pote/prehled/");
try(BufferedReader br = new BufferedReader(new InputStreamReader(url.openStream(),Charset.forName("UTF-8")))){
String line = br.readLine();
while(line != null){
System.out.println(line);
line = br.readLine();
}
```
It worked on some other pages, but now it is giving me some weird symbols. For example the second line I get is: "�\�?�����c��n��". (It has not been copied exactly as I see it in eclipse console.)
I think I am using UTF-8 encoding as is the page.
In case you are wondering it is in Czech.
Thanks for help.
|
```
$ curl -D- http://www.csfd.cz/film/895-28-dni-pote/prehled/
HTTP/1.1 200 OK
Server: nginx
Date: Mon, 01 Feb 2016 08:11:36 GMT
Content-Type: text/html; charset=utf-8
Transfer-Encoding: chunked
Connection: close
X-Frame-Options: SAMEORIGIN
X-Powered-By: Nette Framework
Vary: X-Requested-With
X-From-Cache: TRUE
Content-Encoding: gzip
▒}I▒▒▒▒^▒▒29B▒▒▒$R▒M▒$nER▒▒4X, @
etc....
```
Notice `Content-Encoding: gzip` - the content is compressed using gzip, and you will need to decompress it in order to use it.
Study the classes in `java.util.zip`, especially `GZIPInputStream`, which you can wrap around a regular input stream.
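A hedged sketch of that wrapping step (class and method names here are my own, not from any library):

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.util.zip.GZIPInputStream;

class GzipAware {
    /* Wrap the raw stream in a GZIPInputStream when the server
       reported Content-Encoding: gzip; pass it through otherwise. */
    static InputStream decode(InputStream raw, String contentEncoding)
            throws IOException {
        if ("gzip".equalsIgnoreCase(contentEncoding)) {
            return new GZIPInputStream(raw);
        }
        return raw;
    }

    /* Read the whole (already decoded) stream as UTF-8 text. */
    static String readAll(InputStream in) throws IOException {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        byte[] buf = new byte[4096];
        int n;
        while ((n = in.read(buf)) != -1) {
            out.write(buf, 0, n);
        }
        return out.toString("UTF-8");
    }
}
```

With `URLConnection` you would pass `conn.getContentEncoding()` as the second argument before handing the stream to your `BufferedReader`.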
|
Unexpected mix of '||' and '&&' no-mixed-operators
I have the following hook and we use it to bind to channels.
```
import { useEffect, useState } from "react";
import { Channel, PresenceChannel } from "pusher-js";
import PrivateChannel from 'pusher-js/types/src/core/channels/private_channel';
import { usePusher } from "./usePusher";
export type ChannelType = PrivateChannel | PresenceChannel
function _useChannel<T extends ChannelType>(channelName: string) {
const pusher = usePusher()
const [channel, setChannel] = useState<T>();
useEffect(() => {
const pusherChannel = pusher.subscribe(channelName) as T;
setChannel(pusherChannel);
return () => pusher?.unsubscribe(channelName);
}, [channelName, pusher]);
return channel;
}
```
When I open the console, I see this message: `Unexpected mix of '||' and '&&' no-mixed-operators`
Why does this happen?
|
As @Drew Reese said, somewhere you have a complex logic expression mixing `&&` and `||`, and you have configured ESLint to warn you when that happens. You will have to find where it's happening, and add the necessary parentheses to clarify your intention.
From [ESLint docs](https://eslint.org/docs/rules/no-mixed-operators)
```
var foo = a && b || c || d; /*BAD: Unexpected mix of '&&' and '||'.*/
var foo = a && b ? c : d; /*BAD: Unexpected mix of '&&' and '?:'.*/
var foo = (a && b) ? c : d; /*GOOD*/
var foo = (a && b) || c || d; /*GOOD*/
var foo = a && (b || c || d); /*GOOD*/
```
If you are uncertain about what the correct precedence of boolean operators should be, `&&` can be thought of as the equivalent of `multiplication` and `||` as `addition`, thus `&&` takes precedence over `||`.
For example:
```
a && b || c || d
<=> a*b + c + d
<=> (a && b) || c || d
```
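A quick standalone check (my own example, not from the thread) that `&&` indeed binds tighter than `||`:

```javascript
// ESLint's no-mixed-operators rule flags the first form even though
// the two expressions are equivalent, because && binds tighter than ||.
const mixed = true && false || true;      // what the rule flags
const explicit = (true && false) || true; // the same expression, spelled out
console.log(mixed === explicit); // true
```

The rule isn't claiming the code is wrong — only that the reader shouldn't have to remember the precedence table to follow it.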
|
f:websocket in JSF 2.3
I trying to get BalusC's JSF 2.3+ example ([How can server push asynchronous changes to a HTML page created by JSF?](https://stackoverflow.com/questions/3787514/how-can-server-push-asynchronous-changes-to-a-html-page-created-by-jsf/49852105#49852105)) working using JBoss WildFly 12.0.0.Final
I have added the JBoss JSF JAR:
```
jboss-jsf-api_2.3_spec-2.3.3.SP1.jar
```
However when I try and display my XHTML page which has:
```
<h:form>
<f:websocket channel="push">
<f:ajax event="updateNotifications" render=":panelGridSelect" />
</f:websocket>
</h:form>
```
I get:
>
> javax.faces.view.facelets.TagException: /enterProduct.xhtml @61,45
> Tag Library supports namespace:
> <http://xmlns.jcp.org/jsf/core>, but no tag was defined for name:
> websocket
>
>
>
In IntelliJ IDEA the:
```
f:websocket
```
is shown in red, as though it can't find it?
Where is the taglib for f:websocket?
|
`f:websocket` is available since JSF 2.3 and although Wildfly 12 contains **some** JavaEE 8 features, it by default starts in JavaEE 7 mode which is 'just' JSF 2.2.
From the very recent (edit: at time of writing ;-)) [WildFly 12 release documentation](http://wildfly.org/news/2018/02/28/WildFly12-Final-Released/)
>
> By default WildFly 12 launches in EE7 mode. In order to use these new capabilities you have to enable EE8 preview mode. This can be accomplished by passing the ee8.preview.mode property during startup:
>
>
> `./standalone.sh -Dee8.preview.mode=true`
>
>
>
There are other options too to enable this. From the same docs:
>
> Alternatively, this can be achieved by launching the server using the new standalone-ee8.xml configuration, which simply includes the property in the config.
>
>
> `./standalone.sh -c standalone-ee8.xml`
>
>
> The CLI can also be used to modify the existing configuration to add this property. For example:
>
>
> `embed-server --admin-only=true
> /system-property=ee8.preview.mode:add(value=true)
> stop-embedded-server`
>
>
>
You should then have JSF 2.3 and `f:websockets`
|
What is a python thread
I have several questions regarding Python threads.
1. Is a Python thread a Python or OS implementation?
2. When I use htop a multi-threaded script has multiple entries - the same memory consumption, the same command but a different PID. Does this mean that a [Python] thread is actually a special kind of process? (I know there is a setting in htop to show these threads as one process - `Hide userland threads`)
3. [Documentation](http://docs.python.org/library/threading.html#thread-objects) says:
>
> A thread can be flagged as a “daemon thread”. The significance of this
> flag is that the entire Python program exits when only daemon threads
> are left.
>
>
>
My interpretation/understanding was: main thread terminates when all non-daemon threads are terminated.
So python daemon threads are not part of Python program if "the entire Python program exits when only daemon threads are left"?
|
1. Python threads are implemented using OS threads in all implementations I know (C Python, PyPy and Jython). For each Python thread, there is an underlying OS thread.
2. Some operating systems (Linux being one of them) show all different threads launched by the same executable in the list of all running processes. This is an implementation detail of the OS, not of Python. On some other operating systems, you may not see those threads when listing all the processes.
3. The process will terminate when the last non-daemon thread finishes. At that point, all the daemon threads will be terminated. So, those threads are part of your process, but are not preventing it from terminating (while a regular thread will prevent it). That is implemented in pure Python. A process terminates when the system `_exit` function is called (it will kill all threads), and when the main thread terminates (or `sys.exit` is called), the Python interpreter checks if there is another non-daemon thread running. If there is none, then it calls `_exit`, otherwise it waits for the non-daemon threads to finish.
---
The daemon thread flag is implemented in pure Python by the `threading` module. When the module is loaded, a `Thread` object is created to represent the main thread, and its `_exitfunc` method is registered as an `atexit` hook.
The code of this function is:
```
class _MainThread(Thread):
def _exitfunc(self):
self._Thread__stop()
t = _pickSomeNonDaemonThread()
if t:
if __debug__:
self._note("%s: waiting for other threads", self)
while t:
t.join()
t = _pickSomeNonDaemonThread()
if __debug__:
self._note("%s: exiting", self)
self._Thread__delete()
```
This function will be called by the Python interpreter when `sys.exit` is called, or when the main thread terminates. When the function returns, the interpreter will call the system `_exit` function. And the function will terminate, when there are only daemon threads running (if any).
When the `_exit` function is called, the OS will terminate all of the process threads, and then terminate the process. The Python runtime will not call the `_exit` function until all the non-daemon threads are done.
All threads are part of the process.
---
>
> My interpretation/understanding was: main thread terminates when all
> non-daemon threads are terminated.
>
>
> So python daemon threads are not part of python program if "the entire
> Python program exits when only daemon threads are left"?
>
>
>
Your understanding is incorrect. For the OS, a process is composed of many threads, all of which are equal (there is nothing special about the main thread for the OS, except that the C runtime adds a call to `_exit` at the end of the `main` function). And the OS doesn't know about daemon threads. This is purely a Python concept.
The Python interpreter uses native threads to implement Python threads, but has to remember the list of threads created. And using its `atexit` hook, it ensures that the `_exit` function returns to the OS only when the last non-daemon thread terminates. When the documentation says "the entire Python program", it refers to the whole process.
---
The following program can help understand the difference between daemon thread and regular thread:
```
import sys
import time
import threading
class WorkerThread(threading.Thread):
def run(self):
while True:
print 'Working hard'
time.sleep(0.5)
def main(args):
use_daemon = False
for arg in args:
if arg == '--use_daemon':
use_daemon = True
worker = WorkerThread()
worker.setDaemon(use_daemon)
worker.start()
time.sleep(1)
sys.exit(0)
if __name__ == '__main__':
main(sys.argv[1:])
```
If you execute this program with the '--use_daemon' flag, you will see that the program will only print a small number of `Working hard` lines. Without this flag, the program will not terminate even when the main thread finishes, and the program will print `Working hard` lines until it is killed.
|
The difference between .mk file and Makefile
I've just begun to study Porting Android, and I've come across a new type of file: a .mk file. It's an extension of Makefile, but I don't know how it differs from a Makefile. Can somebody help me clarify this? Thanks very much!
|
A `make` file can have any name. The `-f` option of `make` is used to specify which file to use:
```
make -f foobar
```
You can even use `-f` several times:
```
make -f foo -f bar
```
In which case `make` processes the files in order (or, equivalently, concatenates the files and processes the result).
`makefile` and `Makefile` are special names because if `make` is called without the `-f` option it automatically searches for them, in this order, and use the first it finds. Note that GNU `make` also considers `GNUmakefile`, and prefers it over `makefile` and `Makefile`. Other `make` implementations can have other default names.
The `.mk` extension is a more or less standard extension for make files that have other names than the defaults. It is a reasonable extension if you want humans to quickly understand what these files are: `convert.mk` is more informative than `foobar`. Some editors use this extension to identify the file type and apply syntax coloring. They usually apply the same syntax coloring to `makefile` and `Makefile`.
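In practice (a generic sketch — though in the Android tree the fragments are indeed called `Android.mk` and are pulled in by the build system the same way), `.mk` files typically hold shared fragments that a top-level `Makefile` reads with the `include` directive:

```
# rules.mk -- shared fragment, never invoked directly
CFLAGS += -Wall -O2

# Makefile -- the file make finds by default
include rules.mk

app: main.c
	$(CC) $(CFLAGS) -o app main.c
```

Running plain `make` processes `Makefile`, which splices in `rules.mk` as if its lines appeared at the `include` point.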
|
Creating a Class of Objects within an Object
I have what might seem like a simple question, but for some reason I am having a problem getting my brain around the concept of having an object with multiple objects within it. For example, let's say we have an object with a header and a footer with multiple objects in between.
Like a report: the header would have the name and address. The footer would have a total of the items that were bought. In between would be the line items with the part number, description and a price.
I guess I can have an object with a header, footer and an array of line item objects, all with their own properties. I am using a report as the example, because it's the only concept that I can think of that will get close to explaining my question.
Can someone please send me a link to or explain on how to create this type of object(s).
I am using VS 2010 and VB.net and I can translate from C# to VB.
```
Report Object
Header Object
Property Name
Property Date
End
LineItem() Array Object
Property Part Number
Property Part Description
Property Number of Items
Property Per Item Price
Property Total price
End
Footer Object
Property Total Items count
Property Total Price
End
End
```
|
Jeff, in c# and at its most basic:
```
public class Report
{
// typical simple property in report
public string ReportUid { get; set; }
// object properties
public Header Header { get; set; }
public Body Body { get; set; }
public Footer Footer { get; set; }
public Report()
{
Header = new Header();
Body = new Body();
Footer = new Footer();
}
internal void CalculateFooterTotals()
{
// summarize the lineitems values in the footer
Footer.TotalItems = Body.LineItems
.Sum(x => x.Quantity);
Footer.TotalPrice = Body.LineItems
.Sum(x => x.Total);
}
}
public class Header
{
public string Name { get; set; }
public DateTime Date { get; set; }
}
public class Body
{
public IList<LineItem> LineItems { get; set; }
public Body()
{
LineItems = new List<LineItem>();
}
}
public class LineItem
{
public string PartNumber { get; set; }
public string PartDescription { get; set; }
public int Quantity { get; set; }
public float ItemPrice { get; set; }
// computed
public float Total
{
get { return Quantity * ItemPrice; }
}
}
public class Footer
{
// populated via report.CalculateFooterTotals()
public int TotalItems { get; internal set; }
public float TotalPrice { get; internal set; }
}
```
Some of the properties are of course computed, rather than get/set.
**[edit]** - thought it'd be good practice to add a bit of usage, as I saw you ask Douglas this question (more than likely from DB or other source):
```
// usage - set up report
var report = new Report {
ReportUid = Guid.NewGuid().ToString(),
Header =
{
Name = "My new report",
Date = DateTime.UtcNow
}};
// add lineitems to body (in real case, probably a loop)
report.Body.LineItems.Add(new LineItem()
{
Quantity = 1,
ItemPrice = 12.30f,
PartDescription = "New shoes",
PartNumber = "SHOE123"
});
report.Body.LineItems.Add(new LineItem()
{
Quantity = 3,
ItemPrice = 2.00f,
PartDescription = "Old shoes",
PartNumber = "SHOE999"
});
report.Body.LineItems.Add(new LineItem()
{
Quantity = 7,
ItemPrice = 0.25f,
PartDescription = "Classic Sox",
PartNumber = "SOX567"
});
// summarize the lineitems values in the footer
report.CalculateFooterTotals();
```
now apply report to your canvas surface (html etc..)
```
private static void DispalyData(Report report)
{
// set out the basics
Console.WriteLine("Header");
Console.WriteLine(report.ReportUid);
Console.WriteLine(report.Header.Date);
Console.WriteLine(report.Header.Name);
// now loop round the body items
Console.WriteLine("Items");
foreach (var lineItem in report.Body.LineItems)
{
Console.WriteLine("New Item---");
Console.WriteLine(lineItem.PartDescription);
Console.WriteLine(lineItem.Quantity);
Console.WriteLine(lineItem.ItemPrice);
Console.WriteLine(lineItem.PartNumber);
Console.WriteLine(lineItem.Total);
Console.WriteLine("End Item---");
}
// display footer items
Console.WriteLine("Footer");
Console.WriteLine(report.Footer.TotalItems);
Console.WriteLine(report.Footer.TotalPrice);
}
// called in code as:
DispalyData(report);
```
Hope this scans ok... pushed it to community wiki (via edits), as it's a universally sought after topic.
[btw] - altho you'll be aware of c# to vb.net convertors, I tried this one and it looks pretty promising: <http://www.developerfusion.com/tools/convert/csharp-to-vb>
|
How to deploy Dockerfile and application files to boot2docker
I'm just starting to learn Docker, and I'm trying to get a simple setup with boot2docker on Windows. I could not find in any tutorial or samples online how to make my project files accessible to the boot2docker VM.
I downloaded boot2docker, and succeeded in installing it and getting it up and running. I can also ssh into it and run the hello world docker example, so I know I have a working Docker installation in the VM.
**Now, the question**: How do I get my `Dockerfile` and my application files on the boot2docker VM so that I can build a Docker container?
In other words, I'm trying to do this from the boot2docker VM:
```
$ cd /some-directory-from-my-host-os
$ docker build .
```
I tried following the instructions on <https://github.com/boot2docker/boot2docker#folder-sharing> and setup a SAMBA share, and I can successfully connect from Windows to the share at `\\192.168.59.103\data` but that doesn't tell me how to access that same directory from within the boot2docker VM.
I feel like the answer might be obvious, but I'm sure I'm not the only one who will hit that wall. Thanks for your help!
---
# Update - Now Built-In in boot2docker 1.3!
Since boot2docker 1.3, the ability to mount a shared folder has been added, so the VM can share your dev folder directly. More information here: <https://github.com/boot2docker/boot2docker#virtualbox-guest-additions>
|
Not sure if it helps under windows. But for Mac see:
**boot2docker together with VirtualBox Guest Additions**
How to mount /Users into boot2docker
<https://medium.com/boot2docker-lightweight-linux-for-docker/boot2docker-together-with-virtualbox-guest-additions-da1e3ab2465c>
>
> tl;dr Build your own custom boot2docker.iso with VirtualBox Guest
> Additions (see [link](https://gist.github.com/mattes/2d0ffd027cb16571895c#file-readme-md)) or download
> <http://static.dockerfiles.io/boot2docker-v1.0.1-virtualbox-guest-additions-v4.3.12.iso>
> and save it to ~/.boot2docker/boot2docker.iso.
>
>
>
|
Python 2D list performance, without numpy
I am trying to create a 2D list in Python. I found two possibilities.
```
import numpy

def cArray(size):
c = [[0. for i in range(size)] for j in range(size)]
return c
def npArray(size):
np = numpy.zeros((size,size))
return np
```
Now both of these functions give the correct answer. The problem here is with the performance. I ran both of these using timeit, and here are my results:
```
list size is 5000
number of times run is 5
cArray average time: 3.73241295815
npArray average time: 0.210782241821
```
So obviously, I would like to avoid the first solution, especially since this will be running for sizes up to 100k. However, I also do not want to use too many dependencies. Is there a way for me to efficiently create a 2D array, without numpy? It doesn't need to be exactly up to speed, as long as it's not 17 times slower.
|
>
> So obviously, I would like to avoid the first solution, especially since this will be running for sizes up to 100k. However, I also do not want to use too many dependencies.
>
>
>
You must choose which of these is more important to you. Numpy has better performance precisely because it doesn't use the builtin Python types and uses its own types that are optimized for numerical work. If your data are going to be numeric and you're going to have 100k rows/columns, you will see a gigantic performance increase with numpy. If you want to avoid the numpy dependency, you will have to live with reduced performance. (Obviously you can always write your own Python libraries or C extensions to optimize for your particular use case, but these will then be dependencies like any other.)
Personally I would recommend you just use numpy. It is so widely used that anyone who is considering using a library that deals with 100k multidimensional arrays probably already has numpy installed.
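If numpy really must stay out, one framework-free trick (a sketch, not a full numpy replacement) is to back the grid with a single flat list and index it manually, which avoids allocating one list object per row:

```python
def flat_array(size):
    """2D array of zeros stored as one flat list; cell (r, c) lives at r * size + c."""
    return [0.0] * (size * size)

def get_cell(grid, size, r, c):
    return grid[r * size + c]

def set_cell(grid, size, r, c, value):
    grid[r * size + c] = value

grid = flat_array(3)
set_cell(grid, 3, 1, 2, 7.5)
print(get_cell(grid, 3, 1, 2))  # 7.5
```

The single multiplication allocates the whole buffer in one C-level operation, so construction is typically much faster than the nested comprehension, though element-wise math will still be far slower than numpy.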
|
Is there an easy way to create a FreeBSD rc script?
I have a FreeBSD jail in which I run a server using the command:
```
/usr/sbin/daemon /path/to/script.py
```
At the moment I have to run this command every time I restart the machine and the jail starts. I'd like to have this command started from `/etc/rc`. Is there an easy way to create a FreeBSD rc script for a `daemon` command?
---
**UPDATE:** I read through [this BSD documentation about rc scripts](http://www.freebsd.org/doc/en_US.ISO8859-1/articles/rc-scripting/rcng-daemon.html), and from that I created the following script in `/etc/rc.d/pytivo`:
```
#!/bin/sh
. /etc/rc.subr
name=pytivo
rcvar=pytivo_enable
procname="/usr/local/pytivo/pyTivo.py"
command="/usr/sbin/daemon -u jnet $procname"
load_rc_config $name
run_rc_command "$1"
```
This works to start the Python script I want as a daemon when the jail starts (given `pytivo_enable="YES"` is in `/etc/rc.conf`), but the rc script doesn't know if the daemon is running (it thinks it isn't when it is) and it gives a warning when I try to start it:
```
[root@meryl /home/jnet]# /etc/rc.d/pytivo start
[: /usr/sbin/daemon: unexpected operator
Starting pytivo.
[root@meryl /home/jnet]#
```
So it's *close*, and it *works*, but I feel like I should be able to get better functionality than this.
|
`command` should not contain multiple words. This is the cause of the `[` error you see. You should set any flags separately.
Also, you should use `pytivo_user` to set the running uid, and not `daemon -u`. See the [rc.subr(8)](http://www.freebsd.org/cgi/man.cgi?query=rc.subr&sektion=8) man page for all these magic variables.
Also, you should let the rc subsystem know that pytivo is a Python script so that it can find the process when it checks to see if it's running.
Finally, you should use the idiomatic `set_rcvar` for `rcvar`.
Something like this (I'm not sure this is the right Python path):
```
#!/bin/sh
# REQUIRE: LOGIN
. /etc/rc.subr
name=pytivo
rcvar=`set_rcvar`
command=/usr/local/pytivo/pyTivo.py
command_interpreter=/usr/local/bin/python
pytivo_user=jnet
start_cmd="/usr/sbin/daemon -u $pytivo_user $command"
load_rc_config $name
run_rc_command "$1"
```
|
Scaling / Normalizing pandas column
I have a dataframe like:
```
TOTAL | Name
3232 Jane
382 Jack
8291 Jones
```
I'd like to create a newly scaled column in the dataframe called `SIZE` where `SIZE` is a number between 5 and 50.
For Example:
```
TOTAL | Name | SIZE
3232 Jane 24.413
382 Jack 10
8291 Jones 50
```
I've tried
```
from sklearn.preprocessing import MinMaxScaler
import pandas as pd
scaler=MinMaxScaler(feature_range=(10,50))
df["SIZE"]=scaler.fit_transform(df["TOTAL"])
```
but got `Reshape your data either using array.reshape(-1, 1) if your data has a single feature or array.reshape(1, -1) if it contains a single sample.`
I've tried other things, such as creating a list, transforming it, and appending it back to the dataframe, among other things.
What is the easiest way to do this?
Thanks!
|
**Option 1**
`sklearn`
You see this problem time and time again, and the error message really tells you what to do: sklearn expects a 2D input, so the input is missing a dimension. Change `df["TOTAL"]` to `df[["TOTAL"]]`.
```
df['SIZE'] = scaler.fit_transform(df[["TOTAL"]])
```
```
df
TOTAL Name SIZE
0 3232 Jane 24.413959
1 382 Jack 10.000000
2 8291 Jones 50.000000
```
---
**Option 2**
`pandas`
Preferably, I would bypass sklearn and just do the min-max scaling myself.
```
a, b = 10, 50
x, y = df.TOTAL.min(), df.TOTAL.max()
df['SIZE'] = (df.TOTAL - x) / (y - x) * (b - a) + a
```
```
df
TOTAL Name SIZE
0 3232 Jane 24.413959
1 382 Jack 10.000000
2 8291 Jones 50.000000
```
This is essentially what the min-max scaler does, but without the overhead of importing scikit learn (don't do it unless you have to, it's a heavy library).
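If you find yourself doing this often, the same arithmetic can be wrapped in a small framework-free helper (a sketch; `lo`/`hi` are the target range):

```python
def rescale(values, lo=10.0, hi=50.0):
    """Min-max scale a sequence of numbers into the range [lo, hi]."""
    x, y = min(values), max(values)
    return [(v - x) / (y - x) * (hi - lo) + lo for v in values]

print(rescale([3232, 382, 8291]))  # matches the SIZE column above
```

Guard against `y == x` (all values equal) in real use, or the division is by zero.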
|
Split audio into several pieces based on timestamps from a text file with sox or ffmpeg
I looked at the following link: [Trim audio file using start and stop times](https://unix.stackexchange.com/questions/182602/trim-audio-file-using-start-and-stop-times?newreg=ff8310e737744266bfecb9b2003b2af3)
But this doesn't completely answer my question. My problem is: I have an audio file such as `abc.mp3` or `abc.wav`. I also have a text file containing start and end timestamps:
```
0.0 1.0 silence
1.0 5.0 music
6.0 8.0 speech
```
I want to split the audio into three parts using Python and `sox`/`ffmpeg`, resulting in three separate audio files.
How do I achieve this using either `sox` or `ffmpeg`?
Later I want to compute the [MFCC](https://en.wikipedia.org/wiki/Mel-frequency_cepstrum) corresponding to those portions using `librosa`.
I have `Python 2.7`, `ffmpeg`, and `sox` on an Ubuntu Linux 16.04 installation.
|
I've just had a quick go at it, very little in the way of testing so maybe it'll be of help. Below relies on [ffmpeg-python](https://kkroening.github.io/ffmpeg-python/), but it wouldn't be a challenge to write with `subprocess` anyway.
At the moment the time input file is just treated as pairs of times (start and end) and then an output name. Missing names are replaced with `linecount.wav`.
```
import ffmpeg
from sys import argv

""" split_wav `audio file` `time listing`

    `audio file` is any file known by local FFmpeg
    `time listing` is a file containing multiple lines of format:
        `start time` `end time` output name
    times can be either MM:SS or S*
"""

_in_file = argv[1]

def make_time(elem):
    # allow user to enter times on CLI
    t = elem.split(':')
    try:
        # will fail if no ':' in time, otherwise add together for total seconds
        return int(t[0]) * 60 + float(t[1])
    except IndexError:
        return float(t[0])

def collect_from_file():
    """user can save times in a file, with start and end time on a line"""
    time_pairs = []
    with open(argv[2]) as in_times:
        for l, line in enumerate(in_times):
            tp = line.split()
            tp[0] = make_time(tp[0])
            tp[1] = make_time(tp[1]) - tp[0]
            # if no name given, append line count
            if len(tp) < 3:
                tp.append(str(l) + '.wav')
            time_pairs.append(tp)
    return time_pairs

def main():
    for i, tp in enumerate(collect_from_file()):
        # open a file, from `ss`, for duration `t`
        stream = ffmpeg.input(_in_file, ss=tp[0], t=tp[1])
        # output to named file
        stream = ffmpeg.output(stream, tp[2])
        # this was to make trial and error easier
        stream = ffmpeg.overwrite_output(stream)
        # and actually run
        ffmpeg.run(stream)

if __name__ == '__main__':
    main()
```
|
Subscribing to a event in a closure while iterating through a collection results in the last item subscribing to the event
The code:
```
using System;
using System.Collections.Generic;

namespace so {
    public abstract class Feature {
        public void doIt() {
            Console.WriteLine( GetType().FullName );
        }
    }

    class A : Feature { }
    class B : Feature { }
    class C : Feature { }

    public class SSCCE {
        event EventHandler Click;

        static void Main( string[] args ) {
            SSCCE sscce = new SSCCE();
            List<Feature> features = new List<Feature>();
            features.Add( new A() );
            features.Add( new B() );
            features.Add( new C() );

            foreach ( Feature feature in features ) {
                sscce.Click += ( object sender, EventArgs e ) => { feature.doIt(); };
            }

            sscce.Click.Invoke( null, null );
        }
    }
}
```
The expected result:
```
so.A
so.B
so.C
```
The observed result:
```
so.C
so.C
so.C
```
In Java, putting the `final` keyword in front of `Feature` in the `foreach` loop allows the `feature` value to be used safely in the body of the lambda, before `.doIt()`.
**What's the good syntax in C#?**
|
The last feature gets captured by your lambda (it's a closure). You should create a local variable to capture the feature on each iteration:
```
foreach (Feature feature in features) {
    Feature current = feature;
    sscce.Click += (object sender, EventArgs e) => { current.doIt(); };
}
```
I suggest you to read [Closing over the loop variable](http://blogs.msdn.com/b/ericlippert/archive/2009/11/12/closing-over-the-loop-variable-considered-harmful.aspx) article at Eric Lippert's blog.
NOTE: This was fixed in C# 5: the `foreach` loop variable is now treated as fresh on each iteration, so the workaround is no longer needed there.
---
To understand what happens, let's take a look at what code is generated in your case (prior to C# 5). Because your lambda uses a local variable, it's not enough to generate a method; the compiler generates a private class which captures the local variables used in the lambda:
```
private sealed class AnonymousClass
{
    public Feature feature;

    public void AnonymousMethod(object sender, EventArgs e)
    {
        this.feature.doIt();
    }
}
```
And your code is modified so that it uses an instance of this AnonymousClass and subscribes its AnonymousMethod to the Click event:
```
using(var enumerator = ((IEnumerable<Feature>)features).GetEnumerator())
{
    AnonymousClass x = new AnonymousClass();
    while(enumerator.MoveNext())
    {
        x.feature = (Feature)enumerator.Current;
        sscce.Click += new EventHandler(x.AnonymousMethod);
    }
}
```
As you can see, you have subscribed the AnonymousMethod of the same AnonymousClass instance several times, and that instance will have feature equal to the last assigned feature. Now here is what changes when you copy the current feature to a local variable:
```
using(var enumerator = ((IEnumerable<Feature>)features).GetEnumerator())
{
    while(enumerator.MoveNext())
    {
        AnonymousClass x = new AnonymousClass();
        x.current = (Feature)enumerator.Current; // field has local variable name
        sscce.Click += new EventHandler(x.AnonymousMethod);
    }
}
```
In this case an AnonymousClass instance is created on each iteration, thus AnonymousMethods of different class instances (each with its own feature captured) will handle the Click event. Why the code differs: because, as Eric says, closures are closed over variables. In order to be closed over a local variable in the body of the loop, in the second case the instance of the anonymous class must be created inside the loop.
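For what it's worth, the same late-binding pitfall exists in other languages with closures; a small Python sketch of the problem and the per-iteration-capture fix:

```python
# Late binding: every lambda closes over the same variable `i`,
# so all of them see its final value.
handlers = [lambda: i for i in range(3)]
print([h() for h in handlers])  # [2, 2, 2]

# Fix: bind the current value per iteration (the default argument acts
# like the `Feature current = feature;` local copy).
handlers = [lambda i=i: i for i in range(3)]
print([h() for h in handlers])  # [0, 1, 2]
```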
|
Fetch a file from web using GLib/GIO from C
With what function should I fetch a file from the web using GLib/GIO libs?
If my file is from:
```
gchar *path = "http://xxx.yyyServer/sharing/temp.txt";
```
What should I do to download it?
For local files I simply use C library functions like `fopen` and `fread`.
How should I do the above?
There is unfortunately no examples of file handling in the Tutorials. I can only see a File chooser from File Dialog boxes.
---
UPDATED WITH WORKING CODE FROM COMMENTS:
The code below works for binary files of unknown sizes.
```
char *name = "http://127.0.0.1:8000/mybinfile";

int getFile(char *name)
{
    GFile *f = g_file_new_for_uri(name);
    GFileInputStream *fis = NULL;
    GDataInputStream *dis = NULL;
    GError *err = NULL;
    //char buffer[2048];
    char *buffer;
    size_t length;
    int ret = -1;
    GFileInfo *info;
    int total_size = -1;

    /* get input stream */
    fis = g_file_read(f, NULL, &err);
    if (err != NULL) {
        fprintf(stderr, "ERROR: opening %s\n", name);
        g_object_unref(f);
        return -1;
    }

    info = g_file_input_stream_query_info(G_FILE_INPUT_STREAM(fis),
                                          G_FILE_ATTRIBUTE_STANDARD_SIZE,
                                          NULL, &err);
    if (info)
    {
        if (g_file_info_has_attribute(info, G_FILE_ATTRIBUTE_STANDARD_SIZE))
            total_size = g_file_info_get_size(info);
        printf("total_size = %d\n", total_size);
        g_object_unref(info);
    }

    // fill buffer
    if (total_size > 0) {
        buffer = (char *) malloc(sizeof(char) * total_size);
        memset(buffer, 0, total_size);
        if ((length = g_input_stream_read(G_INPUT_STREAM(fis),
                                          buffer, total_size, NULL, &err)) != -1) {
            printf("reading file\n");
        }
        printf("File length = %zu\n", length);
        ret = 0;
    }

    // close streams
    g_object_unref(fis);
    g_object_unref(f);
    return ret;
}
```
|
HTTP is one of the protocols supported by GIO, so you can open an HTTP URI just like any other file when using the GIO functions instead of standard C functions. Just use [`g_file_new_for_uri`](http://developer.gnome.org/gio/stable/GFile.html#g-file-new-for-uri) to create the file object and then you can read it just like a local file.
You can use `g_file_read` to get a GFileInputStream for the given URI and then `g_data_input_stream_new` to get a GDataInputStream for the input stream, which you can then use to read the file line-by-line. You have to upcast the GFileInputStream to a GInputStream before you can pass it to `g_data_input_stream_new` (or before you can do anything else useful with it), but if you're programming GTK in C, you're probably used to that by now.
|
private message using websockets
How can we implement a private message system (client to client) using Websockets (and PHP)?
From what I understood, the server broadcasts the message and all of the clients can receive the message through connection.onmessage event handler. I was wondering if there is a way to send the message to only a targeted user in the websockets?
|
When a client sends a message (`ws.send( message );`) your WebSocket server will receive it. Which socket(s) that message is then sent on to is determined entirely by your server code - your implementation.
To create a one-to-one chat you need a way of routing data between just the two clients involved in the chat. You also need a way of authenticating that only those two clients can receive that information.
WebSocket frameworks tend to provide an additional [PubSub](http://en.wikipedia.org/wiki/Publish%E2%80%93subscribe_pattern) layer e.g. Pusher (who I previously worked for) do this using [channels](http://pusher.com/docs/channels). You will find similar terminology along with 'subjects' and 'topics'.
Once you have a way of routing data (chat messages) between just two clients you then need to consider authenticating the subscription. More information can be found on that via this question which asks [How do I create Private channel between two user](https://stackoverflow.com/a/15406580/39904). The question is about Ruby but is applicable to any technology.
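Concretely, the routing layer can be as small as a map from authenticated user id to connection. A minimal server-side sketch (in Python; `connections` and the `send` method are illustrative stand-ins for whatever your WebSocket library provides):

```python
# user id -> live websocket connection (names here are hypothetical)
connections = {}

def on_connect(user_id, conn):
    """Register a connection after the user has been authenticated."""
    connections[user_id] = conn

def on_private_message(sender_id, payload):
    """Deliver {'to': recipient_id, 'text': ...} to exactly one socket."""
    recipient = connections.get(payload["to"])
    if recipient is None:
        return False  # recipient offline: queue or drop, per your design
    recipient.send({"from": sender_id, "text": payload["text"]})
    return True
```

The key point is that nothing is broadcast: the server looks up the one target socket and writes only to it, and authentication decides which ids a client may address.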
|
How to drop all tables in a SQL Server database?
I'm trying to write a script that will completely empty a SQL Server database. This is what I have so far:
```
USE [dbname]
GO
EXEC sp_msforeachtable 'ALTER TABLE ? NOCHECK CONSTRAINT all'
EXEC sp_msforeachtable 'DELETE ?'
```
When I run it in the Management Studio, I get:
>
> Command(s) completed successfully.
>
>
>
but when I refresh the table list, they are all still there. What am I doing wrong?
|
It doesn't work for me either when there are multiple foreign key tables.
I found code that works and does everything you're trying to do (delete all tables from your database):
```
DECLARE @Sql NVARCHAR(500) DECLARE @Cursor CURSOR
SET @Cursor = CURSOR FAST_FORWARD FOR
SELECT DISTINCT sql = 'ALTER TABLE [' + tc2.TABLE_SCHEMA + '].[' + tc2.TABLE_NAME + '] DROP [' + rc1.CONSTRAINT_NAME + '];'
FROM INFORMATION_SCHEMA.REFERENTIAL_CONSTRAINTS rc1
LEFT JOIN INFORMATION_SCHEMA.TABLE_CONSTRAINTS tc2 ON tc2.CONSTRAINT_NAME =rc1.CONSTRAINT_NAME
OPEN @Cursor FETCH NEXT FROM @Cursor INTO @Sql
WHILE (@@FETCH_STATUS = 0)
BEGIN
Exec sp_executesql @Sql
FETCH NEXT FROM @Cursor INTO @Sql
END
CLOSE @Cursor DEALLOCATE @Cursor
GO
EXEC sp_MSforeachtable 'DROP TABLE ?'
GO
```
You can find the post [here](http://social.msdn.microsoft.com/forums/en-US/transactsql/thread/a512be8a-376f-4fc9-8243-78dbdbe59e55/). It is the post by Groker.
|
Why is a cast exception fatal in Java?
According to [this](https://thesocietea.org/2015/11/programming-concepts-static-vs-dynamic-type-checking/) article:
>
> In contrast to static type checking, dynamic type checking may cause a program to fail at runtime due to type errors. In some programming languages, it is possible to anticipate and recover from these failures – either by error handling or poor type safety. In others, type checking errors are considered fatal.
>
>
>
Java is an example where type checking errors are fatal. Why is Java (and maybe most statically typed languages) so strict that it fails at runtime if a type error occurs? Let's say you upcast object A (actual type `int`) to type `Object`, and downcast to `String`. Compile-time checking will pass, but the runtime type check will fail with a fatal exception. This seems harsh, as it's not like an `IllegalArgumentException` where the program literally *cannot* proceed (an invalid cast loses type safety but shouldn't be impossible). It would seem to me like the most optimal situation would be to throw a warning and then fail fatally if trying to call, say, `String` method `indexOf` on the integer object. Is there a reason for Java actually failing to proceed at runtime when trying to perform this invalid cast?
|
>
> Java is an example where type checking errors are fatal.
>
>
>
No it isn't. You can catch them.
>
> Why is Java (and maybe most static typed languages) so strict that it fails at runtime if a type error occurs?
>
>
>
Because that's what 'static typing' means; ditto 'strict typing'.
>
> Let's say you upcast object A (actual type int) to type Object, and downcast to String. Compile-time checking will pass, but the runtime type check will fail with a fatal exception.
>
>
>
No it won't. It will fail with an unchecked `RuntimeException.`
>
> This seems harsh, as it's not like an illegalArgumentException where the program literally cannot proceed (an invalid cast loses type safety but shouldn't be impossible).
>
>
>
But it is impossible. The cast cannot proceed. The way the object code is generated assumes so.
>
> It would seem to me like the most optimal situation would be to throw a warning
>
>
>
It throws a `RuntimeException.`
>
> and then fail fatally if trying to call, say, String method indexOf on the integer object.
>
>
>
Well it doesn't do that.
>
> Is there a reason for Java actually failing to proceed at runtime when trying to perform this invalid cast?
>
>
>
The implementation is designed around the fact that the exception occurs.
|
32-bit informix drivers in 64-bit windows server 2008 are unavailable
I have an **SSIS** package that needs to use **ODBC** to connect to an **Informix** data source. It runs fine on my 64-bit Win7 development box but blows up on 64-bit Windows Server 2008 with the following error message:
`The specified DSN contains an architecture mismatch between the Driver and Application.`
I can see the informix drivers when I use **odbcad32.exe** in **%windir%/SysWOW64/** and have created an ODBC connection there. I cannot see them when I use odbcad32.exe in %windir%/System32. The Google machine says the error message I'm seeing comes when I've created the ODBC connection using the wrong ODBC administrator engine. I'm using the **IBM Informix SDK 3.70.**
Maybe my Google-fu is weak, but I can't find anything useful on how to get Windows Server 2008 to allow me to see the 32-bit Informix drivers in the ODBC administrator engine from System32.
Anyone have any ideas/insights?
|
Two things to keep in mind, you have the ODBC drivers/configuration thing (odbcad32.exe) in the 32/64 bit space but you'll also have SSIS (dtexec.exe) also existing in the 32/64 bit space.
As I read it, you have installed the 32 bit driver on the server and created the 32 bit DSN (which despite the backwards sounding name **is** the one found in `%windir%/SysWOW64/`).
The trick now is to run your packages in 32 bit mode. If you are using SQL Agent, the SQL Server Integration Services job step has a checkbox for 32 bit mode. That only works for the SQL Agent though. If you are running it from the command line, or clicking it via the gui, you'll need to start with the correct exe.
32 bit
```
C:\Program Files (x86)\Microsoft SQL Server\100\DTS\Binn\DTExec.exe
C:\Program Files (x86)\Microsoft SQL Server\110\DTS\Binn\DTExec.exe
C:\Program Files (x86)\Microsoft SQL Server\90\DTS\Binn\DTExec.exe
```
64 bits
```
C:\Program Files\Microsoft SQL Server\100\DTS\Binn\DTExec.exe
C:\Program Files\Microsoft SQL Server\110\DTS\Binn\DTExec.exe
C:\Program Files\Microsoft SQL Server\90\DTS\Binn\DTExec.exe
```
|
How to write text in a cell from top to bottom in openpyxl
I'd like to write text in a cell from top to bottom just like the vertical text orientation in EXCEL using openpyxl packages in python.
But I can't do that.
Could anyone please help me?
[I uploaded the exact image I want](https://i.stack.imgur.com/CtwOq.png)
|
Use the `alignment` attribute of a cell and set the `textRotation` to the needed angle (1-180). You can read more about it in the openpyxl documentation - [Here](https://openpyxl.readthedocs.io/en/stable/api/openpyxl.styles.alignment.html#openpyxl.styles.alignment.Alignment.textRotation).
Code Example:
```
from openpyxl import Workbook
from openpyxl.styles import Alignment
wb = Workbook()
ws = wb.active
ws['A1'] = 'Example'
ws['A1'].alignment = Alignment(textRotation=180)
wb.save('Example.xlsx')
```
Output:
[](https://i.stack.imgur.com/buCyj.png)
If you want the characters to be written horizontally but the word vertically, just set `textRotation` to `255`:
```
from openpyxl import Workbook
from openpyxl.styles import Alignment
wb = Workbook()
ws = wb.active
ws['A1'] = 'Hello'
ws['A1'].alignment = Alignment(textRotation=255)
wb.save('Example.xlsx')
```
Output:
[](https://i.stack.imgur.com/Yca3h.png)
|
SVN, see details while I'm committing files
simple question for svn experts:
Right now I'm doing a first commit of a big project (no need to explain details), and it's almost 2 GB of files.
The commit is taking long, and I see the dots moving forward as usual. So my question is: is there a way to show detailed progress for my commits? I don't know the real progress, how much was already uploaded, or how much is pending.
Thanks
|
There's no way to see more detail with the command line client at current, but you could certainly ask for it by sending an email suggesting this to [email protected]. We actually have the information to print more data (transmission speed and percent done might be tricky, but we could at least provide which file is being transmitted). If you're not willing to wait for Subversion itself you could modify the client to print whatever you wanted, the notify function isn't terribly complicated.
Unfortunately, David W.'s answer is not really accurate. The impression that the periods are displayed while the server tries to execute the commit and that the data is already on the server is entirely wrong. The periods are displayed as each text delta for file contents (which may be full text in some cases) starts being transmitted (as implied by the "Transmitting file data" that is printed just before the periods start). So the first period will be displayed when the first text is being sent, second after it finishes and the 2nd is about to start and so on. So if you know how many files you're committing you can get some idea as to how far along it is from this output.
Super technical internal details follow. The output is implemented in the notify callback function set in the [svn\_client\_ctx\_t](https://people.apache.org/~brane/svndocs/capi/structsvn__client__ctx__t.html) (ctx is short for context). For a commit you can see this being called from the svn\_client\_\_do\_commit function as implemented in [subversion/libsvn\_client/commit\_util.c](https://svn.apache.org/repos/asf/subversion/trunk/subversion/libsvn_client/commit_util.c) as follows:
```
if (ctx->notify_func2)
  {
    svn_wc_notify_t *notify;
    notify = svn_wc_create_notify(item->path,
                                  svn_wc_notify_commit_postfix_txdelta,
                                  iterpool);
    notify->kind = svn_node_file;
    notify->path_prefix = notify_path_prefix;
    ctx->notify_func2(ctx->notify_baton2, notify, iterpool);
  }
```
The event being sent by that code is a [svn\_wc\_notify\_commit\_postfix\_txdelta](https://people.apache.org/~brane/svndocs/capi/group__svn__wc__notifications.html#gga816dd065327746637bb9af6322959dfba849f0b67e4b6bb9a773a595af7c36a2d), which if you look at the context of that code you'll see is sent just before the text delta is transmitted for a path.
Herein, lies a tiny sliver of truth in David's answer above. The tree delta has already been transmitted by this point (that means copies, deletes, mkdir, etc have already been sent). But as the event and the code context implies, the text deltas are delayed to be sent after all of that is finished. That doesn't particularly help avoid long commits that end up failing because they are out of date, as David points out, but ultimately even if we optimized failures to happen during the delta editor drive we'd still have to check at the point of trying to convert the transaction (that is being built as the edit drive is being done) into a revision for any out of date situations because we don't lock the repository during a transaction (this helps keep Subversion from being less responsive when lots of people are committing). I believe we have actually tried to make the server return errors sooner, but I haven't dug around in that code much lately so I'm not going to try and make definitive statements about that.
The time spent actually processing the transaction after receiving all the data is relatively small (as by design since we have to take out a write lock on the repo during this processing). There are two big drains on Subversion commit speeds. In the case of DAV (http/https) most steps of the delta drive take place in separate HTTP requests (as per the design of DAV and the DeltaV standards). These requests are executed serially causing round trip delays. That includes sending file contents. This contributes greatly to delays when committing lots of small changes over DAV. You can avoid this issue by using svnserve instead of DAV if this is a common case for you. In the future we hope to help mitigate some of this with pipelining. The second issue is that we cannot send text deltas of multiple files in parallel. This impacts all the various network protocols and at current can't be implemented because the file system implementation for transactions doesn't support it. We hope to resolve this in the future, switching DAV from neon to serf is a step towards this since serf has more capabilities in this area.
Back to the notify situation. The notify callback function for the command line client is implemented in [subversion/svn/notify.c](https://svn.apache.org/repos/asf/subversion/trunk/subversion/svn/notify.c) in a function called notify(). If you search for svn\_wc\_notify\_commit\_postfix\_txdelta, you'll find this code:
```
case svn_wc_notify_commit_postfix_txdelta:
  if (! nb->sent_first_txdelta)
    {
      nb->sent_first_txdelta = TRUE;
      if ((err = svn_cmdline_printf(pool,
                                    _("Transmitting file data "))))
        goto print_error;
    }
  if ((err = svn_cmdline_printf(pool, ".")))
    goto print_error;
  break;
```
Which is where your periods get printed. The n argument to the function is a [svn\_wc\_notify\_t](https://people.apache.org/~brane/svndocs/capi/structsvn__wc__notify__t.html) which has all sorts of fields to provide information. In this case the path, action, kind, and path\_prefix should all be set. So it should be possible to easily display the filename being transmitted with some simple modifications.
|
Matching closest longitude/latitude
Using Maxmind's GeoIP software, we can narrow down the LONG/LAT of an IP address to relative accuracy within 25 miles around 80% of the time.
Now, we don't want to use any of the other information provided by MaxMind to perform a lookup, because there are a lot of discrepancies between feature names (i.e., cities). We plan on attempting such a lookup if other methods fail to locate a feature, but for performance reasons, lookups on floats are much faster than on strings.
Now, I'm a little clueless on how we can match the closest LAT/LONG given by MaxMind to our database. The problem is, our database features have much higher precision than MaxMind's, so a straight comparison might not be effective. If we try applying ROUND() to the column during the query, that will obviously be really slow.
Given the following data, would the fastest way simply be something like
LONG 79.93213
LAT 39.13111
```
SELECT `feature_name` FROM `geo_features`
WHERE long BETWEEN 79.93 AND 79.94
AND lat BETWEEN 39.13 AND 39.14
```
Can anyone think of an elegant solution that will be blazing fast? I know there are some new spatial storage types in MySQL 5; perhaps someone can provide a solution beyond the blinders I seem to have put up on myself.
|
The elegant (**more accurate**) way of doing this (*but not blazing fast*)
```
// Closest within radius of 25 Miles
// 37, -122 are your current coordinates
// To search by kilometers instead of miles, replace 3959 with 6371
SELECT feature_name,
( 3959 * acos( cos( radians(37) ) * cos( radians( lat ) )
* cos( radians( long ) - radians(-122) ) + sin( radians(37) )
* sin( radians( lat ) ) ) ) AS distance
FROM geo_features HAVING distance < 25
ORDER BY distance LIMIT 1;
```
**Edit**
This is [Haversine formula](http://en.wikipedia.org/wiki/Haversine_formula) for calculating circular distance from geo-coordinates. Here are some implementation of this formula in [different platforms](http://www.movable-type.co.uk/scripts/latlong.html)
```
R = earth’s radius (mean radius = 6,371km)
Δlat = lat2− lat1
Δlong = long2− long1
a = sin²(Δlat/2) + cos(lat1).cos(lat2).sin²(Δlong/2)
c = 2.atan2(√a, √(1−a))
d = R.c
// Note that angles need to be in radians to pass to Trigonometric functions
```
|
Do databases besides Postgres have features comparable to foreign data wrappers?
I'm very excited by several of the more recently-added Postgres features, such as foreign data wrappers. I'm not aware of any other RDBMS having this feature, but before I try to make the case to my main client that they should begin preferring Postgres over their current cocktail of RDBMSs, and include in my case that no other database can do this, I'd like to verify that.
I've been unable to find evidence of any other database supporting SQL/MED, and things like this short note stating that [Oracle does not support SQL/MED](http://docs.oracle.com/cd/E11882_01/server.112/e26088/ap_standard_sql007.htm#SQLRF55527).
The main thing that gives me doubt is a statement on <http://wiki.postgresql.org/wiki/SQL/MED>:
>
> SQL/MED is Management of External Data, a part of the SQL standard that deals with how a database management system can integrate data stored outside the database.
>
>
>
If FDWs are based on SQL/MED, and SQL/MED is an open standard, then it seems likely that other RDBMSs have implemented it too.
# TL;DR:
Does any database besides Postgres support SQL/MED?
|
- [IBM DB2](http://www.ibm.com/developerworks/data/library/techarticle/0203haas/0203haas.html#iso) claims compliance with SQL/MED (including full FDW API);
- [MySQL](https://dev.mysql.com/doc/refman/5.1/en/federated-storage-engine.html)'s FEDERATED storage engine [can connect to another MySQL database, but ***NOT*** to other RDBMSs](https://dev.mysql.com/doc/refman/5.0/en/federated-limitations.html);
- [MariaDB](https://mariadb.com/kb/en/introduction-to-the-connect-engine/)'s CONNECT engine allows access to various file formats (CSV, XML, Excel, etc), gives access to "any" ODBC data sources (Oracle, DB2, SQLServer, etc) and can access data on the storage engines MyIsam and InnoDB.
- [Farrago](http://farrago.sourceforge.net/design/sqlmed.html) has some of it too;
- [PostgreSQL](http://www.postgresql.org/docs/9.3/static/fdwhandler.html) implements parts of it (notably it does not implement routine mappings, and has a simplified FDW API). It is usable as readeable since PG 9.1 and writeable since 9.3, and prior to that there was the [DBI-Link](http://pgfoundry.org/projects/dbi-link/).
The PostgreSQL community has plenty of nice [FDWs](http://wiki.postgresql.org/wiki/Foreign_data_wrappers) like NoSQL FDWs (couchdb\_fdw, mongo\_fdw, redis\_fdw), Multicorn (for writing wrappers in Python instead of C), or the nuts [PGStrom](http://wiki.postgresql.org/wiki/PGStrom) (which uses the GPU for some operations!)
|
ServiceStack OrmLite multiple references of same type load
In my ServiceStack app I'm implementing a simple chat where 2 users can have a dialogue. For simplicity, I've just created a TextMessages table, which contains the following Fields:
```
public class TextMessage
{
[AutoIncrement]
[PrimaryKey]
public int Id { get; set; }
[References(typeof(MyUserAuth))]
public int FromUserId { get; set; }
[References(typeof(MyUserAuth))]
public int ToUserId { get; set; }
[Reference]
[ForeignKey(typeof(MyUserAuth))]
public MyUserAuth FromUser { get; set; }
[Reference]
[ForeignKey(typeof(MyUserAuth))]
public MyUserAuth ToUser { get; set; }
//TimeZoneInfo.ConvertTimeToUtc(dateNow);
public DateTime UtcReceivedOn { get; set; }
public string Text { get; set; }
}
```
My UserAuth inherits the base one and adds 2 more fields:
```
public class MyUserAuth : UserAuth
{
public List<TextMessage> TextMessagesAsAuthor { get; set; }
public List<TextMessage> TextMessagesAsRecipient { get; set; }
}
```
Now let's say I create some users and then some messages:
```
var msg1 = new TextMessage { FromUserId = 1, ToUserId = 2, UtcReceivedOn = dt, Text = "Hello" };
var msg2 = new TextMessage { FromUserId = 1, ToUserId = 3, UtcReceivedOn = dt, Text = "Hello" };
var msg3 = new TextMessage { FromUserId = 1, ToUserId = 4, UtcReceivedOn = dt, Text = "Hello" };
var msg4 = new TextMessage { FromUserId = 1, ToUserId = 4, UtcReceivedOn = dt, Text = "Hello" };
```
And then I try to read my user:
```
var user = db.LoadSingleById<MyUserAuth>(1);
```
The problem here is that the user has 4 messages in both `TextMessagesAsAuthor` and `TextMessagesAsRecipient`, while logically there should be 4 in `TextMessagesAsAuthor` and 0 in `TextMessagesAsRecipient`. How can I tell OrmLite to distinguish these two properties?
|
OrmLite only supports [multiple 1:1 Self References](https://github.com/ServiceStack/ServiceStack.OrmLite#multiple-self-references) as you're using in your `TextMessage` table, it doesn't support multiple 1:M external references like you're trying to declare on:
```
public class MyUserAuth : UserAuth
{
public List<TextMessage> TextMessagesAsAuthor { get; set; }
public List<TextMessage> TextMessagesAsRecipient { get; set; }
}
```
>
> Also note complex properties that don't have `[Reference]` attributes are blobbed with the row they're on, which isn't what you want here.
>
>
>
Also OrmLite's [POCO References only load 1-level deep](https://github.com/ServiceStack/ServiceStack.OrmLite#querying-pocos-with-references), i.e. references do not recursively go down and load the referenced table references as well or populate any cyclical back references which is what it looks like you're trying to do.
So I'd keep the text messages as they are:
```
var msgs = new[]
{
new TextMessage { FromUserId = 1, ToUserId = 2, Text = "msg #1" },
new TextMessage { FromUserId = 1, ToUserId = 3, Text = "msg #2" },
new TextMessage { FromUserId = 1, ToUserId = 4, Text = "msg #3" },
new TextMessage { FromUserId = 1, ToUserId = 4, Text = "msg #4" },
};
db.InsertAll(msgs);
```
Which you can use to Load their multiple self User references, e.g:
```
var msg1 = db.LoadSingleById<TextMessage>(1);
msg1.PrintDump(); //prints populated FromUser/ToUser properties
```
But you won't be able to do the same for `MyUserAuth`'s multiple 1:M external references. In this case I would still add the `TextMessage` collections to the `MyUserAuth` table, but you'll want to tell OrmLite to ignore them when creating the `MyUserAuth` table, which you can do with the `[Ignore]` attribute, e.g:
```
public class MyUserAuth : UserAuth
{
[Ignore]
public List<TextMessage> TextMessagesAsAuthor { get; set; }
[Ignore]
public List<TextMessage> TextMessagesAsRecipient { get; set; }
}
```
You can then populate them manually, i.e.:
```
var user1 = db.SingleById<MyUserAuth>(1);
user1.TextMessagesAsAuthor = db.Select<TextMessage>(x => x.FromUserId == 1);
user1.TextMessagesAsRecipient = db.Select<TextMessage>(x => x.ToUserId == 1);
user1.PrintDump(); //prints populated MyUserAuth TextMessages
```
|
Fastest way to iterate over all the chars in a String
In Java, what would the fastest way to iterate over all the chars in a String, this:
```
String str = "a really, really long string";
for (int i = 0, n = str.length(); i < n; i++) {
char c = str.charAt(i);
}
```
Or this:
```
char[] chars = str.toCharArray();
for (int i = 0, n = chars.length; i < n; i++) {
char c = chars[i];
}
```
**EDIT :**
What I'd like to know is if the cost of repeatedly calling the `charAt` method during a long iteration ends up being either less than or greater than the cost of performing a single call to `toCharArray` at the beginning and then directly accessing the array during the iteration.
It'd be great if someone could provide a robust benchmark for different string lengths, having in mind JIT warm-up time, JVM start-up time, etc. and not just the difference between two calls to `System.currentTimeMillis()`.
|
FIRST UPDATE: Before you try this ever in a production environment (not advised), read this first: <http://www.javaspecialists.eu/archive/Issue237.html>
Starting from Java 9, the solution as described won't work anymore, because now Java will store strings as byte[] by default.
SECOND UPDATE: As of 2016-10-25, on my AMDx64 8core and source 1.8, there is no difference between using 'charAt' and field access. It appears that the jvm is sufficiently optimized to inline and streamline any 'string.charAt(n)' calls.
THIRD UPDATE: As of 2020-09-07, on my Ryzen 1950-X 16 core and source 1.14, 'charAt1' is 9 times slower than field access and 'charAt2' is 4 times slower than field access. Field access is back as the clear winner. Note that the program will need to use byte[] access on Java 9+ JVMs.
It all depends on the length of the `String` being inspected. If, as the question says, it is for **long** strings, the fastest way to inspect the string is to use reflection to access the backing `char[]` of the string.
A fully randomized benchmark with JDK 8 (win32 and win64) on a 64-bit AMD Phenom II 4-core 955 @ 3.2 GHZ (in both client mode and server mode) with 9 different techniques (see below!) shows that using `String.charAt(n)` is the fastest for small strings and that using `reflection` to access the String backing array is almost twice as fast for large strings.
# THE EXPERIMENT
- 9 different optimization techniques are tried.
- All string contents are randomized
- The test are done for string sizes in multiples of two starting with 0,1,2,4,8,16 etc.
- The tests are done 1,000 times per string size
- The tests are shuffled into random order each time. In other words, the tests are done in random order every time they are done, over 1000 times over.
- The entire test suite is done forwards, and backwards, to show the effect of JVM warmup on optimization and times.
- The entire suite is done twice, once in `-client` mode and the other in `-server` mode.
# CONCLUSIONS
## -client mode (32 bit)
For strings **1 to 256 characters in length**, calling `string.charAt(i)` wins with an average processing of 13.4 million to 588 million characters per second.
Also, the loop is overall 5.5% faster (client) and 13.9% faster (server) written like this:
```
for (int i = 0; i < data.length(); i++) {
if (data.charAt(i) <= ' ') {
doThrow();
}
}
```
than like this with a local final length variable:
```
final int len = data.length();
for (int i = 0; i < len; i++) {
if (data.charAt(i) <= ' ') {
doThrow();
}
}
```
For long strings, **512 to 256K characters length**, using reflection to access the String's backing array is fastest. **This technique is almost twice as fast** as String.charAt(i) (178% faster). The average speed over this range was 1.111 billion characters per second.
The Field must be obtained ahead of time and then it can be re-used in the library on different strings. Interestingly, unlike the code above, with Field access, it is 9% faster to have a local final length variable than to use 'chars.length' in the loop check. Here is how Field access can be setup as fastest:
```
final Field field = String.class.getDeclaredField("value");
field.setAccessible(true);
try {
final char[] chars = (char[]) field.get(data);
final int len = chars.length;
for (int i = 0; i < len; i++) {
if (chars[i] <= ' ') {
doThrow();
}
}
return len;
} catch (Exception ex) {
throw new RuntimeException(ex);
}
```
## Special comments on -server mode
Field access started winning after 32-character strings in server mode on a 64-bit Java machine on my AMD64 machine. That was not seen until 512-character length in client mode.
Also worth noting I think, when I was running JDK 8 (32 bit build) in server mode, the overall performance was 7% slower for both large and small strings. This was with build 121 Dec 2013 of JDK 8 early release. So, for now, it seems that 32 bit server mode is slower than 32 bit client mode.
That being said ... it seems the only server mode that is worth invoking is on a 64 bit machine. Otherwise it actually hampers performance.
For 32 bit build running in `-server mode` on an AMD64, I can say this:
1. String.charAt(i) is the clear winner overall, although between sizes 8 and 512 characters there were winners among 'new', 'reuse' and 'field'.
2. String.charAt(i) is 45% faster in client mode
3. Field access is twice as fast for large Strings in client mode.
Also worth saying, String.chars() (Stream and the parallel version) are a bust. Way slower than any other way. The `Streams` API is a rather slow way to perform general string operations.
## Wish List
Java String could have predicate accepting optimized methods such as contains(predicate), forEach(consumer), forEachWithIndex(consumer). Thus, without the need for the user to know the length or repeat calls to String methods, these could help parsing libraries `beep-beep beep` speedup.
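A sketch of what such a helper could look like, purely for illustration (neither this `Strings` class nor `CharIndexConsumer` exist in the JDK):

```java
// Hypothetical helper -- not part of the JDK String API.
final class Strings {

    @FunctionalInterface
    interface CharIndexConsumer {
        void accept(char c, int index);
    }

    // The library does the iteration once, so callers never need
    // repeated length()/charAt() calls of their own.
    static void forEachWithIndex(String s, CharIndexConsumer action) {
        for (int i = 0, n = s.length(); i < n; i++) {
            action.accept(s.charAt(i), i);
        }
    }
}
```

A JDK-internal version of this could walk the backing array directly without reflection, which is exactly where the speedup would come from.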
Keep dreaming :)
Happy Strings!
~SH
## The test used the following 9 methods of testing the string for the presence of whitespace:
"charAt1" -- CHECK THE STRING CONTENTS THE USUAL WAY:
```
int charAtMethod1(final String data) {
final int len = data.length();
for (int i = 0; i < len; i++) {
if (data.charAt(i) <= ' ') {
doThrow();
}
}
return len;
}
```
"charAt2" -- SAME AS ABOVE BUT USE String.length() INSTEAD OF MAKING A FINAL LOCAL int FOR THE LENGTH
```
int charAtMethod2(final String data) {
for (int i = 0; i < data.length(); i++) {
if (data.charAt(i) <= ' ') {
doThrow();
}
}
return data.length();
}
```
"stream" -- USE THE NEW JAVA-8 String's IntStream AND PASS IT A PREDICATE TO DO THE CHECKING
```
int streamMethod(final String data, final IntPredicate predicate) {
if (data.chars().anyMatch(predicate)) {
doThrow();
}
return data.length();
}
```
"streamPara" -- SAME AS ABOVE, BUT OH-LA-LA - GO PARALLEL!!!
```
// avoid this at all costs
int streamParallelMethod(final String data, IntPredicate predicate) {
if (data.chars().parallel().anyMatch(predicate)) {
doThrow();
}
return data.length();
}
```
"reuse" -- REFILL A REUSABLE char[] WITH THE STRINGS CONTENTS
```
int reuseBuffMethod(final char[] reusable, final String data) {
final int len = data.length();
data.getChars(0, len, reusable, 0);
for (int i = 0; i < len; i++) {
if (reusable[i] <= ' ') {
doThrow();
}
}
return len;
}
```
"new1" -- OBTAIN A NEW COPY OF THE char[] FROM THE STRING
```
int newMethod1(final String data) {
final int len = data.length();
final char[] copy = data.toCharArray();
for (int i = 0; i < len; i++) {
if (copy[i] <= ' ') {
doThrow();
}
}
return len;
}
```
"new2" -- SAME AS ABOVE, BUT USE "FOR-EACH"
```
int newMethod2(final String data) {
for (final char c : data.toCharArray()) {
if (c <= ' ') {
doThrow();
}
}
return data.length();
}
```
"field1" -- FANCY!! OBTAIN FIELD FOR ACCESS TO THE STRING'S INTERNAL char[]
```
int fieldMethod1(final Field field, final String data) {
try {
final char[] chars = (char[]) field.get(data);
final int len = chars.length;
for (int i = 0; i < len; i++) {
if (chars[i] <= ' ') {
doThrow();
}
}
return len;
} catch (Exception ex) {
throw new RuntimeException(ex);
}
}
```
"field2" -- SAME AS ABOVE, BUT USE "FOR-EACH"
```
int fieldMethod2(final Field field, final String data) {
final char[] chars;
try {
chars = (char[]) field.get(data);
} catch (Exception ex) {
throw new RuntimeException(ex);
}
for (final char c : chars) {
if (c <= ' ') {
doThrow();
}
}
return chars.length;
}
```
# COMPOSITE RESULTS FOR CLIENT `-client` MODE (forwards and backwards tests combined)
Note: that the -client mode with Java 32 bit and -server mode with Java 64 bit are the same as below on my AMD64 machine.
```
Size WINNER charAt1 charAt2 stream streamPar reuse new1 new2 field1 field2
1 charAt 77.0 72.0 462.0 584.0 127.5 89.5 86.0 159.5 165.0
2 charAt 38.0 36.5 284.0 32712.5 57.5 48.3 50.3 89.0 91.5
4 charAt 19.5 18.5 458.6 3169.0 33.0 26.8 27.5 54.1 52.6
8 charAt 9.8 9.9 100.5 1370.9 17.3 14.4 15.0 26.9 26.4
16 charAt 6.1 6.5 73.4 857.0 8.4 8.2 8.3 13.6 13.5
32 charAt 3.9 3.7 54.8 428.9 5.0 4.9 4.7 7.0 7.2
64 charAt 2.7 2.6 48.2 232.9 3.0 3.2 3.3 3.9 4.0
128 charAt 2.1 1.9 43.7 138.8 2.1 2.6 2.6 2.4 2.6
256 charAt 1.9 1.6 42.4 90.6 1.7 2.1 2.1 1.7 1.8
512 field1 1.7 1.4 40.6 60.5 1.4 1.9 1.9 1.3 1.4
1,024 field1 1.6 1.4 40.0 45.6 1.2 1.9 2.1 1.0 1.2
2,048 field1 1.6 1.3 40.0 36.2 1.2 1.8 1.7 0.9 1.1
4,096 field1 1.6 1.3 39.7 32.6 1.2 1.8 1.7 0.9 1.0
8,192 field1 1.6 1.3 39.6 30.5 1.2 1.8 1.7 0.9 1.0
16,384 field1 1.6 1.3 39.8 28.4 1.2 1.8 1.7 0.8 1.0
32,768 field1 1.6 1.3 40.0 26.7 1.3 1.8 1.7 0.8 1.0
65,536 field1 1.6 1.3 39.8 26.3 1.3 1.8 1.7 0.8 1.0
131,072 field1 1.6 1.3 40.1 25.4 1.4 1.9 1.8 0.8 1.0
262,144 field1 1.6 1.3 39.6 25.2 1.5 1.9 1.9 0.8 1.0
```
# COMPOSITE RESULTS FOR SERVER `-server` MODE (forwards and backwards tests combined)
Note: this is the test for Java 32 bit running in server mode on an AMD64. The server mode for Java 64 bit was the same as Java 32 bit in client mode, except that Field access started winning after 32-character size.
```
Size WINNER charAt1 charAt2 stream streamPar reuse new1 new2 field1 field2
1 charAt 74.5 95.5 524.5 783.0 90.5 102.5 90.5 135.0 151.5
2 charAt 48.5 53.0 305.0 30851.3 59.3 57.5 52.0 88.5 91.8
4 charAt 28.8 32.1 132.8 2465.1 37.6 33.9 32.3 49.0 47.0
8 new2 18.0 18.6 63.4 1541.3 18.5 17.9 17.6 25.4 25.8
16 new2 14.0 14.7 129.4 1034.7 12.5 16.2 12.0 16.0 16.6
32 new2 7.8 9.1 19.3 431.5 8.1 7.0 6.7 7.9 8.7
64 reuse 6.1 7.5 11.7 204.7 3.5 3.9 4.3 4.2 4.1
128 reuse 6.8 6.8 9.0 101.0 2.6 3.0 3.0 2.6 2.7
256 field2 6.2 6.5 6.9 57.2 2.4 2.7 2.9 2.3 2.3
512 reuse 4.3 4.9 5.8 28.2 2.0 2.6 2.6 2.1 2.1
1,024 charAt 2.0 1.8 5.3 17.6 2.1 2.5 3.5 2.0 2.0
2,048 charAt 1.9 1.7 5.2 11.9 2.2 3.0 2.6 2.0 2.0
4,096 charAt 1.9 1.7 5.1 8.7 2.1 2.6 2.6 1.9 1.9
8,192 charAt 1.9 1.7 5.1 7.6 2.2 2.5 2.6 1.9 1.9
16,384 charAt 1.9 1.7 5.1 6.9 2.2 2.5 2.5 1.9 1.9
32,768 charAt 1.9 1.7 5.1 6.1 2.2 2.5 2.5 1.9 1.9
65,536 charAt 1.9 1.7 5.1 5.5 2.2 2.4 2.4 1.9 1.9
131,072 charAt 1.9 1.7 5.1 5.4 2.3 2.5 2.5 1.9 1.9
262,144 charAt 1.9 1.7 5.1 5.1 2.3 2.5 2.5 1.9 1.9
```
# FULL RUNNABLE PROGRAM CODE
(to test on Java 7 and earlier, remove the two streams tests)
```
import java.lang.reflect.Field;
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.Random;
import java.util.function.IntPredicate;
/**
* @author Saint Hill <http://stackoverflow.com/users/1584255/saint-hill>
*/
public final class TestStrings {
// we will not test strings longer than 256K chars
final int MAX_STRING_SIZE = 1024 * 256;
// for each string size, we will do all the tests
// this many times
final int TRIES_PER_STRING_SIZE = 1000;
public static void main(String[] args) throws Exception {
new TestStrings().run();
}
void run() throws Exception {
// double the length of the data until it reaches MAX chars long
// 0,1,2,4,8,16,32,64,128,256 ...
final List<Integer> sizes = new ArrayList<>();
for (int n = 0; n <= MAX_STRING_SIZE; n = (n == 0 ? 1 : n * 2)) {
sizes.add(n);
}
// CREATE RANDOM (FOR SHUFFLING ORDER OF TESTS)
final Random random = new Random();
System.out.println("Rate in nanoseconds per character inspected.");
System.out.printf("==== FORWARDS (tries per size: %s) ==== \n", TRIES_PER_STRING_SIZE);
printHeadings(TRIES_PER_STRING_SIZE, random);
for (int size : sizes) {
reportResults(size, test(size, TRIES_PER_STRING_SIZE, random));
}
// reverse order or string sizes
Collections.reverse(sizes);
System.out.println("");
System.out.println("Rate in nanoseconds per character inspected.");
System.out.printf("==== BACKWARDS (tries per size: %s) ==== \n", TRIES_PER_STRING_SIZE);
printHeadings(TRIES_PER_STRING_SIZE, random);
for (int size : sizes) {
reportResults(size, test(size, TRIES_PER_STRING_SIZE, random));
}
}
///
///
/// METHODS OF CHECKING THE CONTENTS
/// OF A STRING. ALWAYS CHECKING FOR
/// WHITESPACE (CHAR <=' ')
///
///
// CHECK THE STRING CONTENTS
int charAtMethod1(final String data) {
final int len = data.length();
for (int i = 0; i < len; i++) {
if (data.charAt(i) <= ' ') {
doThrow();
}
}
return len;
}
// SAME AS ABOVE BUT USE String.length()
// instead of making a new final local int
int charAtMethod2(final String data) {
for (int i = 0; i < data.length(); i++) {
if (data.charAt(i) <= ' ') {
doThrow();
}
}
return data.length();
}
// USE new Java-8 String's IntStream
// pass it a PREDICATE to do the checking
int streamMethod(final String data, final IntPredicate predicate) {
if (data.chars().anyMatch(predicate)) {
doThrow();
}
return data.length();
}
// OH LA LA - GO PARALLEL!!!
int streamParallelMethod(final String data, IntPredicate predicate) {
if (data.chars().parallel().anyMatch(predicate)) {
doThrow();
}
return data.length();
}
// Re-fill a reusable char[] with the contents
// of the String's char[]
int reuseBuffMethod(final char[] reusable, final String data) {
final int len = data.length();
data.getChars(0, len, reusable, 0);
for (int i = 0; i < len; i++) {
if (reusable[i] <= ' ') {
doThrow();
}
}
return len;
}
// Obtain a new copy of char[] from String
int newMethod1(final String data) {
final int len = data.length();
final char[] copy = data.toCharArray();
for (int i = 0; i < len; i++) {
if (copy[i] <= ' ') {
doThrow();
}
}
return len;
}
// Obtain a new copy of char[] from String
// but use FOR-EACH
int newMethod2(final String data) {
for (final char c : data.toCharArray()) {
if (c <= ' ') {
doThrow();
}
}
return data.length();
}
// FANCY!
// OBTAIN FIELD FOR ACCESS TO THE STRING'S
// INTERNAL CHAR[]
int fieldMethod1(final Field field, final String data) {
try {
final char[] chars = (char[]) field.get(data);
final int len = chars.length;
for (int i = 0; i < len; i++) {
if (chars[i] <= ' ') {
doThrow();
}
}
return len;
} catch (Exception ex) {
throw new RuntimeException(ex);
}
}
// same as above but use FOR-EACH
int fieldMethod2(final Field field, final String data) {
final char[] chars;
try {
chars = (char[]) field.get(data);
} catch (Exception ex) {
throw new RuntimeException(ex);
}
for (final char c : chars) {
if (c <= ' ') {
doThrow();
}
}
return chars.length;
}
/**
*
* Make a list of tests. We will shuffle a copy of this list repeatedly
* while we repeat this test.
*
* @param data
* @return
*/
List<Jobber> makeTests(String data) throws Exception {
// make a list of tests
final List<Jobber> tests = new ArrayList<Jobber>();
tests.add(new Jobber("charAt1") {
int check() {
return charAtMethod1(data);
}
});
tests.add(new Jobber("charAt2") {
int check() {
return charAtMethod2(data);
}
});
tests.add(new Jobber("stream") {
final IntPredicate predicate = new IntPredicate() {
public boolean test(int value) {
return value <= ' ';
}
};
int check() {
return streamMethod(data, predicate);
}
});
tests.add(new Jobber("streamPar") {
final IntPredicate predicate = new IntPredicate() {
public boolean test(int value) {
return value <= ' ';
}
};
int check() {
return streamParallelMethod(data, predicate);
}
});
// Reusable char[] method
tests.add(new Jobber("reuse") {
final char[] cbuff = new char[MAX_STRING_SIZE];
int check() {
return reuseBuffMethod(cbuff, data);
}
});
// New char[] from String
tests.add(new Jobber("new1") {
int check() {
return newMethod1(data);
}
});
// New char[] from String
tests.add(new Jobber("new2") {
int check() {
return newMethod2(data);
}
});
// Use reflection for field access
tests.add(new Jobber("field1") {
final Field field;
{
field = String.class.getDeclaredField("value");
field.setAccessible(true);
}
int check() {
return fieldMethod1(field, data);
}
});
// Use reflection for field access
tests.add(new Jobber("field2") {
final Field field;
{
field = String.class.getDeclaredField("value");
field.setAccessible(true);
}
int check() {
return fieldMethod2(field, data);
}
});
return tests;
}
/**
* We use this class to keep track of test results
*/
abstract class Jobber {
final String name;
long nanos;
long chars;
long runs;
Jobber(String name) {
this.name = name;
}
abstract int check();
final double nanosPerChar() {
double charsPerRun = chars / runs;
long nanosPerRun = nanos / runs;
return charsPerRun == 0 ? nanosPerRun : nanosPerRun / charsPerRun;
}
final void run() {
runs++;
long time = System.nanoTime();
chars += check();
nanos += System.nanoTime() - time;
}
}
// MAKE A TEST STRING OF RANDOM CHARACTERS A-Z
private String makeTestString(int testSize, char start, char end) {
Random r = new Random();
char[] data = new char[testSize];
for (int i = 0; i < data.length; i++) {
data[i] = (char) (start + r.nextInt(end));
}
return new String(data);
}
// WE DO THIS IF WE FIND AN ILLEGAL CHARACTER IN THE STRING
public void doThrow() {
throw new RuntimeException("Bzzzt -- Illegal Character!!");
}
/**
* 1. get random string of correct length
* 2. get tests (List<Jobber>)
* 3. perform tests repeatedly, shuffling each time
*/
List<Jobber> test(int size, int tries, Random random) throws Exception {
String data = makeTestString(size, 'A', 'Z');
List<Jobber> tests = makeTests(data);
List<Jobber> copy = new ArrayList<>(tests);
while (tries-- > 0) {
Collections.shuffle(copy, random);
for (Jobber ti : copy) {
ti.run();
}
}
// check to make sure all char counts the same
long runs = tests.get(0).runs;
long count = tests.get(0).chars;
for (Jobber ti : tests) {
if (ti.runs != runs && ti.chars != count) {
throw new Exception("Char counts should match if all correct algorithms");
}
}
return tests;
}
private void printHeadings(final int TRIES_PER_STRING_SIZE, final Random random) throws Exception {
System.out.print(" Size");
for (Jobber ti : test(0, TRIES_PER_STRING_SIZE, random)) {
System.out.printf("%9s", ti.name);
}
System.out.println("");
}
private void reportResults(int size, List<Jobber> tests) {
System.out.printf("%6d", size);
for (Jobber ti : tests) {
System.out.printf("%,9.2f", ti.nanosPerChar());
}
System.out.println("");
}
}
```
|
Java - How can i compare chars with other chars?
My goal is to get specific inputs from the user (A, B, C, and D only).
For example: if I enter the letter A, the if statement will execute (it's not supposed to), and the same happens with the do-while.
(Logic Error)
```
char[] response = new char[20];
Scanner k = new Scanner(System.in);
//Prompts User to fill array with their responses
for(int i=0; i<response.length; i++) {
//Input validation
do {
System.out.println("Answer "+(i+1)+":");
response[i] = k.nextLine().charAt(0);
if(response[i] != 'A' ||response[i] != 'B' || response[i] != 'C' ||response[i] != 'D')
System.out.print("Try Again. ");
}
while(response[i]!= 'A' ||response[i] != 'B' ||response[i] != 'C' ||response[i] != 'D');
}
```
|
This is how I would write it
```
Scanner in = new Scanner(System.in);
char[] response = new char[20];
//Prompts User to fill array with their responses
for(int i = 0; i < response.length; i++) {
for (;;) {
System.out.println("Answer " + (i + 1) + ":");
response[i] = in.nextLine().charAt(0);
//Input validation
if ("ABCD".indexOf(response[i]) >= 0)
break;
System.out.print("Please try again, it must be A, B, C or D");
}
}
```
What you were doing wrong is you needed to write
```
if (!(response[i] == 'A' || response[i] == 'B' || response[i] == 'C' || response[i] == 'D'))
```
OR
```
if (response[i] != 'A' && response[i] != 'B' && response[i] != 'C' && response[i] != 'D')
```
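As an aside, the `indexOf` check reads even better when pulled into a tiny helper (the class and method names here are just illustrative):

```java
// Illustrative helper -- accepts only the four valid answers.
final class AnswerCheck {
    static boolean isValid(char c) {
        // indexOf returns -1 when c is not one of A, B, C, D
        return "ABCD".indexOf(c) >= 0;
    }
}
```

The loop condition then becomes `if (AnswerCheck.isValid(response[i])) break;`.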
|
Can I set CSS rules for sites I don't own?
Consider this case:
I'm using a website service on a daily basis and I don't like the menu background-color.
If I inspect the element I can see that the menu CSS class is called "my-menu" and with a developers tools I can alter the background-color, but obviously, when I'll refresh the page my change will be lost.
Is there a way to define that from now on whenever I surf to this website "my-menu" class will contain my changes? Is there a way to use CSS rules that was not defined in the loaded page and give them the top priority?
|
You can use a browser extension like `Stylish` and create custom stylesheets (known as userstyles) for it.
[Stylish for Firefox](https://addons.mozilla.org/en-US/firefox/addon/stylish/)
[Stylish for Chrome](https://chrome.google.com/webstore/detail/stylish/fjnbnpbmkenffdnngjfgmeleoegfcffe?hl=en)
You can find already created userstyles, documentation on creating your own and more in [this](http://userstyles.org/) website.
[Installing & Using Stylish](http://userstyles.org/help/stylish)
[Creating your own userstyles](https://github.com/JasonBarnabe/stylish/wiki)
[Userstyles Help](http://userstyles.org/help)
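For example, a minimal userstyle for the scenario in the question could look like the sketch below; the domain and colour are placeholders, and `!important` is what lets your rule win over the site's own stylesheet:

```css
/* Hypothetical userstyle: recolour .my-menu on example.com */
@-moz-document domain("example.com") {
    .my-menu {
        background-color: #2d6a4f !important;
    }
}
```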
|
Laziness of Haskell and list comprehension. Completion condition
`foo s1 s2 = null ([x | x <- s2, x == s1])`
Please explain when this function will end?
Will Haskell first iterate through the entire list and then check for `null`, or will it check for `null` as soon as one element appears and stop the iteration? If it's the first option, how can it be made lazier?
|
>
> Will Haskell first iterate through the entire list and then check for `null`.
>
>
>
**No**. `null` checks if it is an empty list, and returns `True` for an empty list `[]`, and `False` for the `(:)` data constructor. The "old" [`null` is implemented as [src]](https://hackage.haskell.org/package/base-4.16.0.0/docs/src/GHC.List.html#null):
>
>
> ```
> null :: [a] -> Bool
> null [] = True
> null (_:_) = False
>
> ```
>
>
It will thus evaluate the list comprehension to *head normal form* (HNF), and once it knows the outer data constructor it can return `True` or `False`: it will not check what the first element is, so if that is an expensive expression, it will not waste time on that one either.
The "new" [`null` is implemented as [src]](https://hackage.haskell.org/package/base-4.16.0.0/docs/src/Data.Foldable.html#null):
>
>
> ```
> null :: t a -> Bool
> null = foldr (\_ _ -> False) True
>
> ```
>
>
where [`foldr` for a list is implemented as [src]](https://hackage.haskell.org/package/base-4.16.0.0/docs/src/GHC.Base.html#foldr):
>
>
> ```
> foldr k z = go
> where
> go [] = z
> go (y:ys) = y `k` go ys
>
> ```
>
>
which will thus also simply check if the outer data constructor is an empty list `[]` or a "cons" `(:)` since in this case `k` returns `True` and is not interested in the parameters.
List comprehensions are lazy as well: these will only evaluate the outer data constructor if necessary, and construct the head and tail if necessary. Your list comprehension is desugared to:
```
concatMap (\x -> if x == s1 then [x] else []) s2
```
If it thus has to evaluate the outer data constructor of the list, it will iterate over `s2`, if `x == s1` then it will produce as outer data constructor the "cons" and then `null` can thus use that to determine if the list is empty.
This thus means that from the moment it finds an element `x` from `s2` where `x == s1`, it will return `False`.
|
Tortoise SVN Log messages does not update
An SVN repository keeps the log messages as properties attached to each revision. These properties are kept in the `db/revprops` folder of the repository, with the same numbering as the revisions. I needed to change the format of the log messages and include some extra information. The format of the prop files (at least for version 7, uncompressed) is just key/value pairs in plain text. I wrote a small program to do this and successfully edited the log messages.
Running `SVN Log` verifies the command line and shows updated log messages
[](https://i.stack.imgur.com/0LPJk.jpg)
**Problem**
Opening TortoiseSVN on dev computers does not show the updated messages in the list; however, if a developer right-clicks the revision item in the list and selects `Edit log message`, the edited message shows up. Even a checkout of the whole repository to a new folder still shows the old message.

**Question**
Above observation makes me believe that TortoiseSVN somehow caches the log messages in client computers. Where is this information stored?
|
Yes, TortoiseSVN caches the log messages and changed paths. You should be able to clear the cache via **TortoiseSVN Settings | Log Caching | Cached Repositories**.
Read [TortoiseSVN Manual | Log Caching](https://tortoisesvn.net/docs/release/TortoiseSVN_en/tsvn-dug-settings.html#tsvn-dug-settings-dia-logcache) & [Cached Repositories](https://tortoisesvn.net/docs/release/TortoiseSVN_en/tsvn-dug-settings.html#tsvn-dug-settings-logcache-repolist) for further information.
**NOTE:** Log caching is a very helpful feature. Don't disable it and be careful when you enter log messages.
|
Specifying column class in html\_table(rvest)
I am using the html\_table from rvest to read a two-column concordance table from the website below. Both columns contain instances of leading zeros which I would want to preserve. As such, I would want the columns to be of class character. I use the following code:
```
library(rvest)
library(data.table)
df <- list()
for (j in 1:25) {
url <- paste('http://unstats.un.org/unsd/cr/registry/regso.asp?Ci=70&Lg=1&Co=&T=0&p=',
j, '&prn=yes', sep='')
webpage <- read_html(url)
table <- html_nodes(webpage, 'table')
df[[j]] <- html_table(table, header=TRUE)[[1]]
df[[j]] <- df[[j]][,c(1:2) ]
}
ISIC4.NACE2 <- rbindlist(df)
```
However, `str(df[[1]])` returns
```
'data.frame': 40 obs. of 2 variables:
$ ISIC Rev.4: chr "01" "011" "0111" "0112" ...
$ NACE Rev.2: num 1 1.1 1.11 1.12 1.13 1.14 1.15 1.16 1.19 1.2 ...
```
It seems the html\_table function interprets the first column as character and the second column as numeric, thereby truncating the leading zeros in the latter. Is there a way to specify column class using html\_table?
|
`col_classes` should either be `NULL` or a `list`. If a `list`, then it should be in the form:
```
list(`COL#`=`class`, ...)
```
for example:
```
list(`1`='character', `3`='integer', `7`='logical')
```
You have to source everything below into the session you're using `rvest` from since it's replacing the `rvest` S3 definitions of these functions:
I changed the `html_table` line in your code to:
```
df[[j]] <- html_table(table, header=TRUE, col_classes=list(`2`='character'))[[1]]
```
and now get the following as `str` output:
```
'data.frame': 40 obs. of 2 variables:
$ ISIC Rev.4: int 1 11 111 112 113 114 115 116 119 12 ...
$ NACE Rev.2: chr "01" "01.1" "01.11" "01.12" ...
```
------ source everything below -------
```
html_table <- function(x, header = NA, trim = TRUE, fill = FALSE, dec = ".", col_classes = NULL) {
UseMethod("html_table")
}
#' @export
html_table.xml_document <- function(x, header = NA, trim = TRUE, fill = FALSE,
dec = ".", col_classes = NULL) {
tables <- xml2::xml_find_all(x, ".//table")
lapply(tables, html_table, header = header, trim = trim, fill = fill, dec = dec, col_classes)
}
html_table.xml_nodeset <- function(x, header = NA, trim = TRUE, fill = FALSE,
dec = ".", col_classes = NULL) {
# FIXME: guess useful names
lapply(x, html_table, header = header, trim = trim, fill = fill, dec = dec, col_classes)
}
html_table.xml_node <- function(x, header = NA, trim = TRUE,
fill = FALSE, dec = ".",
col_classes=NULL) {
stopifnot(html_name(x) == "table")
# Throw error if any rowspan/colspan present
rows <- html_nodes(x, "tr")
n <- length(rows)
cells <- lapply(rows, "html_nodes", xpath = ".//td|.//th")
ncols <- lapply(cells, html_attr, "colspan", default = "1")
ncols <- lapply(ncols, as.integer)
nrows <- lapply(cells, html_attr, "rowspan", default = "1")
nrows <- lapply(nrows, as.integer)
p <- unique(vapply(ncols, sum, integer(1)))
maxp <- max(p)
if (length(p) > 1 & maxp * n != sum(unlist(nrows)) &
maxp * n != sum(unlist(ncols))) {
# then malformed table is not parsable by smart filling solution
if (!fill) { # fill must then be specified to allow filling with NAs
stop("Table has inconsistent number of columns. ",
"Do you want fill = TRUE?", call. = FALSE)
}
}
values <- lapply(cells, html_text, trim = trim)
out <- matrix(NA_character_, nrow = n, ncol = maxp)
# fill colspans right with repetition
for (i in seq_len(n)) {
row <- values[[i]]
ncol <- ncols[[i]]
col <- 1
for (j in seq_len(length(ncol))) {
out[i, col:(col+ncol[j]-1)] <- row[[j]]
col <- col + ncol[j]
}
}
# fill rowspans down with repetition
for (i in seq_len(maxp)) {
for (j in seq_len(n)) {
rowspan <- nrows[[j]][i]; colspan <- ncols[[j]][i]
if (!is.na(rowspan) & (rowspan > 1)) {
if (!is.na(colspan) & (colspan > 1)) {
# special case of colspan and rowspan in same cell
nrows[[j]] <- c(head(nrows[[j]], i),
rep(rowspan, colspan-1),
tail(nrows[[j]], length(rowspan)-(i+1)))
rowspan <- nrows[[j]][i]
}
for (k in seq_len(rowspan - 1)) {
l <- head(out[j+k, ], i-1)
r <- tail(out[j+k, ], maxp-i+1)
out[j + k, ] <- head(c(l, out[j, i], r), maxp)
}
}
}
}
if (is.na(header)) {
header <- all(html_name(cells[[1]]) == "th")
}
if (header) {
col_names <- out[1, , drop = FALSE]
out <- out[-1, , drop = FALSE]
} else {
col_names <- paste0("X", seq_len(ncol(out)))
}
# Convert matrix to list to data frame
df <- lapply(seq_len(maxp), function(i) {
    if (!is.null(col_classes) && (i %in% names(col_classes))) {
      # as() from the methods package coerces to the requested class
      as(out[, i], col_classes[[as.character(i)]])
} else {
utils::type.convert(out[, i], as.is = TRUE, dec = dec)
}
})
names(df) <- col_names
class(df) <- "data.frame"
attr(df, "row.names") <- .set_row_names(length(df[[1]]))
if (length(unique(col_names)) < length(col_names)) {
warning('At least two columns have the same name')
}
df
}
```
|
Informix equivalent to mysql's SHOW CREATE TABLE
Is there any equivalent to MySQL's `SHOW CREATE TABLE <tablename>` in Informix? I would like to run it in the SQL Manager in Server Studio, and I would also like to obtain information about a table's columns and their types. Is this possible? I've found the systables, syscolumns and information schema, but had no luck with the results from `select * from ...`.
|
I don't know of such an SQL command. From the command line you can use the `dbschema` utility:
```
C:\Informix>dbschema -t abc -d test
DBSCHEMA Schema Utility INFORMIX-SQL Version 11.50.TC6DE
{ TABLE "informix".abc row size = 4 number of columns = 1 index size = 0 }
create table "informix".abc
(
xxx integer
);
revoke all on "informix".abc from "public" as "informix";
```
If you want to read systables, then look at my script that dumps info about tables, columns, indexes etc.: <http://code.activestate.com/recipes/576621-dump-informix-schema-to-text/?in=user-186902>. In its source you will find systables queries you can use.
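
If you only need the column names and types (rather than the full DDL that `dbschema` produces), the system catalog can be queried directly. A sketch, assuming a table named `abc` — note that `syscolumns.coltype` is an encoded integer (e.g. 0 = CHAR, 2 = INTEGER, with an extra bit set for NOT NULL columns), so it needs decoding per the Informix catalog documentation:

```sql
-- List the columns of table 'abc' with their raw type codes and lengths.
SELECT c.colname, c.coltype, c.collength
  FROM systables t
  JOIN syscolumns c ON c.tabid = t.tabid
 WHERE t.tabname = 'abc'
 ORDER BY c.colno;
```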
|