Regex for six characters with at least one digit
I am looking for a regex that matches strings of at least 6 characters (no upper limit) including at least one digit. No spaces allowed.
I have this regex:
```
^(?=.*\d).{4,8}$
```
However, I don't want to limit to 8 characters.
|
>
> a regex with at least 6 characters (no limit) including at least one digit. no spaces allowed.
>
>
>
```
^(?=\D*\d)\S{6,}$
```
Or
```
^(?=\D*\d)[^ ]{6,}$
```
See [demo](https://regex101.com/r/jS4eP1/2)
- `^` Start of string
- `(?=\D*\d)` - Asserts there is at least one digit (the lookahead is based on the [principle of contrast](http://www.rexegg.com/regex-style.html#contrast))
- `\S{6,}` - 6 or more non-whitespaces
*OR*
`[^ ]{6,}` - 6 or more characters other than literal normal space
To require a different minimum length, you only need to adjust the quantifier. See [more about limiting quantifiers here](http://www.regular-expressions.info/repeat.html#limit).
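To sanity-check the pattern, here is a quick test in Python (the lookahead behaves the same way in most regex flavors):

```python
import re

# At least 6 non-space characters, containing at least one digit.
pattern = re.compile(r'^(?=\D*\d)\S{6,}$')

print(bool(pattern.match('abc123')))  # True: 6 chars, one digit
print(bool(pattern.match('abcdef')))  # False: no digit
print(bool(pattern.match('ab 123')))  # False: contains a space
print(bool(pattern.match('a1b2c')))   # False: only 5 characters
```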
|
Access PostgreSQL server from LAN
I've been trying to edit the pg\_hba.conf file in order to be able to access the server using just the IP address, so far with no success.
For example, I can access it using «localhost», but I want to access it using the IP address that my router gave me, which is something like 192.168.1.X.
This is my pg\_hba.conf:
```
# TYPE DATABASE USER ADDRESS METHOD
# "local" is for Unix domain socket connections only
local all all trust
# IPv4 local connections:
host all all 127.0.0.1/32 trust
# IPv6 local connections:
host all all ::1/128 trust
# Allow replication connections from localhost, by a user with the
# replication privilege.
#local replication postgres trust
#host replication postgres 127.0.0.1/32 trust
#host replication postgres ::1/128 trust
host all all 0.0.0.0/0 trust
```
Any help?
|
First, edit the postgresql.conf file, and set [listen\_addresses](http://www.postgresql.org/docs/current/static/runtime-config-connection.html#GUC-LISTEN-ADDRESSES). The default value of 'localhost' will only listen on the loopback adaptor. You can change it to '\*', meaning listen on all addresses, or specifically list the IP address of the interfaces you want it to accept connections from. Note that this is the IP address which the interface has allocated to it, which you can see using `ifconfig` or `ip addr` commands.
You must restart postgresql for the changes to listen\_addresses to take effect.
Next, in [pg\_hba.conf](http://www.postgresql.org/docs/current/static/auth-pg-hba-conf.html), you will need an entry like this:
```
# TYPE DATABASE USER ADDRESS METHOD
host {dbname} {user} 192.168.1.0/24 md5
```
{dbname} is the database name you are allowing access to. You can put "all" for all databases.
{user} is the user who is allowed to connect. Note that this is the postgresql user, not necessarily the unix user.
The ADDRESS part is the network address and mask that you want to allow. The mask I specified will work for 192.168.1.x as you requested.
The METHOD part is the authentication method to use. There are a number of options there. md5 means it will use an md5 hashed password. 'trust' which you had in your sample means no authentication at all - this is definitely not recommended.
Changes to pg\_hba.conf will take effect after reloading the server. You can do this using `pg_ctl reload` (or via the init scripts, depending on your OS distro).
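For reference, the listen\_addresses change in postgresql.conf is a one-line edit (the file location below is only an example; it varies by OS and PostgreSQL version):

```
# e.g. /etc/postgresql/<version>/main/postgresql.conf on Debian/Ubuntu
listen_addresses = '*'        # or a comma-separated list of addresses to bind
```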
|
Why are Python's datetime ISO-functions logically incorrect and buggy?
I'm kind of stunned that the Python `datetime` `.isoformat()` function doesn't return correct information. The function correctly returns an ISO 8601-formatted string when a timezone is provided to the `fromtimestamp()` method. However, the timezone is ignored in the calculation of the result. Observe:
```
13:29 msimsonnet:~$ python
Python 2.7.1 (r271:86832, Jan 26 2011, 13:56:46)
[GCC 4.2.1 (Apple Inc. build 5664)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
Running with pythonstartup.py
>>> import pytz,datetime
>>> datetime.datetime.fromtimestamp(1303876413).isoformat()
'2011-04-26T23:53:33'
>>> ny = pytz.timezone('America/New_York')
>>> sf = pytz.timezone('America/Los_Angeles')
>>> datetime.datetime.fromtimestamp(1303876413,ny).isoformat()
'2011-04-26T23:53:33-04:00'
>>> datetime.datetime.fromtimestamp(1303876413,sf).isoformat()
'2011-04-26T20:53:33-07:00'
>>>
```
I'm running this on a computer that's in EDT (-400 from GMT). The time 1303876413 is actually 11:53:33pm on April 26, 2011, when I first wrote the question. Notice that in the first example simply requesting `.isoformat()` returns `'2011-04-26T23:53:33'`, which is wrong --- it should return `'2011-04-26T23:53:33-04:00'`, since it's returning local time and Python knows the timezone. The second example is correct, but I am jamming in the NY timezone object. The third example is just wrong --- Python is preserving the timezone, but it's not adjusting the time accordingly.
ADDENDUM:
If you read all of the comments, you'll see that the behavior I was looking for can be found using `utcfromtimestamp` rather than `fromtimestamp`
|
Make sure you are using "timezone aware" `datetime` objects, not "naive" objects.
In order to make a timezone "aware" `datetime` object you need to be sure to provide the timezone when you create it.
More details are here:
<http://docs.python.org/library/datetime.html>
Also, ISO 8601 does *not* require a timezone, and `isoformat` *is* compliant. In fact, ISO 8601 doesn't support timezones at all, only time *offsets* from UTC.
ISO 8601 allows many different formats, for example, these are all valid:
```
2011-04-27
2011-04-27 02:48Z
2011-04-27T02:48Z
2011-W17-3
2011-117
```
See <http://en.wikipedia.org/wiki/ISO_8601> for more details.
**Edit, to address your updates:**
It's not buggy, it works properly, and according to the docs. Naive datetime objects just don't have any time zone information, period. So there's no reason to expect that they'd be able to give you timezone information when you call `isoformat()`.
When you create an object using the timestamp, it makes a local-time datetime object based on whatever Python thinks your system time zone is. That's why when you give it a POSIX timestamp, it converts it to your local time for you. Note that while the datetime module knows that a timestamp is UTC, and it knows your local time zone, and `fromtimestamp` uses that information to create a datetime object, the resulting object is still naive and doesn't know anything about time zones. If you want to use timestamps, you shouldn't use naive datetime objects.
From the docs:
>
> Whether a naive datetime object
> represents Coordinated Universal Time
> (UTC), local time, or time in some
> other timezone is purely up to the
> program, just like it’s up to the
> program whether a particular number
> represents metres, miles, or mass.
> Naive datetime objects are easy to
> understand and to work with, at the
> cost of ignoring some aspects of
> reality.
>
>
>
Here is the documentation for the `fromtimestamp` method (bolding added):
>
> Return the local date and time
> corresponding to the POSIX timestamp,
> such as is returned by time.time(). **If
> optional argument tz is None or not
> specified, the timestamp is converted
> to the platform’s local date and time,
> and the returned datetime object is
> naive.**
>
>
>
**If you want it to consider time zone, you need to pass a time zone, it won't infer it for you.** Just because it doesn't do what you think it should do does not make it non-compliant or buggy. It works as intended, and as is documented. That's pretty much the opposite of buggy.
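To make the naive/aware distinction concrete, here is a minimal sketch using only the standard library (Python 3.2+, where `datetime.timezone` exists; the fixed -04:00 offset is an assumption standing in for a real EDT zone object):

```python
from datetime import datetime, timezone, timedelta

ts = 1303876413

# Naive: local wall-clock time, no offset information in the ISO string.
naive = datetime.fromtimestamp(ts)

# Aware: pass a tzinfo, and isoformat() includes the UTC offset.
edt = timezone(timedelta(hours=-4))  # fixed offset standing in for EDT
aware = datetime.fromtimestamp(ts, edt)

print(aware.isoformat())  # '2011-04-26T23:53:33-04:00'
```

In real code you would use a proper zone database (pytz, or `zoneinfo` in Python 3.9+) instead of a fixed offset, since a fixed offset ignores daylight saving transitions.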
|
Using Elasticsearch.Net/NEST to search parent documents based on child attributes, where parent/child documents are stored separately
I would like to use Elasticsearch.Net/NEST to search related documents. For example, I have:
```
Person:
id name address_id
-- ---- ----------
1 John 1
2 Mary 2
Address:
id city
-- ------
1 Boston
2 Berlin
```
I'd like to store the Person and Address documents separately, and do queries where I return Person documents based on Address fields. For example, return all documents for people living in Boston. I've seen some examples in the Elasticsearch documentation using mapping and parent/child directives, but nothing for Elasticsearch.Net/NEST. Any code samples or pointers would be greatly appreciated...
|
Here's a small snippet where the address is the parent document.
EDIT:
Create the index:
```
var indicesOperationResponse = _client.CreateIndex(ci => ci.Index("test")
.AddMapping<Address>(m => m.MapFromAttributes())
.AddMapping<Person>(m => m.MapFromAttributes().SetParent<Address>()));
```
Index documents:
```
var bulkResponse = _client.Bulk(b => b
.Index<Address>(bd => bd.Object(new Address { Name = "Tel Aviv", Id = 1 }).Index("test"))
.Index<Person>(bd => bd.Index("test").Object(new Person {Id = 5, Address = 1, Name = "Me"}).Parent(1)));
```
And search by parent:
```
var searchResponse = _client.Search<Person>(s => s
.Query(q=>q.MatchAll())
.Filter(q => q
.HasParent<Address>(c => c
.Query(cq => cq.Match(m=>m.OnField(t => t.Name).Query("Tel Aviv"))))));
```
|
UIMenuController Custom Items
I have created a UIMenuController and have set it a custom menu item like so:
```
UIMenuController *menuController = [UIMenuController sharedMenuController];
UIMenuItem *item1 = [[UIMenuItem alloc] initWithTitle:@"Do This" action:@selector(item1)];
[menuController setMenuItems:[NSArray arrayWithObject:item1]];
```
But I wanted that object to be the only one to appear so I added this code:
```
- (BOOL)canPerformAction: (SEL)action withSender: (id)sender {
BOOL answer = NO;
if (action == @selector(item1))
answer = YES;
return answer;
}
```
The problem is it still shows other items, such as "Select", "Select All" and "Paste".
This may have something to do with this being displayed in a `UITextView`.
But how do I stop it from displaying all the other items?
|
I think this is one of the few cases where you want to subclass `UITextView`. I just tried this with the following code, and the only menu item that is shown is my *Do Something* item.
From my `TestViewController.m`
```
@implementation TestViewController

- (void) doSomething: (id) sender
{
    NSLog(@"Doing something");
}

- (void) viewDidLoad
{
    UIMenuController *menuController = [UIMenuController sharedMenuController];
    UIMenuItem *item = [[[UIMenuItem alloc] initWithTitle: @"Do Something"
                                                   action: @selector(doSomething:)] autorelease];
    [menuController setMenuItems: [NSArray arrayWithObject: item]];
}

@end
```
Code for my `MyTextView.h`:
```
// MyTextView.h
#import <UIKit/UIKit.h>
@interface MyTextView :UITextView {
}
@end
```
Code for `MyTextView.m`:
```
// MyTextView.m
#import "MyTextView.h"
@implementation MyTextView
- (BOOL) canPerformAction:(SEL)action withSender:(id)sender
{
    return NO;
}
@end
```
|
How does Rails ActiveRecord chain "where" clauses without multiple queries?
I'm a PHP developer learning the awesomeness of Ruby on Rails. I'm loving ActiveRecord, and I noticed something really interesting: how ActiveRecord methods detect the end of a method chain in order to execute the query.
```
@person = Person.where(name: 'Jason').where(age: 26)
# In my humble imagination I'd think that each where() executes a database query
# But in reality, it doesn't until the last method in the chain
```
How does this sorcery work?
|
The `where` method returns an `ActiveRecord::Relation` object, and by itself this object does not issue a database query. It's *where* you use this object that matters.
In the console, you're probably doing this:
```
@person = Person.where(name: "Jason")
```
And then *blammo* it issues a database query and returns what appears to be an array of everyone named Jason. Yay, Active Record!
But then you do something like this:
```
@person = Person.where(name: "Jason").where(age: 26)
```
And then that issues another query, but this one's for people who are called Jason who are 26. But it's only issuing *one* query, so where'd the other query go?
---
As others have suggested, this is happening because the `where` method returns a proxy object. It doesn't actually perform a query and return a dataset unless it's asked to do that.
When you run *anything* in the console, it's going to output the inspected version of the outcome of whatever it is you ran. If you put `1` in the console and hit enter, you'll get `1` back because `1.inspect` is `1`. Magic! Same goes for `"1"`. A variety of other objects don't have an `inspect` method defined and so Ruby falls back to the one on `Object` which returns something *ghastly* like `<Object#23adbf42560>`.
Every single `ActiveRecord::Relation` object has the `inspect` method defined on it so that it causes a query. When you write the query in your console, IRB will call `inspect` on the return value from that query and output something almost human readable, like the Array that you'd see.
---
If you were just issuing this in a standard Ruby script, then no query would be executed until the object was inspected (via `inspect`) or was iterated through using `each`, or had the `to_a` method called on it.
Up until one of those three things happen, you can chain as many `where` statements on it as you will like and then when you *do* call `inspect`, `to_a` or `each` on it, then it will finally execute that query.
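The lazy-evaluation idea can be sketched in plain Ruby, with no Rails required (`ToyRelation` below is a made-up stand-in for `ActiveRecord::Relation`, not the real implementation):

```ruby
# A toy relation: chaining accumulates conditions, and the "query"
# only runs when the results are actually needed.
class ToyRelation
  def initialize(rows, conditions = [])
    @rows = rows
    @conditions = conditions
  end

  # Chaining returns a *new* relation; nothing is executed yet.
  def where(cond)
    ToyRelation.new(@rows, @conditions + [cond])
  end

  # to_a "executes the query" by applying the accumulated conditions.
  def to_a
    @conditions.reduce(@rows) do |rows, cond|
      rows.select { |row| cond.all? { |key, value| row[key] == value } }
    end
  end

  def each(&block)
    to_a.each(&block)
  end

  # The console calls inspect, which is what triggers the "query" there.
  def inspect
    to_a.inspect
  end
end

people = [{ name: "Jason", age: 26 }, { name: "Jason", age: 30 }]
relation = ToyRelation.new(people).where(name: "Jason").where(age: 26)
relation.to_a  # => [{ name: "Jason", age: 26 }]
```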
|
Align item in box [Jetpack Compose]
I'm trying to set `ic_remove_selected_photo` at the TopEnd of the box
[](https://i.stack.imgur.com/w28ee.png)
What I have achieved:
[](https://i.stack.imgur.com/VW0BX.png)
Code:
```
Box(modifier = Modifier.size(90.dp).padding(7.dp)) {
Image(
bitmap = bitmap.asImageBitmap(),
modifier = Modifier
.size(80.dp)
.clip(RoundedCornerShape(6.dp)),
contentScale = ContentScale.Crop,
)
IconButton(modifier = Modifier.align(Alignment.TopEnd).size(10.dp)) {
Icon(painter = painterResource(id = R.drawable.ic_remove_selected_photo))
}
}
```
How can I set the `remove icon` on the image?
|
[](https://i.stack.imgur.com/ANf6B.png)
```
@Composable
fun ImageWithCloseButton() {
Box(
modifier = Modifier
.background(LightGray)
.padding(16.dp)
.size(88.dp),
) {
Image(
painter = painterResource(
id = R.drawable.ic_launcher_foreground,
),
contentDescription = "",
modifier = Modifier
.align(Alignment.Center)
.clip(RoundedCornerShape(16.dp))
.background(Black)
.size(80.dp),
contentScale = ContentScale.Crop,
)
IconButton(
onClick = {},
modifier = Modifier
.clip(CircleShape)
.background(White)
.align(Alignment.TopEnd)
.size(16.dp)
) {
Icon(
imageVector = Icons.Rounded.Close,
contentDescription = "",
)
}
}
}
```
|
Python unicode error
I have a program made in Python that was built for Python 2, but now I have to rebuild it for Python 3, and I have already changed some things. Somehow, my CSV is not being loaded and it says...
"Unresolved reference unicode" for the first example (I have already seen a solution here, but it didn't work at all).
It also says "unresolved reference file". Can anybody help me please? Thanks in advance ;)
```
def load(self, filename):
try:
f = open(filename, "rb")
reader = csv.reader(f)
for sub, pre, obj in reader:
sub = unicode(sub, "UTF-8").encode("UTF-8")
pre = unicode(pre, "UTF-8").encode("UTF-8")
obj = unicode(obj, "UTF-8").encode("UTF-8")
self.add(sub, pre, obj)
f.close()
print
"Loaded data from " + filename + " !"
except:
print
"Error opening file!"
def save(self, filename):
fnm = filename ;
f = open(filename, "wb")
writer = csv.writer(f)
for sub, pre, obj in self.triples(None, None, None):
writer.writerow([sub.encode("UTF-8"), pre.encode("UTF-8"), obj.encode("UTF-8")])
f.close()
print
"Written to " + filename
```
|
```
unicode(sub, "UTF-8")
```
should be
```
sub.decode("UTF-8")
```
Python3 unified the `str` and `unicode` types so there's no longer a builtin `unicode` cast operator.
---
The Python 3 [Unicode HOWTO](http://docs.python.org/3/howto/unicode.html#python-s-unicode-support) explains a lot of the differences.
>
> Since Python 3.0, the language features a str type that contain Unicode characters, meaning any string created using `"unicode rocks!"`, `'unicode rocks!'`, or the triple-quoted string syntax is stored as Unicode.
>
>
>
and explains how `encode` and `decode` relate to one another
>
> # Converting to Bytes
>
>
> The opposite method of `bytes.decode()` is `str.encode()`, which returns a `bytes` representation of the Unicode string, encoded in the requested encoding.
>
>
>
---
Instead of
```
file(...)
```
use `open`
The [I/O docs](http://docs.python.org/3/tutorial/inputoutput.html#reading-and-writing-files) explain how to use `open` and how to use `with` to make sure it gets closed.
>
> It is good practice to use the with keyword when dealing with file objects. This has the advantage that the file is properly closed after its suite finishes, even if an exception is raised on the way. It is also much shorter than writing equivalent try-finally blocks:
>
>
>
> ```
> >>> with open('workfile', 'r') as f:
> ... read_data = f.read()
> >>> f.closed
> True
>
> ```
>
>
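Putting both fixes together, the question's `load` method could be rewritten for Python 3 roughly like this (a sketch: the three-column CSV layout comes from the question, and the method is shown as a standalone function without the `self.add` call):

```python
import csv

def load_triples(filename):
    """Read (sub, pre, obj) rows from a CSV file, Python 3 style."""
    triples = []
    # Open in text mode with an explicit encoding; csv.reader yields str rows,
    # so no unicode()/decode() calls are needed.
    with open(filename, newline="", encoding="utf-8") as f:
        for sub, pre, obj in csv.reader(f):
            triples.append((sub, pre, obj))
    print("Loaded data from " + filename + " !")
    return triples
```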
|
Xamarin.forms how to auto save user name like browser
I am developing a mobile application using **Xamarin.Forms**
I have the following Home page containing login info:
[](https://i.stack.imgur.com/8U5S4.png)
How can we have the application to automatically save the user name, so that they do not have to type it in each time (as in a browser)?
|
You can use the `Properties` dictionary in the Xamarin.Forms `Application` class, and let the `Xamarin.Forms` framework handle persisting the user name between app restarts and pausing/resuming your app.
Save the user name by writing it to the `Properties` dictionary:
```
var properties = Xamarin.Forms.Application.Current.Properties;
if (!properties.ContainsKey("username"))
{
    properties.Add("username", username);
}
else
{
    properties["username"] = username;
}
```
Then, when your login screen is about to appear (for example, in the `OnAppearing` method), check `Properties` for the user name:
```
var properties = Xamarin.Forms.Application.Current.Properties;
if (properties.ContainsKey("username"))
{
    var savedUsername = (string)properties["username"];
}
```
If it's not there, it means that this is the first time the user has logged in to your application.
|
Place div on top of boundary between two divs
I'm trying to put the arrow on top of the boundary between two divs, like in this picture: [link](https://i.stack.imgur.com/CS7kB.png).
The code below doesn't work, and I'm trying to make it work with `position: absolute;`, but I don't know how.
```
.section1 {
background-color: lightgray;
text-align: center;
padding: 100px;
}
.content {
display: inline-block;
}
.section1 .separator {
margin: 0 auto;
position: absolute; /* XXX this does something weird */
bottom: 0;
}
.section2 {
height: 200px;
background-color: coral;
}
```
```
<div class="section1">
<div class="content">
Hello, world!
</div>
<div class="separator">
▼
</div>
</div>
<div class="section2">
<div class="content">
Hello, world!
</div>
</div>
```
|
First up: you need to make `section1` use relative positioning so that the `separator` is positioned relative to its container.
Then you can position it in the bottom center with:
```
bottom: 0;
left: 50%;
```
And then finally, translate it 50% left (so that its center is aligned with its container's center) and 50% down (so it's half out of its container) using:
```
transform: translate(-50%, 50%);
```
Working example:
```
.section1 {
background-color: lightgray;
text-align: center;
padding: 100px;
/* This makes sure the separator is positioned relative to the correct element */
position: relative;
}
.content {
display: inline-block;
}
.section1 .separator {
position: absolute;
/* Position the element in the center bottom */
bottom: 0;
left: 50%;
/* Translate it to the offset position */
transform: translate(-50%, 50%);
}
.section2 {
height: 200px;
background-color: coral;
}
```
```
<div class="section1">
<div class="content">
Hello, world!
</div>
<div class="separator">
▼
</div>
</div>
<div class="section2">
<div class="content">
Hello, world!
</div>
</div>
```
|
How to center text on android IconGenerator
I'm developing an app using lots of markers placed on the map, and I'm using a custom ClusterRenderer to show them.
Problem is that I can't draw the cluster's size in the center of the custom marker icon, please see attached screenshot.
I've tried adding contentPadding to the IconGenerator, but still no luck, because of the changing number of digits shown. Could you please help me center the text on the generated icon?
Code:
```
IconGenerator clusterIconGenerator = new IconGenerator(context);
clusterIcon = context.getResources().getDrawable(R.drawable.map_cluster);
clusterIconGenerator.setBackground(clusterIcon);
@Override
protected void onBeforeClusterRendered(Cluster<MyType> cluster, MarkerOptions markerOptions) {
Bitmap clusterIcon = clusterIconGenerator.makeIcon(String.valueOf(cluster.getSize()));
markerOptions.icon(BitmapDescriptorFactory.fromBitmap(clusterIcon));
}
```
[](https://i.stack.imgur.com/ZC2jM.png)
|
## UPDATE
Starting Apr 1, 2016, a prefix has been added to the resources of the library, so the id "text" has been changed to **"amu\_text"**.
---
As stated in the library documentation :
>
> setContentView public void setContentView(View contentView)
> Sets the child view for the icon.
> If the view contains a
> TextView with the id "amu\_text", operations such as
> setTextAppearance(Context, int) and makeIcon(String) will operate
> upon that TextView .
>
>
>
```
@Override
protected void onBeforeClusterRendered(Cluster<Dashboard_Marker> cluster, MarkerOptions markerOptions) {
IconGenerator TextMarkerGen = new IconGenerator(context);
Drawable marker;
int ClusterSize = cluster.getSize();
marker = context.getResources().getDrawable(R.drawable.cluster_red);
TextMarkerGen.setBackground(marker);
LayoutInflater myInflater = (LayoutInflater)context.getSystemService(Context.LAYOUT_INFLATER_SERVICE);
View activityView = myInflater.inflate(R.layout.cluster_view, null, false);
TextMarkerGen.setContentView(activityView);
BitmapDescriptor icon = BitmapDescriptorFactory.fromBitmap(TextMarkerGen.makeIcon(String.valueOf(ClusterSize)));
markerOptions.icon(icon);
}
```
with the layout cluster\_view as :
```
<LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
android:orientation="vertical" android:layout_width="match_parent"
android:layout_height="match_parent"
android:layout_centerHorizontal="true"
android:layout_centerInParent="true"
android:layout_centerVertical="true"
android:weightSum="1">
<TextView
android:layout_width="61dp"
android:layout_height="wrap_content"
android:textAppearance="?android:attr/textAppearanceMedium"
android:text="Medium Text"
android:textColor="#000000"
android:id="@+id/amu_text"
android:layout_marginTop="13dp"
android:gravity="center" />
</LinearLayout>
```
Note: the layout must contain one TextView with an id of "amu\_text" in order for the icon generator to accept it. You can do all the positioning you want in the layout.
|
Rails does not load assets located in public directory in production
Hello, I have assets in the public directory (for simplicity).
In my layout I load:
```
<link href="/bootstrap/css/bootstrap.css" rel="stylesheet">
<link href="/assets/css/jumbotron.css" rel="stylesheet">
<link href="/assets/css/application.css" rel="stylesheet">
```
and in development it works well, but in production the assets are not loaded.
My **Development.rb**
```
Web::Application.configure do
config.cache_classes = false
config.whiny_nils = true
config.consider_all_requests_local = true
config.action_controller.perform_caching = false
config.action_mailer.raise_delivery_errors = false
config.active_support.deprecation = :log
config.action_dispatch.best_standards_support = :builtin
config.active_record.mass_assignment_sanitizer = :strict
config.active_record.auto_explain_threshold_in_seconds = 0.5
config.assets.compress = false
config.assets.debug = true
end
```
My **Production.rb**
```
Web::Application.configure do
config.cache_classes = false
config.consider_all_requests_local = true # default false, shows errors
config.action_controller.perform_caching = false # default true
config.serve_static_assets = false
config.assets.compress = true
config.assets.compile = true # default false
config.assets.digest = true
config.i18n.fallbacks = true
config.active_support.deprecation = :notify
end
```
|
This is because you have
```
config.serve_static_assets = false
```
in your `production.rb` file.
From the [Rails Configuration guide](http://guides.rubyonrails.org/configuring.html):
>
> - **`config.serve_static_assets`** configures Rails itself to serve static assets. Defaults to true, but in the production environment is turned off as the server software (e.g. Nginx or Apache) used to run the application should serve static assets instead. Unlike the default setting, set this to true when running (absolutely not recommended!) or testing your app in production mode using WEBrick. Otherwise you won't be able to use page caching and requests for files that exist regularly under the public directory will anyway hit your Rails app.
>
>
>
And like that guide suggests, you really shouldn't rely on serving assets from `public/` via your Rails app, it is better to let the web server (e.g. Apache or Nginx) handle serving assets for performance.
|
Maven Javadoc - Unable to generate Javadoc
I have the following dependency and build sections in my POM file. I'm able to manually create the Javadoc with a Maven command, and I can also successfully perform a build, but the build output doesn't mention Javadoc at all. I've also tried leaving out the output directory paths.
Dependency section:
```
<dependency>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-javadoc-plugin</artifactId>
<version>2.8</version>
</dependency>
```
and then the build section:
```
<build>
<finalName>D</finalName>
<plugins>
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-javadoc-plugin</artifactId>
<version>2.8</version>
<configuration>
<outputDirectory>${project.build.directory}/javadoc</outputDirectory>
<reportOutputDirectory>${project.reporting.outputDirectory}/javadoc</reportOutputDirectory>
<version>2.8</version>
</configuration>
<executions>
<execution>
<id>attach-javadocs</id>
<goals>
<goal>aggregate</goal>
</goals>
</execution>
</executions>
</plugin>
</plugins>
</build>
```
|
The Maven Javadoc plugin doesn't run by default and needs to be bound to one of the default Maven lifecycle phases.
Here's how I would write the plugin's configuration:
```
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-javadoc-plugin</artifactId>
<version>2.8</version>
<configuration>
<outputDirectory>${project.build.directory}/javadoc</outputDirectory>
<reportOutputDirectory>${project.reporting.outputDirectory}/javadoc</reportOutputDirectory>
</configuration>
<executions>
<execution>
<id>attach-javadocs</id>
<phase>site</phase>
<goals>
<goal>aggregate</goal>
</goals>
</execution>
</executions>
</plugin>
```
Notice how I added an extra `phase` element to the execution. This will bind it to the "site" goal so that javadocs are generated when you run `mvn site`. Check [Introduction to the Build Lifecycle](http://maven.apache.org/guides/introduction/introduction-to-the-lifecycle.html#Build_Lifecycle_Basics) if you want one of the default Java build phases.
Also note that I ditched the `version` parameter; by default, it should use your POM's version anyway.
|
How to make use of the Voice API to make calls using Huawei 3G modems?
Some Huawei 3G modems, like mine (E1752), have the ability to make and receive calls. I believe there is an onboard PCM channel that can be used while making or receiving calls, but I do not have any more information on that.
I am using their app called Mobile Partner, which is a fairly complete app that supports making and receiving calls. But I want to build my own app that will run on Mac OS X, and I am not able to locate any documents detailing the Voice API and the onboard PCM channel. If anybody is aware of this, please let me know.
|
Voice is implemented as follows: your modem registers a total of 5 devices. The audio is sent through the serial port named "Huawei Mobile Connect - Application Interface".
Format of voice (in|out) data:
```
wFormatTag = WAVE_FORMAT_PCM;
nChannels = 1;
nSamplesPerSec = 8000;
nAvgBytesPerSec = 16000;
nBlockAlign = 2;
wBitsPerSample = 16;
cbSize = 0;
```
The block size of voice data in `ReadFile` or `WriteFile` operations (for the COM port) must be set to **320** bytes. After each `ReadFile` there must be a `WriteFile` operation (otherwise the buffers will overflow and the modem will restart after some time). Sample:
```
// BlockSize - size of buff for wave in|out operations (in my case 320*4 bytes)
while (!bAllRead) {
if (cInfo->hCom == INVALID_HANDLE_VALUE) {
SetVoiceClosed(cInfo);//exit from thread
return 0;
}
BOOL isRead = ReadFile(cInfo->hCom, cInfo->Header[counter].lpData + currBlocLength, 320, &nActualRead, &cInfo->o);
if (isRead || (GetLastError() == ERROR_IO_PENDING && GetOverlappedResult(cInfo->hCom, &cInfo->o, &nActualRead, TRUE))) {
if (nActualRead > 0) {
// echo the voice data back
nActualWrite = 0;
int nActualWriteAll = 0;
BOOL isWrite = WriteFile(cInfo->hCom, CurrBuffPtr + currBlocLength, nActualRead, &nActualWrite, &cInfo->oVoiceOut);
while (isWrite || (GetLastError() == ERROR_IO_PENDING && GetOverlappedResult(cInfo->hCom, &cInfo->oVoiceOut, &nActualWrite, TRUE))) {
nActualWriteAll += nActualWrite;
if (nActualWriteAll >= nActualRead)
break;
}
currBlocLength += nActualRead;
if (currBlocLength >= BlockSize)
bAllRead = true;
}
else {
Sleep(25);// wait for voice data (resync)
PurgeComm(cInfo->hCom, PURGE_TXABORT | PURGE_RXABORT | PURGE_TXCLEAR | PURGE_RXCLEAR);
}
}
else {
bAllRead = true;// there are no active call
PurgeComm(cInfo->hCom, PURGE_TXABORT | PURGE_RXABORT | PURGE_TXCLEAR | PURGE_RXCLEAR);
}
}
```
Something like that. I did not find any useful info on the Internet, so all these recommendations are based on my own experiments. I hope that was useful.
PS: I hope wave in/out operations will not be a problem for you.
PS2: Sorry for my English, I'm from Ukraine.
|
Multiple variable assignments in one row
As each programming language is different and my experience with JavaScript is at a basic level, I would like to know how multiple variable assignments in one row are evaluated.
Example:
```
a = b = c = d = 5;
```
Will such statement assign `5` to each of 4 variables `a`, `b`, `c` and `d`?
Thanks.
|
The short answer is *yes*: that statement will assign `5` to each of the 4 variables `a`, `b`, `c` and `d`. But, contrary to what was said, it doesn't assign `5` to `d` and then the value of `d` to `c`; it assigns the same value to each variable, starting from the right-hand side. To be more clear, your statement:
```
var a, b, c, d;
a = b = c = d = 5;
```
It's equivalent to:
```
var d = 5;
var c = 5;
var b = 5;
var a = 5;
```
**Not** to:
```
var d = 5;
var c = d;
var b = c;
var a = b;
```
It's a subtle but important difference: in the first case, JavaScript just *sets* a value on all the variables. In the second case, JavaScript *sets* a value on all the variables **but** also *gets* the value of three of them (the value of `a` is not assigned anywhere).
A simple code that will show that:
```
// `this` is the global object if you run this code in the global scope.
// In the browsers the global object is `window`.
Object.defineProperties(this, {
"a": {
get: function() {
console.log("get a");
},
set: function(value) {
console.log("set a");
}
},
"b": {
get: function() {
console.log("get b");
},
set: function(value) {
console.log("set b");
}
},
"c": {
get: function() {
console.log("get c");
},
set: function(value) {
console.log("set c");
}
},
"d": {
get: function() {
console.log("get d");
},
set: function(value) {
console.log("set d");
}
}
});
b = c = d = 5;
a = b;
```
On the console you should have:
```
set d
set c
set b
get b
set a
```
As you can see, for the statement `b = c = d = 5` JS only *sets* the variables, and it calls both `set` and `get` on `b` because of the statement `a = b`.
This distinction is very important, because if you define some getter for your property and you're not aware of this behavior, you will end up with unexpected bugs when using multiple variable assignments.
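As a final note, the reason the chained form works at all is that in JavaScript `=` is itself an expression: it is right-associative and evaluates to the assigned value, so `a = b = c = d = 5` is parsed as `a = (b = (c = (d = 5)))`:

```javascript
"use strict";
let a, b, c, d;

// Explicit grouping; identical to: a = b = c = d = 5;
a = (b = (c = (d = 5)));
console.log(a, b, c, d); // 5 5 5 5

// An assignment expression evaluates to the assigned value,
// which is why it can be chained (or logged directly):
console.log((a = 7)); // 7
```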
|
How to get type of file?
I'm trying to find a package which would recognise file type. For example
```
final path = "/some/path/to/file/file.jpg";
```
should be recognised as image or
```
final path = "/some/path/to/file/file.doc";
```
should be recognised as document
|
You can make use of the [`mime` package from the Dart team](https://pub.dev/packages/mime) to extract the [MIME types](https://developer.mozilla.org/en-US/docs/Web/HTTP/Basics_of_HTTP/MIME_types) from file names:
```
import 'package:mime/mime.dart';
final mimeType = lookupMimeType('/some/path/to/file/file.jpg'); // 'image/jpeg'
```
## Helper functions
If you want to know whether a file *path* represents an **image**, you can create a function like this:
```
import 'package:mime/mime.dart';
bool isImage(String path) {
final mimeType = lookupMimeType(path);
return mimeType.startsWith('image/');
}
```
Likewise, if you want to know if a *path* represents a **document**, you can write a function like this:
```
import 'package:mime/mime.dart';
bool isDocument(String path) {
final mimeType = lookupMimeType(path);
return mimeType == 'application/msword';
}
```
---
You can find lists of MIME types [at IANA](https://www.iana.org/assignments/media-types/media-types.xhtml) or look at the [extension map in the `mime` package](https://github.com/dart-lang/mime/blob/c551c43d09e2290e033c90ed8d911140d4bcef20/lib/src/default_extension_map.dart#L8).
## From file headers
With the `mime` package, you can even check against header bytes of a file:
```
final mimeType = lookupMimeType('image_without_extension', headerBytes: [0xFF, 0xD8]); // jpeg
```
|
Get current user information in Apigility Resource
I just started with Apigility and OAuth2, and I was wondering if it is possible to get the currently authenticated (logged-in) user when fetching information from a database.
I currently have the following code:
```
/**
* Fetch all or a subset of resources
*
* @param array $params
* @return mixed
*/
public function fetchAll($params = array())
{
var_dump($params);
// Using Zend\Db's SQL abstraction
$sql = new \Zend\Db\Sql\Sql($this->db);
//I would like to get the currently logged in user here... but how?
$select = $sql->select('projects')->where(array('userid' => 1));
// This provides paginated results for the given Select instance
$paged = new \Zend\Paginator\Adapter\DbSelect($select, $this->db);
// which we then pass to our collection
return new ProjectsCollection($paged);
}
```
I did a lot of searching already, but I have no clue how to access the user information or the access token. Do I need to parse the request header for this?
|
I was also looking for this. I didn't find any documentation about it, but the answer is quite simple:
Resource classes inherit from `ZF\Rest\AbstractResourceListener`, which already has a `getIdentity` method.
```
/**
* Fetch all or a subset of resources
*
* @param array $params
* @return mixed
*/
public function fetchAll($params = array())
{
// if user isn't authenticated return nothing
if(!$this->getIdentity() instanceof ZF\MvcAuth\Identity\AuthenticatedIdentity) {
return [];
}
    // this array returns the authentication info;
// in this case we need the 'user_id'
$identityArray= $this->getIdentity()->getAuthenticationIdentity();
// note, by default user_id is the email (username column in oauth_users table)
$userId = $identityArray['user_id'];
// fetch all using $userId
}
```
You can also use `getIdentity` in RPC services.
I'm using the latest version of apigility.
|
Visual Studio setup project: run CustomActions/process as current user not system account
I'm using a setup project in Visual Studio 2010 for a C# Outlook add-in (Office 2010/2013) and another standalone tool. During the installation I kill all instances of Outlook; afterwards I want to restart an instance of Outlook.
In my add-in project I added an installer class with an InstallEventHandler (AfterInstallEventHandler) where I execute
```
Process.Start("Outlook");
```
While the same command simply opens Outlook in another compiled class, in the context of the installer Outlook opens in the profile creation assistant.
I also tried to run said working compiled exe as a user-defined action after the commit, but the same problem occurs.
Any solution or explanation would be appreciated.
|
**SOLUTION:**
The installation runs under the SYSTEM account. Therefore the created process also runs under that account, not as the currently logged-in user.
I created an additional project (InstallHelper), which includes the
```
Process.Start("Outlook");
```
I added the InstallHelper as a CustomAction on Commit in my setup project and changed InstallerClass to False in the properties of the CustomAction. Then I copied WiRunSql.vbs to the project folder and added a PostBuildEvent to the setup project:
```
@echo off
cscript //nologo "$(ProjectDir)WiRunSql.vbs" "$(BuiltOutputPath)" "UPDATE CustomAction SET Type=1554 WHERE Type=3602"
```
3602:
- 0x800 (msidbCustomActionTypeNoImpersonate)
- 0x400 (msidbCustomActionTypeInScript)
- 0x200 (msidbCustomActionTypeCommit)
- 0x12 (Custom Action Type 18: exe)
1554:
- 0x400 (msidbCustomActionTypeInScript)
- 0x200 (msidbCustomActionTypeCommit)
- 0x12 (Custom Action Type 18: exe)
See:
[msdn: Custom Action In-Script Execution Options](http://msdn.microsoft.com/en-us/library/Aa368069.aspx)
The Type-change removed the bit for msidbCustomActionTypeNoImpersonate (0x00000800), so the InstallHelper and the created process are run as the logged in user, not as SYSTEM.
Alternatively those changes are possible via opening the msi in orca (has to be repeated after each build, so I prefer the scripted change).
|
php get array index of object having specific key value
I need to test if a specific key/value exists and then return the array index that the key/value is a member of without having to loop through each element and/or levels of the array. Is this possible?
```
[0:
{ regional:
[ 0 :
{
ID: 1000
someInfoA : valueA
someInfoB : valueB
}
]
[ 1 :
{
ID: 1001
someInfoA : valueA
someInfoB : valueB
}
]
[ 2 :
{
ID: 1002
someInfoA : valueA
someInfoB : valueB
}
]
[ 3 :
{
ID: 1003
someInfoA : valueA
someInfoB : valueB
}
]
}
{ national :
[ ... ]
[ ... ]
}
]
```
In this data example, I need to see if `ID: 1002` exists and, if it does, return the array index `[2]`; if not, return `false`. I have seen various ways of just determining whether the key/value exists, but I haven't seen a non-loop method for returning the array index it belongs to.
|
You can do a combination of [array\_column()](https://www.php.net/manual/en/function.array-column.php) and [array\_search()](https://www.php.net/manual/en/function.array-search.php). Have a look [here](http://sandbox.onlinephpfunctions.com/code/7b462d47bc0f5b63dc4e5846ebefebf5e4eeb046).
```
$records = [
[
'id' => 2135,
'first_name' => 'John',
'last_name' => 'Doe',
],
[
'id' => 3245,
'first_name' => 'Sally',
'last_name' => 'Smith',
],
[
'id' => 5342,
'first_name' => 'Jane',
'last_name' => 'Jones',
],
[
'id' => 5623,
'first_name' => 'Peter',
'last_name' => 'Doe',
]
];
$key = array_search(3245, array_column($records, 'id'));
echo $key;
```
|
Where can I learn about perl6 Type variables (::T)
I need to use perl6 type variables. It seems that the definitive manual is here <http://www.jnthn.net/papers/2008-yapc-eu-perl6types.pdf>, which is concise and v. useful in so far as it goes.
Is there anything more comprehensive or authoritative that I can be pointed to?
|
The way I like to think of it is that `Int` is really short for `::Int`.
So most of the time that you are talking about a type, you can add the `::` to the front of it.
Indeed if you have a string and you want to use it to get the type with the same short name you use `::(…)`
```
my $type-name = 'Int';
say 42 ~~ ::($type-name); # True
```
---
The thing is that using a type in a signature is already used to indicate that the parameter is of that type.
```
-> Int $_ {…}
```
Any unsigiled identifier in a signature is seen as the above, so the following throws an error if there isn't a `foo` type.
```
-> foo {…}
```
What you probably want in the situation above is for `foo` to be a sigilless variable. So you have to add a `\` to the front. (Inside of the block you just use `foo`.)
```
-> \foo {…}
```
---
So if you wanted to add a feature where you capture the type, you have to do something different than just use an identifier. So obviously adding `::` to the front was chosen.
```
-> ::foo { say foo }
```
If you call it with the number `42`, it will print `(Int)`.
---
You can combine these
```
-> Real ::Type \Value {…}
```
The above only accepts a real number (all numerics except Complex), aliases the type to `Type`, and aliases the number to `Value`:
```
sub example ( Real ::Type \Value ) {
my Type $var = Value;
say Type;
say Value;
}
```
```
> example 42;
(Int)
42
> example ''
Type check failed in binding to parameter 'Value'; expected Real but got Str ("")
in block <unit> at <unknown file> line 1
> example 42e0
(Num)
42
```
---
This is also used in roles.
```
role Foo[ Real ::Type \Value ] {
has Type $.foo = Value; # constrained to the same type as Value
}
class Example does Foo[42] {}
say Example.new( :foo(128) ).foo; # 128
say Example.new().foo; # 42
say Example.new( :foo(1e0) ); # Type check error
```
You can of course leave off any part that you don't need.
```
role Foo[::Type] {…}
```
|
Opencv java - Load image to GUI
I'm developing an application using Java OpenCV-2.4.4 and a Swing GUI. The problem is that I'm unable to find any solution that shows an efficient way to display a processed image (saved in a *Mat* object) in the Java Swing GUI. For the moment I'm using this clumsy solution:
```
javax.swing.JLabel outputImage;
outputImage.setIcon(new javax.swing.ImageIcon("/home/username/Output.png"));
private void sliderStateChanged(javax.swing.event.ChangeEvent evt) {
.
.
Mat canny; // Here is saved what I want to plot
String filename = "/home/username/Output.png";
Highgui.imwrite(filename, canny); // write to disk
outputImage.setIcon(new ImageIcon(ImageIO.read(new File(filename)))); //update Icon
.
.
}
```
When the user changes some values, inputs etc. in the GUI, I have to overwrite *Output.png* on disk and update the jLabel with the new image from disk.
Is there any more elegant / efficient solution to this? Is it possible to plot or convert a *Mat* object directly to a *Canvas* or *Image* or anything that is printable as an image in Swing?
|
JPEG encoding is interesting, but there are a couple of problems:
- it is not a lossless format, you will lose image data when compressing
- it takes quite a while (around 6 to 10 times longer than the suggested one below)
```
public Image toBufferedImage(Mat m) {
    int type = BufferedImage.TYPE_BYTE_GRAY;
    if (m.channels() > 1) {
        type = BufferedImage.TYPE_3BYTE_BGR;
    }
    int bufferSize = m.channels() * m.cols() * m.rows();
    byte[] b = new byte[bufferSize];
    m.get(0, 0, b); // get all the pixels
    BufferedImage image = new BufferedImage(m.cols(), m.rows(), type);
    final byte[] targetPixels = ((DataBufferByte) image.getRaster().getDataBuffer()).getData();
    System.arraycopy(b, 0, targetPixels, 0, b.length);
    return image;
}
```
|
Python: Extracting bits from a byte
I'm reading a binary file in python and the documentation for the file format says:
>
> Flag (in binary)Meaning
>
>
> 1 nnn nnnn Indicates that there is one data byte to follow
> that is to be duplicated nnn nnnn (127 maximum)
> times.
>
>
> 0 nnn nnnn Indicates that there are nnn nnnn bytes of image
> data to follow (127 bytes maximum) and that
> there are no duplications.
>
>
> n 000 0000 End of line field. Indicates the end of a line
> record. The value of n may be either zero or one.
> Note that the end of line field is required and
> that it is reflected in the length of line record
> field mentioned above.
>
>
>
When reading the file I'm expecting the byte I'm at to return `1 nnn nnnn` where the `nnn nnnn` part should be 50.
I've been able to do this using the following:
```
flag = byte >> 7
numbytes = int(bin(byte)[3:], 2)
```
But the numbytes calculation feels like a cheap workaround.
Can I do more bit math to accomplish the calculation of numbytes?
How would you approach this?
|
You can strip off the leading bit using a mask ANDed with a byte from file. That will leave you with the value of the remaining bits:
```
mask = 0b01111111
byte_from_file = 0b10101010
value = mask & byte_from_file
print bin(value)
>> 0b101010
print value
>> 42
```
I find the binary numbers easier to understand than hex when doing bit-masking.
EDIT: Slightly more complete example for your use case:
```
LEADING_BIT_MASK = 0b10000000
VALUE_MASK = 0b01111111
values = [0b10101010, 0b01010101, 0b0000000, 0b10000000]
for v in values:
value = v & VALUE_MASK
has_leading_bit = v & LEADING_BIT_MASK
if value == 0:
print "EOL"
elif has_leading_bit:
print "leading one", value
elif not has_leading_bit:
print "leading zero", value
```
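Putting both pieces together for the record format in the question (a small sketch; the helper name is mine): a right shift gives you the flag bit, and the mask gives you the 7-bit count.

```python
VALUE_MASK = 0b01111111

def parse_flag_byte(byte):
    """Split a flag byte into (leading bit, low 7-bit count)."""
    flag = byte >> 7           # 1 -> one data byte follows, duplicated `count` times
    count = byte & VALUE_MASK  # nnn nnnn, 0..127; 0 means end-of-line field
    return flag, count

# 0b10110010: leading 1, low seven bits 0110010 = 50
print(parse_flag_byte(0b10110010))  # (1, 50)
print(parse_flag_byte(0b00000000))  # (0, 0) -> end of line
```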
|
"Dynamic segment" in Ember.js?
Throughout the Ember.js documentation, one finds the concept of **dynamic segment** mentioned [at several places](https://www.google.de/search?q=site%3Aemberjs.com+%22dynamic+segment%22). What does it mean?
|
Updating with a proper sample: [Demo](http://jsfiddle.net/schawaska/ZRaxz/show) | [Source](http://jsfiddle.net/schawaska/ZRaxz/)
**Edit due to questions in comments:**
In Ember, think of the `Router` mechanism as a state machine: each `Route` can be seen as a state. Sometimes, though, a state can have its own little state machine within it. With that said: a `resource` is a state which can have child states. A `PersonRoute` can be defined either as a `resource` or a `route` in the `<Application>.Router.map` callback; it really depends on your end goal. For example, if we think of a resource for a list of people based on a person model, we would potentially have a route to list all records.
```
App.Router.map(function() {
this.resource('people');
});
```
With this map, I'm telling my app that it needs a people template (and maybe a view), a people controller and a people route. A resource is also assumed to have an index route, which is implied and you don't have to code it, but if you need to, it would be `PeopleIndexRoute`, named after the resource itself, by convention.
Now I can (a) create a `person` route under `people` resource to be a single state of a person record; or (b) I can create a `person` resource under the `people` resource, so I would have more options under `person` resource (edit, detail, delete); or (c) I could create a separate resource for person, and use the path to override the url if I want to.
I sometimes go for option c:
```
App.Router.map(function() {
this.resource('people');
this.resource('person', {path: 'person/:person_id'}, function() {
this.route('edit');
this.route('delete');
});
});
```
It makes sense that `edit` is a route, since it doesn't have child states, only siblings (`delete`) and a parent (`person`). The url for a record would be something like this: `~/#/person/3/edit`.
The routes, when not defined as a resource, won't have any child route/state, so you don't have `person.edit.index` like you have `person.index`; in other words, routes don't have children, only siblings, while resources can have both.
Right now, the [Routing Guide](http://emberjs.com/guides/routing/) is the most solid piece of documentation we have about this. I strongly recommend it.
---
Dynamic Segment is a part of a route URL which changes according to the resource in use. Consider the following:
```
App.Router.map(function() {
this.resource('products', function() {
this.route('product', { path: ':product_id' })
}
});
```
In the stub above, the line:
```
this.resource('products', function() {
```
will produce the url
>
> ~/#/products
>
>
>
and the following line will produce
>
> ~/#/products/:product\_id
>
>
>
replacing the dynamic part, you could have an url like this
>
> ~/#/products/3
>
>
>
the `:product_id` is what makes this route dynamic. The router will serialize the id of a resource (for example a `Product` model) to the URL, and it also uses that id to `find` a model in your `DS.Store`. You'll often see this in routes like the following:
```
App.ProductRoute = Em.Route.extend({
model: function(params) {
return App.Product.find(params.product_id);
}
});
```
So for this example, if you access `~/#/products/3`, the app will then try to load an instance of the Product model from your store or try to fetch from your backend API.
You can see a fiddle that illustrates that [here](http://fiddle.jshell.net/schawaska/bw45L/show) | [source here](http://fiddle.jshell.net/schawaska/bw45L)
I also recommend this [screencast by Tom Dale](http://www.youtube.com/watch?v=Ga99hMi7wfY&list=PLLUBPLc28H8c8ihVHkYeRHe8OiB8P5WL0&index=13) where he builds a blog reader app with Ember.js using the router and the ember-data API to load blog records based on the dynamic part of the URL.
|
Hiding other windows/programs from Electron App
I've been searching NPM for some package that will allow me (Windows only; Linux and OSX support is not required) to hide specific open windows. For instance, I would like to be able to hide and unhide a Notepad window; however, if I have multiple Notepad windows open I do not want to hide them all, I want to hide only a single specific one. I'd like it to be hidden from the task bar and not just have the opacity set to invisible.
Does anyone know of an NPM package with this functionality, or if it's even possible?
I believe the equivalent of this in C# would be ShowWindow from user32.dll.
Thanks.
|
I wasn't able to locate an out-of-the-box solution for you, but you hint at the answer to your problem in your question: user32.dll is accessible from node.js/Electron using node-ffi.
To install node-ffi you need node-gyp and all of its dependencies on Windows. See <https://github.com/nodejs/node-gyp#installation>
Next you need to do the following:
```
npm install ffi
npm install electron-rebuild
.\node_modules\.bin\electron-rebuild.cmd
```
That last one is to rebuild the native binaries for Electron.
Put the following chunk of code in your Electron application:
```
var FFI = require('ffi');
function TEXT(text){
return new Buffer(text, 'ucs2').toString('binary');
}
var user32 = new FFI.Library('user32', {
'FindWindowW': ['int', ['string', 'string']],
'ShowWindow': ['int', ['int', 'int']]
});
var handle = user32.FindWindowW(null, TEXT('Untitled - Notepad'));
console.log(handle);
user32.ShowWindow(handle, 0);
```
Launch notepad and make sure the window title is 'Untitled - Notepad'. If you launch your Electron app the window should hide, but the process remains. I've posted a video that demonstrates that this does work: <https://youtu.be/4thydneqtyQ>
As for the second half of your question...
You'll need some way to identify which Window you want to hide. If you can use the Window title then the above method will work.
If you cannot then you'll need to have some other way to evaluate the Window via the Win32 API and make the determination. You could use EnumWindows (<https://msdn.microsoft.com/en-us/library/windows/desktop/ms633497(v=vs.85).aspx>) via FFI to get every window and then use other functions to inspect it and make the determination.
If you provide exactly how you plan to differentiate I'll see if I can find a specific solution for it. Otherwise my suggestion is to research how to do it via user32.dll using C# or C++ and then translate the result to Electron using FFI.
|
What happens when I truncate a file that is in use?
Around the web many people say you can truncate a file using `> filename` or `truncate -s0 filename` while the file is being used.
I know that every time a process writes to a file, it uses an offset to write into the file. I did a test with a script like this:
```
#!/usr/bin/env python
import os, time
with open("passwd","w") as f: #copy of passwd file in my current directory
f.seek(0)
for x in xrange(1,1000):
f.write("hello world \n" + time.ctime() + "\n")
f.flush()
time.sleep(2)
```
Every time my script makes a write syscall, the `pos` field in `/proc/pid_number/fdinfo/3` changes. But when I try to truncate the file using the methods listed above, I see many characters like `^@` in my file when I open it using `vim` or `less -u`, the file type changes from `ASCII text` to `data`, and when I use `ls -l filename` the size isn't changed.
So, when the file is truncated, the offset is not moved back. I'm testing this on `Centos 7` and `Redhat 5`, so I can tell that changing the file size while the file is in use by processes doesn't free space and leaves my file dirty.
So my question is: if my process has an open file at `pos` 1000 and I do `truncate -s0 filename`, and the truncate works, what happens on the process's next write?
```
strace truncate -s0 passwd
open("passwd", O_WRONLY|O_CREAT|O_NONBLOCK, 0666) = 3
ftruncate(3, 0) = 0
close(3) = 0
close(1) = 0
close(2) = 0
exit_group(0) = ?
ls -l passwd
-rw-rw-r--. 1 user91 users 13832 Feb 23 17:16 passwd
```
As you can see my file wasn't truncated
This problem doesn't happen if I open the file in append mode, for example with this code.
```
#include <stdio.h>
#include <stdlib.h>
#include <fcntl.h>
#include <errno.h>
#include <sys/types.h>
#include <unistd.h>
#include <sys/stat.h>
int main(){
    int range = 1000;
    /* O_CREAT requires a mode argument */
    int x = open("passwd", O_WRONLY|O_CREAT|O_APPEND, 0644);
    int i;
    for (i = 0; i <= range; i++) {
        write(x, "hello world\n", 12);
        sleep(2);
    }
    return 0;
}
```
|
Note although the system call is called truncate, it actually is better interpreted as saying "Make my file report this many bytes in size". As per the system call manpage:
>
> The truncate() and ftruncate() functions cause the regular file named by path or referenced by fd to be truncated to a size of precisely length bytes.
>
>
> If the file previously was larger than this size, the extra data is lost. If the file previously was shorter, it is extended, and the extended part reads as null bytes ('\0').
>
>
>
So, one can truncate a file and have it be larger, rather than smaller.
>
> So my questions is, if my process has an opened file in pos 1000 and i did truncate -s0 filename, if the truncate works, what happening in the next process write?
>
>
>
- You have truncated. The file size at this stage is 0 bytes. The file offset is still 1000.
- The next write occurs at offset 1000.
- The file size is now 1000 plus the number of bytes written. Bytes 0-999 read back as '\0' (null); bytes from offset 1000 onward contain the data written.
When you write to a file from a position larger than the file itself, the data between the end of the file and the new write becomes null bytes, and the file data between those two points is referred to as being *sparse*.
Indeed, you can do the following and produce the same effect.
```
import os, sys
f = open('data.txt','w')
f.seek(1048576)
f.write('a')
f.flush()
f.close()
```
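You can confirm that the resulting file is sparse by comparing its apparent size with the blocks actually allocated (a sketch; `st_blocks` is POSIX-specific and the exact allocation depends on the filesystem):

```python
import os

path = 'data.txt'
with open(path, 'w') as f:
    f.seek(1048576)  # move 1 MiB past the start of the empty file
    f.write('a')     # one real byte at offset 1048576

st = os.stat(path)
print(st.st_size)          # apparent size: 1048577 bytes
print(st.st_blocks * 512)  # bytes actually allocated, typically far smaller
os.remove(path)
```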
You also mentioned that opening in append mode avoids this behaviour. This is true because in that case you are instructing the kernel to "write to the actual end of the file, every time". If you truncate, then the end of the file does change. In append mode you cannot reposition your file pointer.
Here is a sample program which demonstrates what happens to the file, the offsets and the data in a file that has been truncated.
```
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/stat.h>
#include <sys/types.h>
#include <fcntl.h>
#include <err.h>
#define FPATH "/tmp/data.txt"
#define FSIZE 65536
int main() {
int a,b;
char buf[FSIZE];
char byte;
struct stat st;
memset(buf, 'A', FSIZE);
a = open(FPATH, O_WRONLY|O_CREAT);
b = open(FPATH, O_RDONLY);
if (a < 0 || b < 0)
errx(EXIT_FAILURE, "Could not open file");
printf("Writing %d * 'A' into file\n", FSIZE);
/* Write some bytes */
if(write(a, buf, FSIZE) != FSIZE)
errx(EXIT_FAILURE, "Couldn't write complete size out");
/* Seek to a new position in the file */
lseek(b, FSIZE/2, SEEK_SET);
printf("Current position of handle 'a': %d\n", lseek(a, 0, SEEK_CUR));
printf("Current position of handle 'b': %d\n", lseek(b, 0, SEEK_CUR));
stat(FPATH, &st);
printf("Reported size on filesystem of %s: %d\n", FPATH, st.st_size);
/* OK -- now, read the byte at the position */
if (read(b, &byte, 1) < 0)
err(EXIT_FAILURE, "Could not read file");
printf("Character at current position of handle 'b': '%c'\n", byte);
/* Truncate the file in the 'a' handle */
printf("Performing truncate...\n");
if (ftruncate(a, 0) < 0)
err(EXIT_FAILURE, "Cannot truncate file");
printf("Current position of handle 'a': %d\n", lseek(a, 0, SEEK_CUR));
printf("Current position of handle 'b': %d\n", lseek(b, 0, SEEK_CUR));
stat(FPATH, &st);
printf("Reported size on filesystem of %s: %d\n", FPATH, st.st_size);
printf("Writing one byte via handle 'a'\n");
if (write(a, buf, 1) < 0)
err(EXIT_FAILURE, "Cannot perform second write");
printf("Current position of handle 'a': %d\n", lseek(a, 0, SEEK_CUR));
printf("Current position of handle 'b': %d\n", lseek(b, 0, SEEK_CUR));
stat(FPATH, &st);
printf("Reported size on filesystem of %s: %d\n", FPATH, st.st_size);
if (read(b, &byte, 1) < 0)
err(EXIT_FAILURE, "Could not read file");
printf("Character at current position of handle 'b': '%c'\n", byte);
close(a);
close(b);
exit(0);
}
```
This results in the following output;
```
Writing 65536 * 'A' into file
Current position of handle 'a': 65536
Current position of handle 'b': 32768
Reported size on filesystem of /tmp/data.txt: 65536
Character at current position of handle 'b': 'A'
Performing truncate...
Current position of handle 'a': 65536
Current position of handle 'b': 32769
Reported size on filesystem of /tmp/data.txt: 0
Writing one byte via handle 'a'
Current position of handle 'a': 65537
Current position of handle 'b': 32769
Reported size on filesystem of /tmp/data.txt: 65537
Character at current position of handle 'b': ''
```
|
Simple C program opening a file
I'm trying to make a program to open a file, called "write.txt".
```
#include <stdio.h>
main() {
FILE *fp;
fp = fopen("write.txt", "w");
return 0;
}
```
Should this work? Because it returns nothing.
|
Other than an old variant of `main`, there's not really *much* wrong with that code. It should, barring errors, create the file.
However, since you're not checking the return value from `fopen`, you may get an error of some sort and not know about it.
I'd start with something like:
```
#include <stdio.h>
#include <errno.h>
int main(void) {
FILE *fp = fopen("write.txt", "w");
if (fp == NULL) {
printf("File not created, errno = %d\n", errno);
return 1;
}
fclose(fp);
printf("File created okay\n");
return 0;
}
```
---
If you're convinced that the file isn't being created but the above code says it is, then you may be a victim of the dreaded "IDE is working in a different directory from what you think" syndrome :-)
Some IDEs will actually run your code while they're in a directory like `<solution-name>\bin` or `<solution-name>\debug`. You can find out by putting:
```
system("cd");  // for Windows
system("pwd"); // for UNIXy systems
```
in to your code to see *where* it's running.
*That's* where a file will be created if you specify a relative path like `"write.txt"`. Otherwise, you can specify an absolute path to ensure it tries to create it at a specific point in the file system.
|
Comparing object.Value = Null does not produce expected results
So I have a frustratingly simple issue that I cannot seem to solve.
```
If Me.Bank_Credit.Value = Null Then
Me.Bank_Credit.Value = 0
End If
```
Basically, I have an unbound box that the user enters data into and then hits a button. After a YES on the confirmation box, the data in the unbound box is copied over to the bound box. However, if the user does not enter anything, that in turn creates an empty bound field, which can seriously screw up queries down the road.
That being said, the above code simply will not work for me. If I set, for instance, If Me.Bank\_Credit.Value = 1 and then run it, the 1s get turned into 2s, as should happen. But it simply refuses to work for Null or even "".
I'm so sure there is a simple solution to this issue, I just can't figure it out.
Thanks in advance
|
Nothing is ever equal to Null, not even another Null. And nothing is ever not equal to Null, not even another Null.
When `Bank_Credit` *is* Null, the following expression will return Null ... not `True` as you might expect, or even `False`.
```
Debug.Print (Me.Bank_Credit.Value = Null)
```
It's the same reason for this result in the Immediate window:
```
Debug.Print Null = Null
Null
```
Use the `IsNull()` function.
```
If IsNull(Me.Bank_Credit.Value) Then
```
Also, look at the `Nz()` help topic to see whether it can be useful. You could do this, although it's not really an improvement over `IsNull()`. But `Nz()` can be very convenient for other VBA code.
```
Me.Bank_Credit = Nz(Me.Bank_Credit, 0)
```
|
Random Forest Overfitting R
I used a two-step cforest in my model. The accuracy on the train set is 87%, yet the accuracy on the test set is 57%. This indicates the model is severely overfitting. How can I solve this problem? Should I reduce the number of nodes in the trees or divide the data into k folds? How can I determine how many nodes I should retain?
Here is the code for step 1.
```
fit1 <- cforest((b == 'three')~ posemo + social + family
+friend + home + humans + money + they
+ social+article+certain+insight+affect+ negemo+ future+swear+sad
+negate+ppron+sexual+death + filler+leisure, data = trainset1,
controls=cforest_unbiased(ntree=3000, mtry= 3))
```
|
>
> In random forests, overfitting is generally caused by over growing the
> trees
>
>
>
as stated in one of the other answers, is completely **WRONG**. The RF algorithm, by definition, requires fully grown **unpruned** trees. This is the case because RF can only reduce variance, not bias (where $error = bias + variance$). Since the bias of the entire forest is roughly equal to the bias of a single tree, the base model used has to be a very deep tree to guarantee a low bias. Variance is subsequently reduced by growing many deep, uncorrelated trees and averaging their predictions.
I wouldn't necessarily say that a training accuracy of 87% and a test accuracy of 57% indicates *severe* overfitting. Performance on your training set will always be higher than on your test set. Now, you need to provide more information if you want CV users to be able to diagnose the source of your potential overfitting problem.
- how did you tune the parameters of your random forest model? Did you use cross-validation or an independent test set? What are the sizes of your training/testing sets? Did you properly use randomization to constitute these sets?
- is your target categorical or continuous? If yes to the former, do
you have any kind of class imbalance issue?
- how did you measure error? If it applies, is your classification problem binary, or multiclass?
In practice, Random Forest seldom overfits. But what would tend to favor overfitting is having too many trees in the forest. At some point it is not necessary to keep adding trees (it does not reduce variance anymore, but can slightly increase it). This is why the optimal number of trees should be optimized like any other hyperparameter, or at least should not be carelessly set too high: it should be the smallest number of trees needed to achieve the lowest error (you can look for a plateau in the curve of OOB error vs. number of trees).
Other than overfitting, the difference in accuracy between train & test that you observe could be explained by differences between the sets. Are the same concepts present in both sets? If not, even the best classifier won't be able to perform well on the unseen data: you can't extrapolate to something the model never learned about.
I would also recommend that you read the section about RF in the formative [Elements of Statistical Learning](http://web.stanford.edu/~hastie/local.ftp/Springer/OLD/ESLII_print4.pdf). Especially, see section 15.3.4 (p. 596) about RF and overfitting.
|
Nest.js Can't resolve circular dependency on TestingModule
I have built a new module and service for a Nest app. It has a circular dependency that resolves successfully when I run the application, but when I run the tests, my mocked module (TestingModule) can't resolve the dependency for the new service I created.
Example of the "LimitsService", which has a circular dependency with "MathService":
```
@Injectable()
export class LimitsService {
constructor(
private readonly listService: ListService,
@Inject(forwardRef(() => MathService))
private readonly mathService: MathService,
) {}
async verifyLimit(
user: User,
listId: string,
): Promise<void> {
...
this.mathService.doSomething()
}
async someOtherMethod(){...}
}
```
MathService calls LimitsService.someOtherMethod in one of its methods.
This is how the testing module for "MathService" is set up (everything worked fine before, without "LimitsService"):
```
const limitsServiceMock = {
verifyLimit: jest.fn(),
someOtherMethod: jest.fn()
};
const listServiceMock = {
verifyLimit: jest.fn(),
someOtherMethod: jest.fn()
};
describe('Math Service', () => {
let mathService: MathService;
let limitsService: LimitsService;
let listService: ListService;
let httpService: HttpService;
beforeEach(async () => {
const mockModule: TestingModule = await Test.createTestingModule({
imports: [HttpModule],
providers: [
MathService,
ConfigService,
{
provide: LimitsService,
useValue: limitsServiceMock
},
{
provide: ListService,
useValue: listServiceMock
},
],
}).compile();
httpService = mockModule.get(HttpService);
limitsService = mockModule.get(LimitsService);
listService = mockModule.get(ListService);
mathService= mockModule.get(MathService);
});
...tests
```
But when I run the test file, I get:
"Nest can't resolve dependencies of the MathService (...). Please make sure that the argument dependency at index [x] is available in the RootTestModule context."
I have tried commenting out "mathService" from "LimitsService" and it works when I do that, but I need MathService.
I have also tried importing "LimitsModule" instead of providing "LimitsService" with forwardRef() and then getting "LimitsService" from mockModule but that threw the same error.
What is the proper way of importing my "LimitsService" into the mockModule?
|
This is now working for me.
# SOLUTION
Auto-mock the LimitsService module with Jest:
```
jest.mock('@Limits/limits.service');
```
Set Provider with mock
```
describe('Math Service', () => {
let mockLimitsService : LimitsService;
let mathService: MathService;
let listService: ListService;
let httpService: HttpService;
beforeEach(async () => {
const mockModule: TestingModule = await Test.createTestingModule({
imports: [HttpModule],
providers: [
MathService,
ConfigService,
LimitsService,
{
provide: ListService,
useValue: listServiceMock
},
],
}).compile();
mockLimitsService = mockModule.get(LimitsService);
httpService = mockModule.get(HttpService);
listService = mockModule.get(ListService);
mathService= mockModule.get(MathService);
});
```
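For intuition, the auto-mock that `jest.mock` produces can be pictured with a rough plain-JavaScript sketch. The `LimitsService` body and `autoMock` helper below are illustrative assumptions, not Jest internals: the idea is that the class keeps its shape, but every method becomes a stub returning `undefined`, which is why the TestingModule can resolve the provider without pulling in the real circular dependency.

```javascript
// Illustrative stand-in for the real service (assumed method bodies).
class LimitsService {
  verifyLimit(user, listId) { return 'real'; }
  someOtherMethod() { return 'real'; }
}

// Rough sketch of automatic mocking: copy the prototype's method
// names, but replace every implementation with a no-op stub.
function autoMock(cls) {
  function Mock() {}
  for (const name of Object.getOwnPropertyNames(cls.prototype)) {
    if (name !== 'constructor') {
      Mock.prototype[name] = function () { return undefined; };
    }
  }
  return Mock;
}

const MockedLimitsService = autoMock(LimitsService);
const instance = new MockedLimitsService();
console.log(instance.verifyLimit('user', 'list')); // undefined -- stubbed
console.log(typeof instance.someOtherMethod);      // 'function' -- shape kept
```

The real `jest.mock` also records calls so you can assert on them, but the shape-preserving stub is the part that makes the provider resolvable in the test module.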
|
Understanding Export a round bracket function in ES6
I am trying to understand following export statement:
```
export default (
<Route path="/" component={App}>
<IndexRoute component={HomePage} />
<Route path="about" component={AboutPage}/>
</Route>
);
```
By definition, The export statement is used to export functions, objects, or primitive values.
**Different Syntax**:
>
>
> ```
> export { name1, name2, …, nameN };
> export { variable1 as name1, variable2 as name2, …, nameN };
> export let name1, name2, …, nameN; // also var, function
> export let name1 = …, name2 = …, …, nameN; // also var, const
>
> export default expression;
> export default function (…) { … } // also class, function*
> export default function name1(…) { … } // also class, function*
> export { name1 as default, … };
>
> export * from …;
> export { name1, name2, …, nameN } from …;
> export { import1 as name1, import2 as name2, …, nameN } from …;
>
> ```
>
>
Among all of the possible alternatives, I am not able to see how `export default ( .. );` fits in.
I am guessing it would be exporting an anonymous function.
|
This rule applies:
```
export default expression;
```
`(...)` is the [**grouping operator**](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Operators/Grouping), which is an expression. It simply evaluates to the result of the expression it contains. You surely have seen it before. For example:
```
(20 + 1) * 2
```
Some constructs require an expression to be started on the same line. In the following example, the function returns `undefined`, because the expression has to start in the same line as the `return`:
```
function foo() {
return
21 + 1;
}
console.log(foo());
```
With the grouping operator, we can do exactly that: Start the expression on the same line, but put the main part on the next line, for stylistic reasons:
```
function foo() {
return (
21 + 1
);
}
console.log(foo());
```
---
I don't know whether or not using the grouping operator is required in the `export default` case, but it would be trivial to find out (run the code with and without it).
Even if it is not required, it doesn't change the result of the expression. Sometimes it's just used to be more visually pleasing or easier to read.
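A related case, not covered above, where the grouping operator is actually *required* rather than stylistic: an arrow function that returns an object literal. Without the parentheses, the braces are parsed as a function body instead of an expression.

```javascript
// Without parentheses, { answer: 42 } is parsed as a block body
// ("answer:" becomes a label), so the function returns undefined.
const broken = () => { answer: 42 };
console.log(broken()); // undefined

// The grouping operator forces the braces to be read as an
// object-literal expression.
const fixed = () => ({ answer: 42 });
console.log(fixed().answer); // 42
```

This is the same principle at work: the parentheses never change a value, they only steer how the parser reads what follows.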
|
Robocopy cannot find specific directory
I am migrating our file server from 2008 to 2012R2.
Currently I am trying to copy all folders and files to our new server, but the command isn't working correctly.
Here is what i am using:
```
robocopy G: D:\ /e /zb /COPY:DATSOU /r:3 /w:1 /v /eta /log+:C:\robocopy\copyreport.txt /tee /sec /tbd
```
But for some reason it's NOT working: it can't find the specified path.
I checked and double-checked and I'm 100% sure the paths are correct. I had an error before with the log file but resolved that.
Does anyone know what's wrong?
Actual error: `2015/04/22 11:24:56 ERROR 3 (0X00000003) Accessing Source Directory G:\ The system cannot find the path specified.`
|
I had the same problem with a mapped drive. [I found the answer on the technet forums.](https://social.technet.microsoft.com/Forums/windowsserver/en-US/f642415b-2b9e-43fe-88aa-ee660ed86508/robocopy-error-3-system-cannot-find-path-specified-and-error-5-access-denied?forum=winserverpowershell)
>
> This is caused by running with highest privileges(on the local system). Please run as the user not as administrator. Normally the administrator only has local access, no access to the server.
>
>
>
This problem is caused by running the command as administrator (on the old server). Running the command as a non-administrator user will resolve the problem.
|
Tomcat hosting multiple virtual host with multiple SSL certificate
I have a server hosting multiple websites using Tomcat 7, for example
- a.abc.com
- b.abc.com
- c.def.com
- d.def.com
Using tomcat's virtual hosting feature, so they each may belong to different webapps folder.
We're now trying to implement Https to each of the sites. So basically we got 2 wildcard certificates, \*.abc.com, and \*.def.com
I've been looking for the ways to setup and I found:
- [This](https://tomcat.apache.org/tomcat-7.0-doc/ssl-howto.html) where it taught me how to setup SSL with tomcat
- [This](https://stackoverflow.com/a/6343059/1131470) where it taught me how to setup multiple Host with different SSL pointing at different IP address
The second example is closest to what I need, but the problem is that all of my virtual hosts share the same IP address; the only difference is the domain name itself. Worse, most of them even have a couple of different aliases (e.g. my d.def.com could have e.ghi.com as one of its aliases).
So my question would be: is there any way I could set up my multiple SSL certificates for all my virtual hosts?
|
I'm afraid it's not possible to fulfill all your requirements with tomcat:
- multiple domains
- two SSL certificates
- unique IP address
- standard SSL port (I have assumed it)
Tomcat SSL configuration is defined in the `<Connector>` element in `server.xml`
```
<Connector
protocol="org.apache.coyote.http11.Http11NioProtocol"
port="8443" maxThreads="200"
scheme="https" secure="true" SSLEnabled="true"
keystoreFile="${user.home}/.keystore" keystorePass="changeit"
clientAuth="false" sslProtocol="TLS"/>
```
Each connector requires a `port` attribute. See definition in [HTTP Connector](https://tomcat.apache.org/tomcat-7.0-doc/config/http.html) documentation
>
> The TCP port number on which this Connector will create a server socket and await incoming connections. **Your operating system will allow only one server application to listen to a particular port number on a particular IP address**.
>
>
>
Therefore you can't define two connectors using the same port, and then it is not possible to configure different SSL certificates.
## Alternatives
- **Several IP's**: The `address` attribute configures which address will be used for listening on the specified port. Set an IP per main domain using a SSL certificate and configure a `Connector` for it
- **Different ports**: `443` for \*.abc.com, `444` for \*.def.com, and so on
- **SSL Proxy**: Deploy a proxy server like Apache or Nginx in front of tomcat. The proxy only deals with SSL negotiation and virtual hosts. All the traffic is redirected to Tomcat in plain HTTP.
Just as an example, using [Apache mod\_ssl](https://httpd.apache.org/docs/current/mod/mod_ssl.html) and the Tomcat connector [mod\_jk](http://tomcat.apache.org/connectors-doc/webserver_howto/apache.html), your requested configuration is simple:
```
listen 443
<VirtualHost *:443>
ServerName a.abc.com:443
SSLEngine on
SSLProtocol all -SSLv2
SSLCertificateFile "/home/certs/abc.com.crt"
SSLCertificateKeyFile "/home/certs/abc.com.key"
SSLCertificateChainFile "/home/certs/abc.com.ca-bundle"
SSLOptions +StdEnvVars +ExportCertData
ErrorLog "/var/logs/error_abc_443.log"
TransferLog "/var/logs/error_abc_443.log"
JkMount /* worker1
</VirtualHost>
<VirtualHost *:443>
ServerName c.def.com:443
SSLEngine on
SSLProtocol all -SSLv2
SSLCertificateFile "/home/certs/def.com.crt"
SSLCertificateKeyFile "/home/certs/def.com.key"
SSLCertificateChainFile "/home/certs/def.com.ca-bundle"
SSLOptions +StdEnvVars +ExportCertData
ErrorLog "/var/logs/error_def.log"
TransferLog "/var/logs/error_def.log"
JkMount /* worker2
</VirtualHost>
```
|
How to connect Dualshock 3 controller (PS3 Sixaxis Gamepad) on Ubuntu 16.04?
The last officially supported version from the [QtSixA PPA](https://launchpad.net/~falk-t-j/+archive/ubuntu/qtsixa) is "vivid" (Ubuntu 15.04), and it doesn't work on Ubuntu Xenial (16.04). How can I connect a Dualshock 3 controller?
|
**On Ubuntu 16.04 - Xenial (maybe work on above)**
**Note:** Doesn't work for PS3 Gasia / Shanwan game controllers (China/fake PS3 controllers) -- I tested only with a Shanwan controller!
**Install from source**
The last officially supported version from the [QtSixA PPA](https://launchpad.net/~falk-t-j/+archive/ubuntu/qtsixa) is "vivid" (Ubuntu 15.04) and it [doesn't work for Xenial (16.04)](https://askubuntu.com/a/826247/139248). So we need to compile from the only repo that **works** (and is updated) and allows connecting **two** controllers at the same time: the [RetroPie](https://github.com/RetroPie/sixad) fork:
```
sudo apt-get install dialog build-essential pyqt4-dev-tools libusb-dev libbluetooth-dev python-dbus -y
wget https://github.com/RetroPie/sixad/archive/master.zip -O sixad-master.zip
unzip sixad-master.zip
cd sixad-master
make
sudo make install
```
**Configuring**
1.) First make sure you have Bluetooth on your computer.
2.) Plug in the controller via USB first to do the initial pairing. Run the sixpair command:
```
sudo sixpair
```
Sample of output when successful pair:
```
Current Bluetooth master: XX:XX:XX:XX:XX:XX
Setting master bd_addr to XX:XX:XX:XX:XX:XX
```
**XX:XX:XX:XX:XX:XX** is the MAC of your Bluetooth device.
3.) Unplug the controller and run:
```
sudo sixad -s
```
This starts the sixad daemon which waits for incoming PS3 controller connections. sixad will completely take over the Bluetooth adapter (exclusive control, so no other Bluetooth devices other than PS3 controllers will work after you start sixad).
4.) Press the PS button on your PS3 controller and wait 2-3 seconds. You'll feel the controller vibrate when it successfully connects.
Sample of output when successful connected:
```
[ ok ] Starting bluetooth (via systemctl): bluetooth.service.
sixad-bin[23052]: started
sixad-bin[23052]: sixad started, press the PS button now
Watching... (5s)
sixad-sixaxis[23069]: started
sixad-sixaxis[23069]: Connected 'PLAYSTATION(R)3 Controller (**XX:XX:XX:XX:XX:XX**)' [Battery 05]
```
To turn off sixad and disable control: Just press **CTRL+C**
**Test the functionality [Optional]**
You can test the functionality of your controller with jstest-gtk (install: `sudo apt-get install jstest-gtk`).
**Disable sixad service [Optional]**
**Warning:** sixad will completely take over the Bluetooth adapter (exclusive control, so no other Bluetooth devices other than PS3 controllers will work after you start sixad).
```
sudo update-rc.d sixad disable
sudo service sixad stop
```
References:
<https://askubuntu.com/a/834907/139248>
<https://retropie.org.uk/forum/topic/2913/guide-use-qtsixa-on-ubuntu-16-04-and-derivatives>
|
Python, PIL and JPEG on Heroku
I have a **Django** site, hosted on **Heroku**.
One of the models has an image field, that takes uploaded images, resizes them, and pushes them to Amazon S3 so that they can be stored persistently.
This is working well, using **PIL**
```
def save(self, *args, **kwargs):
# Save this one
super(Product, self).save(*args,**kwargs)
# resize on file system
size = 200, 200
filename = str(self.thumbnail.path)
image = Image.open(filename)
image.thumbnail(size, Image.ANTIALIAS)
image.save(filename)
# send to amazon and remove from ephemeral file system
if put_s3(filename):
os.remove(filename)
return True
```
However, PIL seems to work fine for PNGs and GIFs, but is not compiled with **libjpeg**. On a local development environment or a fully controlled 'nix server, it is simply a case of installing the jpeg extension.
But does anyone know whether Jpeg manipulation is possible using the Cedar Heroku stack? Is there something else that can be added to requirements.txt?
Among other unrelated packages, the requirements.txt for this virtualenv includes:
```
Django==1.3.1
PIL==1.1.7
distribute==0.6.24
django-queued-storage==0.5
django-storages==1.1.4
psycopg2==2.4.4
python-dateutil==1.5
wsgiref==0.1.2
```
Thanks
|
I use this PIL fork in requirements.txt:
```
-e hg+https://bitbucket.org/etienned/pil-2009-raclette/#egg=PIL
```
and can use JPEG without issues:
```
--------------------------------------------------------------------
PIL 1.2a0 SETUP SUMMARY
--------------------------------------------------------------------
version 1.2a0
platform Python 2.7.2 (default, Oct 31 2011, 16:22:04)
[GCC 4.4.3] on linux2
--------------------------------------------------------------------
*** TKINTER support not available
--- JPEG support available
*** WEBP support not available
--- ZLIB (PNG/ZIP) support available
--- FREETYPE2 support available
--- LITTLECMS support available
--------------------------------------------------------------------
```
|
Making OmniAuth, Devise and Koala work together
I have an app in which I am implementing authentication using Devise and OmniAuth. I've got the logging in/out figured out; what I wanted to know is: what is the most efficient way to have a connection to the user's Graph API endpoint initialised and ready for use within my application?
e.g. if on the profile page I wanted to do:
```
profile_image = current_user.fbgraph.get_picture("me")
```
How would I accomplish this with the least number of API calls? (I will be using similar calls throughout the application.)
|
You can accomplish this using something like [Koala](https://github.com/arsduo/koala). When you authenticate the user, you can grab the access token. Assuming you've followed the [Devise/Omniauth tutorial](https://github.com/plataformatec/devise/wiki/OmniAuth%3a-Overview), you could do something like so:
```
def self.find_for_facebook_oauth(response, signed_in_resource=nil)
data = response['extra']['user_hash']
access_token = response['credentials']['token']
user = User.find_by_email(data["email"])
# only log in confirmed users
# that way users can't spoof accounts
if user and user.confirmed?
user.update_attribute('fb_access_token', access_token)
user
end
end
```
Once you have the access token, you could then do something like:
```
@graph = Koala::Facebook::API.new(@user.fb_access_token)
profile_image = @graph.get_picture("me")
```
In my app, I check to see if a user is logged in when the callback from Facebook comes. If they are, I assume the request was to link accounts. If they're not, I assume it's a login request.
|
Executing PowerShell on remote server fails when path to script is fully qualified
I have two servers running Windows Server 2012 R2 on the same domain, \\tt-sql.perf.corp and \\tt-file.perf.corp. There's a PowerShell script in a shared folder on the file server, \\tt-file.perf.corp\fileshare\helloworld.ps1. I have an application on the SQL server executing the following command:
`powershell -NonInteractive -InputFormat None -ExecutionPolicy Bypass -Command "& '\\tt-file.perf.corp\fileshare\helloworld.ps1'"`
It's failing with the following error:
>
> & : AuthorizationManager check failed. At line:1 char:3
> + & '\\tt-file.perf.corp\fileshare\helloworld.ps1'
> + ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
> + CategoryInfo : SecurityError: (:) [], PSSecurityException
> + FullyQualifiedErrorId : UnauthorizedAccess
>
>
>
It also fails if I change the path to use the IP address.
However, it works when the path to the script isn't fully qualified:
`powershell -NonInteractive -InputFormat None -ExecutionPolicy Bypass -Command "& '\\tt-file\fileshare\helloworld.ps1'"`
The Windows Management Instrumentation service is running on both servers. I ran Get-ExecutionPolicy on both servers as well, and both are set to Unrestricted. UAC is disabled on both servers. What's going on?
|
It seems you've already found the workaround (using the short name versus the FQDN), so instead I'll try to answer why you're running into this problem in the first place.
Some greater detail can be found [in this blog post](http://setspn.blogspot.com/2011/05/running-powershell-scripts-from-unc.html); effectively this happens because, when you specify the FQDN of a server, you run afoul of one of PowerShell's / Windows' security features. Even though you're specifying that PowerShell should bypass the normal execution policy, running from an FQDN makes Windows believe that the file is coming from the web, and thus PowerShell wants to display a warning to you like this one:
```
Run only scripts that you trust. While scripts from the Internet can be
useful, this script can potentially harm your computer. Do you want to run
\\tt-file.perf.corp\fileshare\helloworld.ps1?
[D] Do not run [R] Run once [S] Suspend [?] Help (default is "D"):
```
But it cannot, because you're running the shell in NonInteractive Mode.
---
So you have two options to resolve this, really:
1. As the blog post mentions, you could resolve the problem by making the UNC path a trusted site in IE, or use only the short name as you've seen (using \\tt-file\ versus \\tt-file.perf.corp).
2. You could use Group Policy (or configure it within IE if this is a one-off computer) to add the address to the Local Intranet Zone. If this is a one-off machine, go to Internet Explorer, Tools, Internet Options, then go to the Security tab. Click Local Intranet, Advanced, then add your FQDN here, as seen below.

If this is a setting you'll want to configure globally, specify the path just like I did above at the following location within Group Policy Management Console:
>
> User Configuration, expand Polices > Windows settings >Internet
> Explorer Maintenance >Security
> 3. Double click Security Zones and Content Ratings, then choose Import the current security zones and privacy settings.
>
>
>
For more information on the Group Policy Approach, [refer to this thread here on TechNet.](https://social.technet.microsoft.com/Forums/windowsserver/en-US/51d6fea4-4019-42ee-b11f-127b7b20ad14/adding-site-to-local-intranet-sites-zone-via-gpo)
I hope this helps! Unfortunately I can't think of a good PowerShell way to solve this problem :).
|
Handlebars If-Else If-Else With String Equality Function
HandlebarsJS doesn't support string equality ('==') so I have to write my own helper, but this answer isn't clear:
<https://stackoverflow.com/a/15095019/1005607>
```
Handlebars.registerHelper('if_eq', function(a, b, opts) {
if(a == b) // Or === depending on your needs
return opts.fn(this);
else
return opts.inverse(this);
});
```
and then adjust your template:
```
{{#if_eq this "some message"}}
...
{{else}}
...
{{/if_eq}}
```
1) Why is he doing `{{#if_eq ..}}` rather than `{{#if if_eq .. }}` ?
2) I also need to do **ELSE-IF**, which is supported since Handlebars 3.0.0 (I have 4.0). But using his notation, I wouldn't be able to do `{{#elseif_eq}}` there is no such expression. How would I implement an ELSE-IF with this custom helper?
|
The helper in your example is a block helper. The helper is aptly named `if_eq` because it takes two expressions and, if those expressions are equal, it renders what is in the block. The `return opts.inverse(this);` line does the job of rendering the `else` block if the expressions are *not* equal (see: <http://handlebarsjs.com/block_helpers.html#conditionals>).
If you want to be able to chain `else if` conditionals in your template, then you cannot use a custom block helper. Instead you must use a regular (non-block) [Handlebars helper](http://handlebarsjs.com/expressions.html#helpers). Whereas a block helper will render a block of template, a regular helper will return a value. In your case, we would return a Boolean indicating whether or not all arguments are equal. Such a helper could look like the following:
```
Handlebars.registerHelper('eq', function () {
const args = Array.prototype.slice.call(arguments, 0, -1);
return args.every(function (expression) {
return args[0] === expression;
});
});
```
You would use this helper in your template in the following manner:
```
{{#if (eq val1 val2)}}
<p>IF Branch</p>
{{else if (eq val2 val3)}}
<p>ELSE IF Branch</p>
{{else}}
<p>ELSE Branch</p>
{{/if}}
```
Please note the parentheses as in the code `(eq val1 val2)`. These parentheses are required because our `eq` evaluation is a [subexpression](http://handlebarsjs.com/expressions.html#subexpressions) within the `#if` expression. The parentheses tell Handlebars to pass the result of the equality check to the `#if` block helper.
I have created a [JS Fiddle](https://jsfiddle.net/76484/qhx8mx89/1/) for your reference.
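The comparison logic of the `eq` helper can also be exercised outside of Handlebars. The standalone `allEqual` function below is a hypothetical extraction of the same varargs pattern; the trailing argument stands in for the options object Handlebars always appends to a helper call.

```javascript
// Same logic as the `eq` helper: drop the last argument (the
// Handlebars options object), then require every remaining
// argument to be strictly equal to the first one.
function allEqual() {
  const args = Array.prototype.slice.call(arguments, 0, -1);
  return args.every(function (expression) {
    return args[0] === expression;
  });
}

const options = {}; // stands in for the Handlebars options object
console.log(allEqual('a', 'a', options)); // true
console.log(allEqual('a', 'b', options)); // false
console.log(allEqual(1, '1', options));   // false -- strict equality
```

Because the helper uses `===`, values of different types never compare equal, which is usually what you want inside templates.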
**Update**
The answer above states that you cannot chain block helpers using `else if`. This is misleading. In fact, `if` is *itself* a block helper; and arbitrarily many `if` blocks can be chained together by using the `else` keyword. Even *different* block helpers can be chained in this way.
The important point to address the poster's first question is that two block helpers cannot be used within the same mustache. That is, `{{#if if_eq ... }}` is invalid if `if_eq` is a block helper. However, it would be perfectly valid to use `if_eq` as in the following:
```
{{#if_eq this 'some message'}}
<p>If branch.</p>
{{else if_eq this 'some other message'}}
<p>Else if branch.</p>
{{else}}
<p>Else branch.</p>
{{/if_eq}}
```
Please see this [fiddle](https://jsfiddle.net/o9x63kn0/1/) for an example.
|
Why does C++ not allow user-defined operators?
I've been wondering this for quite some time. There are already a whole bunch of them and they can be overloaded, so why not do it to the end and allow custom operators? I think it could be a great addition.
I've been told that this would make the language too hard to compile. This makes me wonder: C++ cannot really be designed for easy compilation anyway, so is it really undoable? Of course, if you use an LR parser with a static table and a grammar such as
```
E → T + E | T
T → F * T | F
F → id | '(' E ')'
```
it wouldn't work. In Prolog, which is usually parsed with an operator-precedence parser AFAIK, new operators can easily be defined, but the language is much simpler. Now, the grammar could obviously be rewritten to accept `identifiers` in every place where an operator is hard-coded into the grammar.
What other solutions and parser schemes are there and what other things have influenced that design decision?
|
[http://www2.research.att.com/~bs/bs\_faq2.html#overload-operator](http://www2.research.att.com/%7Ebs/bs_faq2.html#overload-operator)
>
> The possibility has been considered several times, but each time I/we decided that the likely problems outweighed the likely benefits.
>
>
> It's not a language-technical problem. Even when I first considered it in 1983, I knew how it could be implemented. However, my experience has been that when we go beyond the most trivial examples people seem to have subtly different opinions of "the obvious" meaning of uses of an operator. A classical example is `a**b**c`. Assume that `**` has been made to mean exponentiation. Now should `a**b**c` mean `(a**b)**c` or `a**(b**c)`? I thought the answer was obvious and my friends agreed - and then we found that we didn't agree on which resolution was the obvious one. My conjecture is that such problems would lead to subtle bugs.
>
>
>
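As it happens, JavaScript later added a `**` operator (in ES2016) and had to settle exactly this question; it chose right-associativity. A quick check shows how far apart the two "obvious" readings are:

```javascript
// JavaScript's ** is right-associative: a ** b ** c === a ** (b ** c).
console.log(2 ** 3 ** 2);   // 512, i.e. 2 ** (3 ** 2) = 2 ** 9
console.log((2 ** 3) ** 2); // 64 -- the other "obvious" reading
```

Right-associativity matches the mathematical convention for exponentiation, but the fact that a whole committee discussion was needed to pick it supports Stroustrup's point.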
|
Nuget - store packages in source control, or not?
We currently don't use nuget for our dependencies, preferring to go old-skool way and stick them all in a libs folder and reference from there. I know. So 1990's.
Anyway, NuGet has always made me feel a bit queasy... you know, reliance on the cloud and all that. As such, I find myself in the main agreeing with Mark Seemann (see here: <http://blog.ploeh.dk/2014/01/29/nuget-package-restore-considered-harmful/>) who says:
>
> Personally, I always disable the feature and instead check in all packages in my repositories. This never gives me any problems.
>
>
>
Trouble is, this has changed in version 3: you can't store packages alongside the solution, as outlined here: <https://oren.codes/2016/02/08/project-json-all-the-things/>. Which sorta screws up checking them into source control.
So, am I worrying about nothing here? Should I drink from the NuGet well, or side with Mr Seemann and err on the side of caution?
|
Storing NuGet packages in source control is a really, *really* bad idea.
I accidentally did it once and I ended up bloating my source code considerably, and that was before .NET Core...
Drink deep from the NuGet well. Most software components are packaged in a similar way these days (NPM, Bower etc). The referenced blog post is two years old and package management is changing rapidly in the .NET world, so here's some of my experience lately.
- NuGet packages can't be deleted from nuget.org. They can be hidden,
but if your application requests a hidden package it will download it
as normal. It'll never disappear into the void.
- 'Enable Package Restore' is no longer glitchy because it's now a default option in NuGet 2.7+. You have no choice anymore.
- Packages are no longer stored per solution but per machine, which will save a ton of bandwidth and will decrease the initial fetch period when building.
- If you build a new project using .NET Core, you will have dozens more packages, as the entire BCL will be available as NuGet packages. Do you really want to check in all the System.\* packages into source control?
|
Access active slide / access active shape on slide in PowerPoint - VSTO
Is there a way to access the currently active slide in a PowerPoint presentation using VSTO? Also it would be nice if I could get the currently active Shape. I know how to iterate through slides/shapes, yet I can't find any property to figure out whether a slide/shape is active:
```
foreach (Slide slide in presentation.Slides)
{
foreach (Shape shape in slide.Shapes)
{
}
}
```
|
Look at the .Selection object.
It has a .Type property that tells you what's selected. If it's SlideRange, the selection might be one or more slides; up to you to decide what to do if > 1, but if 1, then .Selection.SlideRange(1) gives you a reference to the selected slide.
.Type might return ShapeRange, in which case you'd use .Selection.ShapeRange(1) to get the current shape or the first shape in the range if more than one shape is selected. The shape's .Parent property returns a reference to the slide the shape is on (slide, master, layout, whatever).
If .Type returns TextRange, you have to walk a few steps up the parent chain; the parent of text is textrange, the parent of textrange is the containing shape and the shape's parent is the slide the shape is on.
This is liable to fall apart in some versions of PowerPoint 2007 (it's broken pre SP1 or 2, as I recall), and it's still broken in 2010 when the text in a table is selected. You can manipulate the text, you can get access to some of the properties of the shape that contains the text, but not all, and you can't climb the .Parent ladder to the slide.
|
React stateless component with controlled form Elements?
I have a dumb/stateless component which is used only to render a form, just a typical form.
```
import React from 'react'
export const AuthorForm =
({ firstName , lastName , handlefnChange , handlelnChange}) => (
<form action="">
<h2>Manage Authors</h2>
<label htmlFor="firstName">First Name</label>
<input type="text" name="firstName"
value={firstName} onChange={handlefnChange} />
<br/>
<label htmlFor="lastName">Last Name</label>
<input type="text" name="lastName"
value={lastName} onChange={handlelnChange} />
<br/>
<input type="submit" value="save" />
</form>
)
```
I'm controlling this form from the parent smart component, which just renders the above form component, passing down props for the values and event handlers.
```
import React , {Component} from 'react'
import {AuthorForm} from './'
export class ManageAuthors extends Component {
constructor(){
super()
this.state = {
author:{
id: "",
firstName:"",
lastName:""
}
}
}
handlefnChange = e => {
this.setState({
author:{
firstName: e.target.value
}
})
}
handlelnChange = e => {
this.setState({
author: {
lastName: e.target.value
}
})
}
render = () => (
<div>
<AuthorForm
{...this.state.author}
handlefnChange={this.handlefnChange}
handlelnChange={this.handlelnChange} />
</div>
)
}
```
everything works fine but I'm getting this warning
`warning.js:36 Warning: AuthorForm is changing a controlled input of type text to be uncontrolled. Input elements should not switch from controlled to uncontrolled (or vice versa). Decide between using a controlled or uncontrolled input element for the lifetime of the component. More info: https://facebook.github.io/react/docs/forms.html#controlled-components`
can I resolve this warning without converting to a `stateful` component?
|
This is because the author object loses either the `firstName` or `lastName` field when you modify either one of them:
```
handlefnChange = e => {
this.setState({
author: {
firstName: e.target.value
// lastName is missing!
}
});
}
handlelnChange = e => {
this.setState({
author: {
// firstName is missing!
lastName: e.target.value
}
})
}
```
React only does a merge on the top layer of `this.state`. Since `firstName` and `lastName` are nested inside an `author` object, when you do `handlefn/lnChange` and only set one of the fields, the other goes missing.
The fix would be to do:
```
handlefnChange = e => {
this.setState({
author: {
firstName: e.target.value,
lastName: this.state.author.lastName
}
});
}
handlelnChange = e => {
this.setState({
author: {
firstName: this.state.author.firstName,
lastName: e.target.value
}
})
}
```
Or if you have more than two fields in future, it would be easier to use the spread operator for merging:
```
handlefnChange = e => {
this.setState({
author: {
...this.state.author,
firstName: e.target.value,
}
});
}
handlelnChange = e => {
this.setState({
author: {
...this.state.author,
lastName: e.target.value
}
})
}
```
Or use a utility merge function from lodash.
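A further refinement (optional): `setState` also accepts an updater function, which avoids reading a possibly-stale `this.state` when React batches several updates. Inside the component it would look like the commented line below; the merge itself is just an object spread, shown here as a plain-JS sketch:

```javascript
// In the component (sketch):
//   this.setState(prev => ({ author: { ...prev.author, firstName: value } }));
//
// The nested merge that relies on is a plain object spread:
function mergeAuthor(prevAuthor, patch) {
  // copy every existing field, then overwrite the patched ones
  return { ...prevAuthor, ...patch };
}

const prev = { id: "1", firstName: "Jane", lastName: "Doe" };
const next = mergeAuthor(prev, { firstName: "John" });

console.log(next.firstName); // "John"
console.log(next.lastName);  // "Doe" -- still defined, so the input stays controlled
```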
|
Android : Required API to use Switch?
I can't insert a Switch in my project because of the following error :
>
> View requires API level 14 (current min is 8):
>
>
>
But in my project properties, I use Platform 4.1 and API Level 16. So what is wrong?
|
There is a nice lecture about this from [Google IO 2012 (starting at slide 32)](https://docs.google.com/presentation/d/1mKmwM-HNXukKT_FgAMmyCuwMdL4nQI4aZ6SXIr5wixc/pub?start=false&loop=false&delayms=3000#slide=id.g4c9853a_0_113)
Here is a detailed example:
Create a separate layout XML file for ICS+ versions by placing it in /res/layout-v14. The resulting file structure will look something like this:
```
res/layout
- mainlayout.xml
- compound_button.xml
res/layout-v14
- compound_button.xml
```
Android will then look for resources in the layout-v14 directory when your app is running on v14 or higher.
Place an include in mainlayout.xml that will pull in the pertinent compound\_button.xml when the app is run:
```
<include layout="@layout/compound_button" />
```
For the pre 4.0 layout we want a checkbox, so create /layout/compound\_button.xml as a merge as follows:
```
<?xml version="1.0" encoding="utf-8"?>
<merge xmlns:android="http://schemas.android.com/apk/res/android" android:layout_width="match_parent" android:layout_height="match_parent" >
<CheckBox
android:id="@+id/enabled"
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:text="Enable" />
</merge>
```
And then for the 4.0+ layout we want a switch, so create /layout-v14/compound\_button.xml as a merge as follows:
```
<?xml version="1.0" encoding="utf-8"?>
<merge xmlns:android="http://schemas.android.com/apk/res/android" xmlns:tools="http://schemas.android.com/tools" android:layout_width="match_parent" android:layout_height="match_parent" >
<Switch
android:id="@+id/enabled"
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:text="Enable"
tools:ignore="NewApi" />
</merge>
```
Of course, be sure to set your min and targets appropriately:
```
<uses-sdk android:minSdkVersion="7" android:targetSdkVersion="14" />
```
|
How to resolve npm run dev missing script issues?
I am currently in the folder 'C:\Users\vignesh\Documents\Personal Projects\Full-Stack-Web-Developement' on gitbash
**npm run dev**
executing the above command on gitbash gives me the following error.
I am assuming this is due to the NODE\_PATH variables not being set properly.
Please let me know if anyone has a solution to the below problem
```
npm ERR! Windows_NT 6.3.9600
npm ERR! argv "C:\\Program Files\\nodejs\\node.exe" "C:\\Program Files\\nodejs\\node_modules\\npm\\bin\\npm-cli.js" "run" "dev"
npm ERR! node v4.4.6
npm ERR! npm v2.15.5
npm ERR! missing script: dev
```
|
```
npm run <command>
```
will run the command registered under `<command>` in the `scripts` section of *package.json*. For example:
*package.json*
```
{
"name": "app",
"version": "0.0.0",
"license": "MIT",
"scripts": {
"server": "webpack-dashboard -- webpack-dev-server --inline --port 8080",
"webdriver-update": "webdriver-manager update",
},
"dependencies": {
"@angular/common": "~2.2.0",
"@angular/core": "~2.2.0"
},
"devDependencies": {
"@types/core-js": "^0.9.0"
}
}
```
In this case you can run scripts:
```
npm run server
npm run webdriver-update
```
In your case the *package.json* probably doesn't define a *dev* script.
Remember that a few script names are reserved (for example *npm test* will try to run *npm run pretest*, *npm run test*, then *npm run posttest*). More info on <https://docs.npmjs.com/misc/scripts>
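To make `npm run dev` itself work, you would add a `dev` entry to that section; what it should execute depends entirely on your project (the webpack command below is only a placeholder):

```json
{
  "scripts": {
    "dev": "webpack-dev-server --inline --port 8080"
  }
}
```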
|
How to remove edge between two vertices?
I want to remove edge between two vertices, so my code in java tinkerpop3 as below
```
private void removeEdgeOfTwoVertices(Vertex fromV, Vertex toV,String edgeLabel,GraphTraversalSource g){
if(g.V(toV).inE(edgeLabel).bothV().hasId(fromV.id()).hasNext()){
List<Edge> edgeList = g.V(toV).inE(edgeLabel).toList();
for (Edge edge:edgeList){
if(edge.outVertex().id().equals(fromV.id())) {
TitanGraph().tx();
edge.remove();
TitanGraph().tx().commit();
return;//Remove edge ok, now return.
}
}
}
}
```
Is there a simpler way to remove an edge between two vertices by a direct query to that edge and removing it? Thanks for your help.
|
Here's an example of how to drop edges between two vertices (where you just have the ids of those vertices):
```
gremlin> graph = TinkerFactory.createModern()
==>tinkergraph[vertices:6 edges:6]
gremlin> g = graph.traversal()
==>graphtraversalsource[tinkergraph[vertices:6 edges:6], standard]
gremlin> g.V(1).bothE()
==>e[9][1-created->3]
==>e[7][1-knows->2]
==>e[8][1-knows->4]
```
For purpose of the example, let's say we want to drop edges between vertex 1 and vertex 2. We could find those with:
```
gremlin> g.V(1).bothE().where(otherV().hasId(2))
==>e[7][1-knows->2]
```
and then remove it with:
```
gremlin> g.V(1).bothE().where(otherV().hasId(2)).drop()
gremlin> g.V(1).bothE()
==>e[9][1-created->3]
==>e[8][1-knows->4]
```
If you have the actual vertices, then you could just do:
```
gremlin> g.V(v1).bothE().where(otherV().is(v2)).drop()
gremlin> g.V(1).bothE()
==>e[9][1-created->3]
==>e[8][1-knows->4]
```
You could re-write your function as:
```
private void removeEdgeOfTwoVertices(Vertex fromV, Vertex toV,String edgeLabel,GraphTraversalSource g){
g.V(fromV).bothE().hasLabel(edgeLabel).where(__.otherV().is(toV)).drop().iterate();
g.tx().commit();
}
```
|
What are the challenges related to typing in writing a compiler for a dynamically typed language?
In [this talk](http://www.youtube.com/watch?v=EBRMq2Ioxsc), Guido van Rossum is talking (27:30) about attempts to write a compiler for Python code, commenting on it saying:
>
> turns out it's not so easy to write a compiler that maintains all the
> nice dynamic typing properties and also maintains semantic correctness
> of your program, so that it actually does the same thing no matter
> what kind of weirdness you do somewhere under the covers and actually
> runs any faster
>
>
>
**What are the (possible) challenges related to typing in writing a compiler for a dynamically typed language like Python?**
|
You oversimplified Guido's statement in phrasing your question. The problem isn't writing a compiler for a dynamically-typed language. The problem is writing one that is (criteria 1) always correct, (criteria 2) keeps dynamic typing, *and* (criteria 3) is noticeably faster for a significant amount of code.
It's easy to implement 90% (failing criteria 1) of Python and be consistently fast at it. Similarly, it's easy to create a faster Python variant with static typing (failing criteria 2). Implementing 100% is also easy (insofar implementing a language that complex is easy), but so far every easy way to implement it turns out to be relatively slow (failing criteria 3).
Implementing an interpreter *plus JIT* that's correct, implements the entire language, and is faster for some code turns out to be feasible, though significantly harder (cf. PyPy) and only so if you automate the creation of the JIT compiler (Psyco did without it, but was very limited in what code it could speed up). But note that this is explicitly out of scope, as we're talking about *static* (aka ahead-of-time) compilers. I only mention this to explain why its approach does not work for static compilers (or at least there's no existing counterexample): It first has to interpret and observe the program, then generate code for a specific iteration of a loop (or another linear code path), then optimize the hell out of that based on assumptions only true for that specific iteration (or at least, not for all possible iterations). The expectation is that many later executions of that code will also match the expectation and thus benefit from the optimizations. Some (relatively cheap) checks are added to assure correctness. To do all this, you need an idea of what to specialize for, and a slow but general implementation to fall back to. AOT compilers have neither. They can't specialize *at all* based on code they can't see (e.g. dynamically loaded code), and specializing carelessly means generating more code, which has a number of problems (icache utilization, binary size, compile time, additional branches).
Implementing an AOT compiler that *correctly* implements the *entire* language is also relatively easy: Generate code that calls into the runtime to do what the interpreter would do when fed with this code. [Nuitka](http://nuitka.net/) (mostly) does this. However, this doesn't yield much performance benefit (failing criteria 3), as you still have to do just as much unnecessary work as an interpreter, save for dispatching the bytecode to the block of C code which does what you compiled in. But that's only a rather small cost -- significant enough to be worth optimizing in an existing interpreter, but not significant enough to justify a whole new implementation with its own problems.
What would be needed to fulfill all three criteria? We have no idea. There are some static analysis schemes which can extract some information about concrete types, control flow, etc. from Python programs. The ones that yield accurate data beyond the scope of a single basic block are extremely slow and need to see the whole program, or at least most of it. Still, you can't do much with that information, other than perhaps optimize a few operations on builtin types.
Why's that? To put it bluntly, a compiler either removes the ability to execute Python code loaded at runtime (failing criteria 1), or it does not make any assumptions that can be invalidated by any Python code at all. Unfortunately, that includes pretty much everything useful for optimizing programs: Globals including functions can be rebound, classes can be mutated or replaced entirely, modules can be modified arbitrarily too, importing can be hijacked in several ways, etc. A single string passed to `eval`, `exec`, `__import__`, or numerous other functions, may do any of that. In effect, that means almost no big optimizations can be applied, yielding little performance benefit (failing criteria 3). Back to the above paragraph.
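To make the last point concrete, here is a tiny (contrived) illustration of how runtime code invalidates compile-time assumptions:

```python
# A static compiler would love to inline or specialize this call site...
def double(x):
    return 2 * x

def run():
    return double(21)

print(run())  # 42

# ...but any later code -- a plugin, user input fed to exec(), etc. --
# can rebind the global name, silently changing what run() means:
exec("def double(x):\n    return x + 1000")

print(run())  # 1021: same call site, different behavior
```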
|
How to add xml encoding xml version="1.0" encoding="UTF-8"? to xml Output in SQL Server
Probably a duplicate of unanswered.
[SQL Server 2008 - Add XML Declaration to XML Output](https://stackoverflow.com/questions/4184163/sql-server-2008-add-xml-encoding-to-xml-output)
Please let me know if this is possible. I read in some blogs
<http://forums.asp.net/t/1455808.aspx/1>
<http://www.devnewsgroups.net/group/microsoft.public.sqlserver.xml/topic60022.aspx>
But I couldn't understand why I can't do this.
|
You have to add it manually. SQL Server always stores xml internally as UCS-2, so it cannot itself generate a utf-8 encoding header.
See ["Limitations of the xml Data Type"](http://technet.microsoft.com/en-us/library/ms187107%28SQL.90%29.aspx) on MSDN
>
> The XML declaration PI, for example, `<?xml version='1.0'?>`, is not preserved when storing XML data in an xml data type instance. This is by design. The XML declaration (`<?xml ... ?>`) and its attributes (version/encoding/stand-alone) are lost after data is converted to type xml. The XML declaration is treated as a directive to the XML parser. The XML data is stored internally as ucs-2.
>
>
>
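Adding the declaration manually is just string concatenation around a cast. A sketch (the variable is made up):

```sql
DECLARE @x xml = '<root><item>1</item></root>';

-- The result is a plain string, no longer an xml-typed value. Note the
-- declaration only becomes true once your application actually writes
-- the bytes out as UTF-8.
SELECT '<?xml version="1.0" encoding="UTF-8"?>' + CAST(@x AS nvarchar(max));
```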
|
Log 'jsonPayload' in Firebase Cloud Functions
TL;DR;
Does anyone know if it's possible to use `console.log` in a Firebase/Google Cloud Function to log entries to Stack Driver using the `jsonPayload` property so my logs are searchable (currently anything I pass to console.log gets stringified into `textPayload`).
---
I have a multi-module project with some code running on Firebase Cloud Functions, and some running in other environments like Google Compute Engine. Simplifying things a little, I essentially have a 'core' module, and then I deploy the 'cloud-functions' module to Cloud Functions, 'backend-service' to GCE, which all depend on 'core' etc.
I'm using bunyan for logging throughout my 'core' module, and when deployed to GCE the logger is configured using '@google-cloud/logging-bunyan' so my logs go to Stack Driver.
*Aside: Using this configuration in Google Cloud Functions is causing issues with `Error: Endpoint read failed` which I think is due to functions not going cold and trying to reuse dead connections, but I'm not 100% sure what the real cause is.*
So now I'm trying to log using `console.log(arg)` where `arg` is an object, not a string. I want this object to appear in Stack Driver under the `jsonPayload` but it's being stringified and put into the `textPayload` field.
|
It took me a while, but I finally came across [this example](https://github.com/firebase/functions-samples/blob/master/stripe/functions/index.js#L103) in the [Firebase functions samples repository](https://github.com/firebase/functions-samples). In the end I settled on something a bit like this:
```
const Logging = require('@google-cloud/logging');
const logging = new Logging();
const log = logging.log('my-func-logger');
const logMetadata = {
resource: {
type: 'cloud_function',
labels: {
function_name: process.env.FUNCTION_NAME ,
project: process.env.GCLOUD_PROJECT,
region: process.env.FUNCTION_REGION
},
},
};
const logData = { id: 1, score: 100 };
const entry = log.entry(logMetadata, logData);
log.write(entry)
```
You can add a string `severity` property value to `logMetadata` (e.g. "INFO" or "ERROR"). [Here is the list](https://cloud.google.com/logging/docs/reference/v2/rest/v2/LogEntry#LogSeverity) of possible values.
---
Update for available node 10 env vars. These seem to do the trick:
```
labels: {
function_name: process.env.FUNCTION_TARGET,
project: process.env.GCP_PROJECT,
region: JSON.parse(process.env.FIREBASE_CONFIG).locationId
}
```
---
**UPDATE**: Looks like for Node 10 runtimes they want you to [set env values explicitly](https://cloud.google.com/functions/docs/env-var#setting_environment_variables) during deploy. I guess there has been a grace period in place because my deployed functions are still working.
|
Java on Windows: prevent '/' slash in file name from acting as a separator
I have to create a file based on a string provided to me.
For this example, let's say the file name is "My file w/ stuff.txt".
When Java creates the file using
```
File file = new File("My file w/ stuff.txt")
```
Even though the default windows separator is `'\'`, it assumes that the `'/'` slash is a file separator. So a future call to `file.getName()` would return `" stuff.txt"`. This causes problems for my program.
Is there any way to prevent this behaviour?
|
According to [this Wikipedia page](http://en.wikipedia.org/wiki/Path_%28computing%29), the Windows APIs treat `/` as equivalent to `\`. Even if you somehow managed to embed a `/` in a pathname component in (for example) a `File` object, the chances are that Windows at some point will treat it as a path separator.
So your options are:
- Let Windows treat the `/` as it would normally; i.e. let it treat the character as a pathname separator. (Users should know this. It is a "computer literacy" thing ... for Windows users.)
- As above, but with a warning to the user about the `/`.
- Check for `/` AND `\` characters, and reject both saying that a filename (i.e. a pathname component) cannot contain pathname separators.
- Use some encoding scheme to encode reserved characters before attempting to create the files. You must also then apply the (reverse) decoding scheme at all points where you need to show the user their "file name with slashes".
**Note:** if the user can see the actual file paths (e.g. via a command shell) you can't hide the encoding / decoding from them. Arguably, that is worse than the "problem" you were trying to solve in the first place.
There is no escaping scheme that the Windows OS will accept for this purpose. You would need to use something like [`%` encoding](https://en.wikipedia.org/wiki/Percent-encoding); e.g. replace `/` with `%2F`. Basically you need to "hide" the slash character from Windows by replacing it with other characters in the OS-level pathname!
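A minimal sketch of such an encoding scheme (the character set to escape here is an assumption; extend it for the other characters Windows reserves, such as `: * ? " < > |`):

```java
public class FileNameCodec {

    // Escape '%' first so that decoding is unambiguous, then the separators.
    static String encode(String name) {
        return name.replace("%", "%25")
                   .replace("/", "%2F")
                   .replace("\\", "%5C");
    }

    // Decode in the reverse order: separators first, '%' last.
    static String decode(String name) {
        return name.replace("%5C", "\\")
                   .replace("%2F", "/")
                   .replace("%25", "%");
    }

    public static void main(String[] args) {
        String original = "My file w/ stuff.txt";
        String encoded = encode(original);
        System.out.println(encoded);          // My file w%2F stuff.txt
        System.out.println(decode(encoded));  // My file w/ stuff.txt
    }
}
```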
The best option depends on details of your application; e.g. whether you can report problems to the person who entered the bogus filename.
|
what's the difference between ordinal and hint number in the native dll?
I would like to know the difference between the ordinal and hint values in native dlls. Does anyone know?
|
Ordinals can be thought of as the alternative (internal) name of an exported function (all exports have an ordinal, and some only have the ordinal, a.k.a. [NONAME](http://msdn.microsoft.com/en-us/library/hyx1zcd3%28v=vs.71%29.aspx)). They were common in 16-bit Windows but not so much these days (the import/export tables are smaller if you don't include the "string name"). They are slightly faster for the loader to look up and are often used for undocumented functions. The downside is that they might not be stable from release to release.
The [hint](http://blogs.msdn.com/b/oldnewthing/archive/2010/03/17/9980011.aspx) is used by the loader as a shortcut into the dll export table, if the hint offset does not match the expected function name the normal search is used.
Take a look at [An In-Depth Look into the Win32 Portable Executable File Format](http://msdn.microsoft.com/en-us/magazine/cc301808.aspx) for more details about the different PE sections.
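For reference, ordinals are assigned in the module-definition (.def) file. A sketch with made-up function names:

```
EXPORTS
    InitThing      @1
    DoPublicWork   @2
    SecretHelper   @3 NONAME   ; exported by ordinal only, no string name
```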
|
encog java exporting network weights
I am using encog to do some of my university assignments and I would like to export a list of all the connections in the networks and their associated weights.
I saw the `dumpWeights()` function that is part of the `BasicMLNetwork` class (im using Java), but this only provides me with the weights with no information about the connections.
Does anyone know of a good way to achieve this?
Thanks In Advance
Bidski
|
Yes, use `BasicNetwork.getWeight`. You can loop over all of your layers and neurons; just specify the two neurons that you want the weight between. Here is how it is called:
```
/**
* Get the weight between the two layers.
* @param fromLayer The from layer.
* @param fromNeuron The from neuron.
* @param toNeuron The to neuron.
* @return The weight value.
*/
public double getWeight(final int fromLayer,
final int fromNeuron,
final int toNeuron) {
```
I just added the following function to Encog's BasicNetwork class to dump the weights and structure. It will be in the next Encog release (3.4), its already on [GitHub](https://github.com/encog/encog-java-core/commit/85d40167899573c48847ab05b8a3bfc3a3630103). For now, here is the code, it is a decent tutorial on how to extract the weights from Encog:
```
public String dumpWeightsVerbose() {
final StringBuilder result = new StringBuilder();
for (int layer = 0; layer < this.getLayerCount() - 1; layer++) {
int bias = 0;
if (this.isLayerBiased(layer)) {
bias = 1;
}
for (int fromIdx = 0; fromIdx < this.getLayerNeuronCount(layer)
+ bias; fromIdx++) {
for (int toIdx = 0; toIdx < this.getLayerNeuronCount(layer + 1); toIdx++) {
String type1 = "", type2 = "";
if (layer == 0) {
type1 = "I";
type2 = "H" + (layer) + ",";
} else {
type1 = "H" + (layer - 1) + ",";
if (layer == (this.getLayerCount() - 2)) {
type2 = "O";
} else {
type2 = "H" + (layer) + ",";
}
}
if( bias ==1 && (fromIdx == this.getLayerNeuronCount(layer))) {
type1 = "bias";
} else {
type1 = type1 + fromIdx;
}
result.append(type1 + "-->" + type2 + toIdx
+ " : " + this.getWeight(layer, fromIdx, toIdx)
+ "\n");
}
}
}
return result.toString();
}
```
|
Spring Security and pattern attributes
I have seen Spring Security OAuth2 samples has this defined in the `spring-servlet.xml`,
```
<http pattern="/users/**" create-session="never" entry-point-ref="oauthAuthenticationEntryPoint"
access-decision-manager-ref="accessDecisionManager" xmlns="http://www.springframework.org/schema/security">
<anonymous enabled="false" />
<intercept-url pattern="/photos" access="ROLE_USER,SCOPE_READ" />
<intercept-url pattern="/photos/trusted/**" access="ROLE_CLIENT,SCOPE_TRUST" />
<intercept-url pattern="/photos/user/**" access="ROLE_USER,SCOPE_TRUST" />
<intercept-url pattern="/photos/**" access="ROLE_USER,SCOPE_READ" />
<custom-filter ref="resourceServerFilter" before="PRE_AUTH_FILTER" />
<access-denied-handler ref="oauthAccessDeniedHandler" />
</http>
```
Is the `pattern` attribute in the `http` tag valid? I could not find its definition in the `spring-security-2.0.1.xsd`. If it is valid, what is the relationship of this pattern with `intercept-url`'s `pattern` attribute? Take for example, does the intercept path `/photos/user/**` have the final matching interception path of `/users/photos/user/**`? Thanks.
|
The `pattern` attribute was introduced in [3.1](http://www.springframework.org/schema/security/spring-security-3.1.xsd) which introduced namespace support for multiple filter chains. Spring Security 2 is very out of date (you shouldn't be using it).
The patterns in the `intercept-url` elements are independent, in that they are matched against the incoming request URI in the same way as the filter chain `http` pattern is checked. However if the latter doesn't match, the filter chain won't be applied to the request at all, so in order to have any effect, they must be consistent with the filter chain pattern.
For the example you've posted, this would mean that none of the `/photos` patterns have any effect. They should all have the prefix that the main filter chain matches - i.e. they should begin with `/users/photos`.
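Concretely, for the intercept rules in the posted example to ever match, they would need the chain's prefix, e.g. (other attributes trimmed for brevity):

```xml
<http pattern="/users/**" create-session="never"
      xmlns="http://www.springframework.org/schema/security">
    <intercept-url pattern="/users/photos" access="ROLE_USER,SCOPE_READ" />
    <intercept-url pattern="/users/photos/trusted/**" access="ROLE_CLIENT,SCOPE_TRUST" />
    <intercept-url pattern="/users/photos/user/**" access="ROLE_USER,SCOPE_TRUST" />
    <intercept-url pattern="/users/photos/**" access="ROLE_USER,SCOPE_READ" />
</http>
```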
|
When using JSTL forEach the array prints the String.toString() and not actual string value
I have the following code:
```
String[] columnHeaders = {"Banana", "Apple", "Carrot", "Orange", "Lychee", "Permisson"};
<c:forEach var="header" items="<%= columnHeaders%>">
<td>
<c:out value="${header}" />
</td>
</c:forEach>
```
When the JSP is executed, the following values get printed:
```
org.apache.commons.el.ImplicitObjects$7@6ac86ac8
org.apache.commons.el.ImplicitObjects$7@6ac86ac8
...
```
It appears to me that the memory value is being printed and not the value contained in each string.
What am I missing here?
|
You're referencing the `items` value incorrectly. Taglibs/EL and *scriptlets* do not share the same variable scope. You're basically printing `columnHeaders.toString()` in the `items` attribute and telling `c:forEach` to iterate over it. Instead, you need to put it in the request scope (preferably by a servlet) and use EL `${}` the normal way:
```
<%
String[] columnHeaders = {"Banana", "Apple", "Carrot", "Orange", "Lychee", "Permisson"};
request.setAttribute("columnHeaders", columnHeaders);
%>
<c:forEach var="columnHeader" items="${columnHeaders}">
<td>
<c:out value="${columnHeader}" />
</td>
</c:forEach>
```
Also, `${header}` is a reserved EL variable referring to the request header map (see [implicit objects in EL](http://docs.oracle.com/javaee/5/tutorial/doc/bnahq.html#bnaij)), so you'd need to rename it to something else, such as `${columnHeader}` in the above example.
### See also:
- [Our EL tag wiki page](https://stackoverflow.com/tags/el/info)
---
**Unrelated** to the concrete problem, table headers need to be represented in HTML by `<th>`, not `<td>`.
|
How to do word representation of amount field in Acumatica Invoice Report (AR.64.10.00)?
How to do word representation of amount field in Acumatica Invoice Report (AR.64.10.00)?
[](https://i.stack.imgur.com/dMkaE.png)
[](https://i.stack.imgur.com/EA7AF.png)
|
Create an un-bound field in DAC Extension for `ARInvoice` DAC. And use `PX.Objects.AP.ToWords` attribute.
```
public class ARInvoicePXExt : PXCacheExtension<ARInvoice>
{
#region UsrAmountToWords
public abstract class usrAmountToWords : IBqlField { }
[PX.Objects.AP.ToWords(typeof(ARInvoice.curyOrigDocAmt))]
public virtual string UsrAmountToWords { get; set; }
#endregion
}
```
Use `UsrAmountToWords` field in AR.64.10.00 report.
[](https://i.stack.imgur.com/XDeEF.png)
[](https://i.stack.imgur.com/rDORz.png)
The word representation happens through attribute `PX.Objects.AP.ToWords`. And this out-of-box word representation is English only. For non-English word representation, create your own attribute. Implementation of `PX.Objects.AP.ToWords` can be found in
…\App\_Data\CodeRepository\PX.Objects\AP\Descriptor\Attribute.cs.
|
Why is the denominator in a conditional probability the probability of the conditioning event?
Quite a simple or at least short question: Why is $ \frac{P(A \cap B)}{P(B)} $ divided by $ P(B) $ for the [conditional probability](https://en.wikipedia.org/wiki/Conditional_probability)?
$ P(A | B) = \frac{P(A \cap B)}{P(B)} $
Random image to visualize: [](https://i.stack.imgur.com/93Y1t.png)
I actually would like to make use out of the tree display which I can't grasp at all. Why would I divide by $ P(A+) $ when I had to move along this path?
Sorry for the mix-up of $A+$ and $ B $..
[](https://i.stack.imgur.com/WmkgX.png)
|
In your tree diagram, if you want to know the probability $P(B+|A+)$, then you completely ignore the $A-$ section, since you know that it does not occur. So the probability of $B+$ given that $A+$ occurred is the count of the ways $A+B+$ can occur divided by the total number of ways that $A+$ can occur, that is, the combination of $A+B+$ and $A+B-$, which together make up $A+$; that is why you divide by the count (or probability) of $A+$ happening.
It might be easier to think of actual events that you can count the ways it happens.
One example:
Roll a fair die (possible outcomes are 1, 2, 3, 4, 5, 6)
A is the event that the roll is greater than 3 (4, 5, 6)
B is the event that an odd number was rolled (1, 3, 5)
So the probability of A and B is $\frac16$ since 5 is the only number that matches,
but if we know that A happened, then $P(B|A) = \frac13$ since we know that the outcome was 4, 5, or 6. We can get there by counting (1 out of 3 possibilities) or by the math with probabilities $\frac16$ divided by $\frac12$. If you do not divide by what you are conditioning on, then you are looking at the joint probability, not the conditional probability. Also think about the Venn diagram. You need to divide by the area of the circle that you are conditioning on so that the total area of the sum of the conditional probabilities is 1.
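Written out with the definition, the die example is a single application of the formula:

$$ P(B \mid A) = \frac{P(A \cap B)}{P(A)} = \frac{1/6}{1/2} = \frac{1}{3} $$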
|
Calling getTime changes Calendar value
I'm trying to get the sunday of the same week as a given date.
During this I ran into this problem:
```
Calendar calendar = Calendar.getInstance(Locale.GERMANY);
calendar.set(2017, 11, 11);
calendar.set(Calendar.DAY_OF_WEEK, Calendar.SUNDAY);
System.out.println(calendar.getTime().toString());
```
results in "Sun Jan 07 11:18:42 CET 2018"
but
```
Calendar calendar2 = Calendar.getInstance(Locale.GERMANY);
calendar2.set(2017, 11, 11);
calendar2.getTime();
calendar2.set(Calendar.DAY_OF_WEEK, Calendar.SUNDAY);
System.out.println(calendar2.getTime().toString());
```
gives me the correct Date "Sun Dec 17 11:18:42 CET 2017"
Can someone explain why the first example is behaving this way? Is this really intended?
Thanks
|
Basically, the `Calendar` API is horrible, and should be avoided. It's not documented terribly clearly, but I *think* I see where it's going, and it's behaving as intended in this situation. By that I mean it's following the intention of the API authors, not the intention of you or anyone reading your code...
From the [documentation](https://docs.oracle.com/javase/9/docs/api/java/util/Calendar.html):
>
> The calendar field values can be set by calling the set methods. Any field values set in a Calendar will not be interpreted until it needs to calculate its time value (milliseconds from the Epoch) or values of the calendar fields. Calling the get, getTimeInMillis, getTime, add and roll involves such calculation.
>
>
>
And then:
>
> When computing a date and time from the calendar fields, there may be insufficient information for the computation (such as only year and month with no day of month), or there may be inconsistent information (such as Tuesday, July 15, 1996 (Gregorian) -- July 15, 1996 is actually a Monday). Calendar will resolve calendar field values to determine the date and time in the following way.
>
>
> If there is any conflict in calendar field values, Calendar gives priorities to calendar fields that have been set more recently. The following are the default combinations of the calendar fields. The most recent combination, as determined by the most recently set single field, will be used.
>
>
> For the date fields:
>
>
> - YEAR + MONTH + DAY\_OF\_MONTH
> - YEAR + MONTH + WEEK\_OF\_MONTH + DAY\_OF\_WEEK
> - YEAR + MONTH + DAY\_OF\_WEEK\_IN\_MONTH + DAY\_OF\_WEEK
> - YEAR + DAY\_OF\_YEAR
> - YEAR + DAY\_OF\_WEEK + WEEK\_OF\_YEAR
>
>
>
In the first example, the fact that the last field set was "day of week" means it will then use the `YEAR + MONTH + WEEK_OF_MONTH + DAY_OF_WEEK` calculation (I think). The year and month have been set to December 2017, but the week-of-month is the current week-of-month, which is week 5 of January 2018... so when you then say to set the day of week to Sunday, it's finding the Sunday in the "week 5" of December 2017. December only had 4 weeks, so it's effectively rolling it forward... I think. It's all messy and you shouldn't have to think about that, basically.
In the second example, calling `getTime()` "locks in" the year/month/day you've specified, and computes the other fields. When you set the day of week, that's then adjusting it within the existing computed fields.
Basically, avoid this API as far as you possibly can. Use `java.time`, which is a *far* cleaner date/time API.
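For the original goal, the Sunday of the same week as a given date, a `java.time` sketch (assuming, as in the German locale, that the week runs Monday to Sunday):

```java
import java.time.DayOfWeek;
import java.time.LocalDate;
import java.time.temporal.TemporalAdjusters;

public class SundayOfWeek {

    static LocalDate sundayOfSameWeek(LocalDate date) {
        // With a Monday-first week, the week's Sunday is the next (or same) one.
        return date.with(TemporalAdjusters.nextOrSame(DayOfWeek.SUNDAY));
    }

    public static void main(String[] args) {
        // 2017-12-11 is a Monday; its week ends on Sunday 2017-12-17.
        System.out.println(sundayOfSameWeek(LocalDate.of(2017, 12, 11))); // 2017-12-17
    }
}
```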
|
How to use secondary user actions to improve recommendations with Spark ALS?
Is there a way to use secondary user actions derived from the user click stream to improve recommendations when using Spark Mllib ALS?
I have gone through the explicit and implicit feedback based example mentioned here : <https://spark.apache.org/docs/latest/mllib-collaborative-filtering.html> that uses the same ratings RDD for the train() and trainImplicit() methods.
Does this mean I need to call trainImplicit() on the same model object with a RDD(user,item,action) for each secondary user action? Or train multiple models , retrieve recommendations based on each action and then combine them linearly?
For additional context, the crux of the question is if Spark ALS can model secondary actions like Mahout's spark item similarity job. Any pointers would help.
|
Disclaimer: I work with Mahout's Spark Item Similarity.
ALS does not work well for multiple actions in general. First an illustration. The way we consume multiple actions in ALS is to weight one above the other. For instance buy = 5, view = 3. ALS was designed in the days when ratings seemed important and predicting them was the question. We now know that ranking is more important. In any case ALS uses predicted ratings/weights to rank results. This means that a view is really telling ALS nothing since a rating of 3 means what? Like? Dislike? ALS tries to get around this by adding a regularization parameter and this will help in deciding if 3 is a like or not.
But the problem is more fundamental than that, it is one of user intent. When a user views a product (using the above ecom type example) how much "buy" intent is involved? From my own experience there may be none or there may be a lot. The product was new, or had a flashy image or other clickbait. Or I'm shopping and look at 10 things before buying. I once tested this with a large ecom dataset and found no combination of regularization parameter (used with ALS trainImplicit) and action weights that would beat the offline precision of "buy" events used *alone*.
So if you are using ALS, check your results before assuming that combining different events will help. Using two models with ALS doesn't solve the problem either because from buy events you are recommending that a person buy something, from view (or secondary dataset) you are recommending a person view something. The fundamental nature of intent is not solved. A linear combination of recs still mixes the intents and may very well lead to decreased quality.
What Mahout's Spark Item Similarity does is to correlate views with buys--actually it correlates a primary action, one where you are clear about user intent, with other actions or information about the user. It builds a correlation matrix that in effect scrubs the views of the ones that did not correlate to buys. We can then use the data. This is a very powerful idea because now almost any user attribute, or action (virtually the entire clickstream) may be used in making recs since the correlation is always tested. Often there is little correlation but that's ok, it's an optimization to remove from the calculation since the correlation matrix will add very little to the recs.
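The cross-occurrence idea can be sketched in a few lines of Python. This is a toy illustration of counting how often viewed items co-occur with bought items per user, not Mahout's actual LLR-based implementation, and all the event data below is made up:

```python
from collections import defaultdict

# Hypothetical event logs: (user, item) pairs.
buys = [("u1", "ipad"), ("u2", "ipad"), ("u2", "case"), ("u3", "case")]
views = [("u1", "ipad"), ("u1", "tv"), ("u2", "ipad"), ("u3", "tv")]

def by_user(events):
    d = defaultdict(set)
    for user, item in events:
        d[user].add(item)
    return d

buy_u, view_u = by_user(buys), by_user(views)

# Cross-occurrence: for each user, count how often a viewed item co-occurs
# with a bought item. Views that never co-occur with any buy contribute
# nothing, which is the "scrubbing" described above.
cross = defaultdict(int)
for user, bought in buy_u.items():
    for b in bought:
        for v in view_u.get(user, ()):
            cross[(b, v)] += 1
```

Mahout additionally applies a log-likelihood ratio test to keep only statistically significant co-occurrences, but the counting step above is the core of the idea.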
BTW if you find integration of Mahout's Spark Item Similarity daunting compared to using MLlib ALS, I'm about to donate an end-to-end implementation as a template for [Prediction.io](https://prediction.io/), all of which is Apache licensed open source.
|
AvalonDock 2.0: adding LayoutDocument not working after Deserialize layout
Simple AvalonDock application with only LayoutDocumentPane.

XAML:
```
<Window x:Name="MainWindow1" x:Class="AvalonTest2.MainWindow"
xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
xmlns:xcad="http://schemas.xceed.com/wpf/xaml/avalondock"
Title="MainWindow" Height="350" Width="525" Loaded="MainWindow1_Loaded">
<Grid Name="MainGrid" HorizontalAlignment="Stretch" VerticalAlignment="Stretch">
<Grid.RowDefinitions>
<RowDefinition Height="Auto"/>
<RowDefinition Height="*"/>
</Grid.RowDefinitions>
<ToolBar Grid.Row="0" Grid.Column="0">
<Button Content="New Tab" Margin="10,0,0,0" Click="NewTabClick"/>
<Button Content="Save layout" Margin="10,0,0,0" Click="Button_Click"/>
</ToolBar>
<xcad:DockingManager x:Name="dockManager" Grid.Row="1">
<xcad:LayoutRoot x:Name="_layoutRoot">
<xcad:LayoutPanel Orientation="Horizontal" x:Name="_layoutPanel">
<xcad:LayoutDocumentPane x:Name="workSpace">
<xcad:LayoutDocument ContentId="dummy" Title="Dummy" >
<Button x:Name="dummyButton" Content="Dummy Content" HorizontalAlignment="Center" VerticalAlignment="Center"/>
</xcad:LayoutDocument>
</xcad:LayoutDocumentPane>
</xcad:LayoutPanel>
</xcad:LayoutRoot>
</xcad:DockingManager>
</Grid>
</Window>
```
Code:
```
using System;
using System.IO;
using System.Windows;
using Xceed.Wpf.AvalonDock.Layout;
using Xceed.Wpf.AvalonDock.Layout.Serialization;
namespace AvalonTest2
{
public partial class MainWindow : Window
{
public MainWindow()
{
InitializeComponent();
}
private void NewTabClick(object sender, RoutedEventArgs e)
{
var ld = new LayoutDocument();
var x = new Random().Next().ToString();
ld.Title = x;
ld.ContentId = x;
workSpace.Children.Add(ld);
}
private void Button_Click(object sender, RoutedEventArgs e)
{
var layoutSerial = new XmlLayoutSerializer(dockManager);
layoutSerial.Serialize("layout.xml");
}
private void MainWindow1_Loaded(object sender, RoutedEventArgs e)
{
if (File.Exists("layout.xml"))
{
// var serializer = new XmlLayoutSerializer(dockManager);
// serializer.Deserialize("layout.xml");
}
}
}
}
```
I press "New tab" button and everything is fine: new tab appears.
I restart the program, save the layout ("Save layout" button), close the program, and uncomment the Deserialize lines.
I start the program again and, on pressing "New tab", no tab appears. If I look inside the LayoutDocumentPane (workSpace), its "Children" property is filled with the new documents.
What's wrong?
|
Unfortunately, the (de-)serializing functionality is not implemented very nicely in the `AvalonDock` suite.
On deserializing, a completely new `LayoutRoot` object will be created. You define in XAML a `LayoutDocumentPane` with name `workSpace` and add the newly created `LayoutDocument` into this pane. But, after deserialization, this `LayoutDocumentPane` does not belong to the layout anymore - it has been removed; a new `LayoutDocumentPane` has been created instead.
Therefore you cannot see any `LayoutDocument` views added into the `workSpace` container.
Don't reference the instance of your `LayoutDocumentPane` by name; find it dynamically instead. You have to change your code to something like:
```
// Requires "using System.Linq;" for Descendents().OfType<...>().
var ld = new LayoutDocument();
var x = new Random().Next().ToString();
ld.Title = x;
ld.ContentId = x;
LayoutDocumentPane documentPane = this.dockManager.Layout.Descendents().OfType<LayoutDocumentPane>().SingleOrDefault();
if (documentPane != null)
{
documentPane.Children.Add(ld);
}
```
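The same "search the live tree instead of caching a reference" idea, sketched in Python. This is a toy analog of `Descendents().OfType<T>()`; the class names are made up:

```python
class Node:
    def __init__(self, *children):
        self.children = list(children)

class DocumentPane(Node):
    pass

class Panel(Node):
    pass

def descendants_of_type(root, node_type):
    # Walk the tree on every call, so we always see the *current* layout,
    # even if deserialization rebuilt it from scratch and our old
    # reference points at a node that is no longer attached.
    for child in root.children:
        if isinstance(child, node_type):
            yield child
        yield from descendants_of_type(child, node_type)

layout = Panel(Panel(DocumentPane()), DocumentPane())
panes = list(descendants_of_type(layout, DocumentPane))
```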
|
When should I call CancellationToken.ThrowIfCancellationRequested?
I developed a C# based Windows Service which runs all of its logic in several different tasks.
To allow the service to shutdown gracefully when it is being stopped, I am using a CancellationToken which is passed to any function that accepts one (mostly from 3rd party libraries which I am using) in order to abort processing before completion.
I noticed that none of those functions throw an `OperationCanceledException` when the cancellation is requested while the function is being called, so my application simply continues executing until I call `ThrowIfCancellationRequested()` somewhere else later in my code. Am I supposed to manually call `ThrowIfCancellationRequested()` after calling every single one of those functions to make sure that the tasks stop as soon as possible, or when exactly am I supposed to call `ThrowIfCancellationRequested()` in my own code?
|
Yes, you are supposed to call `ThrowIfCancellationRequested()` manually, in the appropriate places in your code (where appropriate is determined by you as a programmer).
Consider the following example of a simple job processing function that reads jobs from a queue and does stuff with them. The comments illustrate the sort of thinking the developer might go through when deciding whether to check for cancellation.
Note also that you are right - the standard framework functions that accept a token will *not* throw a cancellation exception - they will simply return early, so you have to check for cancellation yourself.
```
public async Task DoWork(CancellationToken token)
{
while(true)
{
// It is safe to check the token here, as we have not started any work
token.ThrowIfCancellationRequested();
var nextJob = GetNextJob();
// We can check the token here, because we have not
// made any changes to the system.
token.ThrowIfCancellationRequested();
var jobInfo = httpClient.Get($"job/info/{nextJob.Id}", token);
// We can check the token here, because we have not
// made any changes to the system.
// Note that HttpClient won't throw an exception
// if the token is cancelled - it will just return early,
// so we must check for cancellation ourselves.
token.ThrowIfCancellationRequested();
// The following code is a critical section - we are going to start
// modifying various databases and things, so don't check for
// cancellation until we have done it all.
ModifySystem1(nextJob);
ModifySystem2(nextJob);
ModifySystem3(nextJob);
// We *could* check for cancellation here as it is safe, but since
// we have already done all the required work *and* marking a job
// as complete is very fast, there is not a lot of point.
MarkJobAsCompleted(nextJob);
}
}
```
Finally, you might not want to leak cancellation exceptions from your code, because they aren't "real" exceptions - they are expected to occur whenever someone stops your service.
You can catch the exception with an exception filter like so:
```
public async Task DoWork(CancellationToken token)
{
try
{
while(true)
{
// Do job processing
}
}
catch (OperationCanceledException e) when (e.CancellationToken == token)
{
Log.Info("Operation cancelled because service is shutting down.");
}
catch (Exception e)
{
Log.Error(e, "Ok - this is actually a real exception. Oh dear.");
}
}
```
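The same "check between units of work, never mid-critical-section" pattern translates to other runtimes. A rough Python sketch (all names here are illustrative, not a real framework API):

```python
import queue
import threading

def process_jobs(jobs, cancel, completed):
    while not cancel.is_set():          # safe: no work has started yet
        try:
            job = jobs.get_nowait()
        except queue.Empty:
            return
        # Critical section: once we start modifying state, finish the
        # job even if cancellation is requested in the meantime.
        completed.append(job)

jobs = queue.Queue()
for j in ("a", "b", "c"):
    jobs.put(j)

cancel = threading.Event()
completed = []
process_jobs(jobs, cancel, completed)   # processes all three jobs
```

Setting the event before the next call makes the worker return without touching any further jobs, mirroring `ThrowIfCancellationRequested()` placed at the top of the loop.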
|
Mp4 video won't play in the iPad
Good day. I've been working on this project and learning how to place a video on the iPad and in all the other browsers.
But after writing the code for this, I noticed that the only thing I get from the iPad is the first keyframe of the video; the video is not playing. When I press the "Play" button that appears on the iPad's screen, the video just won't play. Weirdly, the markup has an autoplay attribute.
What is wrong with this script? Is there a better way of doing this? I've spent way too much time on this and I can't get it done. I checked this on Safari for Windows, Firefox, and Internet Explorer, and it works everywhere except on the iPad.
```
<video controls width="920px" height="560px" autoplay>
<!-- if Safari/Chrome-->
<source src="video.mp4" type="video/mp4" />
<source src="video.ogv" type="video/ogg" />
<!--If the browser doesn't understand the <video> element, then reference a Flash file. You could also write something like "Use a Better Browser!" if you're feeling nasty. (Better to use a Flash file though.)-->
<object classid="clsid:D27CDB6E-AE6D-11cf-96B8-444553540000" width="920" height="560" id="csSWF">
<param name="movie" value="video_controller.swf" />
<param name="quality" value="best" />
<param name="bgcolor" value="#1a1a1a" />
<param name="allowfullscreen" value="true" />
<param name="scale" value="showall" />
<param name="allowscriptaccess" value="always" />
<param name="flashvars" value="autostart=true&showstartscreen=false&color=0x1A1A1A,0x1A1A1A" />
<!--[if !IE]>-->
<object type="application/x-shockwave-flash" data="video_controller.swf" width="920" height="560">
<param name="quality" value="best" />
<param name="bgcolor" value="#1a1a1a" />
<param name="allowfullscreen" value="true" />
<param name="scale" value="showall" />
<param name="allowscriptaccess" value="always" />
<param name="flashvars" value="autostart=true&showstartscreen=false&color=0x1A1A1A,0x1A1A1A" />
<!--<![endif]-->
<div id="noUpdate">
<p>The video content presented here, requires a more recent version of the Adobe Flash Player. If you are using a browser with JavaScript disabled please enable it now. Otherwise, please update your version of the free Flash Player by <a href="http://www.adobe.com/go/getflashplayer">downloading here</a>.</p>
</div>
<!--[if !IE]>-->
</object>
<!--<![endif]-->
</object>
<script type="text/javascript" src="swfobject.js"></script>
<script type="text/javascript">
swfobject.registerObject("csSWF", "9.0.28", "expressInstall.swf");
</script>
</video>
```
|
What are the specs of the mp4 video (resolution and profile)? The iPad doesn't support all varieties of mp4.
---
Did you try a simple html without the flash and the ogv source?
---
It looks like you're over spec:
### iPad Supported Video Format
- H.264 video (up to 720p, 30 frames per second; main profile level 3.1 with AAC-LC audio up to 160 Kbps, 48kHz, stereo audio in .m4v, .mp4, and .mov file formats)
- MPEG-4 video, up to 2.5 Mbps, **640 by 480 pixels**, 30 frames per second, Simple Profile with AAC-LC audio up to 160 Kbps, 48kHz, stereo audio in .m4v, .mp4, and .mov file formats.
|
A command used in another command
For djvused command, there is an option:
>
> -e command
>
>
> Cause djvused to execute the commands specified by the option argument
> commands. It is advisable to surround the djvused commands by single
> quotes in order to prevent unwanted shell expansion.
>
>
>
For example, `djvused myfile.djvu -e 'print-pure-txt'`.
It is quite unusual to me that a command (here djvused) can run other commands (here via the `-e` option). I was wondering how this is possible. Is this a frequent practice in command-line interfaces?
Is this similar to the print command used within the awk command?
The only way I know for a command to be used in another command is:
```
echo `echo hello`
```
Thanks and regards!
|
It is quite usual; some programs base their entire operation on this.
Some of the more common examples that come to mind are `su`, `sudo` and `xterm`.
```
su -c 'ls -l /root'
sudo ls -l root
xterm -e 'top -d 10'
```
It is different from your example
```
echo `echo hello`
```
where the backquotes are interpreted by the shell, and the program does not execute anything itself.
Note also the difference between `su` and `sudo`. The first takes a single string, and it can be difficult for the user to set up such a string correctly, for example to expand a variable *before* it is seen by the command; the second takes a series of strings and is far simpler (there are no quotes in the `sudo` example).
What do they use to implement their internal workings? There are essentially two ways: the `system` library routine and the `exec` system call. The first calls a shell, and allows for various shell expansions, like
```
su -c 'ls -ld /root/.*'
```
while the second method does not allow such freedom.
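The distinction is easy to demonstrate from Python, where `subprocess` exposes both styles (the echoed strings are just examples):

```python
import subprocess

# A single command string handed to a shell -- like `su -c '...'`.
# The shell performs expansion, so quoting matters.
shell_style = subprocess.run(["sh", "-c", "echo hello from a shell"],
                             capture_output=True, text=True).stdout.strip()

# A plain argument vector, executed directly -- like `sudo ls -l root`.
# No shell is involved, so there is nothing to quote or expand.
exec_style = subprocess.run(["echo", "hello", "direct"],
                            capture_output=True, text=True).stdout.strip()
```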
|
How do you inspect an Optional-String's length?
```
class PersonEntry: NSObject {
var firstName: String?
var lastName: String?
}
//This errors
if (self.person.firstName?.isEmpty) {
println("Empty")
}
//Compiler auto-correction is this
if ((self.model.firstName?.isEmpty) != nil) {
println("Empty")
}
```
I understand that optional chaining returns an optional type. So I suppose my question is, how do you unwrap an optional string to inspect its length without risking a crash?
|
I presume that if the property is nil then you want to consider it empty - in that case you can use the nil coalescing operator in combination with the first version of the if statement:
```
if self.person.firstName?.isEmpty ?? true {
println("Empty")
}
```
If `firstName` is nil, the expression evaluates to the right side of the coalescing operator, `true` - otherwise it evaluates to the value of the `isEmpty` property.
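For comparison, the same "treat nil as empty" logic in Python, where a conditional expression plays the role of `??` (the function name here is made up):

```python
def considered_empty(first_name):
    # None plays the role of Swift's nil; defaulting to True when the
    # value is missing mirrors the `?? true` fallback above.
    return True if first_name is None else len(first_name) == 0
```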
---
References:
[Nil Coalescing Operator](https://developer.apple.com/library/ios/documentation/Swift/Conceptual/Swift_Programming_Language/BasicOperators.html#//apple_ref/doc/uid/TP40014097-CH6-ID72)
[Optional Chaining](https://developer.apple.com/library/ios/documentation/Swift/Conceptual/Swift_Programming_Language/OptionalChaining.html#//apple_ref/doc/uid/TP40014097-CH21-ID245)
|
All possible ways to declare Javascript variables
To create an IDE that would autocomplete all variables the user declares but would be oblivious to other variables such as `Math.PI` or even the module `Math`, the IDE would need to be able to identify all identifiers relating to variables declared by the user. What mechanism could be used to capture all such variables, assuming you already have access to the AST (Abstract Syntax Tree) for the program?
I am using reflect.js (<https://github.com/zaach/reflect.js>) to generate the AST.
|
## I think it's pretty much impossible
Here is why I think it's pretty much impossible without executing it:
Let us go through the unexplored parts, from easy to hard.
### Easy to catch:
**Function scope** is missed here:
```
(function(x){
//x is now an object with an a property equal to 3
// for the scope of that IIFE.
x;
})({a:3});
```
Here are some fun dirty tricks for you all:
Introducing... drum roll... **Block Scoping!!**
```
with({x:3}){
x;//x is now declared in the scope of that with and is equal to 3.
}
try{ throw 5}catch(x){
x // x is now declared in the scope of the try block and is equal to 5;
}
```
(people reading: I beg you to please not use these last two for actual scoping in code :))
### Not easy:
Bracket notation:
```
var n = "lo";
a["h"+"e"+"l"+n] = "world"; // need to understand that a.hello is a property.
// not a part of the ast!
```
### The really hard parts:
Let us not forget invoking the compiler. **These would not show up in the AST:**
```
eval("var x=5"); // declares x as 5, just a string literal and a function call
new Function("window.x = 5")();// or global in node
```
In node.js this can also be done with the `vm` module. In the browser using document.write or script tag injection.
What else? Of course they can obfuscate all they want:
```
new Function(["w","i","n","dow.x"," = ","5"].join(""))(); // Good luck finding this!
new Function('new Function(["w","i","n","dow.x"," = ","5"].join(""))()')();// Getting dizzy already?
```
---
## So what can be done?
- Execute the code, once, in a closed, timed environment when you update the symbol table (just the relevant parts)
- See what's the generated symbol table is from the execution
- Boom, you got yourself a symbol table.
This is not reliable but it's probably as close as you get.
The only other alternative I can think of, which is what *most* IDEs are doing, is to simply ignore anything that is not:
```
object.property = ... //property definition
var a = ... //scoped
b = ... //global, or error in strict mode
function fn(){ //function declaration
object["property"] //property with a _fixed_ literal in bracket notation.
```
And also, function parameters.
I have seen **no** IDE that has been able to deal with *anything* but these. Since they're the most common by far, I think it's perfectly reasonable to count those.
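The same blind spot exists in any language with an eval mechanism. A small Python sketch (illustrative only) of a name that exists at runtime but never appears in the parsed AST:

```python
import ast

src = 'exec("x = 5", ns)'

# Static view: collect every identifier the parser can see in the source.
static_names = {node.id for node in ast.walk(ast.parse(src))
                if isinstance(node, ast.Name)}

# Runtime view: actually execute the snippet.
ns = {}
exec(src)
# ns["x"] now exists, but "x" is nowhere in static_names -- only the
# string literal "x = 5" is, and static analysis won't look inside it.
```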
|
Why doesn't Eclipse recognize new JAR?
I've received a new SOAP service client JAR from a teammate, and am busy integrating it into my app. I've put the JAR into the appropriate path, overwriting the previous version. I've refreshed my workspace, performed a Clean and a Build, but...
...I'm not able to see any of the classes within the JAR through intellisense, even though code I utilize from the JAR will compile and execute correctly. Furthermore, Eclipse is still trying to use the previous version of the JAR, in that when I perform a "fix imports", it re-imports the old classes, rather than the new ones.
The code builds and runs, but I have to reference classes absolutely, as opposed to relatively:
```
org.acmewidgets.soap.inputs.HeaderType header = new org.acmewidgets.soap.inputs.HeaderType()
```
Instead of
```
HeaderType header = new HeaderType()
```
The previous version of the JAR did not require this.
How do I "force" Eclipse to ditch the old references and use the new ones? Or is Eclipse running fine, and my new JAR is suspect?
Thanks!
|
Looks like something got broken for good. If closing and opening the project again doesn't help, try this in order:
- If your project depends on other projects from the workspace, you could end up with *two* JARs in the classpath. Run tests or your app, then switch to the "Debug" perspective, select the project (which is terminated), then select "Properties" from the context menu. This gives you the classpath which was used.
Copy that to the text editor and search for the JAR file and make sure you only have a single copy in the classpath.
- Search your workspace for the JAR file. Maybe there is a second copy which you don't expect.
- Create a new workspace (don't delete the old one, yet) and import the project. Does that help?
- Try the latest version of Eclipse (3.6.1)
|
How do you remove invalid characters when creating a friendly url (ie how do you create a slug)?
Say I have this webpage:
<http://ww.xyz.com/Product.aspx?CategoryId=1>
If the name of CategoryId=1 is "Dogs" I would like to convert the URL into something like this:
<http://ww.xyz.com/Products/Dogs>
The problem is if the category name contains foreign (or invalid for a url) characters. If the name of CategoryId=2 is "Göra äldre", what should be the new url?
Logically it should be:
[http://ww.xyz.com/Products/Göra](http://ww.xyz.com/Products/G%C3%B6ra) äldre
but it will not work. Firstly because of the space (which I can easily replace by a dash for example) but what about the foreign characters? In Asp.net I could use the URLEncode function which would give something like this:
<http://ww.xyz.com/Products/G%c3%b6ra+%c3%a4ldre>
but I can't really say it's better than the original url (<http://ww.xyz.com/Product.aspx?CategoryId=2>)
Ideally I would like to generate this one but how can I can do this automatically (ie converting foreign characters to 'safe' url characters):
<http://ww.xyz.com/Products/Gora-aldre>
|
I've come up with the 2 following extension methods (asp.net / C#):
```
public static string RemoveAccent(this string txt)
{
byte[] bytes = System.Text.Encoding.GetEncoding("Cyrillic").GetBytes(txt);
return System.Text.Encoding.ASCII.GetString(bytes);
}
public static string Slugify(this string phrase)
{
string str = phrase.RemoveAccent().ToLower();
str = System.Text.RegularExpressions.Regex.Replace(str, @"[^a-z0-9\s-]", ""); // Remove all non valid chars
str = System.Text.RegularExpressions.Regex.Replace(str, @"\s+", " ").Trim(); // convert multiple spaces into one space
str = System.Text.RegularExpressions.Regex.Replace(str, @"\s", "-"); // Replace spaces by dashes
return str;
}
```
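A Python sketch of the same two steps (accent folding, then cleanup) for comparison; it uses Unicode NFKD decomposition instead of the Cyrillic-codepage trick:

```python
import re
import unicodedata

def slugify(phrase: str) -> str:
    # Strip accents by decomposing to NFKD and dropping the non-ASCII
    # combining marks, mirroring the C# RemoveAccent step above.
    no_accents = unicodedata.normalize("NFKD", phrase)
    no_accents = no_accents.encode("ascii", "ignore").decode("ascii")
    s = no_accents.lower()
    s = re.sub(r"[^a-z0-9\s-]", "", s)   # remove all non-valid chars
    s = re.sub(r"\s+", " ", s).strip()   # collapse multiple spaces
    return re.sub(r"\s", "-", s)         # replace spaces by dashes
```

With this, `slugify("Göra äldre")` yields `"gora-aldre"`, matching the URL the question asks for.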
|
How to write quickCheck on properties of functions?
I'm trying to do one of the Monoid exercises in Haskell Book (Chapter 15, "Monoid, Semigroup") but I'm stuck. The following is given:
```
newtype Combine a b =
Combine { unCombine :: (a -> b) }
```
and I'm supposed to write the `Monoid` instance for Combine.
I wrote something like this:
```
instance (Semigroup b) => Semigroup (Combine a b) where
Combine { unCombine = f } <> Combine { unCombine = g } =
Combine { unCombine = \x -> f x <> g x }
instance (Monoid b) => Monoid (Combine a b) where
mempty = Combine { unCombine = \_ -> mempty }
mappend = (<>)
```
but I do not know how to write the `quickCheck` for the instance.
Here is my try (does not compile):
```
monoidLeftIdentity1 :: (Eq m, Monoid m) => m -> Bool
monoidLeftIdentity1 x = mappend mempty x == x
monoidRightIdentity1 :: (Eq m, Monoid m) => m -> Bool
monoidRightIdentity1 x = mappend x mempty == x
main :: IO ()
main = do
quickCheck (monoidLeftIdentity1 :: Combine Int (Sum Int) -> Bool)
quickCheck (monoidRightIdentity1 :: Combine Int (Sum Int) -> Bool)
```
It seems I must write `Arbitrary` and `Eq` instances for this type, but how do you write them for a function?
There is a [similar question](https://stackoverflow.com/questions/39456716/how-to-write-semigroup-instance-for-this-data-type), in that question, we are asked to write the `Semigroup` instance for Combine.
|
First a full code example:
```
module Main where
import Test.QuickCheck
import Data.Monoid
newtype Combine a b = Combine { unCombine :: a -> b }
instance (Semigroup b) => Semigroup (Combine a b) where
  (Combine f) <> (Combine g) = Combine $ \a -> (f a) <> (g a)
  -- Swap in `a <> _ = a` instead to watch QuickCheck falsify the laws.
instance (Monoid b) => Monoid (Combine a b) where
mempty = Combine $ \_ -> mempty
monoidLeftIdentity :: (Eq m, Monoid m) => m -> Bool
monoidLeftIdentity m = mappend mempty m == m
monoidRightIdentity :: (Eq m, Monoid m) => m -> Bool
monoidRightIdentity m = mappend m mempty == m
monoidLeftIdentityF :: (Eq b, Monoid m) => (Fun a b -> m) -> (m -> a -> b) -> a -> Fun a b -> Bool
monoidLeftIdentityF wrap eval point candidate = eval (mappend mempty m) point == eval m point
where m = wrap candidate
monoidRightIdentityF :: (Eq b, Monoid m) => (Fun a b -> m) -> (m -> a -> b) -> a -> Fun a b -> Bool
monoidRightIdentityF wrap eval point candidate = eval (mappend m mempty) point == eval m point
where m = wrap candidate
main :: IO ()
main = do
quickCheck $ (monoidLeftIdentityF (Combine . applyFun) unCombine :: Int -> Fun Int (Sum Int) -> Bool)
quickCheck $ (monoidRightIdentityF (Combine . applyFun) unCombine :: Int -> Fun Int (Sum Int) -> Bool)
```
What are we doing here?
First we need a way to generate random functions. That is, what this `Fun` thing is about. There is an `Arbitrary` instance for `Fun a b`, if there are certain instances available for `a` and `b`. But most of the time we have those.
A value of type `Fun a b` can be shown (it has a `Show` instance, provided `a` and `b` have one). We can extract the function with `applyFun`.
For QuickCheck to take advantage of this, we need to provide a `Testable` where all argument positions can be randomly generated and shown.
So we have to formulate our Properties in terms of `a`, `b` and `Fun a b`.
To connect all of this with `Combine` we provide a function from `Fun a b` to `Combine a b`.
Now we are stuck with another problem. We can't compare functions, so we can't compare values of type `Combine a b` for equality. Since we are already randomly generating test cases, why not also randomly generate the points at which to test the functions for equality? The equality will not be a sure thing, but we are hunting for falsifiable examples, so that is good enough for us. To do that, we provide a function to "apply" a value of type `Combine a b` to a value of type `a`, to get a value of type `b`, which can hopefully be compared for equality.
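The "compare functions at randomly sampled points" trick is language-agnostic. A Python sketch of the analogous function monoid (pointwise addition, with the constant-zero function as identity; a toy stand-in for a property-testing library):

```python
import random

# Functions Int -> Int form a monoid under pointwise addition, with the
# constant-zero function as identity -- like Combine a (Sum Int).
def mappend(f, g):
    return lambda x: f(x) + g(x)

def mempty(_):
    return 0

def left_identity_holds(f, trials=100):
    # We cannot compare functions for equality directly, so sample random
    # points and check the law there -- good enough to hunt counterexamples.
    points = (random.randint(-1000, 1000) for _ in range(trials))
    return all(mappend(mempty, f)(x) == f(x) for x in points)
```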
|
Convert 1d array to lower triangular matrix
I would like to convert a 1 dimensional array into a lower, zero diagonal matrix while keeping all the digits.
I am aware of the `numpy.tril` function, but it replaces some of the elements with zeros. I need to expand the matrix to contain all the original digits.
For example:
>
> [10,20,40,46,33,14,12,46,52,30,59,18,11,22,30,2,11,58,22,72,12]
>
>
>
Should be
```
0
10 0
20 40 0
46 33 14 0
12 46 52 30 0
59 18 11 22 30 0
2 11 58 22 72 12 0
```
|
With the input array holding all the values as required to fill up the lower diagonal places, here's one approach with `masking` -
```
def fill_lower_diag(a):
n = int(np.sqrt(len(a)*2))+1
mask = np.tri(n,dtype=bool, k=-1) # or np.arange(n)[:,None] > np.arange(n)
out = np.zeros((n,n),dtype=int)
out[mask] = a
return out
```
Sample run -
```
In [82]: a
Out[82]:
array([10, 20, 40, 46, 33, 14, 12, 46, 52, 30, 59, 18, 11, 22, 30, 2, 11,
58, 22, 72, 12])
In [83]: fill_lower_diag(a)
Out[83]:
array([[ 0, 0, 0, 0, 0, 0, 0],
[10, 0, 0, 0, 0, 0, 0],
[20, 40, 0, 0, 0, 0, 0],
[46, 33, 14, 0, 0, 0, 0],
[12, 46, 52, 30, 0, 0, 0],
[59, 18, 11, 22, 30, 0, 0],
[ 2, 11, 58, 22, 72, 12, 0]])
```
Timings on large array with `5k x 5k` shape -
```
In [146]: np.random.seed(0)
In [147]: n = 5000
In [148]: a = np.random.randint(0,9,n*(n+1)//2)
In [149]: %timeit tril_indices_app(a) #@Brenlla's solution
1 loop, best of 3: 218 ms per loop
In [151]: %timeit fill_lower_diag(a) # From this post
10 loops, best of 3: 43.1 ms per loop
```
|
Eclipse crashes on creating a new "menu" folder under "/res" for android development
I have an old computer: Pentium D 2.8 GHz, 4 GB RAM, Win7 Ultimate, running Eclipse Helios and JDK 1.6. I recently started learning Android development and was going through some exercises people posted on the web. One exercise, which basically creates an options menu, asked to create a menu folder under /res and an XML file to define the menu elements. In Eclipse, I right-click on the res folder to create a new folder. That's fine, but you can hear my computer fan pick up. Then I create a blank file, name it "menu.xml", and double-click to open it; my computer's processor usage is pegged at max by javaw.exe, and Eclipse stops responding (as indicated by Windows). If you watch javaw.exe in the task manager, the amount of memory it uses just keeps climbing (up to 1 GB, at which point I killed the process manually).
I did the same thing, but this time, rather than create the menu folder, I just created menu.xml under the "layout" folder, and everything is fine. So is this a bug in Android/Eclipse? This is not a one-time occurrence; it happens every time I repeat the steps above, including after rebooting the computer and starting fresh.
Any ideas?
|
I can recreate this problem on latest Eclipse on Linux. I'm pretty sure it's a bug in the ADT that tries to validate an empty XML menu resource and gets stuck in an infinite loop, eventually causing a StackOverflow. It doesn't seem to happen for any other folder in res, so it must be related to menu inflation. You may want to submit this to Eclipse or AOSP.
Either way, the ADT plugin provides a wizard for creating XML files, use that. On the toolbar, it's the icon with an 'a' in it and a + in the upper right corner. Or you can use File > New > Other > Android > Android XML File, and create a Menu resource.
Update: after some googling, I found [this thread](http://groups.google.com/group/android-developers/browse_thread/thread/5ea3943f43cd2271/da0bad589c232dd4?lnk=gst&q=res+menu+xml+new+file#da0bad589c232dd4) on the android-developer mailing list. Apparently it's a known issue, and you should use the Android XML Resource wizard as described above.
|
Use NGINX for AD Authentication of both API users and Web Users
We are currently using an NGINX server as a reverse proxy for a variety of services and applications. It is handling and proxying traffic to a web application we have created, an API Proxy server, as well as a separate programmatic API. Below is a sample diagram showing the resources behind the proxy.
```
foobar.com
|-foobar.com/api
|-foobar.com/webapp
|-foobar.com/proxy
```
We have a corporate AD system with Single-Sign On capabilities. I am trying to integrate this with the NGINX reverse proxy to authenticate users when the access one or more of these services behind NGINX.
I have seen posts like <https://www.nginx.com/blog/nginx-plus-authenticate-users/> describe how to have NGINX perform AD authentication of users by having them fill out a login form, but this will not work for our use case. This is because some of these users will be interacting with the API service and the proxy service via HTTP CRUD operations and/or curl commands. In addition, these API and proxy users may belong to other applications rather than human beings. In that sense, they are fully programmatic interactions.
How can I configure NGINX to support both Human and Programmatic AD Authentication?
|
There is no such thing as "AD authentication". What you are looking for is [Kerberos](https://en.wikipedia.org/wiki/Kerberos_(protocol)) authentication via [GSSAPI](https://en.wikipedia.org/wiki/Generic_Security_Services_Application_Program_Interface). To get such authentication working in a browser, [SPNEGO](https://en.wikipedia.org/wiki/SPNEGO) is used. I enumerate all these terms just to give you a better starting point for further research.
The link you provided has nothing to do with SSO. It only describes a method to authenticate against the AD's LDAP interface.
For the type of authentication you want to implement, you have to use the AD's Kerberos interface.
nginx does not support Kerberos out of the box. As far as I know, [spnego-http-auth-nginx-module](https://github.com/stnoonan/spnego-http-auth-nginx-module) is the least experimental way to implement Kerberos authentication in nginx.
Somewhat beyond the scope of your question: Apache has a very mature Kerberos module mod\_auth\_kerb and the somewhat newer mod\_auth\_gssapi.
This is not meant as a recommendation. My personal experience is mostly limited to using Kerberos in Java applications behind an Apache/mod\_proxy\_ajp/Tomcat facade.
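For orientation only, a configuration sketch using that module's `auth_gss` directives might look roughly like the following. The realm, keytab path, and service principal are placeholders, and the exact directive set should be checked against the module's README:

```
location / {
    auth_gss on;
    auth_gss_realm EXAMPLE.COM;
    auth_gss_keytab /etc/krb5.keytab;
    auth_gss_service_name HTTP/foobar.example.com;
}
```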
|
Modifying private instance variables in Java
So, I came across an example in a Java book I found:
```
public class Account {
private String name;
private double balance;
private int acctNumber;
public Account(){}
public boolean equals(Account anotherAcc) {
return (this.name.equals(anotherAcc.name)
&& (this.balance == anotherAcc.balance) && (this.acctNumber == anotherAcc.acctNumber));
}
}
```
We see that the equals method is overloaded and is passed another Account object to check whether all instance variables are equal. My problem with this piece of code is that it seems as though we're directly accessing private variables of the **anotherAcc** object, which doesn't seem right, but it works. The same thing happens when I make a main method in the same class, where I somehow gain access to the private variables.
Conversely, when I create a main method in another class, only then do I get a visibility error. My question is, why does Java allow private instance variables to be accessed on an object that is passed into a method? Is it because the object is of type **Account** and the method it is passed to is part of a class called **Account**?
|
See (my favorite table) [Controlling Access to Members of a Class](http://docs.oracle.com/javase/tutorial/java/javaOO/accesscontrol.html):
```
Modifier Class Package Subclass World
-------------------------------------------
public Y Y Y Y
protected Y Y Y N
no modifier Y Y N N
private Y N N N
↑
You are here
```
Since you're in the *same* class, `private` members *are* available.
As mentioned in the comments, note that you're not overriding the correct [`equals`](https://docs.oracle.com/javase/8/docs/api/java/lang/Object.html#equals-java.lang.Object-) method. The original one (of `Object` class), expects an object of type `Object` as an argument.
|
Firebase deleted user is able to change data. How can I fix this without modifying application code?
I'm making an app with Firebase Auth, but when I delete or disable an account I need to call signOut() manually (I detect this with a user reload); if I don't, the user can keep uploading data. How can I fix this without changing the app code?
**Firebase rules**
```
{
"rules": {
"users": {
"$uid": {
".read": "auth != null && auth.uid == $uid",
".write": "auth != null && auth.uid == $uid"
}
}
}
}
```
---
**App Code - How I detect it**
```
if(user != null) user.reload().addOnCompleteListener(this, new OnCompleteListener<Void>() {
@Override
public void onComplete(@NonNull Task<Void> task) {
if(!task.isSuccessful()) {
String exc = task.getException().getMessage();
Log.e("FireBaseUser", exc);
auth.signOut();
}
}
});
```
|
When a token is minted, it gets an expiration timestamp. This essentially says: "the information in this token is valid until ...". Deleting the user does not invalidate any existing tokens.
Keep in mind that since the newest [Firebase Authentication SDKs](https://firebase.google.com/docs/auth/), the tokens are only valid for one hour. So after at most an hour, the token will expire and it will be impossible for the deleted user to refresh it.
If this is not enough for your application, you can add logic to your application that marks the deleted users in the database (in a section that only the administrator can access):
```
/deletedUsers
209103: true
37370493: true
```
You can then in your [security rules](https://firebase.google.com/docs/database/security/) validate that only non-deleted users can access data:
```
".read": "!root.child('deletedUsers').child(auth.uid).exists()"
```
|
How do I add applications to the Messaging Menu?
I would like to have an application appear in the messaging menu that is not installed there by default, is there a way to do that on a per-user basis?
|
Applications that are in the messaging menu each have a file in the system directory of:
```
/usr/share/indicators/messages/applications/
```
That directory contains files that have the paths to the desktop files for those applications. You can also have one of these in your home directory with this path:
```
~/.config/indicators/messages/applications/
```
So let's say that I want to add Thunderbird to my messaging menu. I'd do something like this:
```
$ mkdir -p ~/.config/indicators/messages/applications/
$ echo /usr/share/applications/thunderbird.desktop > ~/.config/indicators/messages/applications/thunderbird
```
You'll need to restart your session (log out and back in) the first time you create the applications directory.
Here's an [example of usage](http://bazaar.launchpad.net/~indicator-applet-developers/libindicate/trunk/view/head:/examples/im-client.py) from a python application.
|
What is viewstate in JSF, and how is it used?
In JSF, there is a viewstate associated with each page, which is passed back and forth with submits etc.
I know that viewstate is calculated using the states of the various controls on the page, and that you can store it either client side or server side.
The question is: how is this value used? Is it used to validate the values sent at submit, to ensure that the same request is not sent twice?
Also, how is it calculated - I realise that richfaces may be calculated differently from myfaces, but an idea would be nice.
Thanks.
|
>
> The question is: how is this value
> used? Is it used to validate the
> values sent at submit, to ensure that
> the same request is not sent twice?
>
>
>
The original reason why the viewstate exists is that HTTP is stateless. The state of the components across requests needs to be maintained one way or the other. Either you store the state in memory on the server and bind it to the session, or you serialize/deserialize it in the request/response each time.
AFAIK, the viewstate is not used to detect double submit, but it could if you attach a timestamp or something similar to it.
The viewstate can also be encrypted to make sure the client doesn't alter it.
>
> Also, how is it calculated - I realise
> that richfaces may be calculated
> differently from myfaces, but an idea
> would be nice.
>
>
>
Each component is responsible to persist its state with `saveState` and `restoreState` (see this [tutorial](http://www.ibm.com/developerworks/java/library/j-jsf4/)). So different component suites result in different view state. Similarly, different JSF implementations might result in different view state.
|
How to catch Enter key and change event to Tab in Java
I have a Swing application with multiple JTextFields on it. How do you replace the function of the Enter key so that when you press it, focus transfers to the next focusable component, just like the Tab key? I don't want to put a KeyListener on each JTextField.
|
You're looking for [`Container.setFocusTraversalKeys`](http://docs.oracle.com/javase/7/docs/api/java/awt/Container.html#setFocusTraversalKeys%28int,%20java.util.Set%29):
```
Container root = ...
// pressed TAB, control pressed TAB
Set<AWTKeyStroke> defaultKeys = root.getFocusTraversalKeys(KeyboardFocusManager.FORWARD_TRAVERSAL_KEYS);
// since defaultKeys is unmodifiable
Set<AWTKeyStroke> newKeys = new HashSet<>(defaultKeys);
newKeys.add(KeyStroke.getKeyStroke("pressed ENTER"));
root.setFocusTraversalKeys(KeyboardFocusManager.FORWARD_TRAVERSAL_KEYS, newKeys);
```
For more information, take a look at the [Focus Subsystem tutorial](http://docs.oracle.com/javase/tutorial/uiswing/misc/focus.html).
|
Get the Route Template from IOwinContext
I am looking to get the route template from a request. I am using OwinMiddleware and am overriding the Invoke method accepting the IOwinContext.
```
public override async Task Invoke(IOwinContext context)
{
...
}
```
Given the Request URL: <http://api.mycatservice.com/Cats/1234>
I want to get "**Cats/{CatId}**"
I have unsuccessfully tried converting it using the following approachs:
```
HttpRequestMessage msg = new HttpRequestMessage(new HttpMethod(context.Request.Method), context.Request.Uri);
HttpContextBase httpContext = context.Get<HttpContextBase>(typeof(HttpContextBase).FullName);
```
## For reference:
Here is a [post about how to do this using HttpRequestMessage](https://stackoverflow.com/questions/25222277/web-api-get-route-template-from-inside-handler#answer-25275874) which I have successfully implemented for another project
|
I had the same problem, this seems to work. A bit by magic, but so far so good:
```
public class RouteTemplateMiddleware : OwinMiddleware
{
private const string HttpRouteDataKey = "MS_SubRoutes";
private readonly HttpRouteCollection _routes;
public RouteTemplateMiddleware(OwinMiddleware next, HttpRouteCollection routes) : base(next)
{
_routes = routes;
}
public override async Task Invoke(IOwinContext context)
{
var routeData = _routes.GetRouteData(new HttpRequestMessage(new HttpMethod(context.Request.Method), context.Request.Uri));
var routeValues = routeData?.Values as System.Web.Http.Routing.HttpRouteValueDictionary;
var route = routeValues?[HttpRouteDataKey] as System.Web.Http.Routing.IHttpRouteData[];
var routeTemplate = route?[0].Route.RouteTemplate;
// ... do something the route template
await Next.Invoke(context);
}
}
```
Register the middleware like so:
```
public void Configuration(IAppBuilder app)
{
_httpConfiguration = new HttpConfiguration();
_httpConfiguration.MapHttpAttributeRoutes();
...
app.Use<RouteTemplateMiddleware>(_httpConfiguration.Routes);
...
}
```
|
Pop to root view controller without animation crash for the table view
I have 3 view controller in a tab bar controller. Clicking on any tab loads its root view controller in the navigation stack.
e.g. tab1, tab2, and tab3.
The 2nd view controller in the navigation stack (tab2VC2), has a tableView.
Clicking on tab2 shows the VC in tab2; then tapping on tab1 tries to go to its rootVC. Then the app crashes saying
>
> [UserDetailVC
> tableView:cellForRowAtIndexPath:]:
> message sent to deallocated instance
> 0xe0a23b0
>
>
>
If I popToRootVC with animation then it's okay. I found that viewDidAppear in tab2VC2 is called (where tableView.reloadData is called), then dealloc; it seems that while reloadData is still working, the table is released. With animation there is some delay, so it doesn't crash. But without animation, it crashes.
Do you think it's an iOS bug, or am I doing something wrong? Since popToRootViewController has an option without animation, it should work, shouldn't it?
```
#pragma mark Tab bar controller delegate
- (void)tabBarController:(UITabBarController *)tbController didSelectViewController:(UIViewController *)viewController {
int i = tbController.selectedIndex;
NSArray *mycontrollers = tbController.viewControllers;
[[mycontrollers objectAtIndex:i] popToRootViewControllerAnimated:NO];
}
```
|
I consider this a bug or at least a weakness in UIKit, but I've already blown half my day on it, so I'm not going to write it up with example code and report it to Apple right now. If someone else wants to do that, I would appreciate it.
Here's what I think is going on under the hood. You have a UITableViewController, let's call it myTable, on the stack of a UINavigationController, and that navigation stack is hidden because it's on an unselected tab or whatever. Then, you call [myTable.tableView reloadData], and iOS cleverly optimizes by *not* reloading the data right away, because the user won't be seeing it anyway if it's on a hidden tab. Instead, the reload request is deferred and stored somewhere for when the view is shown. But before it can be shown, you pop myTable off the navigation stack. When myTable's original tab is shown, the reload request gets executed, but its dataSource is no longer there, so it's a bad access.
Now from my tests with a subclass of UITableViewController that uses the automatically provided tableView property (not loaded from a NIB file), the UITableView is not being deallocated when myTable deallocates as in the situation above. That would be fine, except the default dealloc implementation for UITableViewController does not clear the dataSource property of the UITableView (which was set by the default implementation of init).
So, there are probably a couple good workarounds, like deferring the request to reloadData yourself, but the simplest one I can think of is putting this in the implementation of your UITableViewController subclass:
```
- (void)dealloc {
...
self.tableView.delegate = nil;
self.tableView.dataSource = nil;
[super dealloc];
}
```
Any additional wisdom would be most welcome.
|
Text User Interface for fdisk?
Is there a `TUI` for `fdisk` ?
Where can I get it from and how can I use it?
I would like one which is in the ubuntu repos
|
Yes, there is such a tool.
## Cfdisk
It is a `curses`-based partition editor built on `fdisk`.
It is preinstalled on Ubuntu; if not, just use the following command to install it
```
sudo apt-get install util-linux
```
run it using
```
sudo cfdisk
```
You will get this screen

# Using cfdisk
## The user interface
After you have started cfdisk you'll get an interface where the current partition table is listed with the names and some data about each partition, and some command buttons on the bottom of the screen. To change between partitions, use the up and down arrow keys. To change between commands, use the left and right arrow keys.
## Deleting a partition
To delete an existing partition, highlight it with the up and down keys, select the Delete command with the left and right arrow keys, and press Enter.
## Making a new partition
To make a new partition, select the New command with the left and right arrow keys, and press enter. You'll get the choice between a primary and a logical partition. If you want a logical partition, the program will automatically make an extended partition for you. Then you must choose the size of the partition (in MB). If you can't enter a value in MB, return to the main screen with the Esc key, and select MB with the Units command.
## Set the type of a partition
To set the type of a partition, for bootable PReP, Linux swap or Linux ext2, highlight the actual partition, and select the Type command. You'll get a list over different types. Press space, and you'll get even more. Find what type you need, and enter the number at the prompt.
## Make a partition bootable
To be able to boot from a primary partition, you need to make it bootable. Highlight the actual partition and select the Bootable command.
## Write the result to disk and quit
When you are content with the layout of the disk, select the Write command. The partition table will be written to disk. Remember that this will destroy all data on partitions you have deleted or changed. You should therefore be very sure that you want to do this before actually press the Return key.
For more deeper information, see here: <http://manpages.ubuntu.com/manpages/precise/man8/cfdisk.8.html>
---
*Taken From:* <http://www.absolutelinux.org/installing/cfdisk.shtml>
|
Python command line parameters
I am just starting with Python, so I am struggling with a quite simple example. Basically I want to pass the name of an executable plus its input via the command line arguments, e.g.:
```
python myprogram refprogram.exe refinput.txt
```
That means when executing `myprogram`, it executes `refprogram.exe` and passes `refinput.txt` to it as an argument. I tried to do it the following way:
```
import sys, string, os
print sys.argv
res = os.system(sys.argv(1)) sys.argv(2)
print res
```
The error message that I get is:
```
res = os.system(sys.argv(1)) sys.argv(2)
^
SyntaxError: invalid syntax
```
Anyone an idea what I am doing wrong?
I am running Python 2.7
|
This line
```
res = os.system(sys.argv(1)) sys.argv(2)
```
Is wrong in a couple of ways.
First, sys.argv is a list, so you use square brackets to access its contents:
```
sys.argv[1]
sys.argv[2]
```
Second, you close out your parentheses on `os.system` too soon, and `sys.argv(2)` is left hanging off of the end of it. You want to move the closing parenthesis out to the very end of the line, after all of the arguments.
Third, `os.system` takes a single command string, so you need to join the program name and its argument into one string rather than passing them as separate arguments.
Your final line should look like this:
```
res = os.system(sys.argv[1] + " " + sys.argv[2])
```
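As a side note, `os.system` is somewhat dated; the standard-library `subprocess` module takes the command and its arguments as a list, which avoids the string-joining (and shell-quoting) issue entirely. A minimal sketch:

```python
import subprocess
import sys

def run_program(program, input_file):
    # Pass the program and its argument as a list; no manual quoting needed.
    # Returns the program's exit code (0 usually means success).
    return subprocess.call([program, input_file])

if __name__ == "__main__" and len(sys.argv) >= 3:
    # e.g. python myprogram.py refprogram.exe refinput.txt
    res = run_program(sys.argv[1], sys.argv[2])
    print(res)
```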
|
Restrict controller action from web site only
Is it possible to restrict controller action access for my action methods to AJAX-only requests?
I mean that a non-Ajax request to the action will be redirected to a 404 or an error will be thrown. The action will only return the correct result if the request is made by JS on my own page.
I tried to google it, but didn't find an answer. I learned that I can implement a custom filter for that purpose, but maybe one already exists in MVC?
Thank you.
|
Create a custom action method selector to block non-Ajax calls:
```
public class AjaxOnlyAttribute : ActionMethodSelectorAttribute
{
public override bool IsValidForRequest(ControllerContext controllerContext, MethodInfo methodInfo)
{
return controllerContext.HttpContext.Request.IsAjaxRequest();
}
}
```
Your attribute would be applied in the classic manner
```
[AjaxOnly]
public ActionResult Index()
{
```
Or you can apply it to your Controller
```
[AjaxOnly]
public class HomeController : Controller
{
```
I'm not sure I would recommend you send your users to a 404, but as always, the developer alone understands their requirements. So I recommend you consider if you would want to approach this in a "softer" fashion of redirecting to a warning page.
|
htaccess leverage browser caching for images and css
I am trying to create an htaccess file for my website, and PageSpeed Insights has shown that there are images and one CSS file without expiration.
I am not sure where to start with this or how to do it. I have this code from an online tutorial and was wondering if it would be enough to work.
```
<IfModule mod_expires.c>
ExpiresActive On
############################################
## Add default Expires header
## http://developer.yahoo.com/performance/rules.html#expires
<FilesMatch "\.(ico|pdf|flv|jpg|jpeg|png|gif|js|css|swf)$">
ExpiresDefault "access plus 1 year"
</FilesMatch>
</IfModule>
```
Or does this code do what I need it to do?
Thanks
|
try something like
```
<IfModule mod_expires.c>
ExpiresActive On
ExpiresDefault "access plus 1 seconds"
ExpiresByType text/html "access plus 1 seconds"
ExpiresByType image/x-icon "access plus 2592000 seconds"
ExpiresByType image/gif "access plus 2592000 seconds"
ExpiresByType image/jpeg "access plus 2592000 seconds"
ExpiresByType image/png "access plus 2592000 seconds"
ExpiresByType text/css "access plus 604800 seconds"
ExpiresByType text/javascript "access plus 86400 seconds"
ExpiresByType application/x-javascript "access plus 86400 seconds"
</IfModule>
```
or
```
<FilesMatch "\.(?i:gif|jpe?g|png|ico|css|js|swf)$">
<IfModule mod_headers.c>
Header set Cache-Control "max-age=172800, public, must-revalidate"
</IfModule>
</FilesMatch>
```
|
Property does not exist on type 'DetailedHTMLProps, HTMLDivElement>' with React 16
Since React 16 now allows [custom DOM attributes](https://facebook.github.io/react/blog/2017/09/08/dom-attributes-in-react-16.html), I tried to leverage this in my Typescript code:
```
import * as React from 'react';
<div className="page" size="A4">
</div>
```
but receive this error message:
>
> error TS2339: Property 'size' does not exist on type
> 'DetailedHTMLProps< HTMLAttributes< HTMLDivElement>, HTMLDivElement>'.
>
>
>
This [thread](https://stackoverflow.com/questions/40093655/how-do-i-add-attributes-to-existing-html-elements-in-typescript-jsx) suggests to do a `module augmentation`, so I tried this way:
```
import * as React from 'react';
declare module 'react' {
interface HTMLProps<T> {
size?:string;
}
}
```
Same error message.
Finally, I also tried to declare `page` as a new HTML tag:
```
declare global {
namespace JSX {
interface IntrinsicElements {
page: any
}
}
}
<page className="page" size="A4">
</page>
```
It gets rid of the error message, but the `size` attribute is completely ignored in the compiled code, and I end up with:
```
<page className="page">
</page>
```
Ideally, the last one is my preferred solution. I'd like to use the `size` custom attribute alongside the `page` custom tag.
>
> tsconfig.js
>
>
>
```
{
"compilerOptions": {
"outDir": "build/dist",
"module": "esnext",
"target": "es5",
"lib": ["es6", "dom"],
"sourceMap": true,
"allowJs": true,
"jsx": "react",
"moduleResolution": "node",
"rootDir": "src",
"forceConsistentCasingInFileNames": true,
"noImplicitReturns": true,
"noImplicitThis": true,
"noImplicitAny": true,
"strictNullChecks": true,
"suppressImplicitAnyIndexErrors": true,
"allowSyntheticDefaultImports": true,
"noUnusedLocals": false,
"noUnusedParameters": false,
"allowUnusedLabels": true,
"allowUnreachableCode": true
}
}
```
|
HTML supports the data-\* attribute type for custom attributes. You can read more about it [here](https://www.w3schools.com/tags/att_global_data.asp).
>
> **Definition and Usage** The data-\* attributes is used to store custom
> data private to the page or application.
>
>
> The data-\* attributes gives us the ability to embed custom data
> attributes on all HTML elements.
>
>
> The stored (custom) data can then be used in the page's JavaScript to
> create a more engaging user experience (without any Ajax calls or
> server-side database queries).
>
>
> The data-\* attributes consist of two parts:
>
>
> - The attribute name should not contain any uppercase letters, and must
> be at least one character long after the prefix "data-"
> - The attribute value can be any string
>
>
> **Note:** Custom attributes prefixed with "data-" will be completely ignored by the user agent.
>
>
>
Rather than just using `size="A4"` you can use `data-size="A4"`
**Example**
```
<div className="page" data-size="A4">
// ....
</div>
```
|
How to model a `UNIQUE` constraint in SQLAlchemy?
I am writing a Flask/SQLAlchemy application in which I have users and groups.
Users can belong to several groups, and **they have a unique number within each group**. [Asking about how to model the database](https://dba.stackexchange.com/questions/32961/how-to-model-a-user-group-relationship-where-users-are-assigned-an-unique-number/32965#32965) I was advised to use the following table structure for my many-to-many relationship:
```
TABLE UserGroups
GroupID
UserID
UserNumber
PRIMARY KEY (GroupID, UserID)
UNIQUE (GroupID, UserNumber)
FOREIGN KEY (GroupID)
REFERENCES Groups (GroupID)
FOREIGN KEY (UserID)
REFERENCES Users (UserID)
```
Now I know how to create a regular many-to-many relationship with SQLAlchemy, but I don't know how to represent the `UNIQUE` constraint with the additional `UserNumber` field.
I don't have a lot of experience with database design, ORMs and SQLAlchemy, so this may be obvious, but I can't find a way to express it.
One of the things I don't get is: using a regular many-to-many relationship, my `User` class has a *list-like* attribute `groups` which contains all the groups he belongs to, but this completely hides the `UserGroups` joining table, and I don't know how to access the `UserNumber` field.
This is all a bit blur to me. Do you have any good example or explanations on how-to do such a thing with SQLAlchemy ?
|
The first part of the question (about creating a unique constraint with multiple columns) is already [answered by cleg](https://stackoverflow.com/a/14355621/851737).
However, the default many-to-many approach doesn't work if you want to have additional columns in the mapping table. Instead, you should use the [Association Object Pattern](http://docs.sqlalchemy.org/en/rel_0_7/orm/relationships.html#association-object). Additionally, you can simplify the access between user and group with an [association\_proxy](http://docs.sqlalchemy.org/en/rel_0_7/orm/extensions/associationproxy.html#simplifying-association-objects).
The [`proxied_association.py`](https://bitbucket.org/zzzeek/sqlalchemy/src/master/examples/association/proxied_association.py) from the [SQLAlchemy examples](http://docs.sqlalchemy.org/en/rel_0_7/orm/examples.html#associations) should be a good place to start.
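To make this concrete, here is a hedged sketch of the association-object pattern with the composite `UNIQUE` constraint from the question. The class and column names are illustrative (and the API shown is for modern SQLAlchemy, not the 0.7 release the links above document), so adapt them to your schema:

```python
from sqlalchemy import (Column, Integer, String, ForeignKey,
                        UniqueConstraint, create_engine)
from sqlalchemy.orm import declarative_base, relationship, Session

Base = declarative_base()

class User(Base):
    __tablename__ = "users"
    id = Column(Integer, primary_key=True)
    name = Column(String)
    memberships = relationship("UserGroup", back_populates="user")

class Group(Base):
    __tablename__ = "groups"
    id = Column(Integer, primary_key=True)
    name = Column(String)
    memberships = relationship("UserGroup", back_populates="group")

class UserGroup(Base):
    """Association object: carries the extra UserNumber column."""
    __tablename__ = "user_groups"
    group_id = Column(Integer, ForeignKey("groups.id"), primary_key=True)
    user_id = Column(Integer, ForeignKey("users.id"), primary_key=True)
    user_number = Column(Integer, nullable=False)
    # This models the UNIQUE (GroupID, UserNumber) constraint.
    __table_args__ = (UniqueConstraint("group_id", "user_number"),)
    user = relationship("User", back_populates="memberships")
    group = relationship("Group", back_populates="memberships")
```

With this mapping, `some_user.memberships` gives you the `UserGroup` rows (including `user_number`), and an `association_proxy` can then expose the groups directly if you want the list-like convenience back.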
|
Pandas: Combine TimeGrouper with another Groupby argument
I have the following DataFrame:
```
df = pd.DataFrame({
'Branch' : 'A A A A A B'.split(),
'Buyer': 'Carl Mark Carl Joe Joe Carl'.split(),
'Quantity': [1,3,5,8,9,3],
'Date' : [
DT.datetime(2013,1,1,13,0),
DT.datetime(2013,1,1,13,5),
DT.datetime(2013,10,1,20,0),
DT.datetime(2013,10,2,10,0),
DT.datetime(2013,12,2,12,0),
DT.datetime(2013,12,2,14,0),
]})
from pandas.tseries.resample import TimeGrouper
```
How can I group this data by the Branch and on a 20 day period using TimeGrouper?
All my previous attempts failed, because I could not combine TimeGrouper with another argument in the groupby function.
I would deeply appreciate your help.
Thank you
Andy
|
From the discussion here: <https://github.com/pydata/pandas/issues/3791>
```
In [38]: df.set_index('Date').groupby(pd.TimeGrouper('6M')).apply(lambda x: x.groupby('Branch').sum())
Out[38]:
Quantity
Branch
2013-01-31 A 4
2014-01-31 A 22
B 3
```
And a bit more complicated question
```
In [55]: def testf(df):
....: if (df['Buyer'] == 'Mark').sum() > 0:
....: return Series(dict(quantity = df['Quantity'].sum(), buyer = 'mark'))
....: return Series(dict(quantity = df['Quantity'].sum()*100, buyer = 'other'))
....:
In [56]: df.set_index('Date').groupby(pd.TimeGrouper('6M')).apply(lambda x: x.groupby('Branch').apply(testf))
Out[56]:
buyer quantity
Branch
2013-01-31 A mark 4
2014-01-31 A other 2200
B other 300
```
|
Binding classes with property injection to kernel in inversify
I am trying to achieve dependency injection on my node project via [inversify](http://inversify.io/).
I have a kernel config like this: `inversify.config.ts`
```
import "reflect-metadata";
import {Kernel, interfaces} from "inversify";
import {Config} from "./config";
import {Collection} from "./collection";
let kernel = new Kernel();
kernel.bind<Config>("Config").to(Config).inSingletonScope();
// kernel.bind<interfaces.Newable<Collection>>("Collection").toConstructor<Collection>(Collection);
// it works without the line above
```
A class: `config.ts`
```
import {injectable} from "inversify";
@injectable()
export class Config {
constructor() {}
}
```
A class with property DI `collection.ts`
```
import {injectable} from "inversify";
import getDecorators from "inversify-inject-decorators";
import kernel from "../inversify.config";
let {lazyInject} = getDecorators(kernel);
@injectable()
export class Collection {
@lazyInject(Config)
private Config: Config;
constructor(a: string, b: string) {}
}
```
Everything works as expected if I don't bind a class with property injection. When I try to bind a class with `@lazyInject` as shown in the example
```
kernel.bind<interfaces.Newable<Collection>>("Collection").toConstructor<Collection>(Collection);
```
the import line in `inversify.config.ts` starts processing `Collection.ts`, and the line
```
import kernel from "../inversify.config";
```
in it. However, since we arrived at `Collection.ts` while `inversify.config.ts` was still being processed, the line
```
import kernel from "../inversify.config";
```
somehow returns `undefined`, making the kernel `undefined` for the `Collection` class. Therefore the `@lazyInject` DI fails.
Eventually when I try to read `Config` in `Collection` it fails with:
```
TypeError: Cannot read property 'get' of undefined
at resolve (node_modules/inversify-inject-decorators/lib/decorators.js:24:30)
at Category.getter (node_modules/inversify-inject-decorators/lib/decorators.js:6:47)
```
I wonder if there is a way to accomplish binding a class with property DI via `@lazyInject` **without** moving kernel definition into the same file with one of the classes. I am looking for a way to **import** kernel as it is now, but make it work properly.
|
Your problem is that you have a circular dependency:
[](https://i.stack.imgur.com/7mtpc.png)
I'm going to change your code and add a couple of additional files. Note that the file `types.ts` is not needed, but it is recommended to keep all your type identifiers in one single location.
After changing your code the dependency diagram will change to the following:
[](https://i.stack.imgur.com/MboGd.png)
As you can see, we have eliminated the circular dependency.
### inversify.config.ts
```
import "reflect-metadata";
import {Kernel, interfaces} from "inversify";
import getDecorators from "inversify-inject-decorators";
import { makeProvideDecorator } from "inversify-binding-decorators";
let kernel = new Kernel();
let {lazyInject} = getDecorators(kernel);
let provide = makeProvideDecorator(kernel);
export { lazyInject, provide };
```
### types.ts
```
let TYPES = {
Config: "Config",
Collection: "Collection",
};
export default TYPES;
```
### config.ts
```
import { provide } from "../inversify.config";
import TYPES from "./types";
@provide(TYPES.Config)
export class Config {
constructor() {}
}
export default Config;
```
### collection.ts
```
import { lazyInject, provide } from "../inversify.config";
import TYPES from "./types";
import Config from "./config";
@provide(TYPES.Collection)
export class Collection {
@lazyInject(TYPES.Config)
private Config: Config;
constructor(a: string, b: string) {}
}
export default Collection;
```
### main.ts
You need to import all your dependencies; when they are imported, @provide is executed and the bindings are generated.
```
import Config from "./lib/config";
import Collection from "./models/collection";
//...
```
We have been able to eliminate the circular dependency using the `@provide` decorator from [inversify-binding-decorators](https://github.com/inversify/inversify-binding-decorators).
|
Function to get previous business day given a date in Linux
Given an input date, I want to write a bash function that will output the previous business day.
By this I mean the preceding ***week***day (Monday through Friday);
I don't need it to take holidays into account.
So, for example, given "Jan 2, 2018" the result should be "Jan 1, 2018"
(even though that is a holiday),
but given "Jan 1, 2018" the result should be "Dec 29, 2017"
(because Dec 30 and 31 were Saturday and Sunday).
I don't require any particular format;
just something that is human-readable and acceptable to `date -d`.
I have tried the following but the input date does not seem to be correctly taken into account:
```
function get_previous_busday()
{
DAY_OF_WEEK=`$1 +%w`
if [ $DAY_OF_WEEK -eq 0 ] ; then
LOOKBACK=-2
elif [ $DAY_OF_WEEK -eq 1 ] ; then
LOOKBACK=-3
else
LOOKBACK=-1
fi
PREVDATE=date -d "$1 $LOOKBACK day"
}
```
I want to apply it for today:
```
PREVDATE=$(get_previous_busday $(date))
echo $PREVDATE
```
and for yesterday:
```
PREVDATE=$(get_previous_busday (date -d "$(date) -1 day"))
echo $PREVDATE
```
But it is not working:
```
main.sh: line 3: Fri: command not found
main.sh: line 4: [: -eq: unary operator expected
main.sh: line 6: [: -eq: unary operator expected
main.sh: line 11: -d: command not found
main.sh: command substitution: line 20: syntax error near unexpected token `date'
main.sh: command substitution: line 20: `get_previous_busday (date -d "$(date) -1 day"))'
```
|
A function to do what you want is:
```
get_previous_busday() {
if [ "$1" = "" ]
then
printf 'Usage: get_previous_busday (base_date)\n' >&2
return 1
fi
base_date="$1"
if ! day_of_week="$(date -d "$base_date" +%u)"
then
printf 'Apparently "%s" was not a valid date.\n' "$base_date" >&2
return 2
fi
case "$day_of_week" in
(0|7) # Sunday should be 7, but apparently some people
# expect it to be 0.
offset=-2 # Subtract 2 from Sunday to get Friday.
;;
(1) offset=-3 # Subtract 3 from Monday to get Friday.
;;
(*) offset=-1 # For all other days, just go back one day.
esac
if ! prev_date="$(date -d "$base_date $offset day")"
then
printf 'Error calculating $(date -d "%s").\n' "$base_date $offset day"
return 3
fi
printf '%s\n' "$prev_date"
}
```
For example,
```
$ get_previous_busday
Usage: get_previous_busday (base_date)
$ get_previous_busday foo
date: invalid date ‘foo’
Apparently "foo" was not a valid date.
$ get_previous_busday today
Fri, Nov 30, 2018 1:52:15 AM
$ get_previous_busday "$(date)"
Fri, Nov 30, 2018 1:52:51 AM
$ PREVDATE=$(get_previous_busday $(date))
$ echo "$PREVDATE"
Fri, Nov 30, 2018 12:00:00 AM
$ get_previous_busday "$PREVDATE"
Thu, Nov 29, 2018 12:00:00 AM
$ PREVPREVDATE=$(get_previous_busday "$PREVDATE")
$ printf '%s\n' "$PREVPREVDATE"
Thu, Nov 29, 2018 12:00:00 AM
$ get_previous_busday "$PREVPREVDATE"
Wed, Nov 28, 2018 12:00:00 AM
```
|
Looking for the Code Converter which converts C# to Java
Can anybody help me by suggesting the name of a converter which converts C# code to Java code? Actually, I have a tool which is written in C# and I am trying to modify it. As I have no idea about C# and the .NET framework, it seems difficult for me to convert the large codebase on my own. I found from some web information that there exist some tools which can convert C# to Java (maybe not perfectly, but they can). Can anybody help me by suggesting the names of some of those tools?
|
**Disclaimer:** *No tool is perfect.*
However, if you still want to try then there are these converters available:
- [CS2J](http://www.cs2j.com/)
- [JCLA](http://www.microsoft.com/download/en/details.aspx?displaylang=en&id=14349) : Convert Java-language code to C#
- [Grasshopper](http://dev.mainsoft.com/Default.aspx?tabid=130#What%20is%20Grasshopper?)
- [CSharpJavaMerger](http://csharpjavamerger.org/)
- [Tangible Software C# to Java Converter](http://www.tangiblesoftwaresolutions.com/Product_Details/CSharp_to_Java_Converter_Details.html)
Not a converter but a bridge between .NET and the JVM:
- [JNI4NetBridge](http://jni4net.sf.net/)
|
AWS - how do you share an access token between lambda processes?
First, I have a question about the way Lambda works:
If it's only triggered by 1 SQS queue and that queue now contains 100 messages, would it sequentially create and tear down 100 Lambda processes? Or would it do it in parallel?
My second question is the main one:
The job of my lambda is to request an access token (for an external service) that expires every hour and using it, perform some action on that external service.
Now, I want to be able to cache that token and only ask for it every hour, instead of every time I make the request using the Lambda.
Given the nature of how Lambda works, is there a way of doing it through code?
How can I make sure all Lambda processes use the same access token?
(I know I can create a new Redis instance and make them all point to it, but I'm looking for a "simpler" solution.)
|
You can stuff the token in the SSM parameter store. You can encrypt the value. Lambdas can check the last modified date on the value to monitor when expiration is pending and renew. No Redis instance to maintain, and the value would be encrypted.
<https://docs.aws.amazon.com/systems-manager/latest/userguide/systems-manager-paramstore.html>
You could also use DynamoDB for this. Lower overhead than Redis since it's serverless. If you have a lot of concurrent Lambdas, this may be preferable to SSM because you may run into rate limiting on the API. A little more work because you have to set up a DynamoDB table.
Another option would be to have a “parent” Lambda function that gets the API token and calls the “worker” Lambdas and passes the token as a parameter.
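Whatever store you pick, you can also cut down on lookups by caching the token at module scope, which survives across invocations handled by the same warm Lambda container (though not across containers — that is what SSM/DynamoDB are for). A minimal Python sketch; `fetch_token` and the TTL are placeholders for your real service call:

```python
import time

# All names here are hypothetical. _cache lives at module scope, so it
# survives across invocations handled by the same warm Lambda container.
_cache = {"token": None, "expires_at": 0.0}

def get_token(fetch_token, ttl_seconds=3600, now=time.time):
    """Return the cached token, calling fetch_token() only when expired."""
    if _cache["token"] is None or now() >= _cache["expires_at"]:
        _cache["token"] = fetch_token()
        _cache["expires_at"] = now() + ttl_seconds
    return _cache["token"]
```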
|
Patterns / design suggestions for permission handling
We have a rather complicated system of permission handling in our (ASP.NET web) application. Users can have specific permissions on different kinds of objects, some permissions are even packed into groups / roles that are assigned to users. All in all this ends up in a pretty complicated mess where for determining whether a user can do / see something you have to evaluate many different sources of permissions and this is done somehow on-demand and based on specific situations.
My question is (from a high level point of view) whether there are some suggestions / common design patterns to deal with permission concept in general and probably also what is your experience with handling them in your architecture.
|
**Users** and **Groups** with the ability to test `bool UserHasPermission( SOME_PERMISSION )` for an atomic permission associated with a Group is the standard approach for authorization, however things are changing to Claims-based:
<http://msdn.microsoft.com/en-us/magazine/ee335707.aspx>
<http://msdn.microsoft.com/en-us/magazine/cc163366.aspx>
<http://www.infoq.com/news/2009/10/Guide-Claim-Based-Identity>
It however, is not ideal for all situations.
For the old model, I find that performance can be gained by using memoization during permissions checks. That way I'm not going to the database n times per session to check access control. Memoization effectively stores in a cache the result of a call with the same parameters, so all calls by a particular user to check XYZ permission would return the same result. Of course, you'd make sure you stored the memoized permissions for the user in the Session so it's per-user. If you load the permissions at login then you don't need to cache them, but in large systems with many permissions sometimes it's best to get them only when needed.
<http://www.infoq.com/news/2007/01/CSharp-memory>
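As an illustration of the memoization idea (all names here are hypothetical), a permission check can be wrapped so repeated calls with the same arguments skip the database:

```python
from functools import lru_cache

def make_permission_checker(load_permissions):
    """load_permissions(user_id) stands in for the database query that
    returns the set of permissions granted to a user."""
    @lru_cache(maxsize=None)
    def user_has_permission(user_id, permission):
        return permission in load_permissions(user_id)
    return user_has_permission
```

In a real web app you would scope the cache to the user's session and invalidate it when permissions change.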
|
How do I draw a cosine or sine curve in Swift?
I've been trying to figure out how to use UIBezierPath for this project, but I don't know how to implement this kind of drawing. I can draw a circle and arcs and straight lines, but I am feeling pretty lost on this one. Appreciate the help
|
To draw a sine wave on a `UIBezierPath` called `path`, draw a number of line segments using `path.addLine(to:)`. The trick is to convert the angle (`0` to `360`) to the `x` coordinate of a point, and `sin(x)` to the `y` coordinate of a point.
Here is an example:
```
class SineView: UIView{
let graphWidth: CGFloat = 0.8 // Graph is 80% of the width of the view
let amplitude: CGFloat = 0.3 // Amplitude of sine wave is 30% of view height
override func draw(_ rect: CGRect) {
let width = rect.width
let height = rect.height
let origin = CGPoint(x: width * (1 - graphWidth) / 2, y: height * 0.50)
let path = UIBezierPath()
path.move(to: origin)
for angle in stride(from: 5.0, through: 360.0, by: 5.0) {
let x = origin.x + CGFloat(angle/360.0) * width * graphWidth
let y = origin.y - CGFloat(sin(angle/180.0 * Double.pi)) * height * amplitude
path.addLine(to: CGPoint(x: x, y: y))
}
UIColor.black.setStroke()
path.stroke()
}
}
let sineView = SineView(frame: CGRect(x: 0, y: 0, width: 200, height: 200))
sineView.backgroundColor = .white
```
---
Here it is running in a Playground:
[](https://i.stack.imgur.com/IoG0C.png)
---
@Rob updated this code making it `@IBDesignable` with `@IBInspectable` properties in addition to adding a `periods` property. Check out his answer [here](https://stackoverflow.com/a/40412471/1630618).
|
How do I set configure the hosts file for hosts in ansible inventory?
I've started learning and using ansible for configuring my staging and production servers. One thing I'd like to do is configure /etc/hosts through the inventory file.
It seems that this is possible. Here is one such usage:
<https://gist.github.com/rothgar/8793800>
However, I'm a little green to Ansible and I don't get it. Can someone please explain in plain english how I make it work in practice?
For example, if my inventory file contains.
```
[compute]
1.2.3.4
5.6.7.8
[db]
2.3.4.5
6.7.8.9
10.11.12.13
[all]
compute
db
[all:vars]
...
```
I'd like to say with consistency that my hosts file after running the playbook contained
```
2.3.4.5 db1
6.7.8.9 db2
10.11.12.13 db3
1.2.3.4 compute1
5.6.7.8 compute2
```
Is this possible?
|
You could generate your hosts entries from a template. Loop over the list of groups, discard groups like `all` and `ungrouped`, and then loop over the list of hosts in each group:
```
{# this loops over the list of groups. inside the loop #}
{# "group" will be the group name and "hosts" will be the #}
{# list of hosts in that group. #}
{% for group,hosts in groups.items() %}
{# skip the "all" and "ungrouped" groups, which presumably #}
{# you don't want in your hosts file #}
{% if group not in ["ungrouped", "all"] %}
{# generate a hosts entry for each host, using the "loop.index" #}
{# variable and the group name to generate a unique hostname. #}
{% for host in hosts %}
{{host}} {{group}}{{loop.index}}
{% endfor %}
{% endif %}
{% endfor %}
```
The above is using `{{host}}` for the ip address, because it lets me test it out on my system, but you would probably prefer `{{hostvars[host]['ansible_default_ipv4']['address']}}` in a real environment, unless you are positive you are always using ip addresses in your inventory.
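To see what the template's loops produce, here is the same logic sketched in plain Python against the inventory from the question (host names generated from the group name plus a 1-based index, exactly as in the template):

```python
# Plain-Python sketch of the template's loops, using the inventory from
# the question. Host names are the group name plus a 1-based index.
groups = {
    "compute": ["1.2.3.4", "5.6.7.8"],
    "db": ["2.3.4.5", "6.7.8.9", "10.11.12.13"],
    "all": [],
    "ungrouped": [],
}

lines = [
    f"{host} {group}{i}"
    for group, hosts in groups.items()
    if group not in ("all", "ungrouped")
    for i, host in enumerate(hosts, start=1)
]
```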
|
Elastic search term query not working if used field.keyword
I have below mapping for elastic search
```
{
"2021-05-ui":{
"mappings":{
"userinfo":{
"properties":{
"address":{
"type":"text",
"fields":{
"keyword":{
"type":"keyword",
"ignore_above":256
}
}
},
"name":{
"type":"keyword"
},
"userId":{
"type":"keyword"
}
}
}
}
}
```
I want to use the term query on the userId field like below, but it doesn't work. When I remove `.keyword` from the query, it works. I want a query which matches the exact value of the userId field:
```
{
"query": {
"bool": {
"filter": [
{
"term": {
"userId.keyword": "[email protected]"
}
}
]
}
}
}
```
I want the query with `"userId.keyword"` only, since other environments have both the text and keyword mappings. Could you please suggest which query we can use to get the exact match? I tried using match and match\_phrase but they didn't help much.
|
The `userId` field is of `keyword` type, which means it uses the keyword analyzer instead of the standard analyzer. In this case, you will get results only for an exact match.
Since `userId` is of `keyword` type and there is no `userId.keyword` field, you have to use `userId` in the term query:
```
{
"query": {
"bool": {
"filter": [
{
"term": {
"userId": "[email protected]"
}
}
]
}
}
}
```
---
**However if you want to store `userId` field as of both `text` and `keyword` type, then you can [update your index mapping as shown below to use multi fields](https://www.elastic.co/guide/en/elasticsearch/reference/current/indices-put-mapping.html#add-multi-fields-existing-field-ex)**
```
PUT /_mapping
{
"properties": {
"userId": {
"type": "keyword",
"fields": {
"raw": {
"type": "text"
}
}
}
}
}
```
And then reindex the data again. After this, you will be able to query the `"userId"` field as `keyword` type and `"userId.raw"` as `text` type.
|
converting non-numeric to numeric value using Panda libraries
I am learning Pandas and I came to an interesting question. So I have a Dataframe like this:
```
COL1 COL2 COL3
a 9/8/2016 2
b 12/4/2016 23
...
n 1/1/2015 21
```
COL1 is a string, COL2 is a timestamp and COL3 is a number. Now I need to do some analysis on this Dataframe and I want to convert all the non-numeric data to numeric. I tried using [DictVectorizer()](http://scikit-learn.org/stable/modules/feature_extraction.html#dict-feature-extraction) to convert COL1 and COL2 to numeric, but first of all I am not sure if this is the best way of doing such a thing, and second I don't know what to do with the timestamp.
When I use DictVectorizer the output would be like:
```
{u'COL3: {0:2, 1:23 , ...,n:21}, 'COL1': {0: u'a', 1:'b', ... , n:'n'}, 'COL2': {0: u'9/8/2016' , 1: u'12/4/2016' , ... , n:u'1/1/2016'}}
```
but from what I learned it should be like this or at least I know I need something like this:
```
{COL1:'a', COL2: '9/8/2016' , COL3: 2 and so on}
```
so, questions:
1 - what is the best way of converting non-numeric values (including dates) to numeric values to use in sklearn libraries?
2 - what is the right way of using DictVectorizer()?
Any help would be appreciated.
|
To encode non-numeric data to numeric you can use scikit-learn's [LabelEncoder](http://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.LabelEncoder.html#sklearn.preprocessing.LabelEncoder). It will encode each category such as COL1's `a`, `b`, `c` to integers.
Assuming df is your dataframe, try:
```
from sklearn.preprocessing import LabelEncoder
enc = LabelEncoder()
enc.fit(df['COL1'])
df['COL1'] = enc.transform(df['COL1'])
```
- `enc.fit()` creates the corresponding integer values.
- `enc.transform()` applies the encoding to the df values.
For the second column, using Pandas [to\_datetime()](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.to_datetime.html) function should do the trick, like @quinn-weber mentioned, try:
```
df['COL2'] = pd.to_datetime(df['COL2'])
```
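If you want to see the two conversions without any third-party dependencies, here is a stdlib-only sketch of the same idea — categories become integers and date strings become ordinal day numbers (column values taken from the question's dataframe):

```python
from datetime import datetime

# Values taken from the question's dataframe.
col1 = ["a", "b", "n"]
col2 = ["9/8/2016", "12/4/2016", "1/1/2015"]

# Category -> integer, like LabelEncoder:
labels = {v: i for i, v in enumerate(sorted(set(col1)))}
col1_encoded = [labels[v] for v in col1]

# Date string -> ordinal day number (any monotonic numeric form works):
col2_numeric = [datetime.strptime(d, "%m/%d/%Y").toordinal() for d in col2]
```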
|
Outlook VBA macro for saving emails copies in a local folder
Whenever sending an email, I would like a copy of that email to be saved in the local folder, together with all attachments.
I don't think this is possible with a custom rule in Outlook but perhaps it could be done with a VBA script?
I use Outlook and MS Exchange.
|
Sure it can be done using the `Application_ItemSend` event procedure to call a custom procedure which will save your sent mails to a local folder.
This code goes in "ThisOutlookSession" module.
```
Private Sub Application_ItemSend(ByVal Item As Object, Cancel As Boolean)
Call SaveACopy(Item)
End Sub
Sub SaveACopy(Item As Object)
Const olMsg As Long = 3
Dim m As MailItem
Dim savePath As String
If TypeName(Item) <> "MailItem" Then Exit Sub
Set m = Item
savePath = "c:\users\your_user_name\desktop\" '## Modify as needed
savePath = savePath & m.Subject & Format(Now(), "yyyy-mm-dd-hhNNss")
savePath = savePath & ".msg"
m.SaveAs savePath, olMsg
End Sub
```
You will need to ensure that the specified path is unique, etc.; the above example is fairly crude. You also need to strip out any illegal characters that can't be put in a file name (slashes, pipes, etc.)...
As an alternative, I would suggest simply archiving your folder(s) periodically. You can configure Outlook to save a copy of sent mail to a "Sent" folder, and then you should be able to archive that folder; saving each item individually seems less-than-optimal.
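For reference, the "strip illegal characters" step boils down to removing the handful of characters Windows forbids in file names. Sketched here in Python (a VBA version would do the same with `Replace` or a `RegExp` object):

```python
import re

def sanitize_filename(name):
    """Replace the characters Windows forbids in file names."""
    return re.sub(r'[\\/:*?"<>|]', "_", name)
```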
|
How do I go straight to template, in Django's urls.py?
Instead of going to views.py, I want it to go to to a template, robots.txt.
|
# Django 2+
Note: is valid still as of Django 4+
Use the class based generic views but register with the django 2.0+ pattern.
```
from django.urls import path
from django.views.generic import TemplateView
urlpatterns = [
path('foo/', TemplateView.as_view(template_name='foo.html'))
]
```
<https://docs.djangoproject.com/en/4.1/topics/class-based-views/#usage-in-your-urlconf>
# Django 1.5+
Use the class based generic views.
```
from django.views.generic import TemplateView
urlpatterns = patterns('',
(r'^foo/$', TemplateView.as_view(template_name='foo.html')),
)
```
# Django <= 1.4
Docs: <https://docs.djangoproject.com/en/1.4/ref/generic-views/#django-views-generic-simple-direct-to-template>
```
urlpatterns = patterns('django.views.generic.simple',
(r'^foo/$', 'direct_to_template', {'template': 'foo_index.html'}),
(r'^foo/(?P<id>\d+)/$', 'direct_to_template', {'template': 'foo_detail.html'}),
)
```
|
Storing hex values as integers
I am currently working with python docx and it requires hex values to format font color for example
```
font.color.rgb = RGBColor(0x70, 0xad, 0x47)
```
However I need to store the arguement for `RGBColor` in a variable, a dictionary to be exact however when you store a hex value in a variable it will format it to an int. example:
```
code = (0x70, 0xad, 0x47)
print(code)
```
returns: `(112, 173, 71)`
and storing it using the `hex()` function will format it to be a `str`.
```
code = (hex(0x70), hex(0xad), hex(0x47))
print(code)
```
returns: `('0x70', '0xad', '0x47')`
and the `RGBColor` operator will not accept strings, and I cannot re-format these strings back into an `int` because I get the error `ValueError: invalid literal for int() with base 10: '0x70'`
In summary, how can I store a hex value such as `0x70, 0xad, 0x47` as an integer that I can then feed into the `RGBColor` operator?
|
```
font.color.rgb = RGBColor(112, 173, 71)
```
produces the same result as:
```
font.color.rgb = RGBColor(0x70, 0xad, 0x47)
```
The `0x7f` format is just an alternate Python literal form for an `int` value. There is only one kind of `int`, just multiple ways of expressing the same value as a literal. See the Python docs on numeric literals for a full breakdown of those options:
<https://docs.python.org/3/reference/lexical_analysis.html#numeric-literals>
Note that you can also use:
```
font.color.rgb = RGBColor.from_string("70ad47")
```
If that is more convenient for you.
<https://python-docx.readthedocs.io/en/latest/api/shared.html#docx.shared.RGBColor.from_string>
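To make the equivalence concrete — and to address the `ValueError` from the question — note that `int()` parses a hex string only when you pass `base=16`:

```python
# A hex literal and its decimal form are the same int value:
code = (0x70, 0xad, 0x47)
assert code == (112, 173, 71)

# int() needs an explicit base to parse hex strings (omitting it is
# what raised the ValueError in the question):
assert int("0x70", 16) == 0x70
assert int("70", 16) == 112

# Formatting back to hex text, e.g. for RGBColor.from_string():
assert "%02x%02x%02x" % code == "70ad47"
```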
|
sed - how to use \b word boundary correctly?
I want to replace quotes that are at the start of a word.
For example:
```
$ echo "a 'line' a single ' after a 'keyword' with a few space's for a program"\
| sed "s/\b'/X/g"
a 'lineX a single ' after a 'keywordX with a few spaceXs for a program
```
but as you can see they are getting replaced at the **end** of words not the beginning.
How to replace ones that start a word?
|
This replaces `'` at word beginnings:
```
$ echo "a 'line' a single ' after a 'keyword' with a few space's for a program"| sed "s/'\b/X/g"
a Xline' a single ' after a Xkeyword' with a few spaceXs for a program
```
For humans, not computers, there is one peculiarity here: the computer counts `space's` as two words and replaces the `'` because it is at the beginning of the second word.
### Discussion
`\b` marks a word boundary, either beginning or end. Now consider `\b'`. This matches a word boundary followed by a `'`. Since `'` is *not* a word character, this means that the end of word must precede the `'` to match. To use `\b` to match at beginnings of words, reverse the order: `'\b`. Again, since `'` is not a word character, this will only match if a word follows the `'`.
Some seds, like GNU sed, support `\<` to match the beginning of words. This doesn't help in your case because `'` is already not a word character and that forces the issue.
### What to do about "space's"
If you don't want the `'` replaced in `space's` because the `'` appears at the end of a word, you can reverse the logic by using `\B`:
```
$ echo "a 'line' a single ' after a 'keyword' with a few space's for a program"| sed "s/\B'/X/g"
a Xline' a single X after a Xkeyword' with a few space's for a program
```
`\B` matches at *not a word boundary*. Thus `\B'` matches at `'` (space-quote) because neither space nor quote are word characters. It does not match at `space'` because `e'` is a word boundary: `e` is a word character and `'` isn't.
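The same word-boundary semantics carry over to other regex engines. Here is the behavior reproduced with Python's `re` module (an illustration of the rules above, not sed itself):

```python
import re

s = "a 'line' a single ' after a 'keyword' with a few space's for a program"

# \b' : a quote preceded by a word character, i.e. at the END of a word
assert re.sub(r"\b'", "X", s) == \
    "a 'lineX a single ' after a 'keywordX with a few spaceXs for a program"

# '\b : a quote followed by a word character, i.e. at the START of a word
assert re.sub(r"'\b", "X", s) == \
    "a Xline' a single ' after a Xkeyword' with a few spaceXs for a program"

# \B' : a quote NOT preceded by the end of a word
assert re.sub(r"\B'", "X", s) == \
    "a Xline' a single X after a Xkeyword' with a few space's for a program"
```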
|
How can I compile & run assembly in Ubuntu 18.04?
So recently I've wanted to learn assembly, so I learnt a bit. I put this into nano and saved it as playground.asm. Now I'm wondering, how do I compile and run it? I've already searched everywhere and still cant find it. I'm really curious and there's no point learning a language if you can't even use it.
|
In all currently supported versions of Ubuntu open the terminal and type:
```
sudo apt install as31 nasm
```
[**as31**](https://www.pjrc.com/tech/8051/tools/as31-doc.html): Intel 8031/8051 assembler
This is a fast, simple, easy to use Intel 8031/8051 assembler.
[**nasm**](https://www.nasm.us/doc/nasmdoci.html): General-purpose x86 assembler
Netwide Assembler. NASM will currently output flat-form binary files, a.out, COFF and ELF Unix object files, and Microsoft 16-bit DOS and Win32 object files.
This is the code for an assembly language program that prints Hello world.
```
section .text
global _start
_start:
mov edx,len
mov ecx,msg
mov ebx,1
mov eax,4
int 0x80
mov eax,1
int 0x80
section .data
msg db 'Hello world',0xa
len equ $ - msg
```
If you are using NASM in Ubuntu 18.04, the commands to compile and run an .asm file named hello.asm are:
```
nasm -f elf64 hello.asm # assemble the program
ld -s -o hello hello.o # link the object file nasm produced into an executable file
./hello # hello is an executable file
```
|
Electron Login / Register etc
I am pretty new to making Electron apps. After asking a question on Stack Overflow *([Properly Using Electron](https://stackoverflow.com/questions/47557586/properly-using-electron "Properly Using Electron"))* I followed the advice I was given and created a desktop application in Electron.
*I have another app created in NodeJS that handles my database
methods.*
I am not sure where to start with this problem, as there are hundreds of options online, and I am not sure which ones suit my needs.
**Should I:**
- *Have a client application made in Electron and a server made in NodeJS that handles the database*
*OR*
- *Have a client application and connect to my database from Electron?*
If I were to do it *all* in a client app, I think that would cause major security issues, so I am unsure what to do. The problem is as simple as creating a login/system application, but I have no idea how to incorporate it into Electron.
Where do I start? How do I approach this?
|
If all you need is a login and your server already manages the database I would keep this logic on the server. This is also something the answer to your previous question states.
- Your client sends a POST request containing the username, password over HTTPS (SSL encryption), this can happen in the renderer process.
- Your Server checks if the password and user are valid.
- Your server returns a token to the client if the authentication was valid.
- Either the main process (Electron-NodeJs) persists the token (File)
- Or the Render process (Electron-Chromium) persists the token (Web Storage)
This is the most simple example, which neglects advanced security concerns. If you want to make it more secure, you should definitely look for general advice about web security, like crypto-pbkdf2, and about how to handle the tokens. But this is not Electron specific.
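On the security point: the server should never store raw passwords; it stores a salted PBKDF2 hash and compares in constant time. A minimal stdlib-only Python sketch of that server-side step (the parameter choices here are illustrative, not a recommendation):

```python
import hashlib
import hmac
import os

def hash_password(password, salt=None, iterations=100_000):
    """Return (salt, digest) using PBKDF2-HMAC-SHA256."""
    salt = salt if salt is not None else os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return salt, digest

def verify_password(password, salt, expected, iterations=100_000):
    """Constant-time comparison against the stored digest."""
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return hmac.compare_digest(digest, expected)
```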
|
What is an EJB, and what does it do?
Been trying to learn what `EJB` beans are, what it means that their instances are managed in a pool, and so on. Really can't get a good grip on them.
Can you explain me what they really are (practically for a Java Programmer)? What do they do? Which are their purposes? *Why really use them? (Why not just stick to `POJO`?)* Perhaps an example application?
Please refer only to updated information, that is `EJB 3.1`. Dated information about EJB can be misleading.
For EJB learning beginners please note:
EJBs are based on **distributed objects**; this refers to software pieces running on multiple machines (virtual or physical) linked by a **network**.
|
>
> *Why really use them? (Why not just stick to POJO?)*
>
>
>
If you need a component that accesses the database, or accesses other connectivity/directory resources, or is accessed from multiple clients, or is intended as a SOA service, EJBs today are usually "bigger, stronger, faster (or at least more scalable) and simpler" than POJOs. They are most valuable for servicing large numbers of users over the web or corporate network and somewhat less valuable for small apps within a department.
1. Reusing/Sharing Logic across multiple applications/clients with Loose Coupling.
EJBs can be packaged in their own jars, deployed, and invoked from lots of places.
They are common components. True, POJOs can be (carefully!) designed as libraries and
packaged as jars. But EJBs support both local and remote network access - including
via local java interface, transparent RMI, JMS async message and SOAP/REST web service,
saving from cut-and-paste jar dependencies with multiple (inconsistent?) deployments.
They are very useful for creating SOA services. When used for local access they are
POJOs (with free container services added). The act of designing a separate EJB layer
promotes extra care for maximizing encapsulation, loose coupling and cohesion, and
promotes a clean interface (Facade), shielding callers from complex processing & data
models.
2. Scalability and Reliability
If you apply a massive number of requests from various calling messages/processes
/threads, they are distributed across the available EJB instances in the pool first
and then queued. This means that if the number of incoming requests per second is
greater than the server can handle, we degrade gracefully - there are always some
requests being processed efficiently and the excess requests are made to wait. We
don't reach server "meltdown" - where ALL requests experience terrible response time
simultaneously, plus the server tries to access more resources than the hardware & OS
can handle & hence crashes. EJBs can be deployed on separate tier that can be
clustered - this gives reliability via failover from one server to another, plus
hardware can be added to scale linearly.
3. Concurrency Management.
The container ensures that EJB instances are automatically accessed safely (serially)
by multiple clients. The container manages the EJB pool, the thread pool, the
invocation queue, and automatically carries out method-level write locking (default) or
read locking (through @Lock(READ)). This protects data from corruption through
concurrent write-write clashes, and helps data to be read consistently by preventing
read-write clashes.
This is mainly useful for @Singleton session beans, where the bean is manipulating and
sharing common state across client callers. This can be easily over-ridden to manually
configure or programmatically control advanced scenarios for concurrent code execution
and data access.
4. Automated transaction handling.
Do nothing at all and all your EJB methods are run
in a JTA transaction. If you access a database using JPA or JDBC it is automatically
enlisted in the transaction. Same for JMS and JCA invocations. Specify
@TransactionAttribute(someTransactionMode) before a method to specify if/how that
particular method partakes in the JTA transaction, overriding default mode: "Required".
5. Very simple resource/dependency access via injection.
The container will lookup resources and set resource references as instance fields in
the EJB: such as JNDI stored JDBC connections, JMS connections/topics/queues, other
EJBs, JTA Transactions, JPA entity manager persistence contexts, JPA entity manager
factory persistence units, and JCA adaptor resources.
e.g. to setup a reference to another EJB & a JTA Transaction & a JPA entity Manager &
a JMS connection factory and queue:
```
@Stateless
public class MyAccountsBean {
@EJB SomeOtherBeanClass someOtherBean;
@Resource UserTransaction jtaTx;
@PersistenceContext(unitName="AccountsPU") EntityManager em;
@Resource QueueConnectionFactory accountsJMSfactory;
@Resource Queue accountPaymentDestinationQueue;
public List<Account> processAccounts(DepartmentId id) {
// Use all of above instance variables with no additional setup.
// They automatically partake in a (server coordinated) JTA transaction
}
}
```
A Servlet can call this bean locally, by simply declaring an instance variable:
```
@EJB MyAccountsBean accountsBean;
```
and then just calling its' methods as desired.
6. Smart interaction with JPA.
By default, the EntityManager injected as above uses a transaction-scoped persistence
context. This is perfect for stateless session beans. When a (stateless) EJB method
is called, a new persistence context is created within the new transaction, all
entity object instances retrieved/written to the DB are visible only within that
method call and are isolated from other methods. But if other stateless EJBs are
called by the method, the container propagates and shares the same PC to them, so same
entities are automatically shared in a consistent way through the PC in the same
transaction.
If a @Stateful session bean is declared, equal smart affinity with JPA is achieved by
declaring the entityManager to be an extended scope one:
@PersistenceContext(unitName="AccountsPU", type=EXTENDED). This exists for the life of
the bean session, across multiple bean calls and transactions, caching in-memory copies
of DB entities previously retrieved/written so they do not need to be re-retrieved.
7. Life-Cycle Management.
The lifecycle of EJBs is container managed. As required, it creates EJB instances,
clears and initializes stateful session bean state, passivates & activates, and calls
lifecycle callback methods, so EJB code can participate in lifecycle operations to
acquire and release resources, or perform other initialization and shutdown behavior.
It also captures all exceptions, logs them, rolls back transactions as required, and
throws new EJB exceptions or @ApplicationExceptions as required.
8. Security Management.
Role-based access control to EJBs can be configured via a simple annotation or XML
setting. The server automatically passes the authenticated user details along with each
call as security context (the calling principal and role). It ensures that all RBAC
rules are automatically enforced so that methods cannot be illegally called by the
wrong role. It allows EJBs to easily access user/role details for extra programmatic
checking. It allows plugging in extra security processing (or even IAM tools) to the
container in a standard way.
9. Standardization & Portability.
EJB implementations conform to Java EE standards and coding conventions, promoting quality
and ease of understanding and maintenance. It also promotes portability of code to new
vendor app servers, by ensuring they all support the same standard features and
behaviors, and by discouraging developers from accidentally adopting proprietary
non-portable vendor features.
10. The Real Kicker: Simplicity. All of the above can be done with
very streamlined code - either using default settings for EJBs
within Java EE 6, or adding a few annotations. Coding
enterprise/industrial strength features in your own POJOs would
be *way* more voluminous, complex and error-prone. Once you
start coding with EJBs, they are rather easy to develop and give a great set of "free ride" benefits.
In the original EJB spec of 10 years ago, EJBs were a major productivity hassle. They were bloated, needed lots of code and configuration artifacts and provided about 2/3 of the benefits above. Most web projects did not actually use them. But this has changed significantly with 10 years of tweaking, overhauling, functional enhancement and development stream-lining. In Java EE 6 they provide maximum level industrial strength and simplicity of use.
What's not to like?? :-) :-)
|
Having 2 Initialize Paragraphs in 1 COBOL program
I am designing a COBOL program that does 2 things.
1 reads specific rows from a table, and inserts a new row based on the original.
2 reads different rows from the same table, and updates those rows.
Is it bad standards to have 2 initialization procedures? I need to declare 2 SQL cursors, one for each task. So I was thinking:
`1 initialize cursor, 2 process, 3 initialize cursor, 4 process.`
But that seems like bad logic. something is telling me it should be more like:
`1 initialize cursor, 2 process` But I'm having trouble thinking it out because to me, they are 2 separate tasks.
I tried to think of a way that my SQL query could do the work, and then I could just do some validation. But I'm not sure the query would be any more efficient than having a second read of the table.
[Here are my structure ideas](https://docs.google.com/drawings/d/1q4X8fu4HDLsZZ_emCFFNjl6TL-3W5wNw1-VjzxU_Y1c/edit)
I've added a 3rd chart that I believe follows the answer given to me on this question. Does the 3rd chart seem to be the most logical?
|
You should separate the two tasks. You should have two cursors and initialize each separately since one is used for read and the other is for read/write.
You'd go like:
```
Move 0 to rc.
Perform Task1
If rc > 0
logic to display message and end run.
end
perform Task2
If rc > 0
logic to display message and end run.
end
perform end of job messages and close open files if any.
Stop Run.
.....................
Task1 is:
Open cursor 1 ...
...
Close cursor 1.
.....................
Task2 is:
Open cursor 2 ...
...
Close cursor 2.
```
|
Promise Js: Wait till promise fulfilled
I'm working with [Bluebird](http://bluebirdjs.com/) for promises in Node.js, and was wondering how I can make a function return when the promise is fulfilled (completed). The behavior I want is:
```
function getItem(){
functionReturningPromise.then(function(result){
//do some operation on result
return result;
});
}
```
However, the above implementation won't return anything since the promise isn't completed at the time of execution. What would be the best workaround for this?
The purpose of getItem is to modify whatever functionReturningPromise returns, and then return it
|
You can't, `var value = someAsyncFunction()` is not possible (until we have generators, at least). Promises are viral, once you use a Promise in a function, its caller must use `then` and the caller of the caller too, etc up to your main program. So, in the function that calls `getItem` you have to change this
```
item = getItem();
do something with item
```
into this
```
getItem().then(function(item) {
do something with item
})
```
and `getItem` should return a Promise as well
```
function getItem(){
return functionReturningPromise().then(function(result){
//do some operation on result
return result;
});
}
```
(Theoretically it can also be a callback, but I prefer to always return promises).
|