Q:
Loop within a loop indexes
I'm building an HTML table, using JSON to populate it.
Here's the JSON:
{
"grid": {
"name": "JsonGrid",
"columns": [
{
"name": "ID",
"width": "100px"
},
{
"name": "Name",
"width": "100%"
},
{
"name": "Departments",
"width": "250px"
},
{
"name": "Locations",
"width": "250px"
}
]
},
"data": [
{
"id": 1,
"name": "Company A",
"departments": [
"Software",
"Recruitment",
"Consulting"
],
"locations": [
"Sheffield",
"Rotherham",
"London",
"New York"
]
},
{
"id": 2,
"name": "Company B",
"departments": "",
"locations": [
"Hillsborough",
"City Centre",
"Crystal Peaks"
]
},
{
"id": 3,
"name": "Company C",
"departments": [
"Medical",
"Family",
"Criminal"
],
"locations": [
"Sheffield",
"Rotherham"
]
}
]
}
and the function that loops through the data object:
function addDataFromJson(json)
{
var data = json.data;
for(var i=0;i<data.length;i++) // for each row
{
var columns = '';
for(var b=0;b<Object.keys(data[i]).length;b++) // for each column
{
var content = data[i][b];
console.log(content);
columns += '<td>'+content+'</td>';
}
var row = columns;
$( '<tr>' + row + '</tr>' ).appendTo('.uiGridContent tbody').hide().fadeIn();
}
}
So I loop through to get the rows, look inside to find which columns I need, try to put the data into each column, and then append the row. The columns and rows are perfect, but the data never gets pulled out!
It looks like I'm getting confused as I step into the second loop that pulls the actual data for each column. What should the content variable contain, taking into consideration that sometimes the content may be an array instead of just a string?
A:
The problem is the use of b: it is just the index of the key, not the actual property key, so you need
var content = data[i][Object.keys(data[i])[b]];
like
var json = {
"grid": {
"name": "JsonGrid",
"columns": [{
"name": "ID",
"width": "100px"
}, {
"name": "Name",
"width": "100%"
}, {
"name": "Departments",
"width": "250px"
}, {
"name": "Locations",
"width": "250px"
}]
},
"data": [{
"id": 1,
"name": "Company A",
"departments": [
"Software",
"Recruitment",
"Consulting"
],
"locations": [
"Sheffield",
"Rotherham",
"London",
"New York"
]
}, {
"id": 2,
"name": "Company B",
"departments": "",
"locations": [
"Hillsborough",
"City Centre",
"Crystal Peaks"
]
}, {
"id": 3,
"name": "Company C",
"departments": [
"Medical",
"Family",
"Criminal"
],
"locations": [
"Sheffield",
"Rotherham"
]
}]
};
function addDataFromJson(json) {
var data = json.data;
for (var i = 0; i < data.length; i++) // for each row
{
var columns = '',
keys = Object.keys(data[i]);
for (var b = 0; b < keys.length; b++) // for each column
{
var content = data[i][keys[b]];
console.log(content);
columns += '<td>' + content + '</td>';
}
var row = columns;
$('<tr>' + row + '</tr>').appendTo('.uiGridContent tbody').hide().fadeIn();
}
}
addDataFromJson(json)
<script src="https://ajax.googleapis.com/ajax/libs/jquery/1.11.1/jquery.min.js"></script>
<table class="uiGridContent">
<tbody></tbody>
</table>
A simpler way would be
var json = {
"grid": {
"name": "JsonGrid",
"columns": [{
"name": "ID",
"width": "100px"
}, {
"name": "Name",
"width": "100%"
}, {
"name": "Departments",
"width": "250px"
}, {
"name": "Locations",
"width": "250px"
}]
},
"data": [{
"id": 1,
"name": "Company A",
"departments": [
"Software",
"Recruitment",
"Consulting"
],
"locations": [
"Sheffield",
"Rotherham",
"London",
"New York"
]
}, {
"id": 2,
"name": "Company B",
"departments": "",
"locations": [
"Hillsborough",
"City Centre",
"Crystal Peaks"
]
}, {
"id": 3,
"name": "Company C",
"departments": [
"Medical",
"Family",
"Criminal"
],
"locations": [
"Sheffield",
"Rotherham"
]
}]
};
function addDataFromJson(json) {
var data = json.data;
var rows = $.map(data, function(record) {
var cols = $.map(record, function(value, key) {
return '<td>' + value + '</td>';
});
// join with '', otherwise array-to-string conversion inserts commas between the cells
return '<tr>' + cols.join('') + '</tr>';
});
$(rows.join('')).hide().appendTo('.uiGridContent tbody').fadeIn();
}
addDataFromJson(json)
<script src="https://ajax.googleapis.com/ajax/libs/jquery/1.11.1/jquery.min.js"></script>
<table class="uiGridContent">
<tbody></tbody>
</table>
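One thing neither snippet addresses: array-valued fields like departments and locations are stringified with JavaScript's default comma join, and the empty departments string of Company B produces an empty cell. Below is a hedged sketch of a cell formatter; the helper names formatCell and rowHtml are invented here, not part of the original answer.

```javascript
// Turn one record value into cell text: arrays are joined explicitly,
// empty strings get a visible placeholder, everything else is stringified.
function formatCell(value) {
  if (Array.isArray(value)) {
    return value.join(', ');
  }
  if (value === '') {
    return '-';
  }
  return String(value);
}

// Build one <tr> from a record object, mirroring the $.map version
// above but without jQuery.
function rowHtml(record) {
  var cells = Object.keys(record).map(function (key) {
    return '<td>' + formatCell(record[key]) + '</td>';
  });
  return '<tr>' + cells.join('') + '</tr>';
}
```

Note that for untrusted data the values should also be HTML-escaped before being concatenated into markup.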
| {
"pile_set_name": "StackExchange"
} |
Q:
Windows API MoveFile() not working for running exe
Here is a simple C program for illustration:
#include <windows.h>
#include <stdio.h>
int main(int argc, char *argv[])
{
//MoveFile(argv[0], "dst.exe");
getchar();
return 0;
}
Make a test.exe from the code above.
Now execute test.exe; it hangs on getchar(), and while it hangs I can cut and paste this exe freely.
But when I uncomment that MoveFile(argv[0], "dst.exe");, hoping it would move itself to dst.exe, it turns out I get a dst.exe while test.exe is still there, just like CopyFile() would do.
From what I know, in Windows, while an exe is running I can rename it and move it, but not delete it, and MoveFile() behaves as a combination of CopyFile() and DeleteFile().
And also see this from Microsoft doc MoveFileEx.
BOOL WINAPI MoveFileEx(
_In_ LPCTSTR lpExistingFileName,
_In_opt_ LPCTSTR lpNewFileName,
_In_ DWORD dwFlags
);
dwFlags has an option MOVEFILE_COPY_ALLOWED
If the file is to be moved to a different volume, the function simulates the move by using the CopyFile and DeleteFile functions.
If the file is successfully copied to a different volume and the original file is unable to be deleted, the function succeeds leaving the source file intact.
This value cannot be used with MOVEFILE_DELAY_UNTIL_REBOOT.
This further confirmed my guess. I also tested MoveFileEx() with the MOVEFILE_REPLACE_EXISTING option, recompiled the program, and ran it; now MoveFileEx() simply fails, and not even a dst.exe is generated.
But I can definitely cut and paste that exe while it is running, so MoveFileEx() should be able to do the same. Why doesn't it?
If it can't, what should I do to make it behave just like cut and paste?
A:
If the target destination is on the same volume, MoveFile just updates the corresponding directory entries. The file's MFT record is not changed, its index remains the same, and its contents are not touched. Because the file is not affected at all, you can move it within the same directory (i.e. rename it) or within the same volume even if the file is in use (note: this is true for files being executed; in general, it is true only if the file was opened with FILE_SHARE_DELETE).
If the target directory is on another volume, the system needs to copy the file (this will fail if the file is open in exclusive mode) and delete it on the old volume (this will fail unconditionally if the file is in use).
Cut & paste works within the same volume and does not across volumes. The reason is that file clipboard operations use a different technique than text ones.
When you select text and press Ctrl-X, the text string is moved to an allocated global memory block and the block is passed to Windows. The program does not own it anymore. The text is physically in the Windows Clipboard and you can paste it as many times as you wish.
When you press Ctrl-X on a file, it is not moved to the Clipboard. The Clipboard receives a file descriptor, which contains info about the file and the requested operation (this technique is known as delayed rendering). When you press Ctrl-V, the Clipboard simply asks the object owner (i.e. Windows Explorer) to perform the requested operation. And Explorer performs it using the very same MoveFile.
Note that you can paste a cut file only once, because the first Ctrl-V invalidates the descriptor in the Clipboard. A copied file can be pasted several times.
Q:
How to implement a Comprehensive Search Solution in SharePoint?
I am looking to implement a search solution for one of my clients. Any comments will be highly appreciated. We currently have SharePoint 2007 Enterprise.
We have the following requirements:
Users can search for SharePoint Items (Documents, Tasks, and Events etc.)
Users can search for SalesForce Items (Chatter Discussions, Documents etc.)
Users can search for Exchange Mailboxes Items (Emails, attachments etc.)
Users can search for Network Shared Folder items
All search results which have synonyms/related terms (defined by the business somewhere) will also be displayed. For example, if a user searches for "TFS", then search results containing both "TFS" and "Team Foundation Server" should be displayed.
Search results from all different sources must be combined and presented on a single page but should have indicator of some sort (e.g. icon) to highlight their source of origin. For example, SharePoint result can have a different icon than a search result from SalesForce. This will help user to quickly glance over the results and get the idea of which result come from where.
Users must not be forced to learn different ways of searching for different sources; they just use the normal search techniques they are used to from normal SharePoint search.
Search Results should respect the per item security from all sources. For example, if someone is not allowed to access "Z:\HR Documents", he should not see results from there.
Searching must be very fast or reasonably fast and should not heavily impact the content sources (Exchange Mailboxes, Shared Folder,Sales Force etc.)
Search infrastructure must be extendable so that we can add more content sources afterward, if required.
Basically, we want to provide a single-point search experience to make sure no one misses information which may be sitting somewhere in the organization but which, due to the inability to search for it, nobody knows about.
A:
1.Users can search for SharePoint Items (Documents, Tasks, and Events etc.)
That's standard functionality.
2.Users can search for SalesForce Items (Chatter Discussions, Documents etc.)
You could go with federated search, which would display results the same way as e.g. Google presents its ads on the right side of the result page. A second way would be to integrate the data via BCS as external lists. I think AvePoint even has a SalesForce-SharePoint integration tool, but I have never used it.
3.Users can search for Exchange Mailboxes Items (Emails, attachments etc.)
You can set up a content source for your Exchange server. However, I think that only public folders are searchable; personal mailboxes are not.
4.Users can search for Network Shared Folder items
Yes, that's no problem either. Just set up a content source for that share and give proper permissions to the crawl account.
5.All search results, which have the synonyms/related terms (defined by the business some where), will also be displayed. for example, if user searches for "TFS" then search results containing both "TFS" and "Team Foundation Server" should be displayed.
That's functionality that the managed metadata service in SharePoint 2010 gives you. But it will only be available for SharePoint content.
6.Search results from all different sources must be combined and presented on a single page but should have indicator of some sort (e.g. icon) to highlight their source of origin. For example, SharePoint result can have a different icon than a search result from SalesForce. This will help user to quickly glance over the results and get the idea of which result come from where.
That works out of the box. Instead of an icon, the path to the document is shown. If you want, you can customize the XML/XSLT of the search results web part.
7.Users must not be forced to learn different ways of searching for different sources; they just use normal search techniques as they are used to of doing in normal SharePoint search.
Yes, all your content is searchable from a single SharePoint search center.
8.Search Results should respect the per item security from all sources. For example, if someone is not allowed to access "Z:\HR Documents", he should not see results from there.
SharePoint search results are security-trimmed. However, that topic is pretty complex and I suggest you start by reading the following articles:
http://msdn.microsoft.com/en-us/library/aa981236(v=office.12).aspx
http://msdn.microsoft.com/en-us/library/aa981314(v=office.12).aspx
9.Searching must be very fast or reasonably fast and should not heavily impact the content sources (Exchange Mailboxes, Shared Folder,Sales Force etc.)
Searching, by which I guess you mean querying, is very fast. A well-performing SharePoint search should give you results in less than one second. However, crawling and indexing may take a lot of time; depending on the amount of content it can take hours or even several days. You can influence the impact on content sources with "crawler impact rules" (http://technet.microsoft.com/en-us/library/cc262861(office.12).aspx).
10.Search infrastructure must be extendable so that we can add more content sources afterward, if required
You can always add new content sources; that's not the point when talking about extensibility. What's important is that MOSS 2007 can only have one index server. That single point of failure is "fixed" in SharePoint 2010.
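The synonym requirement in point 5, if it also has to cover the non-SharePoint sources, amounts to query expansion: each term is rewritten into an OR group of its business-defined equivalents before the query is dispatched to the engines. A rough sketch follows; the synonym map and function name are invented for illustration.

```javascript
// Business-maintained synonym map, keyed by lowercased term.
var synonyms = {
  'tfs': ['TFS', 'Team Foundation Server']  // example entry, not real data
};

// Rewrite each query term with synonyms into an OR group;
// terms without synonyms pass through unchanged.
function expandQuery(query) {
  return query.split(/\s+/).map(function (term) {
    var group = synonyms[term.toLowerCase()];
    return group ? '(' + group.join(' OR ') + ')' : term;
  }).join(' ');
}

console.log(expandQuery('tfs migration'));
// "(TFS OR Team Foundation Server) migration"
```

The expanded string can then be handed to each federated source using its own query syntax.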
Q:
Styling a GWT Button with CSS
I'm trying to style a GWT Button widget with CSS; while I can apply a CSS gradient, the "embossed" style of the widget is still there. How can I remove that?
My GWT application is inheriting from this theme:
<inherits name='com.google.gwt.user.theme.clean.Clean'/>
I also have tried:
<inherits name='com.google.gwt.user.theme.standard.Standard'/>
I also have tried adding:
.gwt-Button {}
In the main css file that is loaded on the page however the embossed style is still there.
Anyone knows how to remove the embossed style?
A:
Option 1: Not using themes at all
If you don't need the default styles, and generally want to give the page your own style, then it's easiest to completely omit the <inherits> statement for the themes. GWT works very well without using a theme.
(Note: If you still need the (image) resources from the theme, but without injecting the CSS stylesheet into the page, you can inherit com.google.gwt.user.theme.clean.CleanResources instead of com.google.gwt.user.theme.clean.Clean. This way they will still be copied automatically to your war folder.)
Option 2: Selectively turning off theming for buttons
If however you generally want to use a theme, and only need to give some buttons your own style, then an easy solution is calling
button.removeStyleName("gwt-Button");
Note: Instead of removeStyleName() you could also use setStyleName("my-Button").
For convenience (and for easier usage in UiBinder) you may want to derive your own class like
package org.mypackage;
public class MyStyleButton extends Button {
public MyStyleButton(final String html) {
super(html);
removeStyleName("gwt-Button");
}
/* ... override all the other Button constructors similarly ... */
}
Which can then be imported and used in UiBinder like
<ui:UiBinder xmlns:ui='urn:ui:com.google.gwt.uibinder'
xmlns:g='urn:import:com.google.gwt.user.client.ui'
xmlns:my='urn:import:org.mypackage'>
...
<my:MyStyleButton>...
Option 3: Actually changing the gwt-Button class attributes
If you want to keep the themed look of the buttons, and only change a few style attributes, then it's also possible to overwrite certain attributes in the predefined style classes with !important (as suggested by @bouhmid_tun). But be careful: The list of attributes might change in the future. Here are all the predefined style classes for .gwt-Button of GWT 2.4.0 for your convenience:
.gwt-Button {
margin: 0;
padding: 5px 7px;
text-decoration: none;
cursor: pointer;
cursor: hand;
font-size:small;
background: url("images/hborder.png") repeat-x 0px -2077px;
border:1px solid #bbb;
border-bottom: 1px solid #a0a0a0;
border-radius: 3px;
-moz-border-radius: 3px;
}
.gwt-Button:active {
border: 1px inset #ccc;
}
.gwt-Button:hover {
border-color: #939393;
}
.gwt-Button[disabled] {
cursor: default;
color: #888;
}
.gwt-Button[disabled]:hover {
border: 1px outset #ccc;
}
A:
To avoid GWT's default style, I just use the !important flag in my CSS file. You'll find an example of doing so here: Remove absolute position generated by GWT. Good luck!
Q:
Where do uninitialized Global Variables go after initializing?
I ran into a little problem while learning. I know that uninitialized global variables in C are placed in the .bss section of the executable ELF file. But what happens to them when I start to use them?
I.e., do they get a place on the heap or somewhere else?
I tried to find out by printing the address of the (still uninitialized) global variable with
printf("%x",&glbl);
which always returns the same value, 0x80495bc... Why?
A:
When the OS loads your program, it allocates enough storage from your program's address space to store everything in the .bss section and zeros all of that memory. When you assign or read from or take the address of the variable, you're manipulating that memory that was allocated to provide storage for the .bss section.
Q:
How to update profile table on user registration
I have two tables: one for User and one for Profile.
I'm trying to figure out how to update the profile table upon a user registering.
Here's my User and Profile Model classes:
<?php
use Illuminate\Auth\UserInterface;
use Illuminate\Auth\Reminders\RemindableInterface;
class User extends Eloquent implements UserInterface, RemindableInterface{
protected $fillable = array('fname','lname','email','password','create_at','updated_at');
/**
* The database table used by the model.
*
* @var string
*/
protected $table = 'users';
/**
* The attributes excluded from the model's JSON form.
*
* @var array
*/
protected $hidden = array('password');
/**
* Get the unique identifier for the user.
*
* @return mixed
*/
public function getAuthIdentifier()
{
return $this->getKey();
}
/**
* Get the password for the user.
*
* @return string
*/
public function getAuthPassword()
{
return $this->password;
}
/**
* Get the e-mail address where password reminders are sent.
*
* @return string
*/
public function getReminderEmail()
{
return $this->email;
}
/**
* @method to insert values into database.
*/
public static function create_user($data = array())
{
return User::create($data);
}
/**
*@method to validate a user in the database
*/
public static function validate_creds($data)
{
return Auth::attempt($data);
}
public function profile()
{
return $this->hasOne('Profile');
}
public function post()
{
return $this->hasMany('Post');
}
}
And my profile model:
<?php
class Profile extends Eloquent{
public static function createNewProfile($data)
{
return Profile::create($data);
}
public static function editProfile()
{
//
}
public function user()
{
return $this->belongsTo('User');
}
}
A:
You can try events:
http://laravel.com/docs/events
Make an on-user-create event and build a listener that does the things you need.
I'm not sure, but you could also try putting the code that updates the Profile in the User constructor.
Q:
sort events based on event date custom field
I'm trying to list events (custom posttype 'kurs') by event date, which are stored as custom fields ('dato').
My loop so far looks like this:
<ul>
<?php $loop = new WP_Query( array( 'post_type' => 'kurs' ) ); ?>
<?php while ( $loop->have_posts() ) : $loop->the_post(); ?>
<li><?php the_title( '<a href="' . get_permalink() . '" title="' . the_title_attribute( 'echo=0' ) . '" rel="bookmark">', '</a>' ); ?></li>
<?php endwhile; ?>
</ul>
What I need is a list of post(event)-titles from today forward in the future...
A:
You need to use the meta_key to sort your events in your query (note that the WP_Query parameter is orderby, not order_by). Like so:
<?php $loop = new WP_Query( array( 'post_type' => 'kurs', 'meta_key' => 'dato', 'orderby' => 'meta_value', 'order' => 'ASC' ) ); ?>
Q:
How can I set the shipcarrier to More on a sales order using SuiteScript in a RESTlet?
SuiteScript v1, but I'll switch to SS v2 if it's the only way to make it work.
I've tried:
salesOrder.setFieldText('shipcarrier', 'More');
salesOrder.setFieldValue('shipcarrier', 'noups');
salesOrder.setFieldValue('shipcarrier', 'nonups');
But UPS is always selected once the record is saved.
A:
shipcarrier is a bit of an odd thing.
I'm not sure it is actually sticky - in some contexts it appears to be and in some it doesn't.
It appears to be pointless to set unless you are also setting shipmethod.
salesOrder.setFieldValue('shipcarrier', ffShipCarrier); //'ups' || 'nonups'
salesOrder.setFieldValue('shipmethod', ffShipMethod)
PS from cja: My conclusion/solution: setting shipcarrier does nothing unless the record is in dynamic mode and shipmethod is set at the same time. If both those conditions are met, then the shipcarrier value will be updated.
NetSuite support have warned me against using this solution:
"With regards to your concern, I am able to set the Ship carrier field on the Sales Order record in the client script(nlapiSetFieldValue('shipcarrier', 'ups');) however I was unable to set the value of the field in the server side script. Upon further investigation, the field (ship carrier) isn't exposed in the Record browser hence the field isn't officially exposed for scripting needs. Please refer to the following Suiteanswer article for your reference.
"I am really glad that the solution worked for you perfectly. In order to explain further, I would say it is not advisable to write scripts using unexposed fields in the record browser. It may change in the future without any prior notification and can cause problem and NetSuite will not hold any kind of responsibility for the same.
"User groups contains simple solution to complex tips and tricks provided by the experienced customers. On the other hand, NetSuite Support are stickly adhering to the official documentation/processes to assist any of its customer. The solutions provided in the User groups are totally upto the consent of the customers and can be implemented at their own risk if not confirmed in the official documentation or NetSuite Support."
Q:
iptables-restore sometimes fails on reboot
Today and yesterday, my server automatically rebooted and failed to bring up the network device during boot. If I reboot the machine again, it starts up fine. I've also not encountered any issues with this during the past 2 months.
The only error logs I can find relating to this are:
Aug 23 06:37:14 server systemd[1]: Started ifup for ens16.
Aug 23 06:37:14 server systemd[1]: [email protected]: Main process exited, code=exited, status=1/FAILURE
and
Aug 23 06:37:14 server sh[281]: iptables-restore: line 10 failed
Aug 23 06:37:14 server systemd[1]: [email protected]: Main process exited, code=exited, status=1/FAILURE
Aug 23 06:37:14 server sh[281]: run-parts: /etc/network/if-pre-up.d/iptables exited with return code 1
Aug 23 06:37:14 server sh[281]: ifup: failed to bring up ens16
/etc/network/if-pre-up.d/iptables contains:
#!/bin/sh
/sbin/iptables-restore < /etc/iptables.up.rules
/etc/iptables.up.rules contains:
*filter
:INPUT ACCEPT [0:0]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [896:90530]
-A INPUT -i lo -j ACCEPT
-A INPUT -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A INPUT -p tcp --dport 80 -j ACCEPT
-A INPUT -p tcp --dport 443 -j ACCEPT
-A INPUT -j REJECT --reject-with icmp-port-unreachable
COMMIT
What could possibly be going wrong with this in an intermittent fashion, and how can I make sure it doesn't happen again?
A:
I still suspect that two executions of /etc/network/if-pre-up.d/iptables run at the same time during the boot process. Because of systemd's normal behavior of starting things concurrently unless told otherwise, I believe the boot process triggers one script process for the lo interface and another for the ens16 interface. That would result in concurrent execution of iptables-restore, which may cause errors such as iptables-restore: line 10 failed. I am unable to supply evidence, though.
I am used to managing CentOS and Red Hat systems. Once, one such server failed to initialize the iptables service on boot because systemd was starting ip6tables concurrently. That specific error is documented here: https://bugzilla.redhat.com/show_bug.cgi?id=1477413
I suggest handling concurrency in your script, for example by using flock:
#!/bin/sh
/usr/bin/flock /run/.iptables-restore /sbin/iptables-restore < /etc/iptables.up.rules
Alternatively, you could check the actual value of ${IFACE} variable before restoring iptables rules (reference: man 5 interfaces):
#!/bin/sh
# POSIX test(1) uses a single "=" for string comparison, not "=="
if [ "${IFACE}" = "ens16" ]; then
/sbin/iptables-restore < /etc/iptables.up.rules
fi
Additionally, if you just want to load iptables rules at boot time, I suggest you use iptables-persistent instead:
# apt-get install iptables-persistent netfilter-persistent
# mv -v /etc/iptables.up.rules /etc/iptables/rules.v4
# systemctl enable netfilter-persistent.service
# rm -v /etc/network/if-pre-up.d/iptables
Q:
Opengl show vertex in 2d or 3d
I have a question.
Is it possible to have two vertex shaders, one that shows vertices in 2D and a second in 3D?
At the moment my program has a 2D view.
The only difference between a vertex in 2D and in 3D is that vec2(x, y) becomes vec3(x, y, z) in 3D. So I am thinking about sending a vec3 to the GPU and setting gl_Position.z = 0;
My biggest problem is that I chose magic numbers for glm::lookAt and glm::perspective; if I see something, it means it works. So when I have both a 2D and a 3D view, everything looks bad.
I can move the camera, so faking 3D by only changing the camera position won't work.
A:
No, this is not possible. But you can always render the same geometry multiple times, with a different glViewport and projection matrix applied each time. This is the canonical way to render the classical "top, front, side, perspective" views of 3D editors.
My biggest problem is that I chose magic numbers for glm::lookAt and glm::perspective.
Well, then I'd tackle that problem and, instead of magic numbers, use actual math to create the desired effect.
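To make the "same geometry, different projection per view" idea concrete without magic numbers, each projection can be built from named parameters. The sketch below is in JavaScript rather than the question's C++/GLM, but the matrices use the same standard formulas as glm::perspective and glOrtho, in column-major order; the render-loop part is schematic, with drawScene and gl as placeholders.

```javascript
// Column-major 4x4 perspective matrix (the glm::perspective formula):
// fovy in radians, aspect = width / height.
function perspective(fovy, aspect, near, far) {
  var f = 1 / Math.tan(fovy / 2);
  return [
    f / aspect, 0, 0, 0,
    0, f, 0, 0,
    0, 0, (far + near) / (near - far), -1,
    0, 0, (2 * far * near) / (near - far), 0
  ];
}

// Orthographic matrix for the 2D view: same geometry, flat projection.
function ortho(left, right, bottom, top, near, far) {
  return [
    2 / (right - left), 0, 0, 0,
    0, 2 / (top - bottom), 0, 0,
    0, 0, -2 / (far - near), 0,
    -(right + left) / (right - left),
    -(top + bottom) / (top - bottom),
    -(far + near) / (far - near),
    1
  ];
}

// Schematic render loop, one draw per view:
//   gl.viewport(0, 0, w / 2, h);      drawScene(ortho(-1, 1, -1, 1, -1, 1));
//   gl.viewport(w / 2, 0, w / 2, h);  drawScene(perspective(Math.PI / 4, w / (2 * h), 0.1, 100));
```

The same vertex shader can serve both views: only the projection uniform changes between draws.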
Q:
How to use angular-scenario with requirejs
Angular-scenario works well when your Angular app is ready on DOM ready. That is not the case when using RequireJS or another AMD lib. How can I add AMD support to angular-scenario?
A:
What you have to do is override the default frame-load behaviour and emit a new event when your AMD app is ready.
The first thing is to add the following lines in your Angular application, along with the angular.bootstrap call. This data is required by angular-scenario.
angular.bootstrap(document, ['app']);
var html = document.getElementsByTagName('html')[0];
html.setAttribute('ng-app', 'app');
html.dataset.ngApp = 'app';
if (top !== window) {
top.postMessage({
type: 'loadamd'
}, '*');
}
Next, in your e2e runner, you have to include those lines. If it's an external script, it must be loaded after angular-scenario and it must be parsed before the DOM is ready :
/**
* Hack to support amd with angular-scenario
*/
(function() {
var setUpAndRun = angular.scenario.setUpAndRun;
angular.scenario.setUpAndRun = function(config) {
if (config.useamd) {
amdSupport();
}
return setUpAndRun.apply(this, arguments);
};
function amdSupport() {
var getFrame_ = angular.scenario.Application.prototype.getFrame_;
/**
* This function should be added to angular-scenario to support amd. It overrides the load behavior to wait from
* the inner amd frame to be ready.
*/
angular.scenario.Application.prototype.getFrame_ = function() {
var frame = getFrame_.apply(this, arguments);
var load = frame.load;
frame.load = function(fn) {
if (typeof fn === 'function') {
angular.element(window).bind('message', function(e) {
if (e.data && e.source === frame.prop('contentWindow') && e.data.type === 'loadamd') {
fn.call(frame, e);
}
});
return this;
}
return load.apply(this, arguments);
}
return frame;
};
}
})();
Finally, to enable the amd configuration, you must add the attribute ng-useamd to angular-scenario's script tag.
<script type="text/javascript" src="lib/angular-1.0.3/angular-scenario.js" ng-autotest ng-useamd></script>
You're now ready to go
tested with angular-scenario v1.0.3
A:
The above answer partly failed in my scenario (Angular 1.1.4, fresh Karma).
In the debug view it ran fine; in the normal overview page it failed. I noticed an extra nested iframe.
I changed the code to post the message to the parent iframe of the tested application.
if (top !== window) {
window.parent.postMessage({
type: 'loadamd'
}, '*');
}
Q:
How can I make a Kivy (KVlang) cascaded spinner set respond to my first on_release?
My cascaded spinners do what I want on the SECOND release of the master spinner.
How can I make this work on the FIRST release?
Thanks in advance. Here is my KVlang and Python code.
KVlang:
# 0009_spinnerCascade.kv
<MyLayout@BoxLayout>:
orientation: 'vertical'
Spinner:
id: s1
text: 'colors'
values: 'colors numbers days'.split()
size_hint_y: None
height: '48dp'
my_string_property: 'hello from s1'
on_release: s2.my_key = self.text
Label:
text: 'Mid label'
Spinner:
id: s2
text: 'choose one'
my_dict: {'colors': 'red green blue'.split(), 'numbers': '1 2 3'.split(), 'days':'mon tue wed'.split() }
my_key: 'numbers'
values: self.my_dict[self.my_key]
size_hint_y: None
height: '48dp'
MyLayout
Python
''' 0009_spinnerCascade.py
'''
import kivy
kivy.require('1.8.0')
from kivy.app import App
from kivy.lang import Builder
from kivy.config import Config
Config.set('graphics', 'width', '430')
Config.set('graphics', 'height', '430')
class MyApp(App):
def build(self):
self.root = Builder.load_file('0009_spinnerCascade.kv')
return self.root
if __name__ == '__main__':
MyApp().run()
A:
The release event is triggered when your spinner opens and shows its options, before you select any. At that moment the value of s2.my_key is set to s1.text. After you select an option, the release event is not triggered and s2.my_key remains unchanged. Then, after you display the list of s1 options with a second click, a second release event is triggered and the value of s2.my_key is finally set to the right value. Observe this behaviour with:
<MyLayout@BoxLayout>:
# ...
Spinner:
id: s1
# ...
on_release: s2.my_key = self.text ; print("spinner opened")
# ...
You actually need to observe text property to detect changes:
<MyLayout@BoxLayout>:
# ...
Spinner:
id: s1
# ...
on_text: s2.my_key = self.text ; print("option selected")
# ...
Q:
Video loop on startup
I've set up my Raspberry Pi to start in desktop mode and launch a video in loop mode.
The script, called from /etc/rc.local, is the following:
#!/bin/sh
SERVICE='omxplayer'
while true; do
if ps ax | grep -v grep | grep $SERVICE > /dev/null
then
:
else
omxplayer -o hdmi /home/pi/movie.mov &
fi
done
The problem is that on each video loop an instance of dbus-daemon is also started:
dbus-daemon --fork --print-address 5 --print-pid 6 --session
This way the Pi could crash.
What can I do to resolve this problem? Why does this behaviour occur?
A:
First, check how you call your loop script from /etc/rc.local: for example, if your script is called omx-loop, it should be invoked as omx-loop &, so that /etc/rc.local can continue execution to the end.
Second, if it still does not work, try this Python solution (not mine):
#!/usr/bin/env python
import subprocess
while True :
subprocess.call( [ 'omxplayer', '/home/pi/movie.mov' ] )
This definitely should not start any dbus-daemon; just don't forget to make the script executable with chmod +x omx-loop.py.
Or there's a more advanced "seamless" player in Python available here.
Q:
skel.js Framework / HTML5UP Template CSS issues
I'm new to the skel.js framework and I'm having some issues.
I downloaded a template from HTML5UP.net (the Zerofour theme) and have modified it all for my site; however, the CSS doesn't show up properly on my no-sidebar and left-sidebar pages.
I have an include with the following links (identical to their templates):
<link href="http://fonts.googleapis.com/css?family=Open+Sans:400,300,700,800" rel="stylesheet" type="text/css" />
<script src="http://www.**********.com/js/jquery.min.js"></script>
<script src="http://www.**********.com/js/jquery.dropotron.js"></script>
<script src="http://www.**********.com/js/config.js"></script>
<script src="http://www.**********.com/js/skel.min.js"></script>
<script src="http://www.**********.com/js/skel-panels.min.js"></script>
<noscript>
<link rel="stylesheet" href="http://www.*********.com/css/skel-noscript.css" />
<link rel="stylesheet" href="http://www.*********.com/css/style.css" />
<link rel="stylesheet" href="http://www.*********.com/css/style-desktop.css" />
</noscript>
If I bypass the noscript, the webpage appears as it should but loses ALL mobile and flowable capabilities.
Any ideas would be much appreciated!
Addon: If I move my pages to the root directory and update the links then the CSS works but in their child directories only the basic layout works.
<?php
define('currentDIR', '../');
include (currentDIR.'includes/_functions.php');
?>
<html>
<head>
<?php include(currentDIR.'includes/_metalinks.php'); ?> //This is where the code above is stored
</head>
A:
I also came across this issue while using an HTML5up template with Django. If you are using a different directory configuration for your static files, you must specify this in either the init.js or config.js file (the exact file depends on which template you are using and how recently it has been updated). For me, I had to modify the following skelJS prefix in the init.js file:
var helios_settings = {
    // other settings here
    skelJS: {
        prefix: '/static/css/style'
        // other settings here
    }
};
Basically, this directory prefix needs to match wherever you have your static files.
Q:
restrict file upload selection to specific types
Is there any way to restrict the selection of file types via the <input type="file" /> element?
For instance, if I wanted only images types to be uploaded, I would restrict the possible selections to (image/jpg,image/gif,image/png), and the selection dialog would grey out files of other mime types.
P.S. I know that I can do this after the fact with the File API by scanning the .type attributes; I'm really trying to restrict this beforehand. I also know I can do this via Flash, but I do NOT want to use Flash for this.
A:
There is an HTML attribute for this specific purpose called accept, but it has little support across browsers. Because of this, server-side validation is recommended instead.
<input type="file" name="pic" id="pic" accept="image/gif, image/jpeg" />
If you don't have access to the backend have a look at a flash based solution like SWFUpload.
See more on this here: File input 'accept' attribute - is it useful?
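Since server-side validation is recommended regardless of browser support, here is a minimal sketch of such a check in Python (illustrative only; the function name and whitelist are my assumptions, and a real upload handler should also inspect the file contents, not just the name):

```python
import mimetypes

ALLOWED_TYPES = {'image/jpeg', 'image/gif', 'image/png'}

def is_allowed_image(filename):
    """Guess the MIME type from the file name and check it against a whitelist."""
    guessed, _encoding = mimetypes.guess_type(filename)
    return guessed in ALLOWED_TYPES
```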
A:
It's better to let the user select any file and then check its type; this approach is better supported by browsers:
var
file = (el[0].files ? el[0].files[0] : el[0].value || undefined),
    supportedFormats = ['image/jpeg','image/gif','image/png']; // note: browsers report image/jpeg, not image/jpg
if (file && file.type) {
if (0 > supportedFormats.indexOf(file.type)) {
alert('unsupported format');
}
else {
upload();
}
}
You can also check for file size using file.size property.
Q:
Wordpress unable to establish connection to mySQL database
I'm trying to set up a new wordpress site on my local server.
I've added the correct credentials in the config.php but when I try to begin the WP installation process I get the error message:
Warning: mysql_connect(): Access denied for user 'x'@'localhost' (using password: YES) in /home/httpd/html/xxx/public_html/wordpress/wp-includes/wp-db.php on line 1518
In wp-db.php line 1518 the code is:
$this->dbh = mysql_connect( $this->dbhost, $this->dbuser, $this->dbpassword, $new_link, $client_flags );
I'm not sure what this error is telling me to fix. I tried writing the credentials directly where it says dbhost, dbuser, etc., and tried writing the credentials without the password as some solutions suggested, but it still does not work.
I also granted all privileges to the user on the database, but I still get the same error message. If anyone can help, that'd be great.
A:
In my humble opinion the problem is not in WP files, but it is in your DB server settings.
Create a file called test_connection.php and save it in the same folder as your WP installation (where you have your index).
In that file write the following code, save it, and then launch it to test your connection to the database.
In the code, just replace USERNAME, PASSWORD and DB_NAME with your own values.
UPDATE
<?php
header('Content-type: text/html; charset=utf-8');
$con = mysqli_connect("localhost","USERNAME","PASSWORD","DB_NAME");
// Check connection
if (mysqli_connect_errno())
{
echo "Failed to connect to MySQL: " . mysqli_connect_error();
} else {
    echo "Ok, you're connected";
}
?>
This is the first test I suggest.
Q:
POST request with Ruby and Calabash
I use Calabash to test an iOS app. During the test I need to create a POST request to change some values and then verify that the changes are reflected in the UI.
Request looks like:
wwww.testserver.com/userAddMoney?user_id=1&amount=999
To authorize on the server I need to pass a special parameter in the request header:
Headers: X-Testing-Auth-Secret: kI7wGju76kjhJHGklk76
A:
require 'net/http'
uri = URI.parse('http://www.testserver.com/userAddMoney?user_id=1&amount=999')
http = Net::HTTP.new(uri.host,uri.port)
## http.use_ssl = true # for https; also require 'net/https'
req = Net::HTTP::Post.new(uri.request_uri) # request_uri keeps the query string
req['X-Testing-Auth-Secret'] = 'kI7wGju76kjhJHGklk76'
res = http.request(req)
Docs here: Net::HTTP::Post Net::HTTPSession
Q:
Extract a motion filter using camera movement in number of pixel and angle, in VHDL or Verilog
I have read some papers on the topic and searched for MATLAB algorithms. There is one called 'fspecial' in MATLAB that can return a motion filter when the motion is given as a number of pixels and an angle. I have read how 'fspecial' works; it uses many MATLAB built-in functions like max, mod, cos, sin, sign, fix, meshgrid, sqrt, abs and find. Is it difficult to implement this MATLAB code in VHDL to obtain a motion-blur filter?
The following code is a portion of 'fspecial' in MATLAB:
case 'motion' % Motion filter uses bilinear interpolation
len = max(1,p2);
half = (len-1)/2;% rotate half length around center
phi = mod(p3,180)/180*pi;
cosphi = cos(phi);
sinphi = sin(phi);
xsign = sign(cosphi);
linewdt = 1;
% define mesh for the half matrix, eps takes care of the right size
% for 0 & 90 rotation
sx = fix(half*cosphi + linewdt*xsign - len*eps);
sy = fix(half*sinphi + linewdt - len*eps);
[x y] = meshgrid(0:xsign:sx, 0:sy);
% define shortest distance from a pixel to the rotated line
dist2line = (y*cosphi-x*sinphi);% distance perpendicular to the line
rad = sqrt(x.^2 + y.^2);
% find points beyond the line's end-point but within the line width
lastpix = find((rad >= half)&(abs(dist2line)<=linewdt));
%distance to the line's end-point parallel to the line
x2lastpix = half - abs((x(lastpix) + dist2line(lastpix)*sinphi)/cosphi);
dist2line(lastpix) = sqrt(dist2line(lastpix).^2 + x2lastpix.^2);
dist2line = linewdt + eps - abs(dist2line);
dist2line(dist2line<0) = 0;% zero out anything beyond line width
% unfold half-matrix to the full size
h = rot90(dist2line,2);
h(end+(1:end)-1,end+(1:end)-1) = dist2line;
h = h./(sum(h(:)) + eps*len*len);
if cosphi>0,
h = flipud(h);
end
I wonder whether this is the correct way to implement a motion filter in VHDL. Maybe there is a more effective way to approach this task? Or is there an already implemented motion filter in VHDL or Verilog?
Thank you.
A:
Implementing that function in Verilog or VHDL is far from trivial.
Also, from what I gather, this has to be calculated for each pixel, so it will need a substantial amount of pipelining and/or parallel processing.
Then you still need a way of getting the pixels into and out of your FPGA.
All in all, a project which, I won't say can't be done, but which I very much suspect is beyond your abilities.
Q:
issue with inserting data to table view in ios
I have developed an application where I fetch data from Core Data and display it in a table view. Fetching the data works fine, but when inserting it into the table view only the last entry is displayed. Below is the code I have written; please review it and help me find a solution.
Deviceinfo *app = [arr objectAtIndex:indexPath.row];
switch(indexPath.row)
{
case 0:
NSLog(@"%@",app.platform);
cell.textLabel.text = @"platform";
cell.detailTextLabel.text =[app platform];
case 1:
NSLog(@"%@",app.model);
cell.textLabel.text = @"model";
cell.detailTextLabel.text = [app model];
case 2:
NSLog(@"%@",app.mac_address);
cell.textLabel.text = @"mac address";
cell.detailTextLabel.text = [app mac_address];
}
return cell;
I implemented this code in the cellForRowAtIndexPath delegate method. I am getting only the mac address in the table view. Hoping for a better solution.
Thanks
A:
Insert break statements after each case, otherwise the case will just fall through to the next one:
switch(x) {
case 1:
break;
default:
break;
}
Based on your additional comments, try something like the following: Each device will have its own table view section and each section will have 3 table view rows, one for each piece of information.
- (NSInteger)numberOfSectionsInTableView:(UITableView *)tableView
{
    return devices.count;
}
- (NSInteger)tableView:(UITableView *)tableView numberOfRowsInSection:(NSInteger)section
{
return 3;
}
- (UITableViewCell *)tableView:(UITableView *)tableView cellForRowAtIndexPath:(NSIndexPath *)indexPath
{
static NSString *CellIdentifier = @"Cell";
    UITableViewCell *cell = [tableView dequeueReusableCellWithIdentifier:CellIdentifier];
    if (!cell) {
        cell = [[UITableViewCell alloc] initWithStyle:UITableViewCellStyleSubtitle reuseIdentifier:CellIdentifier];
}
Device *device = [devices objectAtIndex:indexPath.section];
switch (indexPath.row) {
case 0:
cell.textLabel.text = @"platform";
cell.detailTextLabel.text = [device platform];
break;
case 1:
cell.textLabel.text = @"model";
            cell.detailTextLabel.text = [device model];
break;
case 2:
cell.textLabel.text = @"mac address";
cell.detailTextLabel.text = [device mac_address];
break;
}
return cell;
}
Q:
Laravel 5.4 Higher Order Messaging
Question: Why does the old approach return the right value (see the result of the old approach below) while the new HOM approach returns the whole collection?
Role Model
class Role extends Model {
public function getName() {
return $this->name;
}
}
Controller
$roles = Role::all(); // get all roles to test
// old approach
$roles->each(function(Role $i) {
var_dump($i->getName());
});
// new approach (HOM)
var_dump($roles->each->getName());
If I implement the new approach with higher order messaging it returns the whole Collection; if I use the old one I get the right result.
Result old approach
string(11) "Application"
string(6) "System"
string(7) "Network"
string(7) "Manager"
Result new approach
object(Illuminate\Database\Eloquent\Collection)#196 (1) {
["items":protected]=>
array(4) {
[0]=>
object(App\Modules\Role\Role)#197 (24) {
["connection":protected]=>
NULL
["table":protected]=>
NULL
["primaryKey":protected]=>
string(2) "id"
["keyType":protected]=>
string(3) "int"
["incrementing"]=>
bool(true)
["with":protected]=>
array(0) {
}
["perPage":protected]=>
int(15)
["exists"]=>
bool(true)
["wasRecentlyCreated"]=>
bool(false)
["attributes":protected]=>
array(5) {
["id"]=>
int(1)
["name"]=>
string(11) "Application"
["description"]=>
string(91) "Voluptatem similique pariatur iure. Et quaerat possimus laborum non sint aspernatur fugiat."
["created_at"]=>
string(19) "2017-03-03 11:56:09"
["updated_at"]=>
string(19) "2017-03-03 11:56:09"
}
["original":protected]=>
array(5) {
["id"]=>
int(1)
["name"]=>
string(11) "Application"
["description"]=>
string(91) "Voluptatem similique pariatur iure. Et quaerat possimus laborum non sint aspernatur fugiat."
["created_at"]=>
string(19) "2017-03-03 11:56:09"
["updated_at"]=>
string(19) "2017-03-03 11:56:09"
}
["casts":protected]=>
array(0) {
}
["dates":protected]=>
array(0) {
}
["dateFormat":protected]=>
NULL
["appends":protected]=>
array(0) {
}
["events":protected]=>
array(0) {
}
["observables":protected]=>
array(0) {
}
["relations":protected]=>
array(0) {
}
["touches":protected]=>
array(0) {
}
["timestamps"]=>
bool(true)
["hidden":protected]=>
array(0) {
}
["visible":protected]=>
array(0) {
}
["fillable":protected]=>
array(0) {
}
["guarded":protected]=>
array(1) {
[0]=>
string(1) "*"
}
}
}
A:
each just iterates over the collection for side effects; it returns the collection itself (for chaining), not the values your callback produces, which is why you see the whole Collection dumped. If you want the value of getName() for each item then you could use map, e.g.
dump($roles->map->getName());
Hope this helps!
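The difference is the usual one between iterating for side effects and mapping to collect values; a quick Python analogue (illustrative, not Laravel code):

```python
roles = [{'name': 'Application'}, {'name': 'System'}]

# "each": iterate purely for side effects; nothing is collected
for role in roles:
    print(role['name'])

# "map": collect the return values into a new sequence
names = [role['name'] for role in roles]
```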
Q:
Field value must be unique unless it is NULL
I'm using SQL Server 2005.
I have a field that must either contain a unique value or a NULL value. I think I should be enforcing this with either a CHECK CONSTRAINT or a TRIGGER for INSERT, UPDATE.
Is there an advantage to using a constraint here over a trigger (or vice-versa)? What might such a constraint/trigger look like?
Or is there another, more appropriate option that I haven't considered?
A:
I create a view with an index that ignores the NULLs through the WHERE clause; i.e., if you insert NULL into the table the view doesn't care, but if you insert a non-NULL value the view will enforce the uniqueness constraint.
create view dbo.UniqueAssetTag with schemabinding
as
select asset_tag
from dbo.equipment
where asset_tag is not null
GO
create unique clustered index ix_UniqueAssetTag
on UniqueAssetTag(asset_tag)
GO
So now my equipment table has an asset_tag column that allows multiple nulls but only unique non null values.
Note:
If using mssql 2000, you'll need to "SET ARITHABORT ON" right before any insert, update or delete is performed on the table. Pretty sure this is not required on mssql 2005 and up.
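The indexed view is the usual SQL Server 2005 workaround; SQL Server 2008 and later support filtered indexes (CREATE UNIQUE INDEX ... WHERE col IS NOT NULL), and several other engines offer equivalent partial indexes. The idea can be demonstrated with SQLite's partial indexes (a sketch for illustration, not SQL Server syntax):

```python
import sqlite3

con = sqlite3.connect(':memory:')
con.execute('CREATE TABLE equipment (id INTEGER PRIMARY KEY, asset_tag TEXT)')
# unique only over the rows where asset_tag is not NULL
con.execute('CREATE UNIQUE INDEX ux_tag ON equipment(asset_tag) '
            'WHERE asset_tag IS NOT NULL')

con.execute('INSERT INTO equipment(asset_tag) VALUES (NULL)')
con.execute('INSERT INTO equipment(asset_tag) VALUES (NULL)')      # multiple NULLs: fine
con.execute("INSERT INTO equipment(asset_tag) VALUES ('A1')")

try:
    con.execute("INSERT INTO equipment(asset_tag) VALUES ('A1')")  # duplicate non-NULL
    duplicate_rejected = False
except sqlite3.IntegrityError:
    duplicate_rejected = True
```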
A:
Here is an alternative way to do it with a constraint. In order to enforce this constraint you'll need a function that counts the number of occurrences of the field value. In your constraint, simply make sure this maximum is 1.
Constraint:
field is null or dbo.fn_count_maximum_of_field(field) < 2
EDIT I can't remember right now -- and can't check it either -- whether the constraint check is done before the insert/update or after. I think after with the insert/update being rolled back on failure. If it turns out I'm wrong, the 2 above should be a 1.
Table function returns an int and uses the following select to derive it
declare @retVal int
select @retVal = max(occurrences)
from (
select field, count(*) as occurrences
from dbo.tbl
where field = @field
group by field
) tmp
This should be reasonably fast if your column has a (non-unique) index on it.
A:
You can accomplish this by creating a computed column and putting the unique index on that column.
ALTER TABLE MYTABLE
ADD COL2 AS (CASE WHEN COL1 IS NULL THEN CAST(ID AS NVARCHAR(255)) ELSE COL1 END)
CREATE UNIQUE INDEX UQ_COL2 ON MYTABLE (COL2)
This is assuming that ID is the PK of your table and COL1 is the "unique or null" column.
The computed column (COL2) will use the PK's value if your "unique" column is null.
There is still the possibility of collisions between the ID column and COL1 in the following example:
ID COL1 COL2
1 [NULL] 1
2 1 1
To get around this I usually create another computed column which stores whether the value in COL2 comes from the ID column or the COL1 column:
ALTER TABLE MYTABLE
ADD COL3 AS (CASE WHEN COL1 IS NULL THEN 1 ELSE 0 END)
The index should be changed to:
CREATE UNIQUE INDEX UQ_COL2 ON MYTABLE (COL2, COL3)
Now the index is on both computed columns COL2 and COL3 so there is no issue:
ID COL1 COL2 COL3
1 [NULL] 1 1
2 1 1 0
Q:
Convert nested Json to flat Json with parentId to every node
The following JSON structure is the result of a Neo4j APOC query. I want to convert this nested JSON to the flat JSON structure shown in the second listing.
[
{
"child1": [
{
"_type": "EntityChild1",
"name": "Test222",
"_id": 2
}
],
"child2": [
{
"_type": "EntityChild2",
"name": "Test333",
"_id": 3,
"child2_child1": [
{
"_type": "EntityChild2_1",
"name": "Test444",
"_id": 6,
"child2_child1_child1": [
{
"_type": "EntityChild2_1_1",
"name": "Test555",
"_id": 7
}
]
}
]
}
],
"_type": "EntityParent",
"name": "Test000",
"_id": 1,
"child3": [
{
"_type": "EntityChild3",
"name": "Test111",
"_id": 4
}
],
"child4": [
{
"_type": "EntityChild4",
"name": "Test666",
"_id": 5
}
]
}
]
This is the result I am looking for. I also want the parentid appended to every node; if a node has no parent, its parentid should be -1.
[
{
"_type": "EntityParent",
"name": "Test000",
"_id": 1,
"parentid": -1
},
{
"_type": "EntityChild1",
"name": "Test222",
"_id": 2,
"parentid": 1
},
{
"_type": "EntityChild2",
"name": "Test333",
"_id": 3,
"parentid": 1
},
{
"_type": "EntityChild2_1",
"name": "Test444",
"_id": 6,
"parentid": 3
},
{
"_type": "EntityChild2_1_1",
"name": "Test555",
"_id": 7,
"parentid": 6
},
{
"_type": "EntityChild3",
"name": "Test111 ",
"_id": 4,
"parentid": 1
},
{
"_type": "EntityChild4",
"name": "Test666",
"_id": 5,
"parentid": 1
}
]
Let me know if any further information is required.
A:
You could take a recursive approach using a function that takes an array and the parent id of the current level.
If a property name starts with child, the function calls itself with the current node's _id and pushes all resulting items into the result set.
function getFlat(array, parentid) {
return array.reduce((r, o) => {
var temp = {};
r.push(temp);
Object.entries(o).forEach(([k, v]) => {
if (k.startsWith('child')) {
r.push(...getFlat(v, o._id));
} else {
temp[k] = v;
}
});
temp.parentid = parentid;
return r;
}, []);
}
var data = [{ child1: [{ _type: "EntityChild1", name: "Test222", _id: 2 }], child2: [{ _type: "EntityChild2", name: "Test333", _id: 3, child2_child1: [{ _type: "EntityChild2_1", name: "Test444", _id: 6, child2_child1_child1: [{ _type: "EntityChild2_1_1", name: "Test555", _id: 7 }] }] }], _type: "EntityParent", name: "Test000", _id: 1, child3: [{ _type: "EntityChild3", name: "Test111", _id: 4 }], child4: [{ _type: "EntityChild4", name: "Test666", _id: 5 }] }],
flat = getFlat(data, -1);
console.log(flat);
.as-console-wrapper { max-height: 100% !important; top: 0; }
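If you are post-processing the Neo4j result in Python instead, the same reduction can be sketched like this (illustrative; key names follow the question's JSON, and the helper name is my own):

```python
def flatten(nodes, parent_id=-1):
    """Flatten nested nodes, tagging each with the _id of its parent (-1 for roots)."""
    flat = []
    for node in nodes:
        # keep the plain fields, drop the child* arrays
        item = {k: v for k, v in node.items() if not k.startswith('child')}
        item['parentid'] = parent_id
        flat.append(item)
        # recurse into every child* array with this node's _id as the parent
        for key, children in node.items():
            if key.startswith('child'):
                flat.extend(flatten(children, node['_id']))
    return flat
```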
Q:
matplotlib graph shows only points instead of line
I have got this code to display a year versus number-of-sunspots graph:
import matplotlib.pyplot as plt
import pylab
import numpy as np
x = [0,0.1,0.2,0.3,0.4,0.5,0.6,0.7,0.8,0.9,
1,1.1,1.2,1.3,1.4,1.5,1.6,1.7,1.8,1.9,
2,2.1,2.2,2.3,2.4,2.5,2.6,2.7,2.8,2.9,
3,3.1,3.2,3.3,3.4,3.5,3.6,3.7,3.8,3.9,
4,4.1,4.2,4.3,4.4,4.5,4.6,4.7,4.8,4.9,
5,5.1,5.2,5.3,5.4,5.5,5.6,5.7,5.8,5.9,
]
y = [190.70,118.90,98.30,45,20.01,6.60,54.20,200.70, 269.30,261.70,
225.10,159,76.40,53.40,39.9,15,22,66.8,132.90,150,
149.40,148,94.40,97.60,54.10,49.20,22.50,18.40,39.30, 131,
220.10, 218.90,198.90, 162.40,91, 60.50, 20.60, 14.80, 33.9,123,
211,191.80, 203.30, 133, 76.10, 44.9, 25.10, 11.6, 28.9, 88.30,
136.30, 173.90, 170.40, 163.60, 99.30 , 65.30, 45.80, 24.7, 12.6,4.20 ]
labels = ['1950', '1960', '1970', '1980', '1990','2000', '2010']
plt.xticks(x, labels, rotation='horizontal')
pylab.ylim(0, 250)
pylab.xlim(0, 6)
plt.yticks(np.linspace(0,250,6,endpoint=True))
plt.xticks(np.linspace(0,6,7,endpoint=True))
pylab.xlabel('YEAR')
pylab.ylabel('No. of sunspots')
pylab.title('SUNSPOT VS YEAR GRAPH')
plt.plot(x, y, 'ro')
plt.show()
The output of the code is this:
What I want to display is a line graph, not simply the points.
A:
You missed the linestyle code in your .plot call:
plt.plot(x, y, 'ro-')
The - will create a solid line like so:
If you want other styles, they are available in the documentation.
Q:
preventDefault doesn't take effect in firefox
I have an input named shop_cat_edit and the code below. In Firefox this code never works, even though IE is fine with it. What am I doing wrong?
$('[name=shop_cat_edit]').on('click',function(e){
$('#shop_cat_selector_form').on('submit', function(){e.preventDefault();});
});
A:
The variable e is not declared in the function where you use it:
function(){ <-- no function param for event is set here
e.preventDefault();
}
It should look like this
function(e){e.preventDefault();}
Q:
Bitwise switch most significant and least significant bytes
Say I have a buffer which stores 2 bytes:
char *buf=new char[4];
// 00000010 00000000 (.. other stuff ..)
What I want to do is switch the least and the most significant bytes, and store that value in a variable. I try to do so as follows:
short len=buf[1];
len <<= 8;
len |= buf[0];
// Result, as expected: 00000000 00000010
It works fine, unless the most significant byte (buf[0]) is >= 128, which makes the OR operator (|) fill half of the short with 1's. Example:
Original: 10000110 00000000
Should be: 00000000 10000110
But is: 11111111 10000110
Thanks (oh, I'm reading the bytes from a file with file.read(...,4); - don't even know anymore if this is relevant)
A:
Your implementation obviously uses two's complement representation of signed numbers. Use unsigned values instead:
unsigned char * buf = new unsigned char[2];
...
unsigned short len=buf[1];
len <<= 8;
len |= buf[0];
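The sign-extension pitfall is easy to reproduce in a few lines of Python, which makes a handy sanity check (illustrative; struct's 'b' format plays the role of the signed char):

```python
import struct

raw = bytes([0x86, 0x00])                    # buf[0] = 0x86, buf[1] = 0x00

# what a signed char holds for the byte 0x86
(signed_b0,) = struct.unpack('b', raw[:1])   # -122

bad = (raw[1] << 8) | (signed_b0 & 0xFFFF)   # sign bits flood the high byte
good = (raw[1] << 8) | raw[0]                # unsigned bytes give the expected value
```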
Q:
How should I handle state in this Activity?
When I press the back button, while my app's running, the onPause method gets called, but the onSaveInstanceState isn't. Then, if I enter my app again, I receive a call to onCreate, which will re-run my initialization logic.
I would like to save the state so I can restore it when the user enters the app again; however, I'm not sure how to go about this. What use will onCreate's Bundle be to me, since in onPause I can only save data through SharedPreferences? Or should I not rely on the Bundle at all and just reinitialize from SharedPreferences?
How could the reinitialization be handled best?
EDIT: I would prefer if the changes could persist only until the app is killed. Only for the application's lifetime.
A:
From the onSaveInstanceState API Page
If called, this method will occur before onStop(). There are no guarantees about whether it will occur before or after onPause().
As long as your activity is being killed and you have overridden onSaveInstanceState correctly, it should be fired.
As for your recent edit (from the same API page)
Called to retrieve per-instance state from an activity before being killed so that the state can be restored in onCreate(Bundle) or onRestoreInstanceState(Bundle) (the Bundle populated by this method will be passed to both).
onSaveInstanceState is best used for persisting state when an activity is killed, not so much just for storing state for the application's lifetime.
Q:
Can I scroll the windows in emacs synchronously?
I mean: if I type C-x 2, then for comparison I want the other window to scroll synchronously when I scroll the temporary window.
A:
Try Follow Mode:
M-x follow-mode
Follow mode is a minor mode that makes two windows, both showing the same buffer, scroll as a single tall “virtual window”.
Note that Follow mode can only synchronize windows that show the same file. For scrolling windows with different files synchronously, here's an answer for it:
Follow mode can only synchronize windows showing the same file. Is it possible to scroll windows with different files synchronously? – 9307420654
There is a minor mode to apply all scrolling commands to all visible
windows in the current frame. Check scroll-all-mode for that. Or else,
if you want to scroll together only 2 windows and only when you use a
specific keystroke, you can do something like a function that scrolls
the current window (scroll-up or scroll-down) and then the “other
window” (i.e., the next one in the list of windows), with
scroll-other-window (with argument '-, i.e., the symbol “minus”, for
scrolling down). Then you can assign this to a keystroke, e.g., C-M-up
/ down.
scroll-all-mode:
Use M-x scroll-all-mode to scroll multiple buffers together.
Very useful for visually comparing two files which are hard to diff
because of lots of trivial changes amongst the changes you are looking
for.
A:
Another solution might be the scroll-all-mode:
M-x scroll-all-mode
Q:
Add another item/row on form, insert into DB
These two fields are inserted into a database. However, I want to give the user the ability to "Add another item". They should be able to, ideally, add as many items as they like. When they submit the form, the data would be inserted into a mysql table.
How can I go about doing this? Creating 10 extra columns in my database to accommodate extra items sounds neither realistic nor ideal.
Thanks for the help!
Here is a snippet of my code, where I insert my data into the DB:
if ($stmt = $mysqli->prepare("INSERT items (number, description) VALUES (?, ?)"))
{
$stmt->bind_param("ss", $number, $description);
$stmt->execute();
$stmt->close();
}
A:
Just use:
name="item[]"
for the item number field and
name="description[]"
for the item description field, then iterate through both of them on the server side:
<input type="text" name="item[]" /><input type="text" name="description[]" />
On the front end, just create a script that duplicates that node as many times as needed.
EDIT
Based in the code you showed it should be something similar to the below code :
if (isset($_POST['item'])) {
    for ($i = 0; $i < count($_POST['item']); $i++) {
        $number = $_POST['item'][$i];
        $description = $_POST['description'][$i];
        if ($stmt = $mysqli->prepare("INSERT items (number, description) VALUES (?, ?)"))
        {
            $stmt->bind_param("ss", $number, $description);
            $stmt->execute();
            $stmt->close();
        }
    }
}
In your javascript do like below:
function addRow(){
    document.getElementById('container').innerHTML += '<input type="text" name="item[]" /><input type="text" name="description[]" />';
}
HTML Code:
<form name='myform' >
<div id="container">
<input type="text" name="item[]" /><input type="text" name="description[]" />
</div>
<p onclick="addRow() ;" >Add another item</p>
</form>
I hope this helps.
UPDATE
Try this one :
if (isset($_POST['item'])) {
    if ($stmt = $mysqli->prepare("INSERT items (number, description) VALUES (?, ?)"))
    {
        for ($i = 0; $i < count($_POST['item']); $i++) {
            $number = $_POST['item'][$i];
            $description = $_POST['description'][$i];
            $stmt->bind_param("ss", $number, $description);
            $stmt->execute();
        }
        $stmt->close();
    }
}
Q:
How to use searchDelegate with sqlite flutter
I have created an app that stores some notes in an SQLite database. I implemented all CRUD operations and they work well, but when I try to perform a search inside my database with SearchDelegate I get some problems. I'll show you my code before I add the search with SearchDelegate.
main.dart:
import 'package:flutter/material.dart';
import 'package:my_info/ui/home.dart';
void main() => runApp(
MaterialApp(
home: Home(),
title: 'My notes',
)
);
databaseHelper:
import 'package:path/path.dart';
import 'package:sqflite/sqflite.dart';
import 'package:my_info/model/notes.dart';
class DatabaseHelper{
String tableName = 'info' ;
String columnId = 'id' ;
String columnTitle = 'title' ;
String columnSubTitle = 'subTitle' ;
String columnContent = 'conent' ;
Database _db ;
Future<Database> get db async {
if(_db != null) return _db ;
_db = await initDb();
return _db ;
}
initDb()async{
String databasePath = await getDatabasesPath();
String path = join(databasePath,'info.db');
var db = await openDatabase(path,version:1,onCreate: _onCreate);
return db;
}
_onCreate(Database db, int newVersion) async{
var sql = "CREATE TABLE $tableName ($columnId INTEGER PRIMARY KEY ,"
"$columnTitle TEXT, $columnSubTitle TEXT, $columnContent TEXT)" ;
await db.execute(sql);
}
Future<int> saveNote(Notes note)async{
var dbClient = await db ;
var result = await dbClient.insert(tableName,
note.toMap());
return result ;
}
Future<int> getCount()async{
var dbClient = await db;
return Sqflite.firstIntValue(await dbClient.rawQuery('SELECT COUNT(*) FROM $tableName'));
}
Future<List> getAllNotes() async{
var dbClient = await db ;
List result = await dbClient.query(tableName,
columns: [
columnId,columnTitle, columnSubTitle,columnContent
]);
return result.toList() ;
}
Future<Notes> searchNotes(String title) async{
var dbClient = await db ;
    var result = await dbClient.rawQuery(
        "SELECT * FROM $tableName WHERE $columnTitle LIKE ?", ['%$title%']
    );
if(result.length > 0 ) return Notes.fromMap(result.first);
return null ;
}
Future<Notes> getNote(int id)async{
var dbClient = await db ;
List<Map> result = await dbClient.query(tableName,
columns: [
columnTitle, columnSubTitle,columnContent
],
where: '$columnId = ?', whereArgs: [id]);
if(result.length > 0 ) return new Notes.fromMap(result.first);
return null ;
}
Future<int> updateNote(Notes note) async{
var dbClient = await db ;
return await dbClient.update(tableName, note.toMap(),
where: '$columnId = ? ',
whereArgs: [note.id]);
}
Future<int> deleteNote(int id)async{
var dbClient = await db ;
return await dbClient.delete(tableName,
where: '$columnId = ? ', whereArgs: [id]);
}
Future closeDb()async{
var dbClient = await db ;
dbClient.close();
}
}
class Notes
class Notes{
int _id;
String _title;
String _subTitle;
String _content;
Notes(this._title,this._subTitle,this._content);
Notes.map(dynamic obj){
this._id = obj['id'];
this._title = obj['title'];
this._subTitle = obj['subTitle'];
this._content = obj['conent'];
}
int get id => _id;
String get title => _title;
String get subTitle => _subTitle;
String get content => _content;
Map<String,dynamic> toMap(){
Map map = Map<String,dynamic>();
if(id == null) {
map['id'] = _id ;
}
map['title'] = _title;
map['subTitle'] = _subTitle;
map['conent'] = _content ;
return map;
}
Notes.fromMap(Map<String,dynamic>map){
this._id = map['id'];
this._title = map['title'];
this._subTitle = map['subTitle'];
this._content = map['conent'];
}
}
Home
import 'package:flutter/material.dart';
import 'package:my_info/model/notes.dart';
import 'package:my_info/utils/database_helper.dart';
import 'package:my_info/ui/notes_screen.dart';
class Home extends StatefulWidget {
@override
State<StatefulWidget> createState() => HomeState();
}
class HomeState extends State<Home>{
List<Notes> items = new List();
DatabaseHelper db = new DatabaseHelper();
@override
void initState() {
// TODO: implement initState
super.initState();
db.getAllNotes().then((notes){
setState(() {
notes.forEach((note){
items.add(Notes.fromMap(note));
});
});
});
}
@override
Widget build(BuildContext context) {
// TODO: implement build
return Scaffold(
appBar: AppBar(
backgroundColor: Colors.deepPurple,
title: Text('Notes',
style: TextStyle(
fontStyle: FontStyle.italic,
fontWeight: FontWeight.w700,
fontSize: 30
),
),
centerTitle: true,
),
body: Center(
child:ListView.builder(
itemCount: items.length,
padding: EdgeInsets.all(15),
itemBuilder: (context,int position){
return Column(
children: <Widget>[
Divider(height: 5,),
Row(
children: <Widget>[
Expanded(
child: ListTile(
title: Text(items[position].title,
style: TextStyle(
fontSize: 22,
fontWeight: FontWeight.bold,
color: Colors.redAccent
),
),
subtitle: Text(items[position].subTitle,
style: TextStyle(
fontSize: 18,
fontWeight: FontWeight.bold,
fontStyle: FontStyle.italic
),
),
leading: Column(
children: <Widget>[
CircleAvatar(
backgroundColor: Colors.deepPurple,
radius: 25,
child: Icon(Icons.insert_comment,
color: Colors.deepOrange,)
//Image.asset('images/information.png'),
)
],
),
onTap: ()=> _navigateToNoteScreen(context,items[position]),
)
),
IconButton(icon: Icon(Icons.delete,size: 30,
color: Colors.redAccent,),
onPressed: () => _showDialog(context,items[position],position)
)
],
)
],
);
},
),
),
floatingActionButton: FloatingActionButton(child: Icon(Icons.add),
backgroundColor: Colors.deepPurple,
onPressed: () => _createNote(context)
),
);
}
_navigateToNoteScreen(BuildContext context,Notes note)async{
String result = await Navigator.push(context,
MaterialPageRoute(builder: (context) => NoteScreen(note)));
if(result == 'update'){
db.getAllNotes().then((note){
setState(() {
items.clear();
note.forEach((notes){
items.add(Notes.fromMap(notes));
});
});
});
}
}
_deleteNote(BuildContext context,Notes note, int position){
db.deleteNote(note.id).then((notes){
setState(() {
items.removeAt(position);
});
});
Navigator.of(context).pop();
}
void _createNote(BuildContext context)async{
String result = await Navigator.push(context,
MaterialPageRoute(builder: (context) =>
NoteScreen(
Notes('','','')
)
)
);
if(result == 'save'){
db.getAllNotes().then((notes){
setState(() {
items.clear();
notes.forEach((note){
items.add(Notes.fromMap(note));
});
});
});
}
}
void _showDialog(BuildContext context,Notes note, int position) {
// flutter defined function
showDialog(
context: context,
builder: (BuildContext context) {
// return object of type Dialog
return AlertDialog(
title: new Text("Delete ?"),
content: new Text("Do you want delete Content"),
actions: <Widget>[
// usually buttons at the bottom of the dialog
new FlatButton(
child: new Text("No"),
onPressed: () {
Navigator.of(context).pop();
},
),
new FlatButton(
child: new Text("Yes"),
onPressed: () {
_deleteNote(context,items[position],position);
},
),
],
);
},
);
}
}
CRUD works fine and I can add 3 records,
but when I use SearchDelegate to implement search,
I get an error, and nothing in the error message helps me figure out what is wrong.
Below is the code after my attempt at implementing search.
import 'package:flutter/material.dart';
import 'package:my_info/model/notes.dart';
import 'package:my_info/utils/database_helper.dart';
import 'package:my_info/ui/notes_screen.dart';
class Home extends StatefulWidget {
@override
State<StatefulWidget> createState() => HomeState();
}
class HomeState extends State<Home>{
List<Notes> items = new List();
DatabaseHelper db = new DatabaseHelper();
@override
void initState() {
// TODO: implement initState
super.initState();
db.getAllNotes().then((notes){
setState(() {
notes.forEach((note){
items.add(Notes.fromMap(note));
});
});
});
}
@override
Widget build(BuildContext context) {
// TODO: implement build
return Scaffold(
appBar: AppBar(
backgroundColor: Colors.deepPurple,
title: Text('Notes',
style: TextStyle(
fontStyle: FontStyle.italic,
fontWeight: FontWeight.w700,
fontSize: 30
),
),
centerTitle: true,
actions: <Widget>[
IconButton(icon: Icon(Icons.search,
color: Colors.white,), onPressed: (){
showSearch(context: context, delegate: DataSearch());
})
],
),
/*body: Center(
child:ListView.builder(
itemCount: items.length,
padding: EdgeInsets.all(15),
itemBuilder: (context,int position){
return Column(
children: <Widget>[
Divider(height: 5,),
Row(
children: <Widget>[
Expanded(
child: ListTile(
title: Text(items[position].title,
style: TextStyle(
fontSize: 22,
fontWeight: FontWeight.bold,
color: Colors.redAccent
),
),
subtitle: Text(items[position].subTitle,
style: TextStyle(
fontSize: 18,
fontWeight: FontWeight.bold,
fontStyle: FontStyle.italic
),
),
leading: Column(
children: <Widget>[
CircleAvatar(
backgroundColor: Colors.deepPurple,
radius: 25,
child: Icon(Icons.insert_comment,
color: Colors.deepOrange,)
//Image.asset('images/information.png'),
)
],
),
onTap: ()=> _navigateToNoteScreen(context,items[position]),
)
),
IconButton(icon: Icon(Icons.delete,size: 30,
color: Colors.redAccent,),
onPressed: () => _showDialog(context,items[position],position)
)
],
)
],
);
},
),
),*/
floatingActionButton: FloatingActionButton(child: Icon(Icons.add),
backgroundColor: Colors.deepPurple,
onPressed: null /*() => _createNote(context)*/
),
);
}
/* _navigateToNoteScreen(BuildContext context,Notes note)async{
String result = await Navigator.push(context,
MaterialPageRoute(builder: (context) => NoteScreen(note)));
if(result == 'update'){
db.getAllNotes().then((note){
setState(() {
items.clear();
note.forEach((notes){
items.add(Notes.fromMap(notes));
});
});
});
}
}
_deleteNote(BuildContext context,Notes note, int position){
db.deleteNote(note.id).then((notes){
setState(() {
items.removeAt(position);
});
});
Navigator.of(context).pop();
}
void _createNote(BuildContext context)async{
String result = await Navigator.push(context,
MaterialPageRoute(builder: (context) =>
NoteScreen(
Notes('','','')
)
)
);
if(result == 'save'){
db.getAllNotes().then((notes){
setState(() {
items.clear();
notes.forEach((note){
items.add(Notes.fromMap(note));
});
});
});
}
}
void _showDialog(BuildContext context,Notes note, int position) {
// flutter defined function
showDialog(
context: context,
builder: (BuildContext context) {
// return object of type Dialog
return AlertDialog(
title: new Text("Delete"),
content: new Text("Do you wand delete content"),
actions: <Widget>[
// usually buttons at the bottom of the dialog
new FlatButton(
child: new Text("NO"),
onPressed: () {
Navigator.of(context).pop();
},
),
new FlatButton(
child: new Text("YES"),
onPressed: () {
_deleteNote(context,items[position],position);
},
),
],
);
},
);
}
*/
}
class DataSearch extends SearchDelegate<Notes> {
DatabaseHelper db = new DatabaseHelper();
List<Notes> items = new List();
List<Notes> suggestion = new List();
HomeState i = HomeState();
@override
List<Widget> buildActions(BuildContext context) {
return [
IconButton(icon: Icon(Icons.clear), onPressed: () {
query = '';
} )
];
}
@override
Widget buildLeading(BuildContext context) {
return IconButton(
icon: AnimatedIcon(
icon: AnimatedIcons.menu_arrow,
progress: transitionAnimation,
),
onPressed: (){
close(context, null);
},
);
}
@override
Widget buildResults(BuildContext context) {
}
@override
Widget buildSuggestions(BuildContext context) {
suggestion = query.isEmpty ? i.items : i.items;/*.where((target) =>
target.title.startsWith(query))*/
if( i.items.isEmpty)
{
print("Null");
}
return ListView.builder(
itemBuilder: (context, position)=>
ListTile(
leading: Icon(Icons.location_city),
title: Text('${i.items[position].title}'),
),
itemCount: suggestion.length,
);
}
}
Now, after running the app, when I click the search icon nothing is displayed, even though my list has 3 records.
I hope you can help me; I have been searching for a solution for a week.
Sorry for my bad English, this is my first time asking a question on Stack Overflow.
Thanks in advance.
A:
Hello. For me, I think you have to pass your data to the DataSearch class:
IconButton(icon: Icon(Icons.search,
color: Colors.white,),onPressed:
(){
showSearch(context: context,
delegate: DataSearch(this.items));
})
And in the DataSearch class add this constructor:
DataSearch({@required this.items});
Also in the DataSearch class, change
List<Notes> items = new List(); to final List<Notes> items; (a final field initialized by the constructor must not have its own initializer)
Q:
Is case number equivalent to data row number in Lime?
Just discovered the lime package in R and I'm still trying to fully understand it. I'm stumped, though, by the visualization produced by 'plot_features'.
Please excuse my naivety.
My question is this: is the case number for each row sequential? In other words, is case 416 equivalent to row 416 in the data? If not, how do I know which row each case number refers to?
Sample code to reproduce the image above:
library(MASS)
library(lime)
data(biopsy)
biopsy$ID <- NULL
biopsy <- na.omit(biopsy)
biopsy2 = data.frame(ID = 1:nrow(biopsy), biopsy)
names(biopsy2) <- c('ID','clump thickness', 'uniformity of cell size',
'uniformity of cell shape', 'marginal adhesion',
'single epithelial cell size', 'bare nuclei',
'bland chromatin', 'normal nucleoli', 'mitoses',
'class')
# Now we'll fit a linear discriminant model on all but 4 cases
set.seed(4)
test_set <- sample(seq_len(nrow(biopsy2)), 4)
prediction <- biopsy2$class
biopsy2$class <- NULL
model <- lda(biopsy2[-test_set, ], prediction[-test_set])
predict(model, biopsy2[test_set, ])
explainer <- lime(biopsy2[-test_set,], model, bin_continuous = TRUE, quantile_bins = FALSE)
explanation <- explain(biopsy2[test_set, ], explainer, n_labels = 1, n_features = 4)
plot_features(explanation, ncol = 1)
EDIT: Added an extra column to the biopsy table called ID
A:
As you can see in explanation, in the plot we go case by case starting from the beginning:
head(explanation[, 1:5])
model_type case label label_prob model_r2
1 classification 416 benign 0.9943635 0.5432439
2 classification 416 benign 0.9943635 0.5432439
3 classification 416 benign 0.9943635 0.5432439
4 classification 416 benign 0.9943635 0.5432439
5 classification 7 benign 0.9527375 0.6586789
6 classification 7 benign 0.9527375 0.6586789
However, since each case has multiple lines, it may not be a bad idea to know which lines correspond to it. For that you may use
which(416 == explanation$case)
# [1] 1 2 3 4
so that
explanation[which(416 == explanation$case), 1:5]
# model_type case label label_prob model_r2
# 1 classification 416 benign 0.9949716 0.551287
# 2 classification 416 benign 0.9949716 0.551287
# 3 classification 416 benign 0.9949716 0.551287
# 4 classification 416 benign 0.9949716 0.551287
Q:
Force windows authentication and allow anonymous
My idea is to let my family enter the application using their Windows username (just so they don't have to type it), while using Identity to keep passwords and other data saved in the local MDF database. The problem is that I cannot figure out how to force Windows authentication (so that Context.User.Identity.Name gives me the Windows username) and use that information to log in to the Identity database. To create the project, I used the Web Forms template with Individual Accounts as the security type and deleted all OWIN third-party login packages (Microsoft, Google, etc.).
Here's what I've tried:
Default.aspx.cs (my main page that requires authentication)
protected void Page_Load(object sender, EventArgs e)
{
//According to Identity comments, ApplicationCookie is the AuthenticationType...
//once logged in through Identity
if (Context.User.Identity.AuthenticationType != "ApplicationCookie")
Response.Redirect("Login.aspx", true);
}
Login.aspx.cs
protected void LogIn(object sender, EventArgs e) //login button handler
{
var manager = Context.GetOwinContext().GetUserManager<UserAdministrator>();
var signinManager = Context.GetOwinContext().GetUserManager<SessionAdministrator>();
string windowsName = Context.User.Identity.Name;
User user = manager.Users.Where(u => u.UserName == windowsName).FirstOrDefault();
// rest of the login code...
}
web.config (global)
<location path="Login.aspx"> //this should only allow windows logged-in users
<system.web>
<authorization>
<allow users="*" />
<deny users="?" />
</authorization>
</system.web>
</location>
<location path="default.aspx"> // this should only allow Identity-logged in users
<system.web>
<authorization>
<allow users="*" />
<deny users="?" />
</authorization>
</system.web>
</location>
Project properties
Windows Authentication is set to enabled
Anonymous Authentication is set to disabled
For some reason, starting up the application and browsing to either default.aspx or login.aspx, doesn't use the Windows authentication, so Context.User.Identity.IsAuthenticated returns false. How can I achieve what I want?
A:
This can be considered to be solved.
I removed Windows authentication and switched to Forms authentication, and I get the Windows username using this code I found (I cannot remember whose SO answer it came from):
System.Security.Principal.WindowsPrincipal windowsUser = new System.Security.Principal.WindowsPrincipal(Request.LogonUserIdentity);
Request.LogonUserIdentity.Impersonate();
string username = windowsUser.Identity.Name.Substring(windowsUser.Identity.Name.LastIndexOf("\\") + 1);
Q:
CSS color-change triggers not firing
I have a list styled as a menu:
<ul class="menu">
<li>
<a href="#">fond</a>
</li>
<li>
<a href="#">blago</a>
</li>
<li>
<a href="#">knigi</a>
</li>
</ul>
And the stylesheet:
<style>
ul {
margin-top:20px;
list-style:none;
}
li {
float:left;
width:80px;
height:30px;
line-height:30px;
background:#6495ED;
text-align:center;
margin-left:1px;
border-radius:10px 10px 0 0;
}
ul li a {
text-decoration:none;
color:black;
font-size:19px;
}
ul li:hover {
margin-top:-20px;
height:50px;
line-height:50px;
background:#000080;
color:red;
font-weight:bold;
}
ul li a:hover {
color:red;
font-weight:bold;
}
</style>
When hovering over a menu item, the text should turn red, but for some reason it stays black. It only turns red when the cursor is directly over the link; I need the text to turn red even when the cursor is over the item but not touching the link itself.
What should I do?
A:
The selector is simply wrong here. http://jsfiddle.net/DL6z5/
ul li:hover
changes the li on hover.
ul li a:hover
changes the a on hover.
But what's needed is to change the a when hovering over the li:
ul li:hover a
I would write the CSS slightly differently: http://jsfiddle.net/bjS8D/1/
ul li {
float: left;
margin-left: 1px;
}
ul li a {
display: block;
margin: 20px 0 0 0;
width: 80px;
text-align: center;
height: 30px;
line-height: 30px;
color: #000;
background: #6495ED;
font-size: 19px;
text-decoration: none;
-webkit-border-radius: 10px 10px 0 0;
-moz-border-radius: 10px 10px 0 0;
border-radius: 10px 10px 0 0;
-webkit-transition: color .5s ease;
-o-transition: color .5s ease;
-ms-transition: color .5s ease;
-moz-transition: color .5s ease;
}
ul li a:hover {
height: 50px;
line-height: 50px;
background: #000080;
color: red;
font-weight: bold;
margin: 0;
}
A:
You have explicitly specified that the link should always be black, even on hover. To make it work, the changing properties need to go on the parent, ul li. That is:
<style>
ul { margin-top:20px; list-style:none; }
li { color:black; float:left; width:80px; height:30px; line-height:30px; background:#6495ED; text-align:center; margin-left:1px; border-radius:10px 10px 0 0; }
ul li a { text-decoration:none; font-size:19px; }
ul li:hover { margin-top:-20px; height:50px; line-height:50px; background:#000080; color:red; font-weight:bold; }
</style>
UPD:
Folks, don't be so quick to dismiss the CSS solution. Things rarely work on the first try; you know how it is :)
I've now verified everything:
ul { margin-top:20px; list-style:none; }
li { color:black; float:left; width:80px; height:30px; line-height:30px; background:#6495ED; text-align:center; margin-left:1px;
border-radius:10px 10px 0 0; margin-top: 20px;}
ul li a { text-decoration:none; font-size:19px; color: black; }
ul li:hover { margin-top:0; height:50px; line-height:50px; background:#000080; color:red; font-weight:bold; }
ul li:hover a { color:red; font-weight:bold; }
ul li:hover a is the magic line here :)
A:
A JavaScript solution (works in all browsers):
<html>
<head>
<title>The power of JavaScript and the DOM</title>
<style>
ul.menu{
background-color: #00FF00;
}
ul li a{
color: #000000;
}
ul li a.lihover{
color: #FF0000;
}
</style>
<script>
function getElementsByClassName(where, className){ //This function works in all
//browsers, while document.getElementsByClassName works everywhere except IE<9
var allElements = where.getElementsByTagName("*"); //Get all tags
var elements = [];
for(var i=0;i<allElements.length;i++){
if(allElements[i].className==className){ // Filter by className
elements.push(allElements[i]);
}
}
return elements; // Return the result
}
function mover(evt){ // Handler for when the mouse is inside the li
liobj=window.event ? window.event.srcElement : evt.target; //Get the element the
//mouse points at, presumably an li
if(liobj.tagName=="LI"){ // If it really is an li, then
obj=liobj.firstChild; // take the first child of the li (here that's the a)
obj.className = "lihover"; // Assign the new class to the element
}
}
function mout(evt){ // Handler for when the mouse leaves the li
liobj=window.event ? window.event.srcElement : evt.target; // Same as above
if(liobj.tagName=="LI"){
obj=liobj.firstChild;
obj.className = ""; //Restore the element's default class
}
}
window.onload=function(){ //After the page loads
var elements; //Get all tags with the class "menu"
if(document.getElementsByClassName){ // Check whether this is IE<9
elements = document.getElementsByClassName("menu");
}
else{
elements = getElementsByClassName(document, "menu");
}
for(var i=0; i<elements.length; i++){ // Attach the events to each element
elements[i].onmouseover=mover; // onMouseOver
elements[i].onmouseout=mout; // and onMouseOut
}
}
</script>
</head>
<body>
<ul class="menu">
<li><a href="#">fond</a></li>
<li><a href="#">blago</a></li>
<li><a href="#">knigi</a></li>
</ul>
</body>
</html>
UPD:
@Рома прогер, you were right. I tested everything and learned that a:hover inside <li/> does not change the color of the <a/> but changes the color of the <li/>; this CSS behavior was new to me, thanks for pointing it out :)
P.S: a better alternative was suggested by @Котик: functions for finding elements by class and tag
P.P.S: since the best answer was already given by @omgwtf, I won't rewrite mine
Q:
Displaying multiple images with patches overlayed in the same figure in matlab
I have an image I overlay with a patch, and I want to display multiple images in the same figure, each with a different overlaid patch. I tried using the subplot command but it did not work. I defined my axes and parent figure below.
hFig = figure;
hAx = axes('Parent',hFig);
hAx2 = axes('Parent',hFig);
fh = figure('units','pixels',...
'position',[300 300 440 500],...
'menubar','none',...
'name','SliderTool',...
'numbertitle','off',...
'resize','off');
subplot(1,2,1) imshow(image1,'Parent','hAx');
patch(....,'parent', 'hAx');
subplot(1,2,2) imshow(image2,'Parent','hAx2');
patch(....,'parent', 'hAx2');
This is the basic idea for what I want to do but for some reason I cannot get it to work. Am I using the handles incorrectly?
A:
Are you trying to do something like this:
image1 = imread('street1.jpg');
image2 = imread('street2.jpg');
ha1 = subplot(1,2,1) ;
imshow(image1);
X = get(ha1,'Xlim')/2;
Y = get(ha1,'Ylim')/2;
patch(X([1 2 2 1]),Y([1 1 2 2]),'m','FaceAlpha',0.5,'parent', ha1 );
ha2 = subplot(1,2,2) ;
imshow(image2);
X = get(ha2,'Xlim')/2;
Y = get(ha2,'Ylim')/2;
patch(X([1 2 2 1]),Y([1 1 2 2]),'b','FaceAlpha',0.3,'parent', ha2);
Q:
Is it safe to test the X509Certificate.Thumbprint property when you know an invalid certificate is safe?
I'm attempting to send emails programmatically using SmtpClient.Send. I am currently getting an AuthenticationException when attempting to send the email. This is because of the certificate validation procedure failing.
I know that the certificate is the correct one, but I also understand that it's not secure to trust all certificates much like the suggestions of doing this:
ServicePointManager.ServerCertificateValidationCallback +=
(sender, certificate, chain, sslPolicyErrors) => { return true; };
So I was wondering if testing the Thumbprint for a known valid certificate thumbprint is secure enough, like so:
ServicePointManager.ServerCertificateValidationCallback +=
(sender, certificate, chain, sslPolicyErrors) =>
{
if (sslPolicyErrors == SslPolicyErrors.None)
return true;
else if (certificate.GetCertHashString().Equals("B1248012B10248012B"))
return true;
return false;
};
A:
Yes.
The thumbprint is a SHA1 hash of the certificate, and while not absolutely impossible, is extremely difficult to forge.
In technical terms, there are currently no known feasible second-preimage attacks on SHA1.
However, if in any doubt, you may store the whole certificate, perhaps using the fingerprint as a key. Then you can compare the whole certificate against your stored, trusted certificate.
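For intuition, a .NET certificate thumbprint is just the uppercase hex SHA-1 digest of the certificate's DER-encoded bytes. A minimal, language-neutral sketch (Python here purely for illustration; the input bytes are a placeholder, not a real certificate):

```python
import hashlib

# Placeholder bytes standing in for a DER-encoded certificate.
der_bytes = b"abc"

# .NET's X509Certificate.Thumbprint is the SHA-1 hash of the raw
# certificate data, rendered as uppercase hex.
thumbprint = hashlib.sha1(der_bytes).hexdigest().upper()
print(thumbprint)  # A9993E364706816ABA3E25717850C26C9CD0D89D
```

Comparing a pinned value against this digest is what the `GetCertHashString()` check in the question does.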
Q:
Scala Tuple type inference in Java
This is probably a very noobish question, but I was playing a bit with Scala/Java interaction, and was wondering how well did Tuples play along.
Now, I know that the (Type1, Type2) syntax is merely syntactic sugar for Tuple2<Type1, Type2>, and so, when calling a Scala method that returns a Tuple2 in a plain Java class, I was expecting to get a return type of Tuple2<Type1, Type2>
For clarity, my Scala code:
def testTuple:(Int,Int) = (0,1)
Java code:
Tuple2<Object,Object> objectObjectTuple2 = Test.testTuple();
It seems the compiler expects this to be of parameterized types <Object,Object>, instead of, in my case, <Integer,Integer> (this is what I was expecting, at least).
Is my thinking deeply flawed and is there a perfectly reasonable explanation for this?
OR
Is there a problem in my Scala code, and there's a way of being more... explicit, in the cases that I know will provide an API for Java code?
OR
Is this simply a limitation?
A:
Int is Scala's integer type, which is a value class, so it gets special treatment. It is different from java.lang.Integer. You can specify java.lang.Integer specifically if that's what you need.
[dlee@dlee-mac scala]$ cat SomeClass.scala
class SomeClass {
def testIntTuple: (Int, Int) = (0, 1)
def testIntegerTuple: (java.lang.Integer, java.lang.Integer) = (0, 1)
}
[dlee@dlee-mac scala]$ javap SomeClass
Compiled from "SomeClass.scala"
public class SomeClass implements scala.ScalaObject {
public scala.Tuple2<java.lang.Object, java.lang.Object> testIntTuple();
public scala.Tuple2<java.lang.Integer, java.lang.Integer> testIntegerTuple();
public SomeClass();
}
Q:
Ruby, simple "threading" example to update progress in console app
I am trying to implement a simple console app that will do lots of long processes. During these processes I want to update progress.
I cannot find a SIMPLE example of how to do this anywhere!
I am still "young" in terms of Ruby knowledge and all I can seem to find are debates about Thread vs Fibers vs Green Threads, etc.
I'm using Ruby 1.9.2 if that helps.
A:
th = Thread.new do # Here we start a new thread
Thread.current['counter']=0
11.times do |i| # This loops and increases i each time
Thread.current['counter']=i
sleep 1
end
end
while th['counter'].to_i < 10 do
# th is the long running thread and we can access the same variable as from inside the thread here
# keep in mind that this is not a safe way of accessing thread variables, for reading status information
# this works fine though. Read about Mutex to get a better understanding.
puts "Counter is #{th['counter']}"
sleep 0.5
end
puts "Long running process finished!"
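For comparison, here is the same poll-a-worker pattern sketched in Python (sleeps shortened; as the Ruby comment notes, real code would guard shared state with a Mutex/lock rather than reading it bare):

```python
import threading
import time

counter = {"value": 0}  # shared progress state, visible to both threads

def worker():
    # Long-running work: bump the counter once per "step".
    for i in range(11):
        counter["value"] = i
        time.sleep(0.05)

t = threading.Thread(target=worker)
t.start()

# Main thread polls the progress while the worker runs.
while counter["value"] < 10:
    print(f"Counter is {counter['value']}")
    time.sleep(0.025)

t.join()
print("Long running process finished!")
```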
Q:
Is attached code safe from open file leak
Assuming Java6, is this code safe from file descriptor leak:
{
InputStream in = fileObject.getReadStream();
// fileObject cleans it's internal state in case it throws exception
try {
// do whatever, possibly throwing exception
} finally {
try {
in.close();
} catch (Exception ex) {
// failure to close input stream is no problem
}
}
}
Edit: To make question seem less obvious, to state it other way, is above code equal to this longer code:
{
InputStream in = null;
try {
in = fileObject.getReadStream();
// fileObject cleans it's internal state in case it throws exception
// do whatever, possibly throwing exception
} finally {
if (in != null) {
try {
in.close();
} catch (Exception ex) {
// failure to close input stream is no problem
}
}
}
}
That is, does it matter whether a call to a method which returns opened stream or throws exception is immediately before try, or inside the try block?
A:
Yes, fine. Does not even merit an answer. A variant (which I use less) is:
InputStream in = null;
try {
in = fileObject.getReadStream();
// do whatever, possibly throwing exception
} finally {
if (in != null) {
try {
in.close();
} catch (Exception ex) {
// failure to close input stream is no problem if everything else was ok
}
}
}
Q:
I want to customize the right-click context menu in VSCode's editor
Is it possible to edit the context menu that appears when right-clicking in the VSCode editor?
For example, changing a selected word to uppercase or lowercase, or sorting selected lines.
Addendum
I've been experimenting with the URLs suggested in the comments, but I can't get it to work.
Do these approaches require creating a brand-new extension?
I tried adding "menus" to the package.json of a color theme I created myself, but it didn't show up.
{
"name": "theme-slateblue",
"displayName": "%displayName%",
"description": "%description%",
"version": "0.0.1",
"engines": { "vscode": "*" },
"contributes": {
"themes": [
{
"label": "Slateblue",
"uiTheme": "vs-dark",
"path": "./themes/slateblue-color-theme.json"
}
],
"menus": {
"editor/context": [{
"when": "editorHasSelection && editorTextFocus",
"command": "editor.action.transformToLowercase",
"group": "modification"
}]
}
}
}
Is this approach not workable?
I assume I've made a mistake somewhere; I'd appreciate it if you could point it out.
A:
Commands must be declared in the commands section. Try adding the following under contributes in package.json:
"commands": [
{
"command": "editor.action.transformToLowercase",
"title": "To lowercase"
}
],
Incidentally, hovering (in VSCode) over the command entry under editor/context displays help about this.
Q:
Difference between giving pandas a python iterable vs a pd.Series for column
What are some of the differences between passing a List vs a pd.Series type to create a new dataFrame column? For example, from trial-and-error I've noticed:
# (1d) We can also give it a Series, which is quite similar to giving it a List
df['cost1'] = pd.Series([random.choice([1.99,2.99,3.99]) for i in range(len(df))])
df['cost2'] = [random.choice([1.99,2.99,3.99]) for i in range(len(df))]
df['cost3'] = pd.Series([1,2,3]) # <== will pad length with `NaN`
df['cost4'] = [1,2,3] # <== this one will fail because not the same size
Are there any other reasons that pd.Series differs from passing a standard python list? Can a dataframe take any python iterable or are there restrictions on what can be passed to it? Finally, is using pd.Series the 'correct' way to add columns, or can it be used interchangably with other types?
A:
Assigning a list to a DataFrame column requires the list to be the same length as the DataFrame.
Assigning a pd.Series aligns on the index: pandas matches the Series index against the DataFrame index and fills each row with the value at the matching label.
df=pd.DataFrame([1,2,3],index=[9,8,7])
df['New']=pd.Series([1,2,3])
# the default index is range index , which is from 0 to n
# since the dataframe index dose not match the series, then will return NaN
df
Out[88]:
0 New
9 1 NaN
8 2 NaN
7 3 NaN
A different length is fine when the index labels match:
df['New']=pd.Series([1,2],index=[9,8])
df
Out[90]:
0 New
9 1 1.0
8 2 2.0
7 3 NaN
Q:
Login Page is not working.
I am having trouble getting the rows back from MySQL. Everything in MySQL is set up: the username, password, database, table, and columns.
Warning: mysql_num_rows() expects parameter 1 to be resource, boolean given
in /Applications/XAMPP/xamppfiles/htdocs/socialhut/login.php on line 8
Here's the code for login.php:
<?php
$username = $_POST['username'];
$password = $_POST['password'];
$conn = mysqli_connect("localhost","root","","data");
$sql = "SELECT * FROM userdata WHERE username='$username' and password='$password'";
$query = mysql_query($sql);
$result = mysql_num_rows($query);
if ($result==1){
session_register($username);
session_register($password);
header('location:members.php');
}else{
mysql_error();
}
?>
Can anyone figure it out?
Thanks!
A:
You're mixing mysqli and mysql calls in the same code. You can't do that.
Try this:
$conn = mysqli_connect("localhost","root","","data");
$sql = "SELECT * FROM userdata WHERE username='$username' and password='$password'";
$query = mysqli_query($conn, $sql);
if ($query === false) {die(mysqli_error($conn));}
$result = mysqli_num_rows($query);
Q:
Access jinja2 globals variables inside template
I have a Flask app I'm building, and I'm having issues accessing my Jinja2 global variables from within my templates. Any idea what I'm doing wrong here?
__init__.py
from config import *
...
#Initialize Flask App
app = Flask(__name__)
#Jinja2 global variables
jinja_environ = app.create_jinja_environment()
jinja_environ.globals['DANISH_LOGO_FILE'] = DANISH_LOGO_FILE
jinja_environ.globals['TEMPLATE_MEDIA_FOLDER'] = TEMPLATE_MEDIA_FOLDER
...
config.py
...
TEMPLATE_MEDIA_FOLDER = '../static/img/media_library/' #This is the location of the media_library relative to the templates directory
DANISH_LOGO_FILE = '100danish_logo.png'
...
example template
<p>{{ TEMPLATE_MEDIA_FOLDER }}</p>
In this instance, TEMPLATE_MEDIA_FOLDER prints out nothing to the template.
A:
I'm going to assume you are using Flask's render_template function here which is by default linked to the application's jinja_env attribute.
So try something like this,
app = Flask(__name__)
app.jinja_env.globals['DANISH_LOGO_FILE'] = DANISH_LOGO_FILE
app.jinja_env.globals['TEMPLATE_MEDIA_FOLDER'] = TEMPLATE_MEDIA_FOLDER
A:
The accepted answer works, but it gave me the pylint error
[pylint] E1101:Method 'jinja_env' has no 'globals' member
Do it this way to avoid the error:
app = Flask(__name__)
app.add_template_global(name='DANISH_LOGO_FILE', f=DANISH_LOGO_FILE)
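As a sanity check of the underlying mechanism, here is a minimal sketch using plain jinja2 (Flask's `app.jinja_env` wraps the same `Environment.globals` dict; the folder value mirrors the question's config):

```python
from jinja2 import Environment

env = Environment()
# Registering a global makes it visible to every template rendered
# by this environment, without passing it to render() each time.
env.globals["TEMPLATE_MEDIA_FOLDER"] = "../static/img/media_library/"

rendered = env.from_string("<p>{{ TEMPLATE_MEDIA_FOLDER }}</p>").render()
print(rendered)  # <p>../static/img/media_library/</p>
```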
Q:
Java XStream - How to ignore some elements
I have the following XML:
<?xml version="1.0" encoding="UTF-8"?>
<osm version="0.6" generator="CGImap 0.0.2">
<bounds minlat="48.1400000" minlon="11.5400000" maxlat="48.1450000" maxlon="11.5430000"/>
<node id="398692" lat="48.1452196" lon="11.5414971" user="Peter14" uid="13832" visible="true" version="18" changeset="10762013" timestamp="2012-02-22T18:59:41Z">
</node>
<node id="1956100" lat="48.1434822" lon="11.5487963" user="Peter14" uid="13832" visible="true" version="41" changeset="10762013" timestamp="2012-02-22T18:59:39Z">
<tag k="crossing" v="traffic_signals"/>
<tag k="highway" v="traffic_signals"/>
<tag k="TMC:cid_58:tabcd_1:Class" v="Point"/>
<tag k="TMC:cid_58:tabcd_1:Direction" v="positive"/>
<tag k="TMC:cid_58:tabcd_1:LCLversion" v="9.00"/>
<tag k="TMC:cid_58:tabcd_1:LocationCode" v="35356"/>
<tag k="TMC:cid_58:tabcd_1:NextLocationCode" v="35357"/>
<tag k="TMC:cid_58:tabcd_1:PrevLocationCode" v="35355"/>
</node>
</osm>
I just want to map the node elements to an object, but I'm having two problems:
It's complaining about the bounds element, because I don't want to map it.
Not all nodes have tags, so I'm getting some issues with that.
A:
Unfortunately, overriding the Mapper behaviour mentioned here does not work with implicit collections or annotations. I checked with version 1.4.3.
So the obvious solution I found was to mark the ignored fields with the omit-field annotation. It works perfectly for me, though it's a bit tedious to add them every time.
@XStreamOmitField
private Object ignoredElement;
A:
Since XStream 1.4.5, during marshaller declaration it's enough to use the ignoreUnknownElements() method:
XStreamMarshaller marshaller = new XStreamMarshaller();
marshaller.getXStream().ignoreUnknownElements();
...
to ignore unnecessary elements.
Q:
Domain Access with URL Alias - same content, but with different automated aliases
I have a site running Domain Access with two domains - the primary domain and the sub-domain. Both domains share the same content in the database, but I would like the automated URL aliases for the sub-domain to be different from that of the primary domain's.
There are two tables in the database right now - the url_alias table and the domain_2_url_alias table (of which the latter is a copy of the original url_alias table).
I tried modifying the settings under /admin/build/path/pathauto and selected the sub-domain under the "Save settings for:" option, but that didn't seem to have any effect at all.
I can change the values in domain_2_url_alias manually for each node (which worked), but the site generates new content so often that this wouldn't be a viable solution.
I appreciate any help I can get! Thanks in advance.
A:
I believe this is exactly what the Domain Path module is for.
According to the module page:
The Domain Path module allows the creation of separate path aliases per domain for nodes created using the Domain Access module.
However, it looks like this module provides the option per node, rather than offering different pathauto patterns which I think is what you want. There's a postponed request in the issue queue for pathauto integration that offers a partial patch, but it's offered as an anti-proof-of-concept, it seems.
Q:
Java Binary Trees: Finding the node that reaches two nodes with shortest distance
I'm currently writing a method that searches a binary tree in Java for the node that reaches two nodes in the shortest distance. My idea is that if both nodes exist in the tree, the root would be the first node that can reach both. So I recurse and check the left/right of the root to see if they too can reach both. By finding the first node that cannot reach both after finding at least one node that did, I should have the node of shortest distance from the two I'm searching for.
I have broken this task down to two methods, one named canReach that searches the tree for a node and another that uses canReach's boolean return to determine moving down the tree and in what direction, named reachesBoth.
Through debugging I'm pretty confident canReach is accurately searching the tree. However, it seems that I never come out of reachesBoth with any answer except null (if both nodes do not exist in the tree) or the root (if they do), never anything in between.
Is repeatedly checking for access to both nodes while navigating down the tree a good idea? If anyone can see where my method reachesBoth is bugged, I would appreciate the insight.
public boolean canReach(T d) {
// Helper method for reachesBoth. Takes an argument of data of a Node
int comp = d.compareTo(data); // Compare data to current node
if (comp == 0) return true; // Found the node
if (comp < 0) { // search left for the node
if (left != null) {
if (left.canReach(d) == true) return true;
}
return false;
}
if (comp > 0) { // search right for the node
if (right != null) {
if (right.canReach(d) == true) return true;
}
return false;
}
return false; // Cannot find the node in our tree
}
public T reachesBoth(T a, T b) {
if (canReach(a) == true && canReach(b) == true) {
// Found the first node that can reach both desired nodes.
// Must continue to see if a node with shorter distance
// can reach both nodes as well.
if (left != null && right != null) {
// Case 1: Two more branches to check
if (left.reachesBoth(a, b) != null) left.reachesBoth(a, b);
if (right.reachesBoth(a, b) != null) right.reachesBoth(a, b);
//Both branches cannot reach both nodes sooner than we can.
if (left.reachesBoth(a, b) == null & right.reachesBoth(a, b) == null) {
return this.data;
}
}
if (left != null && right == null) {
// Case 2: Only left branch to check
if (left.reachesBoth(a, b) == null) return this.data;
}
if (left == null && right != null) {
// Case 3: Only right branch to check
if (right.reachesBoth(a, b) == null) return this.data;
}
// Case 4: No more tree to search for a better solution
if (left == null && right == null) return this.data;
}
return null; // Cannot locate both nodes in our tree from our start
}
A:
Change your recursion to return when it checks the left and right children:
if (left.reachesBoth(a, b) != null) return left.reachesBoth(a, b);
if (right.reachesBoth(a, b) != null) return right.reachesBoth(a, b);
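As a side note, the node you're looking for is the lowest common ancestor of the two targets, and in a BST it can be found with a single walk from the root instead of repeated canReach scans. A self-contained Python sketch of that idea (the Node class here is a hypothetical stand-in for your tree nodes, not your Java class):

```python
class Node:
    def __init__(self, data, left=None, right=None):
        self.data, self.left, self.right = data, left, right

def lca(root, a, b):
    """Walk down from the root; the first node where the paths to a and b
    diverge is the one that reaches both with the shortest distance."""
    node = root
    while node is not None:
        if a < node.data and b < node.data:
            node = node.left      # both targets lie in the left subtree
        elif a > node.data and b > node.data:
            node = node.right     # both targets lie in the right subtree
        else:
            return node           # paths diverge here: this is the answer
    return None

#        4
#      /   \
#     2     6
#    / \   / \
#   1   3 5   7
root = Node(4, Node(2, Node(1), Node(3)), Node(6, Node(5), Node(7)))
print(lca(root, 1, 3).data)  # 2
print(lca(root, 3, 5).data)  # 4
```

The same comparison-driven descent translates directly to Java using compareTo, and it visits each node at most once.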
Q:
javascript button that fills the current input instead of the first one
I made a button that can fill forms with JavaScript, but it only fills the first input on the page. I want it to fill whichever input the cursor is blinking in.
I need help.
Here's my code:
<script>
function autoFill() {
var input = document.getElementsByTagName("INPUT")[0]
input.value += "only works with the first input";
}
</script>
<input type="input">
<button type="button" onclick="autoFill()">fill it!</button>
<input type="input">
A:
This will track input focus, and when the button is clicked it will fill the correct control.
// track current focus
let currFocus;
document.addEventListener('focusin', (e) => {
if (e.target.nodeName.toLowerCase() === 'input') currFocus = e.target;
});
function autoFill() {
if (currFocus) currFocus.value += "only works with the first input";
}
<input type="input">
<button type="button" onclick="autoFill()">fill it!</button>
<input type="input">
Q:
Granger causality with stocks and CDS
I would like to take a closer look at stock prices and CDS spreads of different entities. Because both of them are nonstationary in levels, I use log stock returns and the first difference of the CDS.
My question: Can I run the Granger causality test on the Vector Autoregressive model (VAR) with the included variables in differences? And do I have to check for cointegration at first (and use a VECM)?
Help is very much appreciated.
A:
Regarding testing for Granger causality in presence or absence of cointegration, I find the extensive blog post by Dave Giles "Testing for Granger Causality" very helpful.
[M]y question is whether it makes sense to model stock prices <...> instead of log stock returns (not diff), when we are interested in returns. I get that when I am interested in X, I should just leave it in levels even if nonstationary. I wanted to ask whether it still makes sense to model it like that when I am interested in the return itself.
Let me contrast the subject-matter problem to the statistical problem. From the statistical perspective, to ensure the validity of your results, you need to follow sound statistical practice (e.g. as described in Dave Giles' blog). How to interpret the results comes in second. Fortunately, if you assume (and preferably validate the assumption by testing) that the logs of stock prices are cointegrated, you can use VECM where the dependent variables will be the log-returns.
Just to be sure about it: Is that more or less correct? <...> "Stock returns Granger cause spread changes"; this is the interpretation I had in mind (well, after checking whether there is Granger causality of course). The data to be used are stock prices and spreads.
I think your interpretation is fine.
Q:
Better explanation of bson spec examples?
I'm trying to understand the examples on http://bsonspec.org/#/specification in terms of how the bytes map to the BNF-ish specification, but having a difficult time. Is the text that gets bold when you mouse over either side correctly synchronized with the other side?
A:
Yes. Here's the Ruby implementation's mapping if you want to verify that the constants are correct.
Q:
MVC Display Files from a Folder in a View
What I'm looking to do, is display the contents of a folder which is located on my server in a View in my MVC application.
I have what I think should be in place for the Action, however I'm a little unsure how to go about implementing the corresponding view, and I was wondering if someone could point me in the right direction on that. (Also, if someone thinks my Action could be improved, advice would be welcome :) )
Here is the action:
public ActionResult Index()
{
DirectoryInfo salesFTPDirectory = null;
FileInfo[] files = null;
try
{
string salesFTPPath = "E:/ftproot/sales";
salesFTPDirectory = new DirectoryInfo(salesFTPPath);
files = salesFTPDirectory.GetFiles();
}
catch (DirectoryNotFoundException exp)
{
throw new FTPSalesFileProcessingException("Could not open the ftp directory", exp);
}
catch (IOException exp)
{
throw new FTPSalesFileProcessingException("Failed to access directory", exp);
}
files = files.OrderBy(f => f.Name).ToArray();
var salesFiles = files.Where(f => f.Extension == ".xls" || f.Extension == ".xml");
return View(salesFiles);
}
Any help would be appreciated, thanks :)
A:
If you only want the file names then you can change your Linq query to
files = files.Where(f => f.Extension == ".xls" || f.Extension == ".xml")
.OrderBy(f => f.Name)
.Select(f => f.Name)
.ToArray();
return View(files);
Then (assuming the default project template) add the following to the Index.cshtml view
<ul>
@foreach (var name in Model) {
<li>@name</li>
}
</ul>
Which will display the list of file names
A:
IMHO you should expose only what is really needed by the view. Think about it: do you really need to retrieve a whole FileInfo object, or only a file path? If the latter is true, just return a IEnumerable<string> to the view (instead of a IEnumerable<FileInfo>, which is what you're doing in the above code). Hint: just add a Select call to your Linq expression...
Then your view will just render that model - what you need is a foreach loop and some HTML code to do it.
A:
This is a simplified example of a Razor view. It will output your file names in an HTML table.
@model IEnumerable<FileInfo>
<h1>Files</h1>
<table>
@foreach (var item in Model) {
<tr>
<td>
@item.Name
</td>
</tr>
}
</table>
Q:
Are there more secure ways to report location?
I want to write a story about a group of people playing Augmented Reality (AR) games in a city similar to any city we have in the modern world. A game like the one in the new anime movie Sword Art Online: Ordinal Scale, or something like Pokemon Go if you haven't watched that movie. The problem is that I want it to be a game with money rewards (the money comes from advertisements and sponsors, more or less like cash prizes in eSports).
GPS is not a good solution, as you can see from how easy it is to fake GPS location in Pokemon Go. In the Sword Art Online movie, it was solved by having small drones flying around the city to locate the players, though I doubt any city would allow that legally, not to mention it would be very expensive.
So, are there more secure ways to report location for AR games?
A:
Realistically, they'd use multiple factors. Deriving location from multiple independent sources is always better than any single solution, however neat.
Some suggestions:
GPS
Practically free, so there is absolutely no reason not to use it.
Celltowers and Wifi
Gives independent verification that the phone is in the area reported by the GPS.
Mesh networking
The player devices can probably connect to each other. This is a useful addition since the players move. This means the set of connections a cheater needs to spoof is constantly changing and it would be difficult to control which devices are close to you reliably.
Player cameras
AR devices need to see the environment. This includes other players. This means the game will get a constant stream of random identity and position checks. This would make cheating risky unless you can control all the players in the area.
Surveillance cameras
Cities are filled with surveillance cameras. If the game can hook into this system it can see all players in many areas. And the game will know in advance where it can observe the players. By putting all critical targets in areas that the game can see or that require players to move thru areas the game can see, the difficulty of cheating goes up drastically.
Required actions
The game can require players to perform verification actions. AR devices are likely to have some biometric verification capability at least as good as with smartphones. The game can require actions that the surveillance cameras or other players can see and verify. A player observing an action by another player acts as a verification on both players.
Environment modelling
The game can recognize what the players see and verify it matches what other players and cameras have seen. This can include changing elements such as cars, people, or weather which can be difficult to spoof over time, if the game has independent data sources.
Behaviour modelling
The game can build models of how players act. This allows it to spot player-characteristic actions to support positive identification, and suspicious actions or patterns that trigger added verification by the system.
By combining these and other data the game should be able to verify player location and identity with high confidence. More importantly, the more factors the game uses, the more difficult it will be to spoof the system reliably. If your GPS says you are in a location where a surveillance camera sees nothing, the game will not be fooled by your GPS. If your location data suggests you can teleport or walk thru the walls, the system will not trust it. If you see a red car when other players see an empty parking lot, the system will not trust your video.
Q:
How to retain responsive navigation menu in twentyeleven child theme
I use TwentyTwelve as the parent theme for most of my site development.
However, when I change any styling or positioning associated with the menu, the responsive nature of the menu does not come into effect on small screens. Instead, all the menu items are displayed in a vertical list.
What I did:
Created a new child theme
Created header.php by copy-pasting contents from the original header.php file,
Created a style.css and added style overrides here.
Then I moved the site-navigation div into hgroup so that I could display the menu at the top, just after the logo.
Tried to add some margins to .main-navigation, and the menu does not fall back to the responsive layout on small screens.
This is how the menu looks on a small screen:
Here is the header.php code:
<?php
/**
* The Header for our theme.
*
* Displays all of the <head> section and everything up till <div id="main">
*
* @package WordPress
* @subpackage Twenty_Twelve
* @since Twenty Twelve 1.0
*/
?><!DOCTYPE html>
<!--[if IE 7]>
<html class="ie ie7" <?php language_attributes(); ?>>
<![endif]-->
<!--[if IE 8]>
<html class="ie ie8" <?php language_attributes(); ?>>
<![endif]-->
<!--[if !(IE 7) | !(IE 8) ]><!-->
<html <?php language_attributes(); ?>>
<!--<![endif]-->
<head>
<meta charset="<?php bloginfo( 'charset' ); ?>" />
<meta name="viewport" content="width=device-width" />
<title><?php wp_title( '|', true, 'right' ); ?></title>
<link rel="profile" href="http://gmpg.org/xfn/11" />
<link rel="pingback" href="<?php bloginfo( 'pingback_url' ); ?>" />
<?php // Loads HTML5 JavaScript file to add support for HTML5 elements in older IE versions. ?>
<!--[if lt IE 9]>
<script src="<?php echo get_template_directory_uri(); ?>/js/html5.js" type="text/javascript"></script>
<![endif]-->
<?php wp_head(); ?>
</head>
<body <?php body_class(); ?>>
<div id="page" class="hfeed site">
<header id="masthead" class="site-header" role="banner">
<hgroup>
<nav id="site-navigation" class="main-navigation" role="navigation">
<h3 class="menu-toggle"><?php _e( 'Menu', 'twentytwelve' ); ?></h3>
<a class="assistive-text" href="#content" title="<?php esc_attr_e( 'Skip to content', 'twentytwelve' ); ?>"><?php _e( 'Skip to content', 'twentytwelve' ); ?></a>
<?php wp_nav_menu( array( 'theme_location' => 'primary', 'menu_class' => 'nav-menu' ) ); ?>
</nav><!-- #site-navigation -->
<h1 class="site-title">
<a href="http://b2.mumacro.com/" title="Bennys Salon" rel="home">
<img class="fplogoimg" src="http://b2.mumacro.com/wp-content/uploads/2013/04/BennysLogoOnlyText.png"></img>
<h2 class="site-description"><?php bloginfo( 'description' ); ?></h2>
</a>
</h1>
</hgroup>
<?php $header_image = get_header_image();
if ( ! empty( $header_image ) ) : ?>
<a href="<?php echo esc_url( home_url( '/' ) ); ?>"><img src="<?php echo esc_url( $header_image ); ?>" class="header-image" width="<?php echo get_custom_header()->width; ?>" height="<?php echo get_custom_header()->height; ?>" alt="" /></a>
<?php endif; ?>
</header><!-- #masthead -->
<div id="main" class="wrapper">
Style.css:
/*
Theme Name: Bennys Salon
Description: Bennys Salon Child Theme
Author: MuMacro
Template: twentytwelve
(optional values you can add: Theme URI, Author URI, Version)
*/
@import url("../twentytwelve/style.css");
@import url(http://fonts.googleapis.com/css?family=Acme);
@import url(http://fonts.googleapis.com/css?family=PT+Sans:400,700italic);
/* Fonts */
.nav-menu{
font-family: Acme, serif;
color: #9C3F97;
text-transform:uppercase;
}
.entry-header .entry-title {
color: black;
font-size: 1.57143rem;
font-family: Acme, serif;
}
.entry-content p, .entry-summary p, .comment-content p, .mu_register p {
color: black;
line-height: 1.71429;
margin: 0 0 1.71429rem;
}
/*Navigation*/
.main-navigation li a {
border-bottom: 0 none;
color: #9C3F97;
font-size: 13px;
line-height: 3.69231;
text-transform: uppercase;
white-space: nowrap;
}
.main-navigation ul.nav-menu, .main-navigation div.nav-menu > ul {
border-bottom: 0 solid #EDEDED;
border-top: 0 solid #EDEDED;
display: inline-block !important;
margin-left: 28%;
margin-top: 0;
text-align: left;
width: 100%;
}
/* Header */
.site-header h2 {
color: #9C3F97;
font-size: 17px;
font-weight: normal;
line-height: 1.84615;
font-style: italic;
font-family: Acme;
}
.site-header h1 {
font-size: 1.85714rem;
line-height: 1.84615;
margin-left: -27px;
}
.site-header {
padding: 1% 0;
}
/*Navigation*/
.main-navigation {
margin-top: 0.714rem;
text-align: center;
}
/* Content Background */
.site{
background: rgba(255,255,255, .5); /* Works on all modern browsers */
}
body .site {
box-shadow: 37px 24px 51px rgba(205, 100, 100, 0.3);
margin-bottom: 3.42857rem;
margin-top: 3.42857rem;
padding: 0 2.85714rem;
}
/*Footer Credits*/
.site-info{
float:right;
}
/* Font Colors */
.site-header h2 {
color: #9C3F97;
font-size: 17px;
font-style: italic;
font-weight: normal;
line-height: 1.84615;
}
/* Logo Image */
.fplogoimg{
height:130px;
}
/* Site Description*/
.site-description {
margin-left: 5%;
position: relative;
}
/*Content Margins*/
.site-content {
margin: 0;
}
.widget-area {
margin: 0;
}
body .site {
box-shadow: 37px 24px 51px rgba(205, 100, 100, 0.3);
margin-bottom: 1.429rem;
margin-top: 1.429rem;
padding: 0 2.85714rem;
}
N.B. I am not good at HTML/CSS. I just find ways to hack a solution together.
Kindly help.
A:
Try to edit the line "display:inline-block !important;" to "display:none;" and keep the rest as it is.
.main-navigation ul.nav-menu, .main-navigation div.nav-menu > ul {
border-bottom: 0 solid #EDEDED;
border-top: 0 solid #EDEDED;
display: none;
margin-left: 28%;
margin-top: 0;
text-align: left;
width: 100%;
}
Q:
Multi-column table sort with custom ordering for each column
I have a multi-column table of data, each row unique, and I want to know how to sort it based on multiple columns. For sorting alphabetically, a solution is already described here. However, instead of alphabetical sorting, I need to sort each column based on a custom ordering stored in another list. For example, if my table is
mytable = [
('A1', 'B1', 'C1'),
('A1', 'B2', 'C2'),
('A2', 'B2', 'C1'),
('A2', 'B2', 'C2')
]
I might want the first column to be ordered ['A2','A1'], the second column to be ordered ['B1','B2'], and the third column to be ordered ['C2','C1']. The proper result would be
mytable = [
('A2', 'B2', 'C2'),
('A2', 'B2', 'C1'),
('A1', 'B1', 'C1'),
('A1', 'B2', 'C2')
]
A:
This will do what you're looking for:
orderings = (
('A2', 'A1'),
('B1', 'B2'),
('C2', 'C1')
)
orders = [dict([(v, i) for i, v in enumerate(o)]) for o in orderings]
mytable.sort(key=lambda r: tuple(o[c] for c, o in zip(r, orders)))
In practice, the columns might not all be sorted, or might be sorted in a priority other than left-right order. That could be solved by attaching an index to each and adapting the algorithm accordingly.
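As a quick sanity check, the two lines above reproduce the expected output from the question (using a dict comprehension, which is equivalent to the `dict([...])` form in the answer):

```python
mytable = [
    ('A1', 'B1', 'C1'),
    ('A1', 'B2', 'C2'),
    ('A2', 'B2', 'C1'),
    ('A2', 'B2', 'C2'),
]

orderings = (
    ('A2', 'A1'),
    ('B1', 'B2'),
    ('C2', 'C1'),
)

# Map each value to its rank within its column's custom ordering
orders = [{v: i for i, v in enumerate(o)} for o in orderings]

# Sort rows by the tuple of per-column ranks (leftmost column wins)
mytable.sort(key=lambda r: tuple(o[c] for c, o in zip(r, orders)))

print(mytable)
# [('A2', 'B2', 'C2'), ('A2', 'B2', 'C1'), ('A1', 'B1', 'C1'), ('A1', 'B2', 'C2')]
```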
Q:
Webflux, How to intercept a request and add a new header
Using a WebFlux filter, I am trying to intercept requests and, if the request is coming from a certain URI, add a new Authorization header.
The filter code is simple and straightforward
@Component
@Order(Ordered.HIGHEST_PRECEDENCE)
public class AuthorizationFilter implements WebFilter {
@Override
public Mono<Void> filter(ServerWebExchange exchange, WebFilterChain chain) {
return chain.filter(Optional.of(exchange)
.filter(serverWebExchange -> serverWebExchange.getRequest().getURI().getPath().endsWith("/callback"))
.map(serverWebExchange -> addNewHeader(serverWebExchange))
.orElse(exchange));
}
private ServerWebExchange addNewHeader(ServerWebExchange serverWebExchange) {
String authHeader=serverWebExchange.getRequest().getQueryParams().get("state").get(0);
if (authHeader == null) {
throw new BadRequestException("State not complete (access_token missing) for //callback");
}
try {
serverWebExchange.getRequest().getHeaders().setBearerAuth(authHeader);
}catch (Throwable t){
t.printStackTrace();
}
return serverWebExchange;
}
}
But it throws an exception
java.lang.UnsupportedOperationException
at org.springframework.http.ReadOnlyHttpHeaders.set(ReadOnlyHttpHeaders.java:99)
at org.springframework.http.HttpHeaders.setBearerAuth(HttpHeaders.java:774)
It seems the header map is read-only. How can I overcome this issue and add the new header?
A:
You can mutate the ServerWebExchange and its ServerHttpRequest with their mutate() methods which returns a 'Builder' for each of them.
Example Java:
@Component
public class AuthorizationFilter implements WebFilter {
@Override
public Mono<Void> filter(ServerWebExchange exchange, WebFilterChain chain) {
ServerHttpRequest mutatedRequest = exchange.getRequest().mutate().header(HttpHeaders.AUTHORIZATION, "Bearer " + authHeader).build();
ServerWebExchange mutatedExchange = exchange.mutate().request(mutatedRequest).build();
return chain.filter(mutatedExchange);
}
}
Example Kotlin:
@Component
class AuthorizationFilter : WebFilter {
override fun filter(exchange: ServerWebExchange, chain: WebFilterChain): Mono<Void> {
val mutatedRequest = exchange.request.mutate().header(HttpHeaders.AUTHORIZATION, "Bearer $authHeader").build()
val mutatedExchange = exchange.mutate().request(mutatedRequest).build()
return chain.filter(mutatedExchange)
}
}
Q:
External command not running from VBScript
I'm trying to execute an external program with some variables when a certain condition is met. As far as I can tell, the command isn't attempting to run. I've tried just using notepad, or just the opcmon command itself, which should generate a usage message.
The only output I get is from the Echo, and that looks formatted properly. E.g.
Microsoft (R) Windows Script Host Version 5.812
Copyright (C) Microsoft Corporation. All rights reserved.
opcmon.exe "TEST-Goober"=151 -object "C:\Tools"
' Script Name: FileCount.vbs
' Purpose: This script will send a message to OM with the number
' of files which exist in a given directory.
' Usage: cscript.exe FileCount.vbs [oMPolicyName] [pathToFiles]
' [oMPolicyName] is the name of the HPOM Policy
' [pathToFiles] is Local or UNC Path
Option Explicit
On Error Resume Next
Dim lstArgs, policy, path, fso, objDir, objFiles, strCommand, hr
Set WshShell = CreateObject("WScript.Shell")
Set lstArgs = WScript.Arguments
If lstArgs.Count = 2 Then
policy = Trim(lstArgs(0))
path = Trim(lstArgs(1))
Else
WScript.Echo "Usage: cscript.exe filecount.vbs [oMPolicyName] [pathToFiles]" &vbCrLf &"[oMPolicyName] HPOM Policy name" & vbCrLf &"[pathToFiles] Local or UNC Path"
WScript.Quit(1)
End If
Set fso = WScript.CreateObject("Scripting.FileSystemObject")
If fso.FolderExists(path) Then
Set objDir = fso.GetFolder(path)
If (IsEmpty(objDir) = True) Then
WScript.Echo "OBJECT NOT INITIALIZED"
WScript.Quit(1)
End If
Set objFiles = objDir.Files
strCommand = "opcmon.exe """ & policy & """=" & objFiles.Count & " -object """ & path & """"
WScript.Echo strCommand
Call WshShell.Run(strCommand, 1, True)
WScript.Quit(0)
Else
WScript.Echo("FOLDER NOT FOUND")
WScript.Quit(1)
End If
A:
First step to any kind of VBScript debugging: remove On Error Resume Next. Or rather, NEVER use On Error Resume Next in the global scope. EVER!
After removing that statement you'll immediately see what's wrong, because you'll get the following error:
script.vbs(6, 1) Microsoft VBScript runtime error: Variable is undefined: 'WshShell'
The Option Explicit statement makes variable declarations mandatory. However, you didn't declare WshShell, so the Set WshShell = ... statement fails, but because you also have On Error Resume Next the error is suppressed and the script continues. When the execution reaches the Call WshShell.Run(...) statement, that too fails (because there's no object to call a Run method from), but again the error is suppressed. That's why you see the Echo output, but not the actual command being executed.
Remove On Error Resume Next and add WshShell to your Dim statement, and the problem will disappear.
Q:
Docker fedora hbase JAVA_HOME issue
My dockerfile on fedora 22
FROM java:latest
ENV HBASE_VERSION=1.1.0.1
RUN groupadd -r hbase && useradd -m -r -g hbase hbase
USER hbase
ENV HOME=/home/hbase
# Download'n extract hbase
RUN cd /home/hbase && \
wget -O - -q \
http://apache.mesi.com.ar/hbase/${HBASE_VERSION}/hbase-${HBASE_VERSION}-bin.tar.gz \
| tar --strip-components=1 -zxf -
# Upload local configuration
ADD ./conf/ /home/hbase/conf/
USER root
RUN chown -R hbase:hbase /home/hbase/conf
USER hbase
# Prepare data volumes
RUN mkdir /home/hbase/data
RUN mkdir /home/hbase/logs
VOLUME /home/hbase/data
VOLUME /home/hbase/logs
# zookeeper
EXPOSE 2181
# HBase Master API port
EXPOSE 60000
# HBase Master Web UI
EXPOSE 60010
# Regionserver API port
EXPOSE 60020
# HBase Regionserver web UI
EXPOSE 60030
WORKDIR /home/hbase
CMD /home/hbase/bin/hbase master start
As I understand it, when I set "FROM java:latest" my current Dockerfile overlays on that one, so JAVA_HOME should be set as it is in java:latest? Am I right? This Dockerfile builds, but when I "docker run" the image, it shows a "JAVA_HOME not found" error. How can I properly set it up?
A:
Use the ENV directive, something like ENV JAVA_HOME /abc/def. See the docs: https://docs.docker.com/reference/builder/#env
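For example, near the top of the Dockerfile above (the path shown is illustrative — check where your base image actually installs the JDK, e.g. with `docker run java:latest sh -c 'echo $JAVA_HOME'`):

```dockerfile
FROM java:latest

# Illustrative path -- point this at the JDK location inside the base image
ENV JAVA_HOME /usr/lib/jvm/default-java
ENV PATH $JAVA_HOME/bin:$PATH
```

Because ENV values persist into the running container, HBase's scripts will then see JAVA_HOME at `docker run` time.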
Q:
c++ ternary operator conversion understanding
I cannot understand this ternary operator's conversion logic (here is an example):
#include <iostream>
#include <typeinfo>
#include <unistd.h>
#include <cxxabi.h>
#include <climits>
template<typename T>
struct singletime
{
private:
T value;
public:
T& operator()() {return this->value;}
operator const T& () const {return value;}
unsigned char flag_needed_for_all_types;
};
static void getvalue1 (uint64_t value, const char *call)
{
std::cout << call << ": \t" << value << std::endl << std::endl;
}
#define getvalue(x, str) \
std::cout << typeid(x).name() << std::endl; \
getvalue1(x, str);
int main (int argc, char *argv[])
{
bool flag = true;
singletime<uint64_t> singletime_64;
singletime_64() = INT_MAX+1lu;
uint64_t value_64 = singletime_64;
getvalue (flag ? singletime_64 : 0, "Ternary with singletime, > INT_MAX");
getvalue (singletime_64, "singletime w/o ternary, > INT_MAX");
getvalue (flag ? value_64 : 0, "Ternary with uint64_t, > INT_MAX");
getvalue (value_64, "uint64_t w/o ternary, > INT_MAX");
singletime_64() = INT_MAX;
uint64_t value_64_l = singletime_64;
getvalue (flag ? singletime_64 : 0, "Ternary with singletime, <= INT_MAX");
getvalue (singletime_64, "singletime w/o ternary, <= INT_MAX");
getvalue (flag ? value_64_l : 0, "Ternary with uint64_t, <= INT_MAX");
getvalue (value_64_l, "uint64_t w/o ternary, <= INT_MAX");
return 0;
}
I have a template class singletime<T>, which is a wrapper for any type, used for cases not related to this question, and it has a conversion operator to T. The issue is when singletime<uint64_t> is used in a ternary operator expression.
This is the problematic line:
getvalue (flag ? singletime_64 : 0, "Ternary with singletime, > INT_MAX");
The 64-bit value is converted to int and if the value is above INT_MAX, it becomes incorrect.
The example prints some usage types of the ternary operator - with the resulting type of the expression and resulting value.
Here is the output of the example:
int
Ternary with singletime, > INT_MAX: 18446744071562067968
singletime<unsigned long>
singletime w/o ternary, > INT_MAX: 2147483648
unsigned long
Ternary with uint64_t, > INT_MAX: 2147483648
unsigned long
uint64_t w/o ternary, > INT_MAX: 2147483648
int
Ternary with singletime, <= INT_MAX: 2147483647
singletime<unsigned long>
singletime w/o ternary, <= INT_MAX: 2147483647
unsigned long
Ternary with uint64_t, <= INT_MAX: 2147483647
unsigned long
uint64_t w/o ternary, <= INT_MAX: 2147483647
The only problem is when the ternary operator is used with singletime<uint64_t> - it gets the value 18446744071562067968
As I understand it, the compiler tries to convert the different types to one type.
As there is a conversion operator from singletime<uint64_t> to uint64_t, maybe it uses it, but after that I don't understand why it converts both values to int instead of uint64_t? In the examples where uint64_t is used instead of singletime<uint64_t>, the int is converted to uint64_t and no value is lost.
In the case of singletime<uint64_t> and int, there is also no compiler warning about a cast to a smaller type and potential data loss.
Tried with gcc 4.8.2 and gcc 5.2.0
A:
From standard, 5.16.
if the second and third operand have different types, and either has
(possibly cv-qualified) class type, an attempt is made to convert each
of those operands to the type of the other. The process for determining whether an operand expression E1 of type T1 can be converted to
match an operand expression E2 of type T2 is defined as follows:
If E2 is an rvalue, or if the conversion above cannot be done:
Otherwise (i.e., if E1 or E2 has a nonclass type, or if they both have
class types but the underlying classes are not either the same or one
a base class of the other): E1 can be converted to match E2 if E1 can
be implicitly converted to the type that expression E2 would have if
E2 were converted to an rvalue (or the type it has, if E2 is an
rvalue).
If the second and third operand do not have the same type, and either
has (possibly cv-qualified) class type, overload resolution is used to
determine the conversions (if any) to be applied to the operands
(13.3.1.2, 13.6).
So, here, 0 is an rvalue and it has type int. The compiler will try to convert the first argument to int, and it can do that because of your conversion operator (a user-defined conversion to uint64_t followed by a standard conversion to int).
Q:
Use a generator to create a list of dictionaries with a set of labels
Say I have a function get_cell_value(row, column) that returns the value of a spreadsheet cell. I have a set of labels that correspond to the columns. I want to build a list of dictionaries, the keys of which are the column labels. Here's what I have so far:
def _get_rows(self, start_key=1):
# List holding lists representing rows
out_data = []
# Get row count
row_count = num_rows()
column_count = num_columns()
def get_label_value(row, column):
"""Generator for getting the cell value at row, column of an Excel sheet
"""
labels = {
0: 'label_1',
1: 'label_2',
2: 'label_3',
}
yield (labels[column], get_cell_value(row, column))
return {key: value for (key, value) in get_label_value(xrange(start_key, row_count), xrange(column_count))}
My stack trace ends in an error along the lines of:
get_label_value
yield (labels[column], get_cell_value(row, column))
KeyError: xrange(31)
I'm obviously not understanding how the generator is supposed to work. Can someone tell me what I'm doing wrong? Thanks!
Edit:
I think further clarification is needed and I see an error in my logic concerning what I want to do. My expected result is a list of dictionaries. The keys in the dictionaries are the column labels and the values are the cell values at the (row, column), like so:
[
{'label_1': value_1,
'label_2': value_2,
'label_3': value_3,
},
{
'label_1': another_value_1,
...
}
]
With that in mind, the return statement in my function should then be this:
return [{key: value for (key, value) in get_label_value(xrange(start_key, row_count), xrange(column_count))}]
A:
You're trying to pass an entire xrange object to the labels dict right now, which is why you're seeing that exception. What you actually want to do is iterate over the two xrange objects you're passing to get_label_value, so that you can build your desired list of dicts:
def _get_rows(self, start_key=1):
# List holding lists representing rows
out_data = []
# Get row count
row_count = num_rows()
column_count = num_columns()
def get_label_value(rows, columns):
"""Generator for getting the cell value at row, column of an Excel sheet
"""
labels = {
0: 'label_1',
1: 'label_2',
2: 'label_3',
}
# Yield a dict for each row in the table. The dict contains all the
# cells for a given row in the table, where the keys are column labels,
# each mapped to a given cell in the row.
for row in rows:
yield {labels[column] : get_cell_value(row, column) for column in columns}
return [cell_dict for cell_dict in get_label_value(xrange(start_key, row_count), xrange(column_count))]
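Stripped of the spreadsheet dependency, the corrected pattern can be sketched as a runnable example (get_cell_value is a stub here, since the real accessor isn't shown):

```python
def get_cell_value(row, column):
    # Stub standing in for the real spreadsheet accessor.
    return row * 10 + column

labels = {0: 'label_1', 1: 'label_2', 2: 'label_3'}

def get_label_value(rows, columns):
    # Yield one dict per row, keyed by column label.
    for row in rows:
        yield {labels[col]: get_cell_value(row, col) for col in columns}

rows = list(get_label_value(range(1, 3), range(3)))
# [{'label_1': 10, 'label_2': 11, 'label_3': 12},
#  {'label_1': 20, 'label_2': 21, 'label_3': 22}]
```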
Q:
Please ban the [bounty] tag
There are currently 7 questions tagged with this ridiculous tag. Because as if having your question featured on the "Featured" tab and having a blue bubble appear on the title isn't enough to tell anyone there's a bounty on this question, you just have to give it this tag...
Two of them are meta questions that have been migrated, the rest except one have or have previously had bounties placed on them. The only plausibly valid use of this is for this question, and even then it's a bit of a stretch.
A:
ok, it was already gone by the time I checked -- but I went ahead and ran destroy on it anyway so all traces of it were eradicated.
Q:
google-analytics - Missing Transactions
Since our upgrade to GA universal we have been missing some of our transactions. About 5-10 a day, which accounts to < 5% of all transactions.
Below is the code that is on our confirmation page, wrapped in a document-ready function. On our review order page, I have a GA event that tracks the "Place Order" button click, and we are tracking 100% of these events. The Checkoutcomplete event fares a little better than our transaction counts.
Meaning if we have 100 place-order click events showing in GA, I would see 95 transactions and 96 Checkoutcomplete events.
It's possible there are other forces at play here that have not exposed themselves yet. Testing with large orders and in our dev environment works every time, of course. I've tried wrapping the entire GA code in a try/catch with logging, which resulted in no errors being captured.
Has anyone else experienced issues like this with missing revenue? Suggestions and comments welcome.
$.each(cartItems, function (key, value) {
ga('ec:addProduct', {
'id': this.StyleNumber.toUpperCase(), // Product ID
'name': this.StyleNumber.toUpperCase(), // Product name. Required.
'sku': this.SkuNumber, // SKU/code.
'brand': this.Brand, // Category or variation.
'price': this.Price, // Unit price.
'quantity': this.Qty // Quantity.
});
});
ga('ec:setAction', 'purchase', { // Transaction details are provided in an actionFieldObject.
'id': invoiceNumber, // (Required) Transaction id (string).
'affiliation': 'COS', // Affiliation (string).
'revenue': amount, // Revenue (currency).
'tax': taxAmount, // Tax (currency).
'shipping': shipAmount, // Shipping (currency).
'coupon': coupon // Transaction coupon (string).
});
ga('send', 'event', 'Checkout', 'Checkoutcomplete');
A:
Turns out this is a limit in GA: you can only send about 80KB per call, and this site is a B2B store with very large carts.
Google Analytics error in ga("send", "pageview") on certain pages
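If the per-call size limit is the culprit, one rough check is to measure the serialized cart before sending it. This Python sketch is only illustrative (the field names and the helper are hypothetical, and a real hit also carries tracker metadata):

```python
import json

def payload_bytes(cart_items):
    # Lower-bound estimate: size of the product data alone, serialized.
    return len(json.dumps(cart_items).encode("utf-8"))

item = {"id": "STYLE-1", "sku": "123", "brand": "ACME",
        "price": 9.99, "quantity": 2}
print(payload_bytes([item] * 500))  # a 500-line B2B cart grows quickly
```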
Q:
Is there a JRuby-Rack Sinatra Warbler Project archetype?
Is there a project archetype (or whatever the ruby community calls it) for a jruby + rack + Sinatra project that creates a WAR deployment file with all required dependencies all ready to go?
What I want is the equivalent to "rails appname" that creates a ready to go project with ant/rake scripts and a basic directory hierarchy all ready to go.
Does such a beast exist?
A:
I found this which shows how rack can be used with sinatra
Q:
problems with declaring variables in classes c++
Lately I was working on a simple game and the game structure required me to declare many types of objects... and to make working with functions easier, I made a parent class for all of the other classes. this is a part of the entire code(simplified):
int q=500;
struct ship
{
int x,y;
bool dec=0;
};
struct enemysol : public ship
{
int life=100,y=0,x;
bool dec=0;
void declare()
{
dec=1;
x=10+rand()%(getmaxx()-20);
life=100;
y=0;
}
};
int next(ship main[]) //finding next undeclared sol
{
int i=1;
while(main[i].dec)
{
i++;
if(i==q)
return -1;
}
return i;
}
The problem is that the next function will return i even if enemysol.dec == 1.
This code worked before I declared ship, but the project would have become very confusing and large if I hadn't declared it.
A:
You use the wrong way to initialize the member variables of your enemysol class.
When you write:
int life=100,y=0,x;
bool dec=0;
you declare new member variables, which have the same name than the x, y and dec that you already have in ship. So everytime you use x, y or dec in your enemysol class, you don't refer to the ship variables as these are hidden.
The right way of doing it would be something like:
struct enemysol : public ship
{
int life; // define only additional member variables not already in ship
enemysol() // constructor
: life(100) // init own members here; inherited ship members can't go in this list
{
// assign the members inherited from ship in the body instead
// (or give ship a constructor of its own)
y = 0;
dec = false;
}
void declare()
{
dec=1;
x=10+rand()%(getmaxx()-20);
life=100;
y=0;
}
};
Q:
EJBCA 'Ubuntu quick start' fails to deploy on JBoss
I followed quick start steps described in EJBCA documentation (http://www.ejbca.org/docs/installation.html)
'ant deploy' eventually failed with the following error
...
...
set-paths-not-jboss7:
set-paths:
jee:deployServicesJBoss5:
jee:assert-runJBoss7:
[echo] Checking if JBoss 7 is up and running...
[exec] Result: 1
BUILD FAILED
/home/pa/ejbca-setup/ejbca_ce_6_0_3/build.xml:635: The following error occurred while executing this line:
/home/pa/ejbca-setup/ejbca_ce_6_0_3/bin/jboss.xml:380: The requested action requires that JBoss 7 is up and running.
A:
My problem was that JBoss wasn't starting. The documentation did point out that if 'ant deploy' fails, the most likely cause is JBoss not running.
Since JBoss did not show any error, I thought it was up; however, upon a second look I realized it was stuck in the 'starting' state. There were no errors in the log files. My Java version was 8; installing Java 7 fixed the problem. Be sure to recompile EJBCA after switching Java.
Q:
Expressing the Riemann Zeta function in terms of GCD and LCM
Is the following claim true: Let $\zeta(s)$ be the Riemann zeta function. I observed that as for large $n$, as $s$ increased,
$$
\frac{1}{n}\sum_{k = 1}^n\sum_{i = 1}^{k} \bigg(\frac{\gcd(k,i)}{\text{lcm}(k,i)}\bigg)^s \approx \zeta(s+1)
$$
or equivalently
$$
\frac{1}{n}\sum_{k = 1}^n\sum_{i = 1}^{k} \bigg(\frac{\gcd(k,i)^2}{ki}\bigg)^s \approx \zeta(s+1)
$$
A few values of $s$, LHS and the RHS are given below
$$(3,1.221,1.202)$$
$$(4,1.084,1.0823)$$
$$(5,1.0372,1.0369)$$
$$(6,1.01737,1.01734)$$
$$(7,1.00835,1.00834)$$
$$(9,1.00494,1.00494)$$
$$(19,1.0000009539,1.0000009539)$$
Note: This question was posted on MSE but did not receive the right answer there.
A:
Let me denote your LHS by $f(n,s)$. For fixed even $n$ I shall show that $f(n,s)-1\sim\zeta(s+1)-1$ as $s\to\infty$, that is,
$$\lim_{s\to\infty}\frac{f(n,s)-1}{\zeta(s+1)-1}=1.$$
This result nicely expresses your numerical observations, which show that the parts after the decimal point seem to be asymptotically the same.
On one hand, we have $\zeta(s+1)-1=2^{-s-1}+3^{-s-1}+\dots$. The terms after the second can be estimated from above by the integral $\int_2^\infty x^{-s-1}dx=\frac{2^{-s}}{s}$, so we see that $\zeta(s+1)-1\sim 2^{-s-1}$.
On the other hand, among pairs $(k,i)$ with $1\leq k\leq n,1\leq i\leq k$, the expression $\frac{\gcd(k,i)}{\operatorname{lcm}(k,i)}$ is equal to $1$ for exactly $n$ pairs $(k,k)$, and is equal to $2^{-1}$ for exactly $n/2$ pairs $(2k,k)$. All other terms, of which there are certainly fewer than $n^2$, are at most $3^{-1}$. Therefore we find
$$f(n,s)=\frac{1}{n}\left(n\cdot 1+\frac{n}{2}\cdot 2^{-s}+O(n^23^{-s})\right)=1+2^{-s-1}+o(2^{-s})$$
proving $f(n,s)-1\sim 2^{-s-1}$. It follows that $f(n,s)-1\sim\zeta(s+1)-1$, as we wanted.
Let me emphasize that in the above calculation it was crucial that $n$ was even. If $n$ is odd, then we instead only get $\frac{n-1}{2}$ pairs $(2k,k)$ and the asymptotics get slightly skewed - we then get $f(n,s)-1\sim\frac{n-1}{n}(\zeta(s+1)-1)$. For large $n$ the difference is however, pretty negligible.
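The leading-order behaviour in this answer is easy to check numerically. The sketch below (for even n, so the (2k, k) pairs contribute exactly 2^{-s-1}) verifies that f(n, s) - 1 stays within about 1% of 2^{-s-1}:

```python
from math import gcd

def f(n, s):
    # (1/n) * sum over 1 <= i <= k <= n of (gcd(k,i)^2 / (k*i))^s
    return sum((gcd(k, i) ** 2 / (k * i)) ** s
               for k in range(1, n + 1)
               for i in range(1, k + 1)) / n

excess = f(100, 15) - 1
print(excess / 2 ** -16)  # close to 1, as the asymptotic predicts
```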
A:
A variety of formulas of this type (in the sense of a relation between $\zeta(s)$ and a sum over gcd or lcm) has been derived by Titus Hilberdink and László Tóth in On the average value of the least common multiple of k positive integers (2016), see also On the distribution of the greatest common divisor by Diaconis and Erdȍs. I quote
$$\sum_{i,k=1}^n \big(\text{lcm}(k,i)\big)^s=\frac{\zeta(s+2)}{\zeta(2)}\frac{n^{2s+2}}{(s+1)^2}+{\cal O}(n^{2s+1}\log n),$$
$$\sum_{i,k=1}^n \big(\gcd(k,i)\big)^s=\left(\frac{2\zeta(s)}{\zeta(s+1)}-1\right)\frac{n^{s+1}}{s+1}+{\cal O}(n^{s}\log n),$$
$$\sum_{i_1,i_2,\ldots i_s=1}^n \gcd(i_1,i_2,\ldots i_s)=\frac{\zeta(s-1)}{\zeta(s)}n^s+{\cal O}(n^{s-1}),\;\;s\geq 4.$$
The earliest reference for such series is Ernest Cesàro,
Étude moyenne du plus grand commun diviseur de deux nombres (1885).
Q:
Reading Intention for exoskeletons motion estimation
I'm interested in exoskeletons and wearable rehabilitation robotics. I wonder how we can estimate/predict the intention of human body/part motion. I want to prevent the exoskeleton from interfering with human movements. Intention reading is the process of predicting how the movement will take place and how it will happen at the beginning of any movement.
There is an exoskeleton example (https://www.youtube.com/watch?v=BdoblvmTixA) which detects muscle activation with EMG and generates an artificial muscle contraction. But this is only an open-and-close action, and it begins only after the movement has started. The EMG approach also has a lot of disadvantages, like sliding probes and interference from other/crossing muscles. I want to estimate every motion: turns, twists, the amount of contraction. I'm open to suggestions or issues (the troubles you have experienced).
This MATLAB webinar about "Signal Processing and Machine Learning Techniques for Sensor Data Analytics" shows how to classify different actions, but it predicts the kind of motion only after the motion has completed. I need the motion information at the very start.
I want to know how I can estimate different motions at the beginning of a limb action. Which system (EMG, EEG, IMU, etc.) and processing technique would be better, or which combination should I use?
A:
You have many options: encoders at exo joints to provide absolute angles, IMU on limbs for estimating motions, force sensing shoe soles for force distribution, force sensors at joints, crutches usually used by pilots wearing rehab exo can also have those sensors, e.g. at the bottom of the crutch, at hand handles, etc.
Q:
HTML + Javascript form submission question
I have authored a html + javascript form which displays a set of images along with a submit button. The user can select the images he or she is interested in and when the user clicks the image, I change the image border to indicate image selection.
And when the user clicks on the submit button, I would like my server side script to receive the image uris associated with the selected images for further processing.
I am a JS newbie. I would like to know how best to do this. Will I need a global javascript array which gets populated with the selected image URIs? And this array can then be submitted as part of HTTP POST?
Any tips/pointers will be helpful.
A:
If I were you, I’d make sure it works without JavaScript too. E.g.
<form action="" method="POST">
<label for="image_1">
<img src="image_1_uri" alt="Image 1" />
<input type="checkbox" name="images" id="image_1" value="image_1_uri" />
</label>
<label for="image_2">
<img src="image_2_uri" alt="Image 2" />
<input type="checkbox" name="images" id="image_2" value="image_2_uri" />
</label>
<input type="submit" />
</form>
Then adapt your border-adding JavaScript to work when the label is clicked on, rather than the image.
Hide the checkboxes via CSS if you’re not keen on them being there. (You’ll need to add a class to the checkboxes to do this in older versions of Internet Explorer.)
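On the server side, checkboxes that share the name images arrive as repeated keys in the POST body, and most frameworks collect them into a list for you. A plain-Python illustration of what the raw body decodes to:

```python
from urllib.parse import parse_qs

# What the browser sends when both boxes are ticked:
body = "images=image_1_uri&images=image_2_uri"

selected = parse_qs(body)["images"]
print(selected)  # ['image_1_uri', 'image_2_uri']
```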
Q:
wifi not working after upgrade from 12.04 to 14.04
Wired connection working fine; just no WiFi.
$ lspci -v | grep -iA7 network
0b:00.0 Network controller: Broadcom Corporation BCM4312 802.11b/g LP-PHY (rev 01)
Subsystem: Dell Wireless 1395 WLAN Mini-Card
Flags: bus master, fast devsel, latency 0, IRQ 17
Memory at fe7fc000 (64-bit, non-prefetchable) [size=16K]
Capabilities: <access denied>
Kernel driver in use: b43-pci-bridge
$ lsmod | grep b43-pci-bridge
$ lsmod | grep b43
b43 387371 0
bcma 52096 1 b43
mac80211 626489 1 b43
cfg80211 484040 2 b43,mac80211
ssb 62379 1 b43
$ nm-tool
NetworkManager Tool
State: connected (global)
- Device: eth0 [Wired connection 1] -------------------------------------------
Type: Wired
Driver: sky2
State: connected
Default: yes
HW Address: 00:21:9B:ED:A6:84
Capabilities:
Carrier Detect: yes
Speed: 100 Mb/s
Wired Properties
Carrier: on
IPv4 Settings:
Address: 192.168.1.8
Prefix: 24 (255.255.255.0)
Gateway: 192.168.1.1
DNS: 192.168.1.1
A:
Please do:
sudo apt-get purge b43-fwcutter firmware-b43-installer
sudo apt-get update
sudo apt-get install linux-firmware-nonfree
reboot
Q:
Opposite Universe
Let us assume we live in a shared Universe which can be fully described mathematically, and that all mathematical variables have an opposite. 2 is opposite to -2, and true is opposite to false. When I was a child we played "opposite day", where we interacted as usual but everything had to be expressed and interpreted as its opposite. If everything is opposite, how will the world look? Will everything ultimately be the same, because the functions of event handlers are reversed along with their inputs?
A:
In light of Robert's comment, I'm extending this answer in one respect. The thought experiment can be understood in two ways.
Way 1: Linguistic opposites: We are merely changing the meaning of words. "2" means -2. "-2" means 2. The utterance "true" now means false and vice versa. This particular thought experiment is completely uninteresting, because it merely means that the language of this opposite world is somehow coincidentally reversed for all descriptive terms.
Way 2: Metaphysical opposites: Here, the claim is that what is true in our world is false in this world, and what is false in this world is true in that world. In other words, it's not that they say "2" when they mean -2, it's that they use language in the same way but the objects that populate their world are opposite.
But this is a self-defeating thought experiment, because when we move past trivial elements (like positional functions), not everything can admit of opposites in the way you're describing and that's what basically kills the thought experiment. Sure 2 and -2 can be opposites, but true and false differ not just as poles but as functions that relate to reality. E.g., I am bunny and I am a tarantula are both false. If you reverse the meanings of true and false, then they both become true which is self-contradictory.
What this does show us, however, is that the units for position functions are arbitrary. (We can put the origin (0,0,0,...) wherever we want and just move everything from there). Mass functions and many other types of evaluative functions are not. To give an example:
Is it true that I am wearing a shirt and it is white?
Is it true that I am wearing a shirt?
Is it true that I am wearing a white shirt?
Consider if I am wearing a blue shirt under normal and opposite evaluation. Under normal evaluation: false, true, false. Under opposite evaluation? It's not at all clear. Do statements containing and become true if either term is true in opposite evaluation? Do we apply the opposite after fully evaluating? (i.e. if we fully evaluate and then take the opposite, we get true, false, true; if we evaluate each piece and then take the opposite without changing the logical operators: false, false, true)
Q:
Dynamically generated display object as a gradient mask
I just want to create a gradient mask, generated dynamically through AS3 code, for an object (a pentagon in the following example), and I simply cannot! [The SWF file of what I've tried.]
The code works just fine except that it ignores the alpha gradient of the dynamically created Sprite when it is used as a gradient mask (it's treated as a solid mask), while the exact same code honors the "on-stage" created object (drawn through the authoring UI) as a gradient mask!
I think the runtime just isn't caching the object as a bitmap, hence the gradient is ignored! However, I'm stuck at making that happen, so please shed some light on this; any help is greatly appreciated in advance :)
var X:Number = 100, Y:Number = 35, W:Number = 350, H:Number = 150;
var mat:Matrix = new Matrix();
mat.createGradientBox(W, H, 0, X, Y);
var gradientMask:Sprite = new Sprite();
gradientMask.graphics.beginGradientFill(GradientType.LINEAR, [0, 0], [1, 0], [0, 255], mat);
gradientMask.graphics.drawRect(X, Y, W, H);
gradientMask.graphics.endFill();
pentagon1.cacheAsBitmap = true;
pentagon2.cacheAsBitmap = true;
onStageGradient.cacheAsBitmap = true;
gradientMask.cacheAsBitmap = true;
pentagon1.mask = gradientMask;
pentagon2.mask = onStageGradient;
stage.addEventListener(Event.ENTER_FRAME, _onEnterFrame);
function _onEnterFrame(e:Event):void {
pentagon1.x += 7;
pentagon2.x += 7;
if (pentagon1.x > 500) {
pentagon1.x = 0;
pentagon2.x = 0;
}
}
A:
You have to add the gradientMask to the display list to have it in effect.
pentagon1.parent.addChild(gradientMask);
Q:
How to import ViewPagerIndicator in android studio?
I want to use ViewPagerIndicator in my app, so I need its library. I found these 2 links for downloading it, Link1 and Link2.
But I don't know how to add it to my project.
this is my build gradle file:
apply plugin: 'com.android.application'
android {
repositories {
maven { url 'http://repo1.maven.org/maven2' }
maven { url "http://dl.bintray.com/populov/maven" }
jcenter()
mavenCentral()
}
compileSdkVersion 23
buildToolsVersion "23.0.1"
defaultConfig {
applicationId "standup.maxsoft.com.standup"
minSdkVersion 13
targetSdkVersion 23
versionCode 1
versionName "1.0"
}
buildTypes {
release {
minifyEnabled false
proguardFiles getDefaultProguardFile('proguard-android.txt'), 'proguard-rules.pro'
}
}
}
dependencies {
compile fileTree(dir: 'libs', include: ['*.jar'])
testCompile 'junit:junit:4.12'
compile 'com.viewpagerindicator:library:2.4.1@aar'
compile 'com.android.support:appcompat-v7:23.1.1'
compile 'com.android.support:support-v4:23.1.1'
compile 'com.android.support:design:23.1.1'
compile 'com.google.android.gms:play-services:8.3.0'
compile 'com.viewpagerindicator:library:2.4.1@aar'
compile 'com.google.android.gms:play-services-ads:8.3.0'
compile 'com.google.android.gms:play-services-identity:8.3.0'
compile 'com.google.android.gms:play-services-gcm:8.3.0'
}
I just copied the 2 files into the libs folder. Would you please help me step by step? Thanks
A:
I do it like this:
Download the library from here
Copy the file in libs folder of Android studio
Add this line to your project's build.gradle: compile project(':library-2.4.1')
Q:
Populating a field in wpf
I have a database of finance info and I want to check that supplied totals 'add up'. I have added fields to the database for the check data and am using data binding via the Entity Framework. How do I populate these 'check' fields while the user is adding data to the record?
Eg The form contains SubtotalA, SubtotalB and TotalAB textboxes. The database has these fields plus CheckTotalAB. The user keys in SubtotalA, SubtotalB and TotalAB from a hard copy form. I want to populate CheckTotalAB with the sum of SubtotalA and SubtotalB to compare against the provided TotalAB.
I first tried getting the data from the textboxes. Unfortunately txtSubtotalA.Value doesn't exist.
I then thought I'd have to go to the entity itself. Unfortunately I don't know how to access the current record/entity being entered and if I did, how would I access the value of fields that haven't been saved yet.
Can someone point me in the right direction?
tia
mcalex
A:
Accessing the entity was the answer. This was accomplished through an entity property in the data context that I set equal to a class member I added to my form class.
After that, getting to the entity's fields, including my calculated fields, was just a case of getting/setting the member's properties.
Q:
How intelligent would a real life fairy be?
Because fairies are rather small, I'm assuming they wouldn't be able to be all that intelligent. What I'm wondering, however, is just how intelligent? Let's say that they're about the size of a hummingbird. Logically, how intelligent can a hummingbird-sized being be? Assume no magic.
A:
Size doesn't correlate directly to intelligence. Crows, for example, are among the smartest creatures in the world apart from humans, having intelligence approximating that of a seven year old child (link).
Little is actually understood about the advantages associated with brain size, but some studies show that larger brains improve performance under pressure (link) and neurogenesis (link). Better neurogenesis means that adults are still able to learn quickly, it means that individuals are capable of specializing in a more broad and various set of behaviors (as opposed to being limited to only being good at a few things), and it means that we're able to retain more information in memory.
That doesn't mean that a small creature couldn't compensate for short memory and poor potential by simply being very clever; it's conceivable that a minimal working set of cognitive functions, specialized in communication and rapid inference, would enable a small creature to quickly generate reasonable assumptions on moderate sets of data, and then selectively discard the data in favor of keeping the result. Such a creature could closely emulate a human, while simply seeming naive, and appearing to have little regard for detail. While we might remember a significant event because it seems important, your fairies might only draw and remember impulsive conclusions based on the event (i.e. without remembering how they learned the information, a fairy might remember: "person X is untrustworthy, and person Y has lots of treasure in his house"; or perhaps more broadly, they might forget the people altogether and remember, "persons wearing red are untrustworthy, and there's treasure over there"), and then discard most information about the event itself, causing them to seem careless or lacking in mature values.
Q:
Does the English word "have" have a closer relationship with German "haben" or French "avoir"?
I see that the English auxiliary verb have is very similar to Romance counterparts like Portuguese haver, Spanish haber, or Italian avere, and it appears to me that they have some historical relationship. Adding to that guess, English vocabulary contains a large portion of Old French (avoir).
On the other hand, given the Germanic origin of English and my belief that such a basic verb can't have a foreign origin, I also think that have has a Germanic history, as seen in its old conjugation, which is closer to German (thou hast / du hast, etc.; though German uses werden more for some auxiliary roles).
So my question is: which one is closer to English have, the Germanic origin or the Romance (Latin) origin?
A:
According to Online Etymology Dictionary the Old English origin of "have" was "habban" and that it originated from Proto-Germanic:
have (v.) Old English habban "to own, possess; be subject to,
experience," from Proto-Germanic *habejanan (source also of Old Norse
hafa, Old Saxon hebbjan, Old Frisian habba, German haben, Gothic haban
"to have"), from PIE root *kap- "to grasp." Not related to Latin
habere, despite similarity in form and sense; the Latin cognate is
capere "seize. Source
Note the sentence that starts "Not related to Latin habere", and its continuation.
So English "have" came from Old English "habban", which in turn came from the Proto-Germanic *habejanan, which came from Proto-Indo-European root *kap- "to grab". The PIE root *kap- being the origin of Old English "habban" is also attested in Wikipedia's Indo-European vocabulary article in the chart.
So English "have" doesn't at all come from a Romance language from what I've seen, as you rightly suspected. The Italian "avere" and Spanish "haber" do.
Italian avere
Spanish haber
Latin "habere" from the two sources I checked originated from a different PIE root *gʰabʰ- or *ghabh-, though how it came to be "habere", I have no idea.
The following explains the shifts of sounds from a PIE to Proto-Germanic some time in the first millennium BC, so you can see how it got from *kap to *habejanan.
Proto-Germanic Wikipedia article
So to answer your question, "have" is closer to German "haben" because there is no relation with "habere" or "avoir"; "habere" has a different PIE origin. Proto-Germanic *habejanan is the source of both German "haben" and English "have".
Q:
Creating a grid of images that connect with each other
I'm trying to make a grid of images that connect nicely with each other.
Here is my grid:
http://www.yannickluijten.be/test2
Not every image has the same height so this is the problem:
I want the 4th image (gray) to appear below the first image (green) and I don't want to work with 3 columns. How can I do this?
.img1 {
width: 300px;
height: 200px;
float: left;
background: green;
}
.img2 {
width: 300px;
height: 400px;
float: left;
background: blue;
}
.img3 {
width: 300px;
height: 300px;
float: left;
background: yellow;
}
.img4 {
width: 300px;
height: 400px;
float: left;
background: gray;
}
A:
With CSS floats, there's no way to control when variable-height floated elements should "wrap around" to the left edge, without causing unwanted vertical spacing to appear between some of them.
A mosaic plug-in like jQuery Masonry is geared for this sort of thing. Not sure if it lets you control which photo appears in which column, but it may work adequately.
You could use CSS columns, but it offers limited control over which photo should appear in which column, and it doesn't work in IE9 or earlier.
Q:
Django .only() causing maximum recursion depth exceeded error?
I was using Django's .only() construct to query only the columns I need.
ex.
Trainer.objects.filter(id__in=ids).only('id', 'lives')
Here is a quick example I wrote to reproduce this:
class BaseTrainer(models.Model):
mode = models.IntegerField(help_text="An integer")
def __init__(self, *args, **kwargs):
super(BaseTrainer, self).__init__(*args, **kwargs)
self._prev_mode = self.mode
class Trainer(BaseTrainer):
name = models.CharField(max_length=100, help_text="Name of the pokemon")
trainer_id = models.CharField(max_length=10, help_text="trainer id")
badges = models.IntegerField(help_text="Power of the pokemon (HP)")
lives = models.PositiveIntegerField(default=sys.maxsize)
unique_together = ("name", "trainer_id")
def __init__(self, *args, **kwargs):
super(Trainer, self).__init__(*args, **kwargs)
self._temp = self.lives
@classmethod
def test(cls):
t = Trainer.objects.only('trainer_id').all()
print(t)
I needed only the id and name fields, that's certain.
And what was the reason for the maximum recursion depth exceeded error?
A:
It turned out that the maximum recursion depth exceeded error was caused by inheriting from a model while overriding the constructor.
As you can see in the code,
self._prev_mode = self.mode
we try to access mode in the constructor of the super model class.
So, even if we don't need this field for our use, we still have to include this in .only() in every such call for this model.
But according to the docs, .only() should in the worst case just make another database query to fetch the field value, so why the recursion?
Well, note that this field was being accessed in the constructor of the parent model, and that was the catch. Each time, the value couldn't be read in the constructor and thus was fetched from the database.
That called the constructor again, and the recursion cycle continued until Python stopped it.
Anyway, fixed this by adding the mode in the .only() call too.
Trainer.objects.filter(id__in=ids).only('id', 'name', 'mode')
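The cycle can be reproduced without Django at all. In this pure-Python sketch, __getattr__ plays the role of Django's deferred-field refetch: each refetch re-runs the constructors, each constructor touches a field the refetch didn't load, and the two deferred fields chase each other until Python gives up:

```python
class BaseTrainer:
    def __init__(self, fields):
        self.__dict__["_fields"] = dict(fields)
        self._prev_mode = self.mode      # parent constructor reads 'mode'

    def __getattr__(self, name):
        fields = self.__dict__["_fields"]
        if name in fields:
            return fields[name]
        # Simulated deferred refetch: build a fresh instance that loads
        # *only* the missing field, which runs the constructors again.
        return type(self)({name: 0})._fields[name]

class Trainer(BaseTrainer):
    def __init__(self, fields):
        super().__init__(fields)
        self._temp = self.lives          # child constructor reads 'lives'

try:
    Trainer({"trainer_id": "t1"})        # 'mode' and 'lives' both deferred
except RecursionError:
    print("maximum recursion depth exceeded")

Trainer({"trainer_id": "t1", "mode": 1, "lives": 3})  # the fix: load them too
```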
Q:
OpenDaylight Flow statistics using RPC call
I'm trying to get statistics using the following RPC call and not via the default statistics-manager.
POST / restconf / operations / opendaylight - flow - statistics: get - all - flows - statistics - from - all - flow - tables {
"input": {
"node": "/opendaylight-inventory:nodes/opendaylight-inventory:node[opendaylight-inventory:id=\"openflow:1000\"]"
}
}
However, the response of this request is just transaction-id. While I can see the OpenFlow Flow Stat Request and Flow Stat Reply messages are exchanged between the controller and the switch, the operational datastore seems not to be updated as a result of calling the above RPC. I check the operational datastore using:
GET /restconf/operational/opendaylight-inventory:nodes/node/openflow:1000/table/0
My question is:
How can I get the flow statistics sent to the controller by the switch as a result of the above RPC (get-all-flows-statistics-from-all-flow-tables)? And why is the operational datastore is not updated?
Thanks! Michael.
A:
Using Boron, what you're trying to use is deprecated, hence you should use the following:
Install odl-openflowplugin-flow-services
Connect your switch
Send the following request:
POST /restconf/operations/opendaylight-direct-statistics:get-node-connector-statistics
Host: localhost:8181
Content-Type: application/json
Authorization: Basic YWRtaW46YWRtaW4=
{
"input":
{
"node" : "/opendaylight-inventory:nodes/opendaylight-inventory:node[opendaylight-inventory:id=\"openflow:187811733353539\"]" ,
"store-stats" : false
}
}
Set store-stats to true if you want to keep them in the datastore.
You can also get the stats only for a specified port, but while trying, it appears not to be working well
Add this to the above payload:
"node-connector-id" : "/opendaylight-inventory:nodes/opendaylight-inventory:node[opendaylight-inventory:id=\"openflow:187811733353539\"]/opendaylight-inventory:node-connector[opendaylight-inventory:id='openflow:187811733353539:LOCAL']",
Instead of LOCAL, specify the port you want.
Hope this helps,
Alexis
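For scripting the same call, the request body is plain JSON. A minimal Python sketch of building it (the node id is the example DPID from above; the URL and admin/admin credentials in the comment are the defaults this answer assumes):

```python
import json

NODE = ('/opendaylight-inventory:nodes/opendaylight-inventory:node'
        '[opendaylight-inventory:id="openflow:187811733353539"]')

def stats_body(store_stats=False):
    # Body for opendaylight-direct-statistics:get-node-connector-statistics
    return json.dumps({"input": {"node": NODE, "store-stats": store_stats}})

print(stats_body())
# POST this to http://localhost:8181/restconf/operations/
#   opendaylight-direct-statistics:get-node-connector-statistics
# with Content-Type: application/json and basic auth.
```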
Q:
How did "coscarse" come to mean "darse cuenta" ("to realize")?
Coscarse is a pronominal verb meaning "darse cuenta" ("to realize") or "concomerse" ("to fret"). According to the DRAE, its etymology comes from the Latin coxicare, from coxa, 'hip'.
Coxa, apart from "hip", can mean the "first segment of an arthropod's leg".
How, then, did this word come to be used as an equivalent of "se va a dar cuenta" ("he/she is going to realize")?
A:
I didn't know the term, so I offer this only as a possibility. (Coscarse is not a common word in Chile, in any of its senses.)
The onomatopoeia 'cosc' imitates the sound of a blow against a hard object. From it would derive (at least according to Corominas's etymology; I see the RAE thinks otherwise) several words related to the sound of blows: cuesco ("fruit stone", "fart"), coscurro ("hard crust of bread"), escoscar ("to shell nuts"), cosquillas ("tickling"; here I find it hard to see a connection... my apologies), and coscorrón ("a rap of the knuckles on the head", which we Chileans call coscacho).
These words are of Iberian origin. Probably others, those with the roots calc- and casc-, are also onomatopoeic, although they are much older; we inherited them from Greek and Latin.
What I see is that coscarse, "to realize something", may have arisen as an expressive play on being struck on the head by the new idea.
Q:
HTML button communicating with Node server + Socket.io
When a user clicks an HTML button (#new) I want to store their socket.id in an array (userQueue) on my Node server, but I'm having trouble figuring out how to do this. Do I need to set up a POST request, or is there a way through socket.io?
App.js (Server):
// App setup
var express = require('express'),
socket = require('socket.io'),
app = express(),
bodyParser = require("body-parser");
var server = app.listen(3000, function() {
console.log('listening to requests on port 3000');
});
var io = socket(server);
app.use(bodyParser.urlencoded({extended: true}));
app.use(express.static('public'));
// Room Logic
var userQueue = [];
io.on('connection', function(socket) {
console.log('made socket connection', socket.id);
socket.on('chat', function(data) {
io.sockets.emit('chat', data);
});
socket.on('typing', function(data) {
socket.broadcast.emit('typing', data);
});
});
Chat.js (client side):
// Make connection
var socket = io.connect('http://localhost:3000');
// Query DOM
var message = document.getElementById('message');
handle = document.getElementById('handle'),
btn = document.getElementById('send'),
btnNew = document.getElementById('new'),
output = document.getElementById('output'),
feedback = document.getElementById('feedback');
// Emit events
btn.addEventListener('click', function(){
socket.emit('chat', {
message: message.value,
handle: handle.value
});
});
message.addEventListener('keypress', function() {
socket.emit('typing', handle.value);
});
// Listen for events
socket.on('chat', function(data) {
feedback.innerHTML = ""
output.innerHTML += '<p><strong>' + data.handle + ':</strong>' + data.message + '</p>'
});
// Emit 'is typing'
socket.on('typing', function(data) {
feedback.innerHTML = '<p><em>' + data + ' is typing a message...</em></p>'
});
Index.html:
<!DOCTYPE html>
<html>
<head>
<title>WebSockets 101</title>
<script type="text/javascript" src="https://cdnjs.cloudflare.com/ajax/libs/socket.io/2.0.3/socket.io.js"></script>
<link rel="stylesheet" type="text/css" href="/styles.css">
</head>
<body>
<div id="mario chat">
<div id="chat-window">
<div id="output"></div>
<div id="feedback"></div>
</div>
<input id="handle" type="text" placeholder="Handle">
<input id="message" type="text" placeholder="Message">
<button id="send">Send</button>
<button id="new">New</button>
</div>
<script type="text/javascript" src="/chat.js"></script>
</body>
</html>
A:
I believe that a post request will probably work as well, but if you want to simply work with socket.io you can consider doing something similar to your chat event by adding this in your chat.js:
btnNew.addEventListener('click', function() {
socket.emit('new user', socket.id);
});
And on the server side, app.js:
socket.on('new user', function(id) {
userQueue.push(id);
});
And it should be stored in the array. Hope this helps!
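A variant worth considering (a sketch; handleNewUser and the duplicate check are my additions, not part of the original code): since the server already knows each connection's socket.id, the client doesn't have to send it at all. The server can read it off its own socket object, which also prevents a client from submitting someone else's id:

```javascript
// Server side (App.js): use the server's own socket.id rather than an
// id supplied by the client.
const userQueue = [];

function handleNewUser(socket) {
  // Avoid queueing the same connection twice.
  if (!userQueue.includes(socket.id)) {
    userQueue.push(socket.id);
  }
  return userQueue.length;
}

// Wiring it up inside io.on('connection', ...):
//   socket.on('new user', function () { handleNewUser(socket); });
// Client side (chat.js): the emit then carries no payload:
//   btnNew.addEventListener('click', function () { socket.emit('new user'); });
```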
Q:
Proving a relationship between least common multiples
Prove that $l.c.m.(ab,ad)=a[l.c.m.(b,d)]$.
my work so far:
I know $l.c.m.(ab,ad)=a^2bd/g.c.d.(ab,ad)$ and ∃ $x,y\in \mathbb{Z}$ | $g.c.d.(ab,ad)=abx+ady$
∴ we now have $l.c.m.(ab,ad)=abd/(bx+dy)$
So we know that ∃ $x,y\in \mathbb{Z}$ | $bx+dy=c$ for some $c\in \mathbb{Z}$. If $c=g.c.d.(b,d)$ we are done. But how do we know that there are $x$ and $y$'s such that $c=g.c.d.(b,d)$?
here are the relevant corollaries:
1) if $d=g.c.d.(a,b)$, then ∃ $x,y\in \mathbb{Z}$ | $ax+by=d$
2)In order that ∃ $x,y\in \mathbb{Z}$ | $ax+by=c$ it is necessary and sufficient that d|c, where $d=g.c.d.(a,b)$.
Thank you in advance.
A:
$ {\rm lcm}(ab,ad)\mid n\color{#c00}\iff ab,ad\mid n\iff\begin{align} a &\mid n\\ b,d &\mid n/a\end{align}$ $\color{#c00}\iff \begin{align} a &\mid n\\ {\rm lcm}(b,d) &\mid n/a\end{align} \iff a\,{\rm lcm}(b,d) \mid n $
Remark $\ $ Above we used $\ x,y\mid z\color{#c00}\iff {\rm lcm}(x,y)\mid z,\,$ the universal property of $\,\rm lcm$
See here for a few proofs of the gcd distributive law.
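As a quick numerical sanity check (not a proof), the identity can be verified for a few triples; a sketch using the same $lcm(x,y) = xy/\gcd(x,y)$ relation the question starts from:

```python
from math import gcd

def lcm(x, y):
    # lcm(x, y) = x * y / gcd(x, y), as used in the question
    return x * y // gcd(x, y)

# Check lcm(ab, ad) == a * lcm(b, d) for a few triples (a, b, d)
for a, b, d in [(2, 3, 4), (6, 4, 10), (5, 7, 7), (12, 18, 30)]:
    assert lcm(a * b, a * d) == a * lcm(b, d)
print("all checks passed")
```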
Q:
MSSQL - refresh / duplicate set of tables from another database as a single transaction
Ignoring any reasons why I shouldn't be doing this ...
But what would be the easiest way to refresh a set tables from another MSSQL database as a single transaction?
Context:
10 tables
DDL won't change
Refresh is 100%!
~100Megs (relatively small)
I would want to do this as a script (TSQL or SQL), and avoid any advanced Server changes (replications, etc).
Will a simple INSERT INTO ... SELECT *, wrapped in a transaction, be the best thing to do?
A:
If your sole question is how to migrate those tables from a different database within the same server, then there are multiple ways.
You can run an insert into ... select from statement like
insert into db1.dbo.table1
select * from db2.dbo.table1;
You can create a DB dump (using SSMS), which will script the table schema along with all the data, and run that *.sql file against your other database; you can use SQLCMD or SSMS, whichever you prefer.
The third option is to do a full DB backup and restore it.
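To meet the single-transaction requirement from the question, the insert into ... select approach can be wrapped like this (a sketch; SourceDb, TargetDb, dbo, and the table names are placeholders for your ten tables):

```sql
BEGIN TRANSACTION;

-- 100% refresh: empty each target table, then copy everything across
DELETE FROM TargetDb.dbo.Table1;
INSERT INTO TargetDb.dbo.Table1 SELECT * FROM SourceDb.dbo.Table1;

DELETE FROM TargetDb.dbo.Table2;
INSERT INTO TargetDb.dbo.Table2 SELECT * FROM SourceDb.dbo.Table2;

-- ... repeat for the remaining tables ...

COMMIT TRANSACTION;
```

If the tables have foreign keys between them, order the deletes and inserts accordingly; at roughly 100 MB this should still fit comfortably in one transaction.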
Q:
How to put several strings into a list in a for loop?
I'm using a for loop to search a list of protein IDs in the NCBI protein database and trying to convert these IDs into their descriptions. Here's an example:
import pandas as pd
from Bio import Entrez
from Bio import SeqIO
df = pd.read_csv('ID.txt', header=None)
df.columns = ['protein_ID'] # put a header 'protein_ID' on the dataframe
lists = df.protein_ID.tolist() # convert the column into a list of protein IDs
description = ''
for num, line in enumerate(lists):
handle = Entrez.efetch(db="protein", id=line, rettype="gb", retmode="text")
record = SeqIO.read(handle, "genbank")
description += record.description
description
It returns one huge string:
'hypothetical protein UR61_C0009G0014 [candidate division WS6 bacterium GW2011_GWE1_34_7]ATPase [candidate division WS6 bacterium GW2011_GWE2_33_157]hypothetical protein UR96_C0034G0007 [candidate division WS6 bacterium GW2011_GWC1_36_11]phosphoenolpyruvate synthase [Candidatus Komeilibacteria bacterium RIFOXYC1_FULL_37_11]'
What I want is a list of strings with new line breaks, like this:
[
'hypothetical protein UR61_C0009G0014 [candidate division WS6 bacterium GW2011_GWE1_34_7]',
'ATPase [candidate division WS6 bacterium GW2011_GWE2_33_157]',
'hypothetical protein UR96_C0034G0007 [candidate division WS6 bacterium GW2011_GWC1_36_11]',
'phosphoenolpyruvate synthase [Candidatus Komeilibacteria bacterium RIFOXYC1_FULL_37_11]'
]
How to achieve this? Thank you very much!
A:
What I want is a list of strings
description = []
for num, line in enumerate(lists):
....
description.append(record.description)
with new line breaks
By default, lists are not printed this way, use pprint
import pprint
# you original code here
pprint.pprint(description)
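Putting the two pieces together with dummy data (a sketch; the description strings are stand-ins for what Entrez/SeqIO would return):

```python
import pprint

description = []
for record_description in ["hypothetical protein A [organism 1]",
                           "ATPase [organism 2]"]:
    # in the real loop this would be description.append(record.description)
    description.append(record_description)

pprint.pprint(description)
```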
Q:
Understanding _count_vocab method in sklearn.feature_extraction.text's CountVectorizer class
I'm using the fit_transform method of CountVectorizer and I'm reading through the code to try to understand what it's doing. I'm a bit confused by the _count_vocab method in CountVectorizer, specifically the nested for loop. For raw documents, I have a list of sentences, and fixed_vocab = False.
def _count_vocab(self, raw_documents, fixed_vocab):
"""Create sparse feature matrix, and vocabulary where fixed_vocab=False"""
if fixed_vocab:
vocabulary = self.vocabulary_
else:
# Add a new value when a new vocabulary item is seen
vocabulary = defaultdict(None)
vocabulary.default_factory = vocabulary.__len__
analyze = self.build_analyzer()
j_indices = _make_int_array()
indptr = _make_int_array()
indptr.append(0)
for doc in raw_documents:
for feature in analyze(doc):
try:
j_indices.append(vocabulary[feature])
except KeyError:
# Ignore out-of-vocabulary items for fixed_vocab=True
continue
indptr.append(len(j_indices))
if not fixed_vocab:
# disable defaultdict behaviour
vocabulary = dict(vocabulary)
if not vocabulary:
raise ValueError("empty vocabulary; perhaps the documents only"
" contain stop words")
# some Python/Scipy versions won't accept an array.array:
if j_indices:
j_indices = np.frombuffer(j_indices, dtype=np.intc)
else:
j_indices = np.array([], dtype=np.int32)
indptr = np.frombuffer(indptr, dtype=np.intc)
values = np.ones(len(j_indices))
X = sp.csr_matrix((values, j_indices, indptr),
shape=(len(indptr) - 1, len(vocabulary)),
dtype=self.dtype)
X.sum_duplicates()
return vocabulary, X
Here vocabulary is an empty defaultdict object. Hence j_indices will not append elements since vocabulary is empty so vocabulary[feature] returns an error and the error is ignored, continuing to the next for loop iteration. It will continue to do this for all doc in raw_documents and all feature in the tokens returned by analyze(doc). In the end of this j_indices and indptr are empty array.array objects.
I thought _count_vocab would create its own object of vocabulary and append values when a new vocab word was encountered, but it doesn't look like it.
In this case, should I provide it my own list of vocabulary? Since I don't have one, where can I get a dictionary of words?
Thanks for the help.
A:
vocabulary[feature] returns an error and the error is ignored
There's no error since vocabulary is a defaultdict. What happens is
>>> vocabulary = defaultdict(None)
>>> vocabulary.default_factory = vocabulary.__len__
>>> j_indices = []
>>> analyzed = ["foo", "bar", "baz", "foo", "quux"]
>>> for feature in analyzed:
... j = vocabulary[feature]
... print("%s %d" % (feature, j))
... j_indices.append(j)
...
foo 0
bar 1
baz 2
foo 0
quux 3
with the results
>>> dict(vocabulary)
{'bar': 1, 'foo': 0, 'baz': 2, 'quux': 3}
>>> j_indices
[0, 1, 2, 0, 3]
So this code works correctly. The KeyError catching is there for the case fixed_vocab=True.
Q:
Do any Linux distributions have a folder sync option (like briefcase in windows)?
I want to keep my local folder synchronized with a network folder. Is there any folder synchronization utility or feature in Linux?
A:
There are numerous options for programs or even file systems that handle synchronization. I still use the ridiculously old unison program to keep some of my home directories in sync. There are other programs similar to this as well. For easier situations that only require one-way copying, rsync does the job nicely.
For cross-platform synchronization, the ever-popular Dropbox is always an option, although I would also look into more open alternatives such as cloudfs.
Another thing you really ought to consider is version control. At first it might not seem that it's suitable, but if you really analyze your synchronization problem, you might find that version control is just the ticket. This gives you far more freedom to change things in multiple places without breaking the synchronization (two way sync is always a challenge). The ability to track and merge different sets of changes can be invaluable. You might consider a distributed system like git or a central one like subversion depending on your application, although in all likelihood if you can get your head around the distributed model it will prove better in the long run.
Q:
Issues plotting dose-response curves with ggplot and glm
I am currently trying to use a glm to create a dose-response curve. I am able to create the curve using the binomial family and probit link within glm, but would like to plot the curves using ggplot rather than R's base plotting functions. When comparing the base plot to the ggplot, the curve produced by ggplot is not correct and I am unsure how to make it the same as the base plot. Additionally, the confidence intervals are not correct when plotting the curve with ggplot. Thanks for any help.
library(ggplot2)
library(Hmisc)
library(plyr)
library(MASS)
create dataframe:
#1) column is exposure concentration
#2) column is the total number of organisms that died over 12 h of exposure
#   to the corresponding concentration
#3) column is the total number that survived over 12 h at the corresponding
#   concentration
#4) column is the total number of organisms exposed to the corresponding
#   concentration
#5) fifth is the percentage of organisms that survived exposure at the
#   corresponding concentration
conc <- c(0.02, 0.45, 0.46, 0.50, 0.78, 0.80, 0.80, 0.92, 0.93, 1.00, 1.16,
1.17, 1.17, 1.48,1.51, 1.55, 1.88, 1.90, 2.02)
dead <- c(0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 7, 11, 4, 14, 14, 12, 12, 18, 17)
survive <- c(15, 16, 15, 15, 15, 16, 14, 14, 10, 15, 12, 5, 12, 0, 1, 3, 0, 0, 0)
total <- c(15, 16, 15, 15, 15, 16, 14, 14, 10, 16, 19, 16, 16, 14, 15, 15, 12, 18, 17)
perc <- c(1.00, 1.00, 1.00, 1.00, 1.00,1.00, 1.00, 1.00, 1.00, 0.94,0.63,
0.31,0.75,0.00, 0.07, 0.20, 0.00, 0.00,0.00)
data<-data.frame(conc,dead,survive,total,perc)
head(data)
attach(data)
#create matrix of dead and survival
y = cbind(dead,survive)
#create binomial glm (probit model)
model.results = glm(data = data, y ~ conc,binomial(link="probit"))
summary(model.results)
#use function from MASS to calculate LC
dose.p(model.results,p=0.5)
dose.p(model.results,p=c(0.1,0.25,0.5,0.99))
#plot curve
plot(conc,(survive/(survive+dead)), ylab = "Percent Survival",
xlab="Concentration ")
#To make the function, use the Estimate parameters from the binomial glm
#used above
logisticline <- function(z) {eta = -6.7421 + 5.4468 * z;1 / (1 +
exp(eta))}
x <- seq(0,200.02,0.01)
lines(x,logisticline(x),new = TRUE)
#plot using ggplot
ggplot(data, aes(x = conc, y = perc)) +
geom_point() +
geom_smooth(method="glm",method.args = list(family = "binomial"))
A:
You can draw the fitted line with ggplot2 by making predictions from the model or by fitting the model directly with geom_smooth. To do the latter, you'll need to fit the model with the proportion dead as the response variable with total as the weights instead of using the matrix of successes and failures as the response variable.
Using glm, fitting a model with a proportion plus weights looks like:
# Calculate proportion
data$prop = with(data, dead/total)
# create binomial glm (probit model)
model.results2 = glm(data = data, prop ~ conc,
family = binomial(link="probit"), weights = total)
You can predict with the dataset you have or, to make a smoother line, you can create a new dataset to predict with that has more values of conc as you did.
preddat = data.frame(conc = seq(0, 2.02, .01) )
Now you can predict from the model via predict, using this data.frame as newdata. If you use type = "response", you will get predictions on the data scale via the inverse link. Because you fit a probit model, this will use the inverse probit. In your example you used the inverse logit for predictions.
# Predictions with inverse probit
preddat$pred = predict(model.results2, newdata = preddat, type = "response")
# Predictions with inverse logit (?)
preddat$pred2 = plogis( predict(model.results2, newdata = preddat) )
To fit the probit model in ggplot, you will need to use the proportion as the y variable with weight = total. Here I add the lines from the model predictions so you can see the probit model fit in ggplot gives the same estimated line as the fitted probit model. Using the inverse logit gives you something different, which isn't surprising.
ggplot(data, aes(conc, prop) ) +
geom_smooth(method = "glm", method.args = list(family = binomial(link = "probit") ),
aes(weight = total, color = "geom_smooth line"), se = FALSE) +
geom_line(data = preddat, aes(y = pred, color = "Inverse probit") ) +
geom_line(data = preddat, aes(y = pred2, color = "Inverse logit" ) )
Q:
Hibernate criteria orderby
Criteria crit=hbSession.createCriteria(S1.class)
.add(Restrictions.between("s1Docdt",startDate, endDate))
.add(Restrictions.eq("s1BranchCode",branchCode))
.add(Restrictions.eq("s1AccountingYear",year));
crit.addOrder(Order.asc("s1Docdt","s1Dcno","s1Tc");
I created a session and tried to add the restrictions, but got an error. Can anyone help me with this?
A:
Accordingly to Hibernate Javadoc Order.asc(String) method, your code seems wrong to me. Try this:
Criteria crit = hbSession.createCriteria(S1.class)
.add(Restrictions.between("s1Docdt",startDate, endDate))
.add(Restrictions.eq("s1BranchCode",branchCode))
.add(Restrictions.eq("s1AccountingYear",year));
crit.addOrder(Order.asc("s1Docdt"));
crit.addOrder(Order.asc("s1Dcno"));
crit.addOrder(Order.asc("s1Tc"));
Hope this helps.
Q:
Web images from cgm
I am trying to use computer graphics metafile (cgm) in my web pages. I am using PHP and Javascript. So I am trying to find the structure of the file to be able to read it and draw its images. I am also searching for an open source software that can be embedded in web pages, not a stand alone application. I have read all the papers on Webcgm, but I couldn't find what I need. If anyone has suggestions or advice on how to implement cgm that will be great. Thanks in advance!
A:
Is a Java applet an option for you? E.g. http://www.bdaum.de/howto.htm describes how to use one. See http://www.coderanch.com/t/259381/Applets/java/display-CGM-images-iingn-web for more discussion.
As an alternative, you could convert your CGM files to SVG or PDF on the server. You could do that either in advance, or on-the-fly, depending on your requirements. E.g. http://www.cgmlarson.com/convert_CGM_to_PDF_aerospace.html has a server license. (I don't know anything about the suitability or price of the product.)
Q:
How to pass compiler options during Linux kernel compilation?
For reasons, I need to compile the Linux kernel (currently 4.7.10) passing some simple and innocent additional command line options (e.g. -pipe -Wsomething etc) to the C compiler.
How do I do it?
More specifically, how do I enforce these compiler flags during plain make as well as during make menuconfig and similar, i.e. so that they are always passed to the C compiler whenever the latter is executed.
A:
From Linux kernel's makefile:
# Add any arch overrides and user supplied CPPFLAGS, AFLAGS and CFLAGS as the
# last assignments
KBUILD_CPPFLAGS += $(ARCH_CPPFLAGS) $(KCPPFLAGS)
KBUILD_AFLAGS += $(ARCH_AFLAGS) $(KAFLAGS)
KBUILD_CFLAGS += $(ARCH_CFLAGS) $(KCFLAGS)
So, passing additional options for Kbuild uses usual environment/makefile variables but with K prefix:
make "KCFLAGS=-pipe -Wsomething"
Q:
Golang logrus - how to do a centralized configuration?
I am using logrus in a Go app. I believe this question is applicable to any other logging package (which doesn't offer external file based configuration) as well.
logrus provides functions to setup various configuration, e.g. SetOutput, SetLevel etc.
Like any other application I need to do logging from multiple source files/packages, it seems you need to setup these options in each file with logrus.
Is there any way to setup these options once somewhere in a central place to be shared all over the application. That way if I have to make logging level change I can do it in one place and applies to all the components of the app.
A:
You don't need to set these options in each file with Logrus.
You can import Logrus as log:
import log "github.com/Sirupsen/logrus"
Then functions like log.SetOutput() are just functions and modify the global logger and apply to any file that includes this import.
You can create a package global log variable:
var log = logrus.New()
Then functions like log.SetOutput() are methods and modify your package global. This is awkward IMO if you have multiple packages in your program, because each of them has a different logger with different settings (but maybe that's good for some use cases). I also don't like this approach because it confuses goimports (which will want to insert log into your imports list).
Or you can create your own wrapper (which is what I do). I have my own log package with its own logger var:
var logger = logrus.New()
Then I make top-level functions to wrap Logrus:
func Info(args ...interface{}) {
logger.Info(args...)
}
func Debug(args ...interface{}) {
logger.Debug(args...)
}
This is slightly tedious, but allows me to add functions specific to my program:
func WithConn(conn net.Conn) *logrus.Entry {
var addr string = "unknown"
if conn != nil {
addr = conn.RemoteAddr().String()
}
return logger.WithField("addr", addr)
}
func WithRequest(req *http.Request) *logrus.Entry {
return logger.WithFields(RequestFields(req))
}
So I can then do things like:
log.WithConn(c).Info("Connected")
(I plan in the future to wrap logrus.Entry into my own type so that I can chain these better; currently I can't call log.WithConn(c).WithRequest(r).Error(...) because I can't add WithRequest() to logrus.Entry.)
Q:
Sorting by using the value of array (VB.NET)
The input :
i = 768
arrayInt(i) = 258
arrayIntCombine(i) = 256
arrayByte(i) = 32
i = 1632
arrayInt(i) = 256
arrayIntCombine(i) = 112
arrayByte(i) = 97
i = 1824
arrayInt(i) = 259
arrayIntCombine(i) = 32
arrayByte(i) = 112
i = 1889
arrayInt(i) = 257
arrayIntCombine(i) = 97
arrayByte(i) = 112
i = 2016
arrayInt(i) = 260
arrayIntCombine(i) = 256
arrayByte(i) = 110
..... (more input)
I would like an output like this (text or messagebox):
No. 256 : 112 and 97
No. 257 : 97 and 112
No. 258 : 256 and 32
No. 259 : 32 and 112
No. 260 : 256 and 110
...... (more output)
I've tried Array.Sort from Sorting an array numerically (VB.NET) but it doesn't work.
A:
You could use Tuples:
Dim DictionaryOfTuples As New Dictionary(Of Integer, Tuple(Of Integer, Integer, Integer))
DictionaryOfTuples(768) =
New Tuple(Of Integer, Integer, Integer)(258, 256, 32)
DictionaryOfTuples(1632) =
New Tuple(Of Integer, Integer, Integer)(256, 112, 97)
DictionaryOfTuples(1824) =
New Tuple(Of Integer, Integer, Integer)(259, 32, 112)
DictionaryOfTuples(1889) =
New Tuple(Of Integer, Integer, Integer)(257, 97, 112)
DictionaryOfTuples(2016) =
New Tuple(Of Integer, Integer, Integer)(260, 256, 110)
Dim output As New System.Text.StringBuilder
For Each sortedTuple In DictionaryOfTuples.Values.OrderBy(Function(t) t.Item1)
output.AppendLine(
String.Format(
"No. {0} : {1} and {2}",
sortedTuple.Item1,
sortedTuple.Item2,
sortedTuple.Item3))
Next
MsgBox(output.ToString)
Q:
NSNotificationCenter in Swift Not Behaving Correctly (only calls certain things in the function)
I'm using NSNotificationCenter to trigger tableView.reloadData() upon receiving data via an HTTP request. 99% of the time, the tableView does not reload. Only after terminating the app, deleting, cleaning, and running again does it operate... but only the first time.
MainVC.swift
import UIKit
class MainVC: UIViewController, UITableViewDelegate, UITableViewDataSource {
let needMetNotificationKey = "kNeedMetKey"
@IBOutlet var tableView: UITableView!
override func viewDidLoad() {
super.viewDidLoad()
NSNotificationCenter.defaultCenter().addObserver(self, selector: "needMet", name: needMetNotificationKey, object: nil)
}
func needMet() {
startConnectionAt(url)
tableView.reloadData()
println("executed")
}
func startConnectionAt(urlPath: String){
var url: NSURL = NSURL(string: urlPath)
var request: NSURLRequest = NSURLRequest(URL: url)
var connection: NSURLConnection = NSURLConnection(request: request, delegate: self, startImmediately: false)
connection.start()
}
AcceptVC.swift
This is the VC that presents my form and makes the HTTP request, after which it returns to the MainVC.
import UIKit
class AcceptVC: UIViewController, UITextFieldDelegate {
@IBOutlet var receiveName: UITextField!
@IBOutlet var receiveEmail: UITextField!
@IBOutlet var receivePhone: UITextField!
override func viewDidLoad() {
super.viewDidLoad()
@IBAction func signupForNeed() {
var URL: NSURL = NSURL(string: "http://www.domain.com/json.php")
var request: NSMutableURLRequest = NSMutableURLRequest(URL:URL)
request.HTTPMethod = "POST"
var needName = receiveName
var needEmail = receiveEmail
var needPhone = receivePhone
var signup: String = "id=\(passedID)&name=\(needName.text)&email=\(needEmail.text)&phone=\(needPhone.text)"
request.HTTPBody = signup.dataUsingEncoding(NSUTF8StringEncoding)
NSURLConnection.sendAsynchronousRequest(request, queue: NSOperationQueue.mainQueue()) {
(response, data, error) in println(NSString(data: data, encoding: NSUTF8StringEncoding))
}
NSNotificationCenter.defaultCenter().postNotificationName(needMetNotificationKey, object: nil, userInfo: nil)
navigationController?.presentingViewController?.dismissViewControllerAnimated(true, completion: {})
}
I've added the println("executed") at the end of the needMet() function to verify that it does make it to the end of the function. Reliably, "executed" is always printed.
When I use startConnectionAt(url) and tableView.reloadData() anywhere else, they behave as they should. Why do they not operate as they should here?
A:
You're posting your notification immediately after your request is sent, not once it's finished. You need to move the postNotificationName call inside your completion closure on sendAsynchronousRequest:
NSURLConnection.sendAsynchronousRequest(request, queue: NSOperationQueue.mainQueue()) {
(response, data, error) in
println(NSString(data: data, encoding: NSUTF8StringEncoding))
NSNotificationCenter.defaultCenter().postNotificationName(needMetNotificationKey, object: nil, userInfo: nil)
}
Also, you need to make sure that you do all your UI work on the main queue, so in your needMet function, you should use dispatch_async to dispatch the work to the main queue:
func needMet() {
startConnectionAt(url)
dispatch_async(dispatch_get_main_queue()) {
tableView.reloadData()
println("executed")
}
}
Note: I'd recommend reading up on Concurrency Programming to learn about dispatch_async, why you need use it, and some alternatives.
You also seem to have defined signupForNeed inside viewDidLoad. That should really be a function in the class itself and not a nested function.
Q:
Java root and power of
I'm having trouble figuring this out:
Write a fragment that uses a for statement to set the double variable sum to the sum, for i = 1 to 25, of the i-th root of i:
Here's what I tried:
class thing
{
public static void main (String [] args)
{
double sum = 1;
for (int i = 1; i<=25; i++)
{
sum += Math.pow(i,1.0/i) ;
System.out.println(sum);
}
}
}
I know this is wrong because it does not end with the proper calculation of 1.137411462.
Any help is appreciated! :)
A:
To add to the other replies above: sum must start at 0; the calculation as you described isn't accurate.
The value of 25√25 is 1.137411462, not the sum from 1 to 25. If you instead start with
double sum = 0;
You end up with the total 30.85410561309813, which is the correct total that you want.
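Putting the fix into the original fragment (a sketch; RootSum is my name for the class, and the expected total is the one quoted above):

```java
// Start sum at 0, then add the i-th root of i (Math.pow(i, 1.0 / i))
// for i = 1 to 25.
public class RootSum {
    public static double compute() {
        double sum = 0;
        for (int i = 1; i <= 25; i++) {
            sum += Math.pow(i, 1.0 / i);
        }
        return sum;
    }

    public static void main(String[] args) {
        System.out.println(compute());
    }
}
```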
Q:
Powershell Set-DhcpServerv4Reservation
I was wondering it is possible to use Set-DhcpServerv4Reservation in order to change the IP Address of a reservation. If not, is the only way to do so completely deleting the reservation and then recreating it with the new settings?
Thanks in advance.
A:
You have to first delete the reservation and then re add it using the preferred settings.
$oldip = "132.4.5.200"   # the reservation's current IP (example value)
$newip = "132.4.5.214"   # the IP you want the reservation to use
$query = Get-DhcpServerv4Reservation -ScopeId $scopeid -ComputerName $serverName | Where-Object { $_.IPAddress.IPAddressToString -eq $oldip }
if ($query.Name) {
    $ip = $query.IPAddress.IPAddressToString
    Remove-DhcpServerv4Reservation -ComputerName $serverName -IPAddress $ip
    Add-DhcpServerv4Reservation -ScopeId $scopeid -ComputerName $serverName -IPAddress $newip -ClientId $query.ClientId -Name $query.Name
}
Q:
create array from selected items in an array of Fruit items
I have an array with items and an array with indices to delete from the first array:
var array = ["a", "b", "c", "d", "e", "f", "g", "h", "i"]
let indicesToDelete = [4, 8]
let reducedArray = indicesToDelete.reverse().map { array.removeAtIndex($0) }
reducedArray // prints ["i","e"]
What if my array looks like this:
class Fruit{
let colour: String
let type: String
init(colour:String, type: String){
self.colour = colour
self.type = type
}
}
var arrayFruit = [Fruit(colour: "red", type: "Apple" ),Fruit(colour: "green", type: "Pear"), Fruit(colour: "yellow", type: "Banana"),Fruit(colour: "orange", type: "Orange")]
let indicesToDelete = [2,3]
If I just use the above code I get an error.
let reducedArray = indicesToDelete.reverse().map { arrayFruit.removeAtIndex($0) }////// error here
My question is: arrayFruit is made up of objects and I do not know how to adjust the code in the line above.
A:
The reduced array is not the result of map but the original array, i.e. arrayFruit. I would suggest not using map but forEach, and write it like this:
class Fruit{
let colour: String
let type: String
init(colour:String, type: String){
self.colour = colour
self.type = type
}
}
var arrayFruit = [Fruit(colour: "red", type: "Apple" ),Fruit(colour: "green", type: "Pear"), Fruit(colour: "yellow", type: "Banana"),Fruit(colour: "orange", type: "Orange")]
let indicesToDelete = [2,3]
indicesToDelete.sort(>).forEach { arrayFruit.removeAtIndex($0) }
arrayFruit // [{colour "red", type "Apple"}, {colour "green", type "Pear"}]
Q:
Apache's ProxyPass and ProxyPassReverse equivalent in IIS
Is there an equivalent of Apache's mod_proxy in IIS?
I have following configuration in my Apache's httpd.conf (mod_proxy enabled):
Header add Set-Cookie "ROUTEID=hej.%{BALANCER_WORKER_ROUTE}e; path=/;" env=BALANCER_ROUTE_CHANGED
<Proxy balancer://openfire>
BalancerMember http://server2:7070/http-bind/ route=1
ProxySet stickysession=ROUTEID
</Proxy>
ProxyPass /project1/http-bind balancer://openfire nofailover=Off
ProxyPassReverse /project1/http-bind balancer://openfire
I need to do similar config for IIS. I tried ARR (Application Request Routing) but could not get it working.
Can someone help me to achieve this?
Thanks.
A:
I believe you are looking for Application Request Routing. For examples, there are many on StackOverflow already, such as this one.
Q:
How do I add a new field to the User model when using django-allauth?
I want to add a new subscriber field to the User model (the auth_user table). The project uses django-allauth version 0.24.1 and Django 1.9.1.
Project structure:
project
├── config
| ├── settings
| | └── base.py
| ├── urls.py
| └── wsgi.py
├── core
| ├── models.py
| ├── urls.py
| └── views.py
├── members
| ├── migrations
| ├── static
| ├── templates
| ├── forms.py
| ├── models.py
| ├── urls.py
| └── views.py
└── templates
├── pages
| └── register.html
└── base.html
In config/base.py:
AUTH_USER_MODEL = 'members.MyUser'
ACCOUNT_AUTHENTICATION_METHOD = 'email'
ACCOUNT_SIGNUP_PASSWORD_VERIFICATION = True
ACCOUNT_USER_MODEL_USERNAME_FIELD = None
ACCOUNT_EMAIL_VERIFICATION = 'none'
ACCOUNT_USERNAME_REQUIRED = False
SOCIALACCOUNT_QUERY_EMAIL = True
ACCOUNT_EMAIL_REQUIRED = True
ACCOUNT_LOGOUT_ON_GET = True
ACCOUNT_UNIQUE_EMAIL = True
In members/models.py
from django.contrib.auth.base_user import AbstractBaseUser
from django.contrib.auth.models import UserManager, PermissionsMixin
from django.utils.translation import ugettext_lazy as _
from django.utils import timezone
from django.db import models
class MyUserManager(UserManager):
def create_user(self, email, password=None, **kwargs):
user = self.model(email=email, **kwargs)
user.set_password(password)
user.save()
return user
def create_superuser(self, email, password, **kwargs):
user = self.model(email=email, is_staff=True, is_superuser=True, **kwargs)
user.set_password(password)
user.save()
return user
class MyUser(AbstractBaseUser, PermissionsMixin):
email = models.EmailField(_('email address'), blank=False, unique=True)
first_name = models.CharField(_('first name'), max_length=40, blank=True, null=True, unique=False)
last_name = models.CharField(_('last name'), max_length=40, blank=True, null=True, unique=False)
is_staff = models.BooleanField(
_('staff status'),
default=False,
help_text=_('Designates whether the user can log into this admin '
'site.')
)
is_active = models.BooleanField(
_('active'),
default=True,
help_text=_('Designates whether this user should be treated as '
'active. Unselect this instead of deleting accounts.')
)
date_joined = models.DateTimeField(_('date joined'), default=timezone.now)
# extend the base model here
is_subscriber = models.BooleanField(_('subscriber'), default=False)
objects = MyUserManager()
USERNAME_FIELD = 'email'
REQUIRED_FIELDS = []
class Meta:
verbose_name = _('user')
verbose_name_plural = _('users')
db_table = 'auth_user'
abstract = False
def get_full_name(self):
full_name = '%s %s' % (self.first_name, self.last_name)
return full_name.strip()
def get_short_name(self):
return self.first_name
In members/forms.py
from django import forms
from django.contrib.auth import get_user_model
from allauth.account.forms import SignupForm
class MySignupForm(SignupForm):
# additional fields
subscribe = forms.BooleanField()
def __init__(self, *args, **kwargs):
super(MySignupForm, self).__init__(*args, **kwargs)
class Meta:
model = get_user_model()
fields = (
'email',
'password1',
'password2',
'subscribe',
)
def save(self, user):
# user.is_subscriber = self.cleaned_data['subscribe']
user.is_subscriber = 1 # PERMANENT SET TO 1 JUST FOR TEST
user.save()
return user
In members/views.py
from django.shortcuts import redirect
from django.views.generic import TemplateView
from members.forms import MySignupForm
class Index(TemplateView):
template_name = 'pages/index.html'
class Register(TemplateView):
template_name = 'pages/register.html'
def get(self, request, *args, **kwargs):
context = super(Register, self).get_context_data(**kwargs)
if self.request.user.is_authenticated():
return redirect('core:home')
context.update({
'signup_form': MySignupForm(),
})
return self.render_to_response(context)
And in pages/register.html
<form id="signup_form" action="{% url 'account_signup' %}" method="post" enctype="multipart/form-data">
{% csrf_token %}
{{ signup_form.email }}
{{ signup_form.password1 }}
{{ signup_form.password2 }}
{{ signup_form.subscribe }}
<button type="submit" class="btn btn-primary">REGISTER</button>
</form>
All of the fields, including the subscribe field I need, render fine on the site. On submit, everything is written to the database except subscribe ...
The debugger never even steps into the form's def save(self, user). What am I doing wrong?
A:
I solved the problem by approaching it from the other side.
I added the setting
ACCOUNT_SIGNUP_FORM_CLASS = 'members.forms.SignupForm'
and rewrote SignupForm to inherit from forms.Form instead of allauth.account.forms.SignupForm
members/forms.py
class SignupForm(forms.Form):
email = forms.EmailField(max_length=255, label="Email")
password1 = forms.CharField(widget=forms.PasswordInput(), label="Password")
password2 = forms.CharField(widget=forms.PasswordInput(), label="Password again")
subscribe = forms.BooleanField()
class Meta:
model = MyUser
fields = [
'email',
'password1',
'password2',
'subscribe',
]
def signup(self, request, user):
user.is_subscriber = True # <-- JUST FOR TEST
user.save()
Q:
Changing deployment target post launch
I accidentally released an iPhone app with a deployment target of iOS 8.2; I wanted it to work on iOS 7.
I have two questions:
1. Is there any way to easily change this, or do I need to release a new app version?
2. If a new version needs to be released, do I need to test this on a real iPhone running iOS 7, i.e. do I need to downgrade my test phone before I can submit this update?
A:
If the app isn't approved yet, you can remove it from review in the app's page in iTunes Connect, and submit a new build.
If the app is approved already, you have to create a new version to submit a new build.
In testing, you can test the app on the simulator, and if it needs some real iPhone components, you can test on iOS devices running 7.0 or above. You don't have to downgrade, since the app will work on iOS 7.0 and above. However, there can be bugs in older iOS versions, where devices with older versions can be useful (but you can't downgrade your devices; Apple prevents it). To test on older versions, you need a device which hasn't been updated yet, or the older simulator versions Apple provides in Xcode (check Xcode Preferences -> Downloads).
Q:
Heroku error during deployment of mern application
I just tried to deploy my first heroku application, but unfortunately I am getting an error.
The command heroku logs --tail throws the following error:
2020-02-14T16:35:34.990792+00:00 heroku[router]: at=error code=H10 desc="App crashed" method=GET path="/" host=agile-wildwood-52268.herokuapp.com request_id=5edfe87f-b4c7-42ee-a856-03ab806613ec fwd="88.68.64.245" dyno= connect= service= status=503 bytes= protocol=https
2020-02-14T16:35:37.110579+00:00 heroku[router]: at=error code=H10 desc="App crashed" method=GET path="/favicon.ico" host=agile-wildwood-52268.herokuapp.com request_id=4e0aeeee-f828-4f85-a18c-25d54bfe3221 fwd="88.68.64.245" dyno= connect= service= status=503 bytes= protocol=https
I don't know where this error might be caused, any help is appreciated. Thank you
A:
So I figured out the problem.
I didn't have my environment variables set on the Heroku dashboard.
If you get a similar problem follow those steps:
Go to https://dashboard.heroku.com/apps
Click on your application
Go to settings and click on "reveal config vars"
Enter all your environment variables
Hope it helps
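An H10 crash at boot is often code that assumes a config var exists and blows up when it doesn't. A minimal sketch of failing fast with a readable message instead (the variable name `APP_DATABASE_URL` is made up for illustration):

```python
import os

def require_env(name):
    """Return the value of an environment variable, or fail with a clear message."""
    value = os.environ.get(name)
    if value is None:
        raise RuntimeError(f"Missing required config var: {name} "
                           "(set it under Settings -> Config Vars on the Heroku dashboard)")
    return value

# Simulate the dashboard having set the variable, then read it at startup.
os.environ.setdefault("APP_DATABASE_URL", "postgres://example")
db_url = require_env("APP_DATABASE_URL")
print(db_url)  # postgres://example
```

A message like this shows up in `heroku logs --tail` and points straight at the missing setting, instead of an opaque stack trace.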
Q:
mysql sort string number
I have a column of type varchar that stores many different numbers. Say for example there are 3 rows: 17.95, 199.95 and 139.95. How can I sort that field as numbers in MySQL?
A:
Quickest, simplest? use * 1
select *
from tbl
order by number_as_char * 1
The other reasons for using * 1 are that it can
survive some horrendous mishaps with underflow (reduced decimal precision when choosing what to cast to)
works (and ignores) columns of purely non-numeric data
strips numeric portions of alphanumeric data, such as 123A, 124A, 125A
A:
Use a CAST or a CONVERT function.
A:
If you need to sort a char column containing text AND numbers then you can do this.
tbl contains: 2,10,a,c,d,b,4,3
select * from tbl order by number_as_char * 1 asc, number_as_char asc
expected output: 2,3,4,10,a,b,c,d
If you don't add the second order by argument only numbers will be sorted - text actually gets ignored.
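The trick in all of these answers is the same: coerce the string to a number before comparing. For comparison, the equivalent client-side sort in Python:

```python
rows = ["17.95", "199.95", "139.95"]

# Lexicographic order is wrong for numbers stored as strings:
print(sorted(rows))             # ['139.95', '17.95', '199.95']

# Coercing each value to a number first gives the expected order:
print(sorted(rows, key=float))  # ['17.95', '139.95', '199.95']
```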
Q:
Place specific code in FE or BE of split MS Access database
A while back I asked this question about splitting an MS Access application, and possibly leaving some of the non-table functionality in the BE. Well, I'm at it again... :)
Some of my tables will be such that they are never updated by the user. The data feed to these tables will be a fairly intensive code process, run daily, that extracts from Oracle, majorly massages the data & then writes to my tables (very different structure from Oracle).. There's no practical way to make it a live link to Oracle. All of the code for this will be in Modules/Class Modules, none in Forms. It absolutely would need to be changed if the schema of either the Access file or the Oracle server changes.
Given the foregoing, FE or BE?
A:
I would put the code modules in a FE so that you can re-link a copy of the FE to a testing/development BE as the need arises. The code FE needn't be the same application FE you distribute to your users.
Q:
Use functions and parameters stored in list-cols - Purrr
I have the following tibble:
# A tibble: 18 × 6
id columnFilter modelName model train.X train.Y
<int> <chr> <chr> <list> <list> <list>
1 1 groupedColumns.donr boostModel <fun> <tibble [3,984 × 17]> <fctr [3,984]>
2 2 groupedSquaredColumns.donr boostModel <fun> <tibble [3,984 × 28]> <fctr [3,984]>
3 3 groupedTransformedColumns.donr boostModel <fun> <tibble [3,984 × 17]> <fctr [3,984]>
4 4 ungroupedColumns.donr boostModel <fun> <tibble [3,984 × 17]> <fctr [3,984]>
5 5 ungroupedSquaredColumns.donr boostModel <fun> <tibble [3,984 × 28]> <fctr [3,984]>
6 6 ungroupedTransformedColumns.donr boostModel <fun> <tibble [3,984 × 17]> <fctr [3,984]>
7 7 groupedColumns.donr ldaModel <fun> <tibble [3,984 × 17]> <fctr [3,984]>
8 8 groupedSquaredColumns.donr ldaModel <fun> <tibble [3,984 × 28]> <fctr [3,984]>
9 9 groupedTransformedColumns.donr ldaModel <fun> <tibble [3,984 × 17]> <fctr [3,984]>
10 10 ungroupedColumns.donr ldaModel <fun> <tibble [3,984 × 17]> <fctr [3,984]>
11 11 ungroupedSquaredColumns.donr ldaModel <fun> <tibble [3,984 × 28]> <fctr [3,984]>
12 12 ungroupedTransformedColumns.donr ldaModel <fun> <tibble [3,984 × 17]> <fctr [3,984]>
13 13 groupedColumns.donr logitModel <fun> <tibble [3,984 × 17]> <fctr [3,984]>
14 14 groupedSquaredColumns.donr logitModel <fun> <tibble [3,984 × 28]> <fctr [3,984]>
15 15 groupedTransformedColumns.donr logitModel <fun> <tibble [3,984 × 17]> <fctr [3,984]>
16 16 ungroupedColumns.donr logitModel <fun> <tibble [3,984 × 17]> <fctr [3,984]>
17 17 ungroupedSquaredColumns.donr logitModel <fun> <tibble [3,984 × 28]> <fctr [3,984]>
18 18 ungroupedTransformedColumns.donr logitModel <fun> <tibble [3,984 × 17]> <fctr [3,984]>
As you can see, modelName is the name of the model, stored as a function in model.
What I want to do is for each row, call the function stored in model, pass it train.X and train.Y as parameters, and store the function's output into a new column.
Conceptually, something like:
df %>% mutate(result = pmap(train.X, train.Y, model))
I've been trying to use pmap(), but with no success.
Need some guidance here.
A:
invoke_map should work after you combine train.X and train.Y into a list. Here's a basic example in a similar situation that could be tested. tib mimics your situation in that x and y are parameters you need to provide the function. In the example, I use the runif function which takes the parameters plus n. I use map2 to get x and y wrapped in a list column called "params". Then I use the invoke_map() function to iteratively apply functions to the params.
library(tidyverse)
# Basic example
tib <- tribble(
~fun, ~x, ~y,
runif, -1, 1,
runif, -10, 10,
runif, -3,3
)
tib
#> # A tibble: 3 × 3
#> fun x y
#> <list> <dbl> <dbl>
#> 1 <fun> -1 1
#> 2 <fun> -10 10
#> 3 <fun> -3 3
tib %>%
mutate(params = map2(x, y, list)) %>%
mutate(result = invoke_map(fun, params, n = 5))
#> # A tibble: 3 × 5
#> fun x y params result
#> <list> <dbl> <dbl> <list> <list>
#> 1 <fun> -1 1 <list [2]> <dbl [5]>
#> 2 <fun> -10 10 <list [2]> <dbl [5]>
#> 3 <fun> -3 3 <list [2]> <dbl [5]>
Now we just need to apply the same procedure to your example. This should work.
df %>%
mutate(params = map2(train.X, train.Y, list)) %>%
mutate(result = invoke_map(model, params))
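The underlying pattern (each row carries a function plus its arguments, and a new column is built by calling one on the other) is not R-specific. A plain-Python sketch of the same idea, mirroring the fun/x/y example above:

```python
import random

# Each "row" stores a callable and its parameters, like the fun/x/y columns.
rows = [
    {"fun": random.uniform, "x": -1,  "y": 1},
    {"fun": random.uniform, "x": -10, "y": 10},
    {"fun": random.uniform, "x": -3,  "y": 3},
]

# Equivalent of invoke_map: call each row's function on that row's parameters,
# storing the output as a new "column".
for row in rows:
    row["result"] = [row["fun"](row["x"], row["y"]) for _ in range(5)]

for row in rows:
    print(row["result"])
```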
Q:
JS Replace each HEX color with a new random HEX color
Recently I've asked on how to replace a thousand of #FFF with a random HEX color.
The solution was to run this code:
var fs = require('fs')
fs.readFile('stars.css', 'utf8', function (err,data) {
if (err) {
return console.log(err);
}
var result = data.replace(/#FFF/g, () => '#' + ("000000" + Math.random().toString(16).slice(2, 8).toUpperCase()).slice(-6));
fs.writeFile('stars.css', result, 'utf8', function (err) {
if (err) return console.log(err);
});
});
I'm looking for a way to detect any HEX color within the file, and replace it with a new random HEX color.
Here's what I tried:
var result = data.replace(/^#[0-9A-F]{6}$/i.test('#AABBCC'), () => '#' + ("000000" + Math.random().toString(16).slice>
Also, ("000000" + Math.random().toString(16).slice(2, 8).toUpperCase()).slice(-6) is the only way for me to get HEX color, as Math.floor(Math.random()*16777215).toString(16) method throws an error on my webpage
A:
Replace data.replace(/^#[0-9A-F]{6}$/i.test('#AABBCC'), () => '#' + ("000000" + Math.random().toString(16).slice(2, 8).toUpperCase()).slice(-6)); with:
data.replace(/#[0-9A-F]{3,6}/ig, () => `#${Math.floor(Math.random()*16777215).toString(16)}`);
I added the global flag to your regex and found a shorter way to generate a random color from here, besides removing the unnecessary .test and deleting the ^/$ anchors (which tie the match to the start and end of the string).
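The key idea in the fix is passing a callback to a global replace, so a fresh color is generated per match rather than once. The same technique, sketched in Python for comparison:

```python
import random
import re

css = "a { color: #FFF } b { color: #AABBCC }"

def random_hex(_match):
    # 0xFFFFFF (16777215) is the largest 24-bit color value.
    return "#{:06X}".format(random.randint(0, 0xFFFFFF))

# The callback runs once per match, so every color gets its own random value.
result = re.sub(r"#[0-9A-F]{3,6}", random_hex, css, flags=re.IGNORECASE)
print(result)
```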
Q:
Can't we use other than fetch method we set as default in ATTR_DEFAULT_FETCH_MODE?
I set PDO::ATTR_DEFAULT_FETCH_MODE => PDO::FETCH_ASSOC as the default in my connection, and now I am having a problem with fetchAll.
The problem I am having is:
Notice: Undefined offset: 0 in C:\wamp\www\a\test.php on line 59
And line 59 is :
if($sql->fetchAll()[0][0] !== '0' && $level !== 0){
Here is my connection type:
$host = 'localhost';
$db = 'demo';
$user = 'root';
$pass = '';
$charset = 'utf8mb4';
$dsn = "mysql:host=$host;dbname=$db;charset=$charset";
$options = array(
PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION,
PDO::ATTR_DEFAULT_FETCH_MODE => PDO::FETCH_ASSOC,
PDO::ATTR_EMULATE_PREPARES => TRUE,
);
try {
$pdo = new PDO($dsn, $user, $pass, $options);
} catch (\PDOException $e) {
throw new \PDOException($e->getMessage(), (int)$e->getCode());
}
And here is the query I am trying:
If I remove ATTR_DEFAULT_FETCH_MODE it works fine. Is there a way to use both?
function bootstrap_menu($pdo, $parent_id, $level = null) {
$stmt = $pdo->prepare("SELECT * FROM categories WHERE parent_id =:parent_id");
$stmt->bindParam(":parent_id", $parent_id, PDO::PARAM_INT);
$stmt->execute();
foreach ($stmt->fetchAll() as $row) {
$sql = $pdo->prepare("SELECT count(*) FROM categories WHERE parent_id =:cat_id");
$sql->bindParam(":cat_id", $row['cat_id'], PDO::PARAM_INT);
$sql->execute();
if($sql->fetchColumn()[0][0] !== '0' && $level !== 0){
echo "<li class=\"nav-item dropdown\">\n";
echo "<a class=\"nav-link dropdown-toggle\" href=\"".htmlspecialchars($row['seo_url'])."\" id=\"navbarDropdown\" role=\"button\" data-toggle=\"dropdown\"
aria-haspopup=\"true\" aria-expanded=\"false\">\n";
echo $row['cat_name'];
echo "</a>\n";
echo "<ul class=\"dropdown-menu\" aria-labelledby=\"navbarDropdown\">\n";
bootstrap_menu($pdo, $row[0], $level - 1);
echo "</ul>\n";
echo "</li>\n";
}elseif($level == 0){
$my_class = htmlspecialchars($row['cat_name']);
if($my_class == 'Ana Sayfa'){
echo "<li class=\"nav-item active\">\n";
echo "<a href=\"".htmlspecialchars($row['seo_url'])."\" class=\"nav-link \">\n";
echo htmlspecialchars($row['cat_name']);
echo "</a>\n";
echo "</li>\n";
}else{
echo "<li class=\"nav-item\">\n";
echo "<a href=\"".htmlspecialchars($row['seo_url'])."\" class=\"nav-link\">\n";
echo htmlspecialchars($row['cat_name']);
echo "</a>\n";
echo "</li>\n";
}
}
else {
echo "<li class=\"dropdown-item\">";
echo "<a href=\"".htmlspecialchars($row['seo_url'])."\" class=\"dropdown-item\">\n";
echo htmlspecialchars($row['cat_name']);
echo "</a>\n";
echo "</li>\n";
}
unset($sql);
}
unset($stmt);
}
A:
$sql->fetchAll()[0][0] simply means the first column of the first row. There is already a method for that in PDO: fetchColumn().
You just need:
if($sql->fetchColumn() !== '0' && $level !== 0)
or even better
if($sql->fetchColumn() && $level !== 0)
If for whatever reason you still want to use fetchAll() then you can pass the fetch style as an argument. e.g. $sql->fetchAll(\PDO::FETCH_BOTH)
Also, the fetchAll() in the outer loop foreach ($stmt->fetchAll() as $row) is not really needed. Simply foreach ($stmt as $row) will do.
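For comparison, the "first column of the first row" shortcut exists in most database APIs; in Python's sqlite3 the same thing is fetchone() plus an index:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE categories (cat_id INTEGER, parent_id INTEGER)")
conn.execute("INSERT INTO categories VALUES (1, 0), (2, 1)")

cur = conn.execute("SELECT count(*) FROM categories WHERE parent_id = ?", (1,))
# fetchone() returns the first row; [0] takes its first column,
# just like PDO's fetchColumn().
count = cur.fetchone()[0]
print(count)  # 1
```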