Q:
View Azure Blob Metadata Online
Is there a way to examine an Azure blob's metadata through a web interface or the Azure portal?
I'm running into a problem where I set metadata on a blob programmatically, without any problems, but when I go back to read the metadata in another section of the program there isn't any. So I'd like to confirm that the metadata was, in fact, written to the cloud.
A:
One of the simplest ways to set/get an Azure Storage Blob's metadata is by using the cross-platform Microsoft Azure Storage Explorer, which is a standalone app from Microsoft that allows you to easily work with Azure Storage data on Windows, macOS and Linux.
Just right-click on the blob you want to examine and select Properties; you will see the metadata list if any metadata exists.
Note: Version tested - 0.8.7
| {
"pile_set_name": "StackExchange"
} |
Q:
There is a number, the second digit of which is smaller than its first digit by 4, and if the number
There is a number, the second digit of which is smaller than its first digit by 4, and if the number was divided by the digit's sum, the remainder would be 7.
Actually, I know the answer is 623.
I found it by using a computer program which checks the conditions for all numbers, but I wanted to know if there is a mathematical way to solve this problem.
A:
One-digit case is impossible, since $4\not \equiv7 \mod 4$
Two-digit case: write number as $10(a+4)+a$.
$$10(a+4)+a \equiv 7 \mod (2a+4)$$
$$11a\equiv -33 \mod 2a+4$$
$$a+3 \equiv 0 \mod 2a+4$$
$$2(a+3) \equiv 2 \not \equiv 0 \mod 2a+4$$
Therefore, two-digit is impossible.
Three-digit case: Write number as $100(a+4)+10a+b$.
$$100(a+4)+10a+b\equiv 7 \mod (2a+b+4)$$
$$110a+b+400 \equiv 7 \mod (2a+b+4)$$
$$108a+396\equiv7 \mod (2a+b+4)$$
$$108a+389\equiv 0\mod (2a+b+4)$$
When $a=1$,
$$497\equiv 0 \mod b+6$$
Since $497=7\times 71$, the only candidate is $b+6=7$. However, we do not want that, since the modulus must be greater than the remainder: "remainder $=7$" $\implies$ "modulus $>7$".
When $a=2$,
$$605 \equiv 0 \mod b+8$$
$605 = 5 \times 11^2$. Therefore, we can take $b+8=11 \implies b=3$.
Therefore our final answer is $\fbox{623}$.
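For completeness, the brute-force search the asker mentions is easy to sketch (in Python here; the asker's language is not stated), and it confirms that 623 is the only two- or three-digit solution:

```python
# Check every candidate: second digit = first digit - 4,
# and the number leaves remainder 7 when divided by its digit sum.
def matches(n):
    digits = [int(d) for d in str(n)]
    if digits[1] != digits[0] - 4:
        return False
    return n % sum(digits) == 7

solutions = [n for n in range(10, 1000) if matches(n)]
print(solutions)  # [623]
```

Note that the $a=1$ case above (modulus $b+6=7$) is rejected automatically here, since `n % 7` can never equal 7.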
Q:
Knockout JS Checkbox binding
I am having an issue with a checkbox binding and a computed observable while jquery is in the page.
self.Guaranteed = ko.computed(function() {
var result;
result = self.IsGuaranteed() && (self.IsAllowed() || self.IsAllowed2());
console.log("evaluated");
if (result) {
self.IsGuaranteed(false);
}
return result;
}, self);
jsfiddle of runnable code
So I get expected behavior when I don't include jQuery in the page, but when jQuery is on the page, dependencies do not get correctly registered in the computed observable and the Guaranteed checkbox will never uncheck when the condition is met. The conditional text may or may not show up, depending on whether the computed recalculates. Unfortunately I can't just get rid of jQuery from the page; any other ideas?
EDIT:
This is the same example as above with expected behavior, but no jQuery loaded: working jfiddle
Also, here are the lines in knockout that are resetting the checkbox back:
// For click events on checkboxes, jQuery interferes with the event handling in an awkward way:
// it toggles the element checked state *after* the click event handlers run, whereas native
// click events toggle the checked state *before* the event handler.
// Fix this by intecepting the handler and applying the correct checkedness before it runs.
var originalHandler = handler;
handler = function(event, eventData) {
var jQuerySuppliedCheckedState = this.checked;
if (eventData)
this.checked = eventData.checkedStateBeforeEvent !== true;
originalHandler.call(this, event);
this.checked = jQuerySuppliedCheckedState; // Restore the state jQuery applied
};
A:
I'm not quite sure why you would be getting the "correct" behavior considering this pattern, within a computed, needs special consideration. The problem is that you are modifying the computed dependency self.IsGuaranteed inside the computed. See Note: Why circular dependencies aren’t meaningful. If you 'must' use this setup, then take into consideration the advice within the link of using peek.
This may be a simplified example of a more complex setup, but if the desired functionality is to prevent guaranteed from being selected, why not just disable the element?
Model
self.isGuaranteeable = ko.computed(function(){
return !(self.IsAllowed() || self.IsAllowed2());
});
Html
<input data-bind="checked: IsGuaranteed, enable: isGuaranteeable" id="IsGuaranteed" name="IsGuaranteed" type="checkbox" value="true"> <span>Guaranteed</span>
Explanation
Have a look at your example with some debug capabilities added.
Case 1: You select either Allowed checkbox and then select Guaranteed. Next, click to print the IsGuaranteed observable value.
[09:45:00.623] Computing (computed)
[09:45:00.623] IsGuaranteed: (assigning) false
[09:45:00.623] IsGuaranteed: (assigning) true
[09:45:38.430] Actual value: false
When you entered the computed, the value of IsGuaranteed was being set to true. You encountered the state you are trying to prevent and set the observable to false. The problem, however: "Knockout will not restart evaluation of a computed while it is already evaluating". The span rendered on the "first" pass of the computed, and the reassignment of the computed dependency did not trigger a re-evaluation, so the text stays on the screen. You interrupted the original assignment of true, which resolves last, leaving the checkbox selected even though the actual value behind the scenes is false.
Case 2: Select Guaranteed, then either Allowed followed by a print of the actual value.
[09:58:01.367] Computing (computed)
[09:58:01.367] IsGuaranteed: (assigning) true
[09:58:02.199] Computing (computed)
[09:58:02.199] IsGuaranteed: (assigning) false
[09:58:03.054] Actual value: false
The first selection triggers everything appropriately. The selection of the Allowed forces the computed to evaluate and the "bad" state triggers the reassignment of IsGuaranteed. Since that value was not the original trigger of the computed, it assigns the value correctly, however, the backend is now in a state that should hide the warning text but it did not due to the circular dependency.
Q:
Prove Schwarz inequality in $R^2$
Can someone please show me how you would prove the following in $R^2$
$\int f(x)* g(x) dx \leqslant \int f(x)^2 dx * \int g(x)^2 dx $
starting from
$\int [\lambda*f(x) - g(x)]^2 dx \geqslant 0$, where $\lambda$ is real
Also, can someone confirm that $\int f(x)^2 dx = \int f(x) dx \int f(x) dx$
A:
We have that $\int_{a}^{b}\left(\lambda f(x)-g(x)\right)^2dx\ge0$ for all $\lambda$, so
$\displaystyle\lambda^{2}\left(\int_{a}^{b}(f(x))^2dx\right)-2\lambda\left(\int_{a}^{b}f(x)g(x)dx\right)+\int_{a}^{b}(g(x))^2dx\ge0$ for all $\lambda$.
Therefore $b^2-4ac=\displaystyle4\left(\int_{a}^{b}f(x)g(x)dx\right)^2-4\left(\int_{a}^{b}(f(x))^2dx\right)\left(\int_{a}^{b}(g(x))^2dx\right)\le0$,
so $\;\;\;\displaystyle\left(\int_{a}^{b}f(x)g(x)dx\right)^2\le\left(\int_{a}^{b}(f(x))^2dx\right)\left(\int_{a}^{b}(g(x))^2dx\right)$.
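As a numerical sanity check of the final inequality (not part of the proof), one can plug in a concrete pair of functions; here $f(x)=x$ and $g(x)=1$ on $[0,1]$ are my illustrative choices. The same pair also answers the last question in the negative: $\int f^2\,dx$ is generally not $(\int f\,dx)^2$.

```python
# Numerical check of (int fg)^2 <= (int f^2)(int g^2) on [0, 1],
# using the midpoint rule with f(x) = x and g(x) = 1.
N = 100_000
dx = 1.0 / N
xs = [(i + 0.5) * dx for i in range(N)]

f = lambda x: x
g = lambda x: 1.0

int_fg = sum(f(x) * g(x) for x in xs) * dx  # ~ 1/2
int_f2 = sum(f(x) ** 2 for x in xs) * dx    # ~ 1/3
int_g2 = sum(g(x) ** 2 for x in xs) * dx    # ~ 1

# (int fg)^2 = 1/4 <= 1/3 = (int f^2)(int g^2): the inequality holds.
print(int_fg ** 2 <= int_f2 * int_g2)  # True
# Also: int f^2 (~1/3) != (int f)^2 (= 1/4), so squaring does not
# commute with integration.
```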
Q:
Passing text from my app to YouTube
How do I pass text via an implicit intent so that it goes straight into the search query in the YouTube app?
A:
Try this:
Intent intent = new Intent(Intent.ACTION_SEARCH);
intent.setPackage("com.google.android.youtube");
intent.putExtra("query", "some query string");
intent.setFlags(Intent.FLAG_ACTIVITY_NEW_TASK);
startActivity(intent);
Just don't forget to wrap startActivity(intent) in
try {
...
} catch(ActivityNotFoundException e) {
...
}
in case the YouTube app is not installed on the device.
Q:
Stop GlassFish wrapping Jersey exceptions
Here's a JAX-RS end-point:
@Path("logout")
@POST
public void logout(@HeaderParam("X-AuthToken") String apiToken) {
try {
authenticationService.logout(apiToken);
} catch (ExpiredApiTokenException exc) {
throw new BadRequestException("API token has expired");
} catch (InvalidApiTokenException exc) {
throw new BadRequestException("API token is not valid");
} catch (ApplicationException exc) {
throw new InternalServerErrorException();
}
}
When one of these BadRequestExceptions (HTTP 400) is thrown, GlassFish returns its own error page in the response body instead of the error message in my code. The response contains the correct HTTP code; just the body is replaced.
I have tried creating an ExceptionMapper:
@Provider
public class ExceptionMapperImpl implements ExceptionMapper<Throwable> {
@Override
public Response toResponse(Throwable exception) {
if (exception instanceof WebApplicationException) {
return ((WebApplicationException) exception).getResponse();
} else {
logger.log(Level.SEVERE, "Uncaught exception thrown by REST service", exception);
return Response.status(Response.Status.INTERNAL_SERVER_ERROR).build();
}
}
private static final Logger logger = Logger.getLogger(ExceptionMapperImpl.class.getName());
}
And I tried adding the following ejb-jar.xml:
<?xml version="1.0" encoding="UTF-8"?>
<ejb-jar xmlns="http://xmlns.jcp.org/xml/ns/javaee"
version="3.2"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://xmlns.jcp.org/xml/ns/javaee http://xmlns.jcp.org/xml/ns/javaee/ejb-jar_3_2.xsd">
<assembly-descriptor>
<application-exception>
<exception-class>javax.ws.rs.WebApplicationException</exception-class>
<rollback>true</rollback>
</application-exception>
</assembly-descriptor>
</ejb-jar>
all to no avail.
What am I missing?
A:
In the toResponse method, I changed this
return exception.getResponse();
to this
return Response.status(exception.getResponse().getStatus())
.entity(exception.getMessage())
.build();
The problem is that when I do this:
throw new BadRequestException("Something is wrong!");
it doesn't populate the exception's Response's body with "Something is wrong!". GlassFish sees that the response body is empty and its status code is an error code, so it inserts its own response body. By populating the response body (a.k.a. entity) in the provider, this problem goes away.
Q:
How can I find out if a link is "Clean" or "Dirty"?
Today I received a link in an email. Is there any way to be sure it will not lead to a virus before I open it? I googled it and came up with no hits.
EDIT: I have sent a message to the sender querying if they sent it. I'll update when I get an answer.
EDIT: Nov. 1 - I sent a message to the sender asking if they had sent it. No response to date. I am deleting it.
A:
You can never be sure, but there are websites which rate the safety of other sites, eg Site Advisor and Web of Trust.
A:
You can check the link manually by using any web proxy tunnels (e.g. ktunnel.com). Make sure that you check the "Remove scripts" option.
Q:
Rock, Paper, Scissors game made in python
I added Rock, Paper, Scissors to one of my programs. I'm new to python and PEP 8, so I want to see how to improve my work.
def rock_paper_scissors(name):
play_rps = True
while play_rps:
rps = ["rock", 'paper', 'scissors']
player_rps = input("Rock, Paper, or Scissors: ").lower()
com_rps = rps[random.randint(0,3)]
print(com_rps)
if com_rps == player_rps:
print('Tie')
if com_rps == 'rock' and player_rps == "scissors":
print("Chatter Bot Wins!")
if com_rps == 'scissors' and player_rps == "paper":
print("Chatter Bot Wins!")
if com_rps == 'paper' and player_rps == "rock":
print("Chatter Bot Wins!")
if player_rps == 'rock' and com_rps == "scissors":
print(f"{name} Wins!")
if player_rps == 'sicssors' and com_rps == "paper":
print(f"{name} Wins!")
if player_rps == 'paper' and com_rps == "rock":
print(f"{name} Wins!")
yn = input("Do you want to play again. Y/N: ").lower()
if yn == 'n' or yn == 'no':
play_rps = False
A:
Here are some extra suggestions:
Considering that this is a piece of a much larger system, I'd suggest that you make it a class. In fact, make it two classes: the game, and the input/output mechanism. That should facilitate writing unit tests. I'm not going to assume a class for my remaining suggestions, but the point stands.
As @Peilonrayz pointed out, you need some more functions. I'll suggest that you focus on creating functions for general-purpose interaction with the user. This would let you re-use those same functions in other games, other parts of your bot, etc.
player_rps = input("Rock, Paper, or Scissors: ").lower()
This is a potential bug. What happens if I don't spell my answer right, or if I enter "Gecko" instead? So, write a function to get a choice from the user. Allow abbreviations. Something like:
def input_choice(choices, default=None):
""" Presents a list of choices to the user and prompts her to enter one. If
default is not None, an empty input chooses the default. Abbreviations are
allowed if they identify exactly one choice. Case is ignored.
Returns the chosen value as it appears in the choices list (no abbreviations).
"""
pass
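One possible body for that helper — my own sketch, not the reviewer's (the reviewer deliberately left it as `pass`) — keeping the same contract of case-insensitive, unambiguous-abbreviation matching:

```python
def input_choice(choices, default=None):
    """Prompt until the user enters a unique, case-insensitive
    (possibly abbreviated) match for one of *choices*.
    An empty input returns *default* if it is not None.
    Returns the matching element of *choices* verbatim."""
    prompt = "/".join(choices) + ": "
    while True:
        answer = input(prompt).strip().lower()
        if not answer and default is not None:
            return default
        # Keep every choice the answer is a prefix of.
        found = [c for c in choices if c.lower().startswith(answer)]
        if answer and len(found) == 1:
            return found[0]
        print("Please enter one of:", ", ".join(choices))
```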
Use random.choice to choose a random value from a sequence. You don't need to pick a number and then index with it:
rps = ["rock", 'paper', 'scissors']
player_rps = input("Rock, Paper, or Scissors: ").lower()
com_rps = rps[random.randint(0,3)]
becomes:
rps = "Rock Paper Scissors".split()
player_rps = input_choice(rps)
com_rps = random.choice(rps)
You can merge your data to simplify your code. There are many paragraphs that look like this:
if com_rps == 'rock' and player_rps == "scissors":
print("Chatter Bot Wins!")
This is really a 2-inputs -> 1-output function, but you can use a dictionary for this just by concatenating the inputs:
i_win = "Chatter Bot wins!"
u_win = f"{name} wins!"
winner = {
"Rock:Scissors": i_win,
"Scissors:Paper": i_win,
"Paper:Rock": i_win,
"Scissors:Rock": u_win,
"Paper:Scissors": u_win,
"Rock:Paper": u_win,
}
contest = f"{com_rps}:{player_rps}"
print(winner.get(contest, "It's a tie!"))
Edit:
@Peilonrayz pointed out that tuples were a better choice than strings, and he's right. So here's a slightly different version:
winners = { ('Rock', 'Scissors'), ('Scissors', 'Paper'), ('Paper', 'Rock'), }
result = ("Chatter Bot wins!" if (com_rps, player_rps) in winners
          else f"{name} wins!" if (player_rps, com_rps) in winners
          else "It's a tie!")
print(result)
You need more functions:
yn = input("Do you want to play again. Y/N: ").lower()
if yn == 'n' or yn == 'no':
play_rps = False
This should be a function, like:
def input_yn(prompt, default=None):
""" Prompts the user for a yes/no answer. If default is provided, then an
empty response will be considered that value. Returns True for yes,
False for no.
"""
pass
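Again, the `pass` is the reviewer's deliberate stub; a minimal sketch of my own that satisfies the docstring could be:

```python
def input_yn(prompt, default=None):
    """Prompts the user for a yes/no answer. If default is provided,
    an empty response is treated as that value. Returns True for yes,
    False for no; re-prompts on anything else."""
    while True:
        answer = input(prompt).strip().lower()
        if not answer and default is not None:
            return default
        if answer in ("y", "yes"):
            return True
        if answer in ("n", "no"):
            return False
        print("Please answer Y or N.")
```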
Q:
Meta-Tag Module that can use existing fields
On my node, I have several existing fields, such as field_introduction and field_book_tags. These fields were created by me and do not come with Drupal by default.
I would like to know if there are any modules which will let me use these fields for the Meta Description and Keywords tags.
The fields I need to use for the meta tags will vary between content types, so the meta tags solution must allow local overrides.
Here are some of the modules I have looked at:
Meta Tag, Custom Meta - Allows use of default Drupal fields only
Meta Tag Quick - Allows default Drupal fields only. Doesn't give much flexibility for selecting fields.
Simple Meta - Doesn't support any existing fields.
Beanstag - This allows the use of existing fields (via tokens). But, in order to target content types, a "folder" name has to be used in the URL (e.g. mysite.com/news/my-news for "news" content types, mysite.com/info/my-story for page content types). Although the folder name can be anything, I prefer my urls to be "folder" free (e.g. mysite.com/my-news)
It looks like BeansTag will have to make-do, but I thought I would ask in case anyone has better suggestions.
A:
You have to go to:
/admin/structure/types/manage/[YOUR CONTENT TYPE]/display/token
Then make sure:
All fields are visible (not hidden)
The format should be "default", but in some circumstances you might need to set it to "plain".
Q:
LatLngBounds in Google Earth JS API?
I'm looking for something similar to Google Maps' "LatLngBounds" in the Google Earth API, but I can't find one in the Google Earth API Reference.
I need this to center the camera view on all the markers/placemarks placed and zoom to a proper level.
Is there such a thing, or do I need to do it myself?
A:
You might check out the Earth API Utility Library, specifically the computeBounds() function.
Note that it walks the KML DOM and sums over the bounds, so it might be rather slow depending on the number of objects you want to have in view, and it will be conservative when calculating that view, so the bounds may end up larger than is optimal. If you want to take what's there and customize it, though, the source code is right there for the taking.
Q:
Nginx server block not working
Disclaimer: Just to clarify, I'm completely new to Linux, but I've configured everything via google searches and personal research.
I've got a Debian Wheezy server with LEMP stack that I intend to use as the host of a domain.
I got the DNS working so that when I enter the domain I get the "welcome to nginx" page. The trouble arises because I have already created the directory that will host the site and populated it with the files of the site (index.php is the main page), and I have also configured the server block (/etc/nginx/sites-available/example.com) like so:
UPDATED Server block
server {
listen 80;
server_name example.com www.example.com;
root /var/www/example.com;
index index.php index.html index.htm;
location / {
try_files $uri $uri/ /index.html;
}
error_page 404 /404.html;
error_page 500 502 503 504 /50x.html;
location = /50x.html {
root /var/www/example.com;
}
location ~ \.php$ {
fastcgi_split_path_info ^(.+\.php)(/.+)$;
fastcgi_pass unix:/var/run/php5-fpm.sock;
fastcgi_index index.php;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
include fastcgi_params;
}
}
with a symbolic link to the sites-enabled directory. But even after restarting the service and/or restarting the machine, the domain still displays the "welcome to nginx" message. I've tried editing other lines based on the answers to similar problems on this site; so far there's been no change, and the error logs show nothing. What could be causing the fault in the configuration?
Thanks in advance for the answers
UPDATE: Here's the nginx.conf file. Between experimenting with commenting and uncommenting some lines in sites-available and trying to copy the file to the sites-enabled directory, the domain now just plain refuses to load, giving a "No data received" error.
UPDATED nginx.conf
user www-data;
worker_processes 4;
pid /var/run/nginx.pid;
events {
worker_connections 768;
}
http {
##
# Basic Settings
##
sendfile on;
tcp_nopush on;
tcp_nodelay on;
keepalive_timeout 65;
types_hash_max_size 2048;
server_names_hash_bucket_size 64;
include /etc/nginx/mime.types;
default_type application/octet-stream;
##
# Logging Settings
##
access_log /var/log/nginx/access.log;
error_log /var/log/nginx/error.log;
##
# Gzip Settings
##
gzip on;
gzip_disable "msie6";
gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript;
##
# Virtual Host Configs
##
include /etc/nginx/conf.d/*.conf;
include /etc/nginx/sites-enabled/*;
}
UPDATE 2: After rechecking my configuration I noticed that the permissions were set incorrectly, and that the default site was somehow overriding mine even when it was not in the sites-enabled directory. I backed up and deleted the default, and just for good measure changed the root directory in the server block and moved the site's files there. It worked, and now the site loads. Now I don't know if I should just add this to the update or put it in another question, but here goes: the site now loads the HTML and CSS, but for some reason not the PHP. Any ideas on that? I'll update according to what I find or if you guys require more info. Thanks!
FINAL UPDATE: It took a bit, but I managed to know where the php error lied, I started digging through the logs and found out I needed to install php5-curl, the problem got fixed after that and now the site completely works.
A:
Are you using php CGI or FPM? You only need one fastcgi_pass directive, but both are uncommented.
Try making the index file index.html, and put such a file in the server root. Reload nginx and see if it serves it. Adding PHP complicates things, so make sure nginx itself works first. service nginx configtest is very helpful too.
Update:
It sounds like you are using PHP-FPM on port 9000. I would leave the unix socket line commented out, and make sure that the server can talk to its own port 9000 (TCP). As far as nginx is concerned, it sounds like it is working, but PHP may not be. By default, there will be separate log files for the FPM daemon and for the PHP scripts (probably in /var/log/ somewhere). Dive into /etc/php5/fpm and start looking through the .ini files.
Q:
Get label pop up on active layer only
I have this layer from Geoserver which I overlay in PNG Format on my leaflet map:
///// Geoserver Layers in WMS format
var pop_tot = L.tileLayer.wms("http://localhost:8081/geoserver/cite/wms", {
layers: 'cite:vnm_polbnda_adm3_2014_pdc',
format: 'image/png',
transparent: true,
version: '1.1.0',
attribution: "test"
});
I also have this piece of code to display the labels when I click on my polygons.
///// Add base layers + layers geoserver
L.control.layers(baseLayers,overlays).addTo(map);
/// Popup limite_inter_district
var owsrootUrl = 'http://localhost:8081/geoserver/cite/wms';
var defaultParameters = {
service : 'WFS',
version : '2.0',
request : 'GetFeature',
transparent: false,
typeName : 'cite:vnm_polbnda_adm3_2014_pdc',
outputFormat : 'json',
format_options : 'callback:getJson',
SrsName : 'EPSG:4326'
};
var parameters = L.Util.extend(defaultParameters);
var URL = owsrootUrl + L.Util.getParamString(parameters);
var ajax = $.ajax({
url : URL,
dataType : 'json',
jsonpCallback : 'getJson',
success : function (response) {
L.geoJson(response, {
style: function(feature) {
return {stroke: false, fillOpacity: 0.0};
},
onEachFeature: function (feature, url) {
popupOptions = {maxWidth: 250};
url.bindPopup("<b>Adm3 Name:</b> " + feature.properties.adm3_name
+ "<br><b>Total Population: </b>" + feature.properties.pop
,popupOptions);
}
}).addTo(map);
}
});
I would like the popup to be active only when my pop_tot layer is selected. How can I do that?
A:
If I understand the logic of your app correctly, you have the GeoJSON layer only for displaying popups. In this case the easiest solution would be to add this layer to the map when the pop_tot layer is added, and remove it when the pop_tot layer is removed.
For this purpose the pop_tot layer's add and remove events can be used. The first one is fired when the layer is added to the map, and the second one when it is removed. Those events can then be used to add/remove the GeoJSON layer when the pop_tot layer is added/removed.
So your $.ajax call can then look something like this:
var geoJSON;
var ajax = $.ajax({
url: URL,
dataType: 'json',
jsonpCallback: 'getJson',
success: function (response) {
geoJSON = L.geoJson(response, {
style: function(feature) {
return {stroke: false, fillOpacity: 0.0};
},
onEachFeature: function (feature, url) {
popupOptions = {maxWidth: 250};
url.bindPopup("<b>Adm3 Name:</b> " + feature.properties.adm3_name
+ "<br><b>Total Population: </b>" + feature.properties.pop
,popupOptions);
}
}).addTo(map);
pop_tot.on('add', function(evt) {
if (!map.hasLayer(geoJSON)) map.addLayer(geoJSON);
});
pop_tot.on('remove', function(evt) {
if (map.hasLayer(geoJSON)) map.removeLayer(geoJSON);
});
}
});
Q:
CFG for reverse polish notation
I need to create a CFG for reverse polish notation with operators +-*/ and then write out the right derivation and create an abstract syntax tree.
I understand how to create the derivation and the syntax tree but I don't really understand how to create a CFG given a set of rules.
I've done a lot of research online and I am only able to find out how to use a CFG but not how to create one with a given set of rules.
If someone could point me in the right direction or explain a different example of this that would be awesome. Thanks!
A:
Not sure what you are referring to with a given set of rules...? Isn't the grammar just
X -> X X o
X -> n
Where o is an operator and n a number?
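That two-production grammar corresponds to the usual stack check for RPN validity: each `n` is an `X` by itself, and each `o` combines the two most recent `X`s into one. A small Python recognizer (my own sketch, assuming whitespace-separated single tokens) illustrates the idea:

```python
def is_rpn(tokens, operators=("+", "-", "*", "/")):
    """Return True iff the token sequence derives from
    X -> X X o | n   (n = any non-operator token)."""
    depth = 0  # number of completed sub-expressions so far
    for tok in tokens:
        if tok in operators:
            if depth < 2:      # an operator needs two operands
                return False
            depth -= 1         # two popped, one result pushed
        else:
            depth += 1         # a number is an X by itself
    return depth == 1          # exactly one expression derived

print(is_rpn("3 4 + 5 *".split()))  # True
print(is_rpn("3 + 4".split()))      # False
```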
Q:
How to make the data to only row instead of multiple rows for account-number
Below is the result I'm getting; what I'm supposed to get should be in one row.
AccountNumber Name UnitNumber AdmStatus AdmitDate Insurance1 Insurance2 Insurance3 Insurance4 Insurance5
V000123456 FERG M000123456 DIS IN 11/11/2019 ins1
V000123456 FERG M000123456 DIS IN 11/11/2019 ins2
but I need the result to be in a single row, with Insurance2 filled in:
AccountNumber Name UnitNumber AdmStatus AdmitDate Insurance1 Insurance2 Insurance3 Insurance4 Insurance5
V000123456 FERG M000123456 DIS IN 11/11/2019 ins1 ins2
Here is the query I have attached:
SELECT
AccountNumber
,case when bio.InsuranceOrderID ='1' then bio.InsuranceID ELSE '' END AS 'Insurance1'
,case when bio.InsuranceOrderID ='2' then bio.InsuranceID ELSE '' END AS 'Insurance2'
,case when bio.InsuranceOrderID ='3' then bio.InsuranceID ELSE '' END AS 'Insurance3'
,case when bio.InsuranceOrderID ='4' then bio.InsuranceID ELSE '' END AS 'Insurance4'
,case when bio.InsuranceOrderID ='5' then bio.InsuranceID ELSE '' END AS 'Insurance5'
FROM BarVisits bv
left outer join BarInsuranceOrder bio on bio.SourceID = bv.SourceID and bv.VisitID = bio.VisitID
WHERE AccountNumber ='1213456'
group by AccountNumber
,bio.InsuranceOrderID
,bio.InsuranceID
A:
Use aggregation functions for the columns and fix the GROUP BY:
SELECT AccountNumber,
MAX(case when bio.InsuranceOrderID ='1' then bio.InsuranceID ELSE '' END) AS Insurance1,
MAX(case when bio.InsuranceOrderID ='2' then bio.InsuranceID ELSE '' END) AS Insurance2,
MAX(case when bio.InsuranceOrderID ='3' then bio.InsuranceID ELSE '' END) AS Insurance3,
MAX(case when bio.InsuranceOrderID ='4' then bio.InsuranceID ELSE '' END) AS Insurance4,
MAX(case when bio.InsuranceOrderID ='5' then bio.InsuranceID ELSE '' END) AS Insurance5
FROM BarVisits bv LEFT JOIN
BarInsuranceOrder bio
ON bio.SourceID = bv.SourceID AND bv.VisitID = bio.VisitID
WHERE AccountNumber = 'V000123456'
GROUP BY AccountNumber
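The conditional-aggregation pivot is standard SQL, so its effect can be sanity-checked on toy data, e.g. in SQLite from Python (the table and column names below are simplified stand-ins, not the asker's schema):

```python
import sqlite3

# Toy reproduction of the pivot: two insurance rows collapse to one.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE ins (AccountNumber TEXT, OrderID INTEGER, InsuranceID TEXT);
    INSERT INTO ins VALUES ('V000123456', 1, 'ins1'), ('V000123456', 2, 'ins2');
""")
row = con.execute("""
    SELECT AccountNumber,
           MAX(CASE WHEN OrderID = 1 THEN InsuranceID ELSE '' END) AS Insurance1,
           MAX(CASE WHEN OrderID = 2 THEN InsuranceID ELSE '' END) AS Insurance2
    FROM ins
    GROUP BY AccountNumber
""").fetchone()
print(row)  # ('V000123456', 'ins1', 'ins2')
```

MAX picks the non-empty value per group, which is why the GROUP BY must list only AccountNumber and not the pivoted columns.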
Q:
How to convert a DateTime value to dd/mm/yyyy in jQuery?
I have a datetime value of /Date(1475173800000)/ in jQuery. I want it to be displayed as dd/mm/yyyy. Is there any way to achieve this?
A:
You can chain new Date() (with the universal-time value as its parameter), Date.prototype.toJSON(), String.prototype.slice(), String.prototype.split() with parameter "-", Array.prototype.reverse(), and Array.prototype.join() with parameter "/". Note that toJSON() uses UTC, so the result may be off by a day depending on the timezone.
var time = 1475173800000;
var date = new Date(time).toJSON().slice(0, 10).split("-").reverse().join("/");
console.log(date);
Q:
java.lang.IndexOutOfBoundsException thrown in ANTLR4
I was writing my parser and I am getting an awkward error.
One of my rules is like the following:
operatorTerm[ReadOptions Options, int priority] returns [Term t]
@init
{
int p = priority;
Term t2 = null;
Term t = null;
CompoundTermTag f = null;
}
: {testFY($Options, $priority)}?
a=op[$Options, p] {f = $a.tag;}
b=term[$Options, p] {$t = $b.t; $t = createTerm(f, $t); }
| {testFX($Options, $priority)}?
d=op[$Options, p] {f = $d.tag;}
e=term[$Options, p-1] {$t = $e.t; $t = createTerm(f, $t);}
;
This will throw the following error when using the ANTLR4 alias:
Exception in thread "main" java.lang.IndexOutOfBoundsException: Index: 199, Size: 199
at java.util.ArrayList.rangeCheck(ArrayList.java:604)
at java.util.ArrayList.get(ArrayList.java:382)
at org.antlr.v4.automata.ATNSerializer.serialize(ATNSerializer.java:201)
at org.antlr.v4.automata.ATNSerializer.getSerialized(ATNSerializer.java:375)
at org.antlr.v4.codegen.model.SerializedATN.<init>(SerializedATN.java:46)
at org.antlr.v4.codegen.model.Parser.<init>(Parser.java:96)
at org.antlr.v4.codegen.ParserFactory.parser(ParserFactory.java:92)
at org.antlr.v4.codegen.OutputModelController.parser(OutputModelController.java:165)
at org.antlr.v4.codegen.OutputModelController.buildParserOutputModel(OutputModelController.java:114)
at org.antlr.v4.codegen.CodeGenerator.generateParser(CodeGenerator.java:169)
at org.antlr.v4.codegen.CodeGenPipeline.process(CodeGenPipeline.java:73)
at org.antlr.v4.Tool.processNonCombinedGrammar(Tool.java:422)
at org.antlr.v4.Tool.process(Tool.java:384)
at org.antlr.v4.Tool.processGrammarsOnCommandLine(Tool.java:343)
at org.antlr.v4.Tool.main(Tool.java:190)
Now, if I chop off a few lines from the code, it compiles; making this new rule has thrown this exception. Also, if I chop off a few lines from this rule itself, the exception won't be thrown.
Is it because the size of my grammar has become very large ?
Unable to figure this error out.
Also, I experience no such error if I trim off some other independent rule instead. Is there a limit to the size of a grammar file?
My grammar file is around 800 lines.
A:
This sounds like a bug in a specific version of the ANTLR 4 tool. You can report bugs to the project issue tracker on GitHub:
https://github.com/antlr/antlr4/issues
Note: I cannot reproduce this issue in the latest source code, which will be available as ANTLR 4.1 released at the end of this month.
Q:
Filter subform by using combo box
I have a combo box on a form that I want to filter a subform (SubSearchMaster_frm).
I am receiving:
Runtime error 3464: data type mismatch in expression.
The code is below:
Private Sub CboNIIN_AfterUpdate()
Me.SubSearchMaster_frm.Form.Filter = "[NIIN] = " & Me.CboNIIN
Me.SubSearchMaster_frm.Form.FilterOn = True
End Sub
The subform is a query.
I have also tried:
Private Sub CboNIIN_AfterUpdate()
Dim sql As String
sql = "Select * from SubSearchMaster_frm where ([NIIN] = " & Me.CboNIIN & ") From subsearchmaster_frm"
Me.SubSearchMaster_frm.Form.RecordSource = sql
Me.SubSearchMaster_frm.Form.Requery
End Sub
But I'm getting an error on that too.
A:
Since you have stated that the NIIN field is of Text datatype, you will need to surround the filter value with single or double quotes; otherwise you will receive the familiar data type mismatch error message.
For example:
Me.SubSearchMaster_frm.Form.Filter = "[NIIN] = '" & Me.CboNIIN & "'"
Without the quotes, a numerical value is being supplied, thus resulting in a data type mismatch.
Q:
The real 2019 solution to disable chrome autofill
I know this question has a lot of answers. I have looked through all the solutions for disabling the Google autocomplete (the drop-down of suggestions), like using autocomplete=off or autocomplete=false, but nothing has solved the issue.
I have created an MVC app that has views with dropdown lists and HTML EditorFor.
One solution, adding a name to the HTML EditorFor, helped to remove autocomplete; however, since I changed the name of the HTML EditorFor, I had an issue posting back the value.
<div class="col-md-10">
@Html.EditorFor(model => model.Address, new {htmlAttributes = new { @class = "form-control", @id = "show_address", Name = Guid.NewGuid().ToString(), autocomplete = "noped" } })
</div>
Does anybody have a solution for 2019 to disable the google autocomplete?
Update:
I tried using Html.TextBoxFor (as given in the first solution below); however, I have realised that autocomplete=off only works if there is just one other TextBoxFor in the view. If there are multiple TextBoxFor elements in the same view, using autocomplete=off on any of them will not disable autocomplete for any of them! Can anyone please help?
A:
I think I have found the answer. I give credit to @Mangesh Ati for this solution. I just wanted to summarise the solution for anyone else interested.
autocomplete=off works to disable Google autosuggestions on all @Html.TextBoxFor helpers, except for the Address part of the model; for that, use a random string as the value instead, e.g. autocomplete=some_random_value.
Important:
If you are using a jQuery autocomplete on the TextBoxFor, it is important to add the random-string autocomplete attribute on .focus, like below:
$('#show_address').autocomplete({
}).focus(function () {
$(this).attr('autocomplete', 'some_random_value');
});
| {
"pile_set_name": "StackExchange"
} |
Q:
What do these metrics mean for Spark Structured Streaming?
spark.streams.addListener(new StreamingQueryListener() {
......
override def onQueryProgress(queryProgress: QueryProgressEvent): Unit = {
println("Query made progress: " + queryProgress.progress)
}
......
})
When a StreamingQueryListener is added to a Spark Structured Streaming session and the queryProgress is printed continuously, one of the metrics you will get is durationMs:
Query made progress: {
......
"durationMs" : {
"addBatch" : 159136,
"getBatch" : 0,
"getEndOffset" : 0,
"queryPlanning" : 38,
"setOffsetRange" : 14,
"triggerExecution" : 159518,
"walCommit" : 182
}
......
}
Can anyone tell me what those sub-metrics in durationMs mean in the Spark context? For example, what is the meaning of "addBatch" : 159136?
A:
https://www.waitingforcode.com/apache-spark-structured-streaming/query-metrics-apache-spark-structured-streaming/read
This is an excellent site that addresses these aspects and more, so the credit goes to that site.
| {
"pile_set_name": "StackExchange"
} |
Q:
Calculation {member_field} * {channel_field}
Is there a simple way to do this calculation on a page without using PHP on output?
I have a member_field with a value and a channel_field with a value and want to output
{member_field} * {channel_field} =
A:
There is no direct way. I suggest you install this module.
Edit
In case you have values with comma separators (e.g., 5,00,000.05), you need to edit the pi.math.php file and add this code at line 54:
// remove commas
$formula = str_replace("," , "" , $formula);
| {
"pile_set_name": "StackExchange"
} |
Q:
Is zero (Integer) treated as blank or empty in Django?
I tried entering integer zero in an input field (IntegerField); however, that zero never appears to be shown after submit.
This is part of my template:
{{user_date_table.june_11}}
{% if not user_date_table.june_11_steps %}
<form action="/steps_count/index/{{ username }}/table/" method="post">
{% csrf_token %}
<input type="text" autocorrect="off" autocapitalize="off" name="june_11_steps"/>
<input type="submit" value="Add"/>
</form>
{% else %}
{{user_date_table.june_11_steps}}
{% endif %}
I think that if not user_date_table.june_11_steps evaluates to True, i.e. the field is treated as blank if I use zero as input. Is this correct, and how can I overcome it?
A:
In python, all of the following are evaluated as False:
'', "", 0, [], {}, (), False, None
So, if user_date_table.june_11_steps is 0, then {% if not user_date_table.june_11_steps %} will evaluate to True and that block will be executed.
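A quick way to convince yourself of this in a plain Python shell (a minimal check, independent of Django; the variable name is only illustrative):

```python
# Each of these evaluates as False in a boolean context
falsy_values = ['', "", 0, [], {}, (), False, None]
assert all(not v for v in falsy_values)

# So a stored step count of 0 is indistinguishable from "no value yet"
# in a plain truthiness test; compare against None explicitly instead
june_11_steps = 0
assert not june_11_steps          # the `if not` branch would fire
assert june_11_steps is not None  # yet a real value is present
```

In view code, an explicit value is None test is therefore the robust way to distinguish "no data yet" from a legitimate zero.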
UPDATE: For a solution,
If user_date_table does not appear in your context while you display the form, or if it is empty, you can use
{% if not user_date_table %}
Pass another value to your template, like display_result, and use that to check which fieldset will be displayed.
views.py:
if request.POST: #saving the post data
display_result = True
template.html
{% if not display_result %}
<form action="/steps_count/inde...
But the first approach is always better.
| {
"pile_set_name": "StackExchange"
} |
Q:
Why does within set column to NA instead of 0
Taking the dataset from here:
how to insert a new column in a dataset with values if it satisfies a statement
df1 <- read.table(header=TRUE, text = "
Chr start end num.mark seg.mean id
1 68580000 68640000 8430 0.7 gain
1 115900000 116260000 8430 0.0039 loss
1 173500000 173680000 5 -1.7738 loss
1 173500000 173680000 12 0.011 loss
1 173840000 174010000 6 -1.6121 loss")
Why does the following within statement result in NA's in the Occurance column?
within(df1, {Occurance <- 0
Occurance[seg.mean >= 0.5 & id == "gain"] <- 1
Occurance[seg.mean <= -0.5 & id == "loss"] <- -1})
Result:
Chr start end num.mark seg.mean id Occurance
1 1 68580000 68640000 8430 0.7000 gain 1
2 1 115900000 116260000 8430 0.0039 loss NA
3 1 173500000 173680000 5 -1.7738 loss -1
4 1 173500000 173680000 12 0.0110 loss NA
5 1 173840000 174010000 6 -1.6121 loss -1
If i do it in two steps:
df2 <- within(df1, Occurance <- 0)
within(df2, {Occurance[seg.mean >= 0.5 & id == "gain"] <- 1;
Occurance[seg.mean <= -0.5 & id == "loss"] <- -1})
I do get the hoped-for result
Chr start end num.mark seg.mean id Occurance
1 1 68580000 68640000 8430 0.7000 gain 1
2 1 115900000 116260000 8430 0.0039 loss 0
3 1 173500000 173680000 5 -1.7738 loss -1
4 1 173500000 173680000 12 0.0110 loss 0
5 1 173840000 174010000 6 -1.6121 loss -1
A:
This has to do with how vectors are initialized and extended in R. For example
a <- 0
a[1:10>5] <- 2
# [1] 0 NA NA NA NA 2 2 2 2 2
When you first create a, it has length 1. When you assign to indexes that don't exist, R creates those indexes and fills in the gaps with NA values. That's basically what's happening in your example: R doesn't merge your new columns into the data.frame until your code block is complete.
Your two-step method works because the single-element vector of 0 is recycled to the full length of the data.frame after the first within() ends.
Why not use a more vectorized approach.
within(df1, {Occurance <-
ifelse(seg.mean >= 0.5 & id == "gain", 1,
ifelse(seg.mean <= -0.5 & id == "loss", -1, 0))
})
or you can just initialize Occurance to the correct length
within(df1, {Occurance <- rep(0, length( seg.mean))
Occurance[seg.mean >= 0.5 & id == "gain"] <- 1
Occurance[seg.mean <= -0.5 & id == "loss"] <- -1
})
| {
"pile_set_name": "StackExchange"
} |
Q:
Filtering to Duplicates makes Close-Votes Queue a Haven for Robo-Reviewers
I recently started filtering to duplicates in the close votes queue and I've noticed that every review will always have a Duplicate tab allowing you to tab between the question and proposed duplicate.
Whenever presented with an audit, I know immediately because the duplicate tab disappears.
Upon searching for information regarding audit filtering, I found Audits bug in the filtered review queue, which told me that audits are not filtered; instead, the tags are faked so that a user could not just watch the tags and only pay attention when their filtered tags do not show up. That doesn't help with this situation, though.
As far as I can tell, you can currently robo-review the close votes queue indefinitely without ever reading a question:
Assuming I am not mistaken, should something be done about this?
Related (outdated?) but not duplicate: There's no review audits for duplicate
A:
It is a pity that the SE dev team doesn't invest effort into these audits, because to my understanding, designing reliable and realistic audits for dupe closure is not really difficult.
For example, an audit where user is expected to Close to pass can simulate a fairly routine case of a migrated cross-post for an otherwise good question:
Take any "known good" audit item for close queue - it will fake a "master" question
For a question to be closed, take the very same audit item and fake it a little to make it look like a typical migrated cross-post: shift posting date for half an hour and attach fake migration notice
...that's all! This way leaves no reason for a responsible reviewer to vote Leave Open.
Similarly, for audits requiring Leave Open to pass, another real-life scenario makes a good fit: when an inexperienced low rep user, maybe just testing "how things work", blindly flags an otherwise good question as a dupe of a crappy one:
Take any "known bad" audit item for close queue - it will fake a "master" question
For a question to be closed, take any "known good" audit item
...voila, no reasonable reviewer should vote Close in cases like that.
The only limitation of above is that of audits composition/selection without a “human factor”, but that's another (sad) story.
| {
"pile_set_name": "StackExchange"
} |
Q:
Products for withdrawing from a home's equity?
I am a current homeowner with a property that has no mortgage (everything has been paid off). A family member is in need of some cash to consolidate some credit card debt and I would like to help them out. However, the amount of money that they need is upwards of 100k.
Using my home's equity to open a secured loan, what I've found so far are two main types of secured loans: HELOC and home equity loan.
With a HELOC, I feel like this is more of a credit card that uses my home as collateral. I'd rather have one lump-sum payment now and pay back a loan at a fixed rate. So my issue with this is that since the rates are variable in relation to the prime rate (which I believe will rise in the upcoming years), I don't want uncertainty when repaying the HELOC.
With a home equity loan, most banks that I've checked (Citi bank, Chase bank, 5/3 bank, many others) only allow a home equity loan to be opened as a 2nd lien mortgage and I must have a current mortgage. Since I don't have a 1st lien mortgage, I do not qualify (according to the banks that I checked).
My question is, besides these two products, do I have any other option/product to request a one-time, lump-sum loan that I can pay back (at a fixed rate, preferably) over the next 10/15 years? And are there any banks that allow home equity loans as a 1st lien mortgage? If not, am I only stuck with a HELOC?
I tried searching online for others who have had the same problem, but most forums (and banks) push HELOCs above everything else. As mentioned above, I'd rather not take a variable-rate loan. The closest forum thread that I found describing my issue is here: https://www.nerdwallet.com/community/t/take-out-loan-against-paid-off-house/8000
However, the community suggested opening a HELOC.
Let me know if you have any thoughts, I appreciate any help. Thanks.
Disclaimer - I trust this family member to help pay me back. I understand the risks of what will happen if I fail to make payments on any loan that uses my home as collateral.
A:
Yes, the product you want is a fixed rate loan. As rates were dropping in the late '90s, I went from a "Mortgage" to a "Home Equity Loan." The latter had a fixed rate, 15 year term, and a crediting structure for payments that ran by the day. i.e. unlike a mortgage whose amortization is unchanged if you make every payment 10 days early vs 5 days late, this product lent itself to prepayments, or early full payments. That aside, it had no closing costs, but a higher rate. So refinancing from a 6% fixed mortgage to a 5.5% HEL actually made sense. As did another refi to 5% a few years later.
TL:DR - The product is called a Home Equity Loan, and you should shop around to find one that suits you.
Mandatory disclaimer - You should not do this. Not unless you have the assets to take this on yourself, and say goodbye to the $100K. Years ago, I 'lent' my sister in law $10,000, and I told my wife that I never expected to see it again. Which I haven't. The $10K paid off a card she defaulted on, and set her up for a refinanced mortgage. She stuck to my one condition, that she take $400/mo from the reduced payments, and deposit to her matched 401(k). There's over $50K in that account from this deal. I offer this long anecdote in case the family member will be a burden to you in the future and somehow this will still benefit you by lessening that future issue.
| {
"pile_set_name": "StackExchange"
} |
Q:
if file exists php
I use the following code to delete an old image over FTP...
unlink(getcwd() . '/images/' . $row['pic']);
But it throws errors in case there is no image, so I have tried using file_exists(), but that didn't work either. So how can I check if there is an image before trying to delete it? Thanks.
if(file_exists(getcwd() . '/images/pics/' . $row['pic']))
{
unlink(getcwd() . '/images/pics/' . $row['pic']);
}
A:
Hosters often use different users for FTP and Apache… could it be that you haven't chmodded your images, so the www user can't delete them?
edit:
maybe is_writable is the better solution for you:
if(is_writable(dirname(__FILE__) . '/images/pics/' . $row['pic'])){
unlink(dirname(__FILE__) . '/images/pics/' . $row['pic']);
}
A:
First check what getcwd() is returning. It's probably false, which means you do not have the right permissions set, and therefore your path is not correctly constructed. Check the getcwd() docs, change permissions or contact your system administrator. Also take a look at dirname(__FILE__).
A:
First you can store the image path in a $path variable. Then you may test if the image is there and it's writable, and only then, delete it.
$path = getcwd() . '/images/pics/';
if( file_exists( $path . $row['pic'] ) && is_writable( $path . $row['pic'] ) ){
unlink( $path . $row['pic'] );
}
If you are using PHP 5 and want to know more about any exception that may be thrown, in the meantime you can surround the whole expression with a try...catch statement:
try{
$path = getcwd() . '/images/pics/';
if( file_exists( $path . $row['pic'] ) && is_writable( $path . $row['pic'] ) ){
unlink( $path . $row['pic'] );
}
} catch (Exception $exc) {
echo $exc->getMessage();
echo $exc->getTraceAsString();
}
This probably won't solve the problem, but may help clarifying why it is happening.
| {
"pile_set_name": "StackExchange"
} |
Q:
Get the model instance from within a lambda
I have a nested form that, upon update, creates duplicate entries for all the nested attributes. In order to prevent this, I want to use the reject_if option for accepts_nested_attributes. The code might look something like this:
accepts_nested_attributes_for :implicants, :reject_if => lambda { |a| a.is_a_duplicate? }
Unfortunately, a here is an ActionController::Parameter, not the instance of the class I'm working with. I tried this:
accepts_nested_attributes_for :implicants, :reject_if => lambda { |a| self.is_a_duplicate?(a) }
But this results in a call to the class, as opposed to the instance. Is there any way refer to the instance from within a reject_if lambda?
A:
I would suggest that this isn't something that the :reject_if should have responsibility for. :reject_if is meant to reject groups of parameters for very obvious and basic reasons, such as not having a first name or an email filled in. If it's somewhat more complex, then you should just do the validations on the model that is being created. And if you're consistently creating duplicates when you don't want to, then you should go further up the chain and prevent these params from ending up in your controller at all, e.g. by fixing the views.
| {
"pile_set_name": "StackExchange"
} |
Q:
Split feature maps (3D arrays) into 2D arrays
Let's say I have a feature map (i.e. a 3D array) of shape (32, 32, 96)
In [573]: feature_map = np.random.randint(low=0, high=255, size=(32, 32, 96))
Now, I want to visualize each of the feature maps individually. So, I'd like to extract each of the frontal slices (i.e. 2D arrays of shape (32, 32)), which should give 96 such feature maps.
How to get these arrays, possibly not as a copy to be memory efficient? Since this is only for visualization, a view is enough!
A:
You can use np.transpose and slicing operations (not creating copies of an array):
feature_map = np.random.randint(low=0, high=255, size=(32, 32, 96))
feature_map = np.transpose(feature_map, axes=[2, 0, 1])
for i in range(feature_map.shape[0]):
print(feature_map[i].shape) # a view of original array. shape=(32, 32)
... or do just slicing:
for i in range(feature_map.shape[2]):
print(feature_map[:, :, i].shape) # a view of original array. shape=(32, 32)
A:
I also realized that numpy.dsplit() can be leveraged for such 3D arrays, since we're trying to split along the depth axis. But I have to additionally use np.squeeze() to eliminate the 3rd dimension. Also, as needed for my case, it returns a view of the array.
# splitting it into 96 slices in one-go!
In [659]: np.dsplit(feature_map, feature_map.shape[-1])
In [660]: np.dsplit(feature_map, feature_map.shape[-1])[10].flags
Out[660]:
C_CONTIGUOUS : False
F_CONTIGUOUS : False
OWNDATA : False #<============== NO copy is made
WRITEABLE : True
ALIGNED : True
UPDATEIFCOPY : False
In [661]: np.dsplit(feature_map, feature_map.shape[-1])[10].shape
Out[661]: (32, 32, 1)
# getting rid of unitary dimension with `np.squeeze`
In [662]: np.squeeze(np.dsplit(feature_map, feature_map.shape[-1])[10]).shape
Out[662]: (32, 32)
In [663]: np.squeeze(np.dsplit(feature_map, feature_map.shape[-1])[10]).flags
Out[663]:
C_CONTIGUOUS : False
F_CONTIGUOUS : False
OWNDATA : False #<============== NO copy is made
WRITEABLE : True
ALIGNED : True
UPDATEIFCOPY : False
| {
"pile_set_name": "StackExchange"
} |
Q:
JavaScript localStorage for pacman game
I'm trying to learn how use the localStorage for a pacman game.
The idea is that when pacman eats a pellet, the pellet is deleted from the page and its key is added to localStorage, so when you refresh the page the pellets are still gone.
My problem is that I am only getting the last key ID from the key list back, but what I want is an array that stores all the numbers so I can get them all back.
My code is below:
for (var j in key_list) {
keyId = key_list[j].GetUserData().val;
stage.removeChild(pacdotsE[keyId]);
world.DestroyBody(key_list[j]);
//console.log(keyId);
localStorage.setItem("key_list",keyId);
}
key_list.length = 0;
function readStorage() {
console.log(localStorage.getItem("key_list"));
keyId = localStorage.getItem("key_list");
}
A:
The problem is that in localStorage you are only writing the latest keyId.
for (var j in key_list) {
...
localStorage.setItem("key_list",keyId);
}
What you can do is collect the keys in an array and save its string form, like so:
var keyIdArr = [];
for (var j in key_list) {
    ...
    keyIdArr.push(keyId);
    localStorage.setItem("key_list", keyIdArr.toString());
}
When reading it back, split the stored string to recover the individual ids: localStorage.getItem("key_list").split(",").
| {
"pile_set_name": "StackExchange"
} |
Q:
Warnings trying to read Spark 1.6.X Parquet into Spark 2.X
When attempting to load a spark 1.6.X parquet file into spark 2.X I am seeing many WARN level statements.
16/08/11 12:18:51 WARN CorruptStatistics: Ignoring statistics because created_by could not be parsed (see PARQUET-251): parquet-mr version 1.6.0
org.apache.parquet.VersionParser$VersionParseException: Could not parse created_by: parquet-mr version 1.6.0 using format: (.+) version ((.*) )?\(build ?(.*)\)
at org.apache.parquet.VersionParser.parse(VersionParser.java:112)
at org.apache.parquet.CorruptStatistics.shouldIgnoreStatistics(CorruptStatistics.java:60)
at org.apache.parquet.format.converter.ParquetMetadataConverter.fromParquetStatistics(ParquetMetadataConverter.java:263)
at org.apache.parquet.format.converter.ParquetMetadataConverter.fromParquetMetadata(ParquetMetadataConverter.java:567)
at org.apache.parquet.format.converter.ParquetMetadataConverter.readParquetMetadata(ParquetMetadataConverter.java:544)
at org.apache.parquet.hadoop.ParquetFileReader.readFooter(ParquetFileReader.java:431)
at org.apache.parquet.hadoop.ParquetFileReader.readFooter(ParquetFileReader.java:386)
at org.apache.spark.sql.execution.datasources.parquet.SpecificParquetRecordReaderBase.initialize(SpecificParquetRecordReaderBase.java:107)
at org.apache.spark.sql.execution.datasources.parquet.VectorizedParquetRecordReader.initialize(VectorizedParquetRecordReader.java:109)
at org.apache.spark.sql.execution.datasources.parquet.ParquetFileFormat$$anonfun$buildReader$1.apply(ParquetFileFormat.scala:369)
at org.apache.spark.sql.execution.datasources.parquet.ParquetFileFormat$$anonfun$buildReader$1.apply(ParquetFileFormat.scala:343)
at [rest of stacktrace omitted]
I am running 2.1.0 release and there are multitudes of these warnings. Is there any way - short of changing logging level to ERROR - to suppress these?
It seems these were the result of a fix made - but the warnings may not yet be removed. Here are some details from that JIRA:
https://issues.apache.org/jira/browse/SPARK-17993
I have built the code from the PR and it indeed succeeds reading the
data. I have tried doing df.count() and now I'm swarmed with
warnings like this (they are just keep getting printed endlessly in
the terminal):
Setting the logging level to ERROR is a last-ditch approach: it swallows messages we rely upon for standard monitoring. Has anyone found a workaround for this?
A:
For the time being - i.e until/unless this spark/parquet bug were fixed - I will be adding the following to the log4j.properties:
log4j.logger.org.apache.parquet=ERROR
The location is:
when running against external spark server: $SPARK_HOME/conf/log4j.properties
when running locally inside Intellij (or other IDE): src/main/resources/log4j.properties
| {
"pile_set_name": "StackExchange"
} |
Q:
How to add multi levels, into responsive css menu?
Before you guys yell on me,
I know there are tons of questions about this and I saw a few nice complete menus too.
But mine looks very simple to me; might be because I have been using it for a long time, same as my gf, lol.
At the moment my menu has 2 levels, but I want to make it multi-level.
I tried several solutions but couldn't make them work; I am very bad at CSS.
Here is the fiddle I run my tests on: https://jsfiddle.net/Netmaster/14gcz7bk/
$("nav div").click(function() {
$("ul").slideToggle();
$("ul ul").css("display", "none");
});
// $("ul li").click(function(){
// $("ul ul").slideUp();
// $(this).find('ul').slideToggle();
// });
$('ul li').click(function() {
$(this).siblings().find('ul').slideUp();
$(this).find('ul').slideToggle();
});
$(window).resize(function() {
if ($(window).width() > 768) {
$("ul").removeAttr('style');
}
});
nav div {
padding: 0.6em;
background: #e3e3e3;
display: none;
cursor: pointer;
color: #292929;
font-size: 24px;
}
nav ul li i {
color: #292929;
float: right;
padding-left: 20px;
}
ul {
margin: 0px;
padding: 0px;
background: #e3e3e3;
list-style-type: none;
position: relative;
}
ul li {
display: inline-block;
}
ul li a {
padding: 15px;
color: #292929;
text-decoration: none;
display: block;
}
ul li:hover {
background: lightgrey;
}
ul ul {
position: absolute;
min-width: auto;
background: lightgrey;
display: none;
}
ul ul li {
display: block;
background: #e3e3e3;
}
ul li:hover>ul {
display: block;
}
@media (max-width: 768px) {
nav div {
display: block;
}
ul {
display: none;
position: static;
background: #e3e3e3;
}
ul li {
display: block;
}
ul ul {
position: static;
background: #e3e3e3;
}
}
<script src="https://cdnjs.cloudflare.com/ajax/libs/jquery/3.3.1/jquery.min.js"></script>
<nav>
<div>
<i class="fa fa-bars"></i>
</div>
<ul>
<li><a href="#">Home</a></li>
<li><a href="#">JavaScript <i class="fas fa-sort-down"></i></a>
<ul>
<li><a href="#">jQuery</a></li>
<li><a href="#">Vanilla JavaScript</a></li>
<li><a href="#">ReactJS</a></li>
<li><a href="#">VueJS</a></li>
</ul>
</li>
<li><a href="#">Graphic Design <i class="fas fa-sort-down"></i></a>
<ul>
<li><a href="#">Font</a></li>
<li><a href="#">PSD</a></li>
<li><a href="#">Illustration</a></li>
<li><a href="#">Texture</a>
</li>
</ul>
</li>
<li><a href="#">About</a></li>
<li><a href="#">Contact</a></li>
</ul>
</nav>
A:
You should call the stopPropagation() method in order to prevent the first level from closing while toggling the second level:
$('ul li').click(function(e) {
e.stopPropagation();
$(this).find('ul').slideToggle();
});
| {
"pile_set_name": "StackExchange"
} |
Q:
more secure? select/update from postgresql table
I have 4 users... or even more; they work in different companies.
Table A:
userName | companyName
---------|-------------
user1 | co1
-----------------------
user2 | co2
------------------------
...
And i have a lot of some tables:
tableB with products, tableC with something else... many, many tables.
Each table has a column called companyName.
My questions:
Question 1: How do I write/execute a function for select/update/insert queries of various shapes, so that each user can read/write/update only records with their own companyName?
I (think I) could use:
select * from tableC, userName, companyName from tableA ..... where tableC.companyName=tableA.companyName and tableA.userName=currentuser();
but what if one of the users makes a query directly (from pgAdmin or something similar), like:
select * from tableC;
Question 2a: I would like to block this possibility at the server level.
Question 2b: Is there any way to write a function (one each for insert/update/delete queries) for an unknown list of arguments?
Question 2c: How do I grant full access to tableA...TableX for this written function only, independent of the executing user?
A:
https://www.postgresql.org/docs/current/static/ddl-rowsecurity.html
row security policies that restrict, on a per-user basis, which rows
can be returned by normal queries or inserted, updated, or deleted by
data modification commands. This feature is also known as Row-Level
Security. By default, tables do not have any policies, so that if a
user has access privileges to a table according to the SQL privilege
system, all rows within it are equally available for querying or
updating.
and as example
CREATE POLICY user_policy ON users
USING (true)
WITH CHECK (user_name = current_user);
will allow all rows to be visible to select (the USING (true) clause), while only rows whose user_name column equals current_user may be inserted or updated (the WITH CHECK clause). For the tables above, putting the companyName test in the USING clause would restrict reads as well.
| {
"pile_set_name": "StackExchange"
} |
Q:
How to use MasterPage for WebPage in SubDirectory?
Parser Error or Master Page error?
I have a website with a MasterPage in my Visual Studio 2010 project.
I have many WebForms located in SubDirectories, but for this question I will focus on the SubDirectory called /contact.
In VS2010, all of the WebForms in the /contact directory display as they are supposed to using this page directive code:
<%@ MasterPageFile="~/Site.Master" ... %>
It is my understanding that the ~/ is supposed to direct the page to the root folder.
Yet, when I go to a page in that folder, I get a Parser Error saying that the MasterPage does not exist because the page is attempting to load the MasterPage from here:
'/contact/Site.Master'
If I modify my VS2010 project so that the page directive tries to step back to the root level, the VS project give me Master Page errors.
Does not work:
<%@ MasterPageFile="../~/Site.Master" ... %>
Also does not work:
<%@ MasterPageFile="~/../Site.Master" ... %>
What is the trick here?
A:
Something is probably wrong in Visual Studio at your end.
I am pretty sure what you are doing is correct.
I just tried creating an ASP.NET project,
added a folder called contact and then dragged a default.aspx page inside it
this is code in the markup file
<%@ Page Title="Home Page" Language="C#" MasterPageFile="~/Site.master" AutoEventWireup="true"
CodeFile="Default.aspx.cs" Inherits="_Default" %>
Here is the folder structure.
Every things works at my end.
BTW, if you were indeed trying to read the master page from a page in the contact folder and the framework somehow treats contact as the root, then try
../Site.Master
| {
"pile_set_name": "StackExchange"
} |
Q:
MVC Url.Action with 2 parameters
I have the following in my controller where I am passing in 2 parameters:
url = Url.Action("ViewReq", "ProgramT", new System.Web.Routing.RouteValueDictionary(new { id = spid, pgid = pid }), "http", Request.Url.Host);
When I view this, it shows up as:
http://localhost/Masa/ProgramT/ViewReq/20036?pgid=00001
I like it to show up as:
http://localhost/Masa/ProgramT/ViewReq?id=20036&pgid=00001
How do I modify the UrlAction to show this way?
A:
You could modify your default route registration in Global.asax so that the {id} token is not part of the route pattern. Without an {id} segment in the route, both values will be rendered as query-string parameters.
| {
"pile_set_name": "StackExchange"
} |
Q:
Finding the first instance of a variable
I have a dataset that is asking me to find the first time a registered online shopper purchased something and apply a 5% discount to that purchase.
The dataset has 28 columns, but for the purpose of this question I will condense it to only what I think is relevant.
I need to create a new column that will tell me the first time someone purchased something. We can assume that purchases made on the same day are the same purchase but belong to different items.
Obs ID Trans_Date Order_Number Value Status
----------------------------------------------------------------
1874 866 30/07/2016 191 $4,217.90 Registered
1875 866 30/07/2016 191 $4,217.90 Registered
1876 866 31/07/2016 192 $2,422.75 Registered
1877 866 31/07/2016 192 $2,422.75 Registered
1878 . 31/07/2016 193 $4,162.66 Unregistered
1879 . 31/07/2016 193 $4,162.66 Unregistered
1880 344 31/07/2016 194 $4,405.51 Registered
1881 344 31/07/2016 194 $4,405.51 Registered
1882 . 31/07/2016 195 $2,114.76 Unregistered
1883 . 31/07/2016 195 $2,114.76 Unregistered
1884 250 31/07/2016 196 $3,310.72 Registered
1885 250 31/07/2016 196 $3,310.72 Registered
1886 . 31/07/2016 197 $4,633.48 Unregistered
1887 . 31/07/2016 197 $4,633.48 Unregistered
1888 . 31/07/2016 197 $4,633.48 Unregistered
1889 . 31/07/2016 197 $4,633.48 Unregistered
1890 . 31/07/2016 198 $6,224.43 Unregistered
1891 . 31/07/2016 198 $6,224.43 Unregistered
1892 . 31/07/2016 198 $6,224.43 Unregistered
1893 . 31/07/2016 198 $6,224.43 Unregistered
A:
Here's my 'first' dataset:
'obs' , 'id' , 'trans_date' , 'order_number' , 'value' , 'status'
1874 , 866 , 30/07/2016 , 191 , 4217.90 , Registered
1875 , 866 , 30/07/2016 , 191 , 4217.90 , Registered
1876 , 866 , 31/07/2016 , 192 , 2422.75 , Registered
1877 , 866 , 31/07/2016 , 192 , 2422.75 , Registered
1878 , 344 , 30/07/2016 , 193 , 4162.66 , Unregistered
1879 , 344 , 30/07/2016 , 193 , 4162.66 , Unregistered
1880 , 344 , 31/07/2016 , 194 , 4405.51 , Registered
1881 , 344 , 31/07/2016 , 194 , 4405.51 , Registered
1882 , 250 , 30/07/2016 , 195 , 2114.76 , Unregistered
1883 , 250 , 30/07/2016 , 195 , 2114.76 , Unregistered
1884 , 250 , 31/07/2016 , 196 , 3310.72 , Registered
1885 , 250 , 31/07/2016 , 196 , 3310.72 , Registered
1886 , 275 , 30/07/2016 , 197 , 4633.48 , Unregistered
1887 , 275 , 30/07/2016 , 197 , 4633.48 , Unregistered
1888 , 275 , 30/07/2016 , 197 , 4633.48 , Unregistered
1889 , 275 , 30/07/2016 , 197 , 4633.48 , Unregistered
1890 , 275 , 31/07/2016 , 198 , 6224.43 , Unregistered
1891 , 275 , 31/07/2016 , 198 , 6224.43 , Unregistered
1892 , 275 , 31/07/2016 , 198 , 6224.43 , Unregistered
1893 , 275 , 31/07/2016 , 198 , 6224.43 , Unregistered
and here's some proc sql:
proc sql noprint;
create table temp as
select *,min(trans_date) format=date9. as first
from first
group by id
order by order_number;
create table final as
select obs,id,trans_date,order_number,value,status,
case when first = trans_date then 'FIRST'
else 'NOT FIRST'
end as flag
from temp;
quit;
| {
"pile_set_name": "StackExchange"
} |
Q:
Create a sampling rate array
I have a df with 40 000 000 points that looks like this:
A
0 0.50
1 0.90
2 5.94
.
40 000 000 84.53
As the data does not have any time information, I am trying to add a time column to the df, but every time I do it I get Memory Errors. Sampling rate = 60 kHz
I tried shrinking the data by slicing it instead of taking all 40 000 000 points. I checked, and the important data for me lies between 20000001:40000000. I have tried taking fewer data points, e.g. 20 000, but still, whenever I create the Time array I get the memory error.
N = N.iloc[20000001:40000000] #Lock data
N = N[0 : len(N) : 1000] # Slice by 1000 increments
N['Time'] = np.arange(0, len(N), 1/60000)
How could I create a Time array without killing my memory? Am I doing something wrong?
A:
You may write a generator of floats similar to xrange (in Python 2) or range (Python 3). They lack float support, so we write it ourselves:
def frange(end_number, fraction):
end_idx = end_number * fraction
idx = 0
while idx < end_idx:
yield float(idx) / fraction
idx += 1
a = frange(20, 3)
print([i for i in a]) # see how it works
b = frange(40000000, 60000) # no memory error
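A side note on the original memory error: np.arange(0, len(N), 1/60000) uses 1/60000 as the step, so it produces len(N) * 60000 elements rather than len(N). If the goal is one timestamp per retained row at a 60 kHz rate, generating len(N) values spaced 1/60000 s apart keeps the array the same length as the frame (a sketch; the row count here is an assumption):

```python
import numpy as np

n = 20_000                  # rows kept after slicing (assumed)
rate = 60_000               # 60 kHz sampling rate
time = np.arange(n) / rate  # n timestamps, spaced 1/60000 s apart

# The array has exactly one entry per row, so it can be assigned
# directly as a new column without blowing up memory.
```

With the data frame from the question, that would be N['Time'] = np.arange(len(N)) / 60000.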
| {
"pile_set_name": "StackExchange"
} |
Q:
In Python, Given a list of tuple, generate a list whose elements are sum of elements of contained tuples
Given a list of tuple, generate a list whose elements are sum of elements of contained tuples.
E.g. Input: [(1, 7), (1, 3), (3, 4, 5), (2, 2)]
Output: [8, 4, 12, 4]
A:
This is a simple question, and anyone with basic knowledge of Python can do it.
import ast

a = ast.literal_eval(input('Enter the list of tuples: '))  # input() returns a string in Python 3, so parse it
b = []
for i in range(len(a)):
    b.append(sum(a[i]))
I have not checked for simpler answers; please do look for them. And do use the Python shell, as you can easily try out solutions in it.
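For the record, once the list is in hand the whole computation is a one-liner:

```python
data = [(1, 7), (1, 3), (3, 4, 5), (2, 2)]
sums = [sum(t) for t in data]   # or: list(map(sum, data))
print(sums)  # [8, 4, 12, 4]
```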
| {
"pile_set_name": "StackExchange"
} |
Q:
Which are new features added into SQL Server 2008 over 2005?
The main idea behind this question is to get the exact features of SQL Server 2008 and to learn how I can use them in my project.
So, please help me understand SQL Server 2008.
A:
If you already know what's in SQL Server 2005, might I suggest looking at Microsoft's "What's New (SQL Server 2008)". It lists the following as being new/updated:
Database Engine
Analysis Services - Multidimensional Database
Analysis Services - Data Mining
Integration Services
Replication
Reporting Services
Service Broker
I don't see mention of Intellisense, but that is one new feature of SQL Server 2008 vs SQL Server 2005 that is definitely nice.
For a similar question, see: StackOverflow: Advantages to MS SQL Server 2008 over MS SQL Server 2005
| {
"pile_set_name": "StackExchange"
} |
Q:
Dash.js - Trapping 404 errors from MPEG-DASH player
I've been using the Dash.js player to play MPEG-DASH videos. The videos are pulled from a server. Every now and then there will be 404 errors due to server issues; I would like to retry the stream in the background by detecting the 404 error and acting accordingly.
The problem is I cannot catch the error, it's thrown from the line
req.send();
Which is in a file called FragmentLoader.js.
I've tried the following error handling:
window.addEventListener('error', function(e) {
console.log("Item: " + e.message);
}, true);
var oReq = new XMLHttpRequest();
oReq.addEventListener("error", function (e) {
console.log("xml item: " + e.message);
}, true);
$(document).ajaxError(function (event, xhr, ajaxOptions, errorThrown) {
alert("ajax erorr");
});
However none of these conditions catch the error. Is there any way to catch these errors thrown from the dash.js player?
A:
Dash.js has internal retry logic on a 404 - it will retry 3 times (so a total of 4 attempts) before giving up. There's some discussion about improving this behaviour further such as trying the other available representations, but that isn't there yet.
However, this depends on the 404 being detected by the page. There are some XHR errors that are completely silent in terms of what JavaScript can see, even though an error is logged to the console on the exact line of req.send(), which is behaviour I've seen here: https://github.com/Dash-Industry-Forum/dash.js/issues/1209
If the request is indeed throwing a 404 that Dash.js' error handling has handled, retried, and then gave up, then you can bind to its error event:
var url = "http://dash.edgesuite.net/envivio/Envivio-dash2/manifest.mpd";
var player = dashjs.MediaPlayer().create();
player.initialize(document.querySelector("#videoPlayer"), url, true);
player.on('error', function(e) {
if (e.error === 'download') {
// dash.js gave up loading something
// e.event.id will tell you what it failed to load (mpd, segment...)
// and e.event.url will have the URL that failed
}
});
| {
"pile_set_name": "StackExchange"
} |
Q:
Undefined index in $row
Hello I am making a highscore webpage with tables. I've gotten two working so far and my last one is giving me trouble. The data is the time played and it is being stored in seconds. This is my query
Stuff I tried
<?php if( isset($this->attr['totaltime'])) { ?>
<td><?php echo $row['totaltime']; ?></td>
<?php } ?>
I've tried a few different variations of this, but none has worked for me. I believe it's my lack of knowledge and nothing else. I am 90% sure I need to add an isset() check in there to perform the test, but I just don't know how isset() works, and all the examples I found are extremely complicated, like this one:
$o = [];
@$var = ["",0,null,1,2,3,$foo,$o['myIndex']];
array_walk($var, function($v) {
echo (!isset($v) || $v == false) ? 'true ' : 'false';
echo ' ' . (empty($v) ? 'true' : 'false');
echo "\n";
});
A:
Your second database column doesn't have a name, it's just the anonymous result of the Floor function.
You should use an alias for the column, e.g.
SELECT utime.playername, FLOOR(utime.totaltime / 3600) as totaltime FROM utime.utime ORDER BY utime.totaltime DESC LIMIT 10
This will then match the index name you're using to refer to it in the PHP
| {
"pile_set_name": "StackExchange"
} |
Q:
ECG healthcare monitoring system in android
I need to build an ECG healthcare monitoring system in Android. I don't have any idea regarding that. What are the necessary things, and how could I do it? Please help me; it's urgent. Thanks in advance.
Regards,
Sudhir.
A:
I agree with the other contributors - you really need to research your area from the ground up. If I was tasked with this, I'd look at what an ECG is actually measuring, what the scales would be on the axes of the output graph (I'm guessing there's a requirement for a graph-type output) and I'd look at the specifications of the sensors - what do they output? I'm guessing you may need to write an interface (or get a hold of an interface, possibly from the sensor manufacturer's) - that might be a good place to start. Hope this helps.
| {
"pile_set_name": "StackExchange"
} |
Q:
How to print to a text file if I'm using threads in java?
I don't know how to print to a text file when I'm using threads, because every time it just creates another file, so I end up with just one result, which is the last one. I have tried a lot of things and it is always the same.
This is just a part of the code, besides printing to the file I have to print a graph too and I have the same problem as it creates one graph for each thread.
import java.io.FileNotFoundException;
import java.io.PrintWriter;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class Adsda implements Runnable{
private int id=0;
public int number;
public String file="Time.txt";
private final PrintWriter outputStream;
public Adsda(int id) throws FileNotFoundException {
this.id=id+1;
this.outputStream=new PrintWriter(this.file);
}
public void run() {
int i,fact=1;
this.number=id;//It is the number to calculate factorial
long A=System.nanoTime();
for(i=1;i<=this.number;i++){
fact=fact*i;
}
long B=System.nanoTime();
long t=B-A;
double tt = (double)t / 1000000000.0;
System.out.println("Factorial of "+number+" is: "+fact+" Time: "+tt);
this.outputStream.println("Factorial of: "+this.number+" Time: "+tt);
this.outputStream.flush();
}
public static void main(String[] args) throws FileNotFoundException{
ExecutorService executor = Executors.newFixedThreadPool(2);//creating a pool of 2 threads
for(int i=0;i<5;i++){
executor.submit(new Adsda(i) );
}
executor.shutdown();
    }
}
A:
You should create a single PrintWriter and share that with the threads by passing it in the constructor instead of having each thread create their own PrintWriter (and file). Although that will result in the file containing the results in weird order. If you want to have them in a specific order, you should have the threads output their results in their own buffers and when all threads are finished, write the buffers out to a file in sequence.
PrintWriter pw = new PrintWriter(filename);
for(int i=0;i<5;i++){
executor.submit(new Adsda(i, pw) );
}
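The same one-writer-shared-by-all-threads pattern, sketched in Python for comparison (the file name and worker logic here are illustrative, not a translation of the Java above):

```python
import threading

def worker(n, lock, fh):
    fact = 1
    for k in range(1, n + 1):
        fact *= k
    with lock:                      # one writer at a time keeps lines intact
        fh.write(f"Factorial of {n} is {fact}\n")

lock = threading.Lock()
with open("time.txt", "w") as fh:   # the single shared writer
    threads = [threading.Thread(target=worker, args=(n, lock, fh))
               for n in range(1, 6)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
```

All five threads append to the same open handle; the lock guarantees each line is written atomically, though the order of lines is still nondeterministic.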
| {
"pile_set_name": "StackExchange"
} |
Q:
Any way to have gmaps4rails open the map in Street View?
I've tried changing the zoom level, but it always stays in the map mode. Couldn't find any info on the wiki either. Any help or other suggestions would be appreciated.
A:
Ended up just using Google's Javascript API. It went something like this:
pos = new google.maps.LatLng( <%= latitude %>, <%= longitude %> );
var div = document.getElementById('streetViewContainer');
var sv = new google.maps.StreetViewPanorama(div);
sv.setPosition( pos );
sv.setVisible( true );
// find the heading by looking from the google car pos to the venue pos
var service = new google.maps.StreetViewService();
service.getPanoramaByLocation( pos, 50, function(result, status) {
if (status == google.maps.StreetViewStatus.OK)
{
carPos = result.location.latLng;
heading = google.maps.geometry.spherical.computeHeading( carPos, pos );
sv.setPov( { heading: heading, pitch: 0, zoom: 1 } );
}
} );
| {
"pile_set_name": "StackExchange"
} |
Q:
How do I find an object in image/video knowing its real physical dimension?
I have a sample of images and would like to detect an object among others in the image/video, already knowing in advance the real physical dimensions of that object. I have one of the image samples (it's an airplane door) and would like to find the window in the airplane door, knowing its physical dimensions (let's say it has an inner radius of 20 cm and an outer radius of 23 cm) and its real-world position in the door (for example, its minimal distance to the door frame is 15 cm). I also know my camera resolution in advance. Is there any MATLAB or OpenCV C++ code that can do this automatically with image processing?
Here is my image sample
And more complex image with round logos.
I ran the code on the second, more complex image and did not get the same results. Here is the resulting image.
A:
I wouldn't worry too much about the exact geometry and calibration and rather find the window by its own characteristics.
Binarization works relatively well, be it on the whole image or in a large region of interest.
Then you can select the most likely blob based on it approximate area and/or circularity.
| {
"pile_set_name": "StackExchange"
} |
Q:
Why is the result of sum doubled in this simple query?
mysql> create table a (
-> id varchar(10),
-> val int
-> );
Query OK, 0 rows affected (0.02 sec)
mysql> create table b (
-> id varchar(10),
-> val int
-> );
Query OK, 0 rows affected (0.02 sec)
mysql> insert into a values ('a', 1), ('b', 2), ('c', 3);
mysql> insert into b values ('a', 4), ('a', 5), ('b', 6), ('b', 7), ('c', 8), ('c', 9);
mysql> select a.id, sum(a.val), sum(b.val) from a inner join b on a.id = b.id group by a.id;
+------+------------+------------+
| id | sum(a.val) | sum(b.val) |
+------+------------+------------+
| a | 2 | 9 |
| b | 4 | 13 |
| c | 6 | 17 |
+------+------------+------------+
3 rows in set (0.00 sec)
My expected result was for sum(a.val) to present 1, 2 and 3, sum(b.val) to present 9, 13 and 17.
How should I rewrite the query to get the expected result?
A:
It is due to the JOIN with the other table.
If you just run the simple query with the JOINS but without the SUM, you will see that the records in Table A are being doubled, because you INNER JOIN with TABLE B.
A.ID A.val B.val
a 1 4
a 1 5
b 2 6
b 2 7
c 3 8
c 3 9
In order to get the expected result, you will have to query like this:
SELECT
A.ID,
A.VAL 'A Sum',
B.VAL 'B Sum'
FROM
(SELECT ID, SUM(VAL) AS 'VAL' FROM A GROUP BY ID) A INNER JOIN
(SELECT ID, SUM(VAL) AS 'VAL' FROM B GROUP BY ID) B ON A.ID = B.ID
Here is a SQLFiddle with how this query works.
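The fix is also easy to reproduce outside MySQL; here is the same schema and pre-aggregated join run against SQLite through Python's sqlite3 module (a sketch, with SQLite standing in for MySQL):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE a (id TEXT, val INT);
    CREATE TABLE b (id TEXT, val INT);
    INSERT INTO a VALUES ('a', 1), ('b', 2), ('c', 3);
    INSERT INTO b VALUES ('a', 4), ('a', 5), ('b', 6), ('b', 7), ('c', 8), ('c', 9);
""")
# Aggregate each table FIRST, then join one row per id to one row per id,
# so nothing gets duplicated before summing.
rows = con.execute("""
    SELECT A.id, A.val, B.val
    FROM (SELECT id, SUM(val) AS val FROM a GROUP BY id) AS A
    JOIN (SELECT id, SUM(val) AS val FROM b GROUP BY id) AS B
      ON A.id = B.id
    ORDER BY A.id
""").fetchall()
print(rows)  # [('a', 1, 9), ('b', 2, 13), ('c', 3, 17)]
```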
| {
"pile_set_name": "StackExchange"
} |
Q:
Why does the length of an antenna, relative to the wavelength, matter?
I cannot understand following wikipedia illustration:
It shows the antenna at right angles to the direction of travel of the EM wave. The wavelength is measured in the direction of travel. So why does the relative length of antenna to wavelength matter?
A:
So why does the relative length of antenna to wavelength matter?
A monopole antenna (for instance) can be "short" and it will pick up a signal that is proportionately smaller AND look like an impedance that is highly capacitive to the receiver. The "resistive" part of the signal will also be very small too: -
Picture taken from here
This is a good thing for crystal radios because at (say) a length of 0.05\$\lambda\$ it can reactively tune with a coil and produce a decent Q factor in order to give good selectivity in the crystal radio.
On the other hand, for a transmitting antenna, this is problematic because you have to do two things: -
Counteract the capacitance (about 1000 ohms at 0.05\$\lambda\$) with a series inductor in order to be able to drive a decent current into said antenna
Drive a really low value resistor (the transformed impedance of free space at the electrical terminals of the antenna). It's also hard to find sub-1-ohm coax!
So, transmitting antennas are chosen to have a length that makes the electrical interface simpler. For instance, at 0.25\$\lambda\$ the impedance is purely resistive at about 37 ohms. You could even choose a length that is a bit short of 0.5\$\lambda\$ and get a resistance of over 2000 ohms with no reactive part.
If you go to bigger antenna lengths then you get a repeated pattern: -
Picture taken from here
The base of the graph is in MHz with a quarter wave being at 2.5 MHz. The reactive part of the impedance is blue and the resistive is red with both being in ohms along the y-axis. There are some discrepancies in the amplitudes between the two pictures but this isn't the point - the point is that antenna length affects impedance greatly and it steps and repeats as you go from an electrically short antenna to an electrically long antenna.
Regarding the antenna pattern, a dipole looks like this with the antenna vertical and at the centre: -
Picture taken from here
A:
Suppose you have a receiving dipole antenna. Ignore the existence of free space around the antenna — ignore that wavelength — and just think about the portion of the electromagnetic field immediately around the antenna. The field exerts a force on the electrons in the antenna, perpendicular to the propagation of the wave (or more precisely, in the same direction as the polarization of the wave). That is where the right angle comes from.
The wavelength of the wave in space is irrelevant, so far, because the antenna elements don't see it, they just see a locally oscillating field.
Now think about what happens in the antenna conductor. There is a force causing the electrons to move (a current). On the other hand, the antenna has ends, and current cannot flow out the end of a wire (outside of conditions that do not apply here).
Consider just the field impinging on some electrons near the middle of the antenna and ignore the rest of the field. They can start to move, and just like any other change in current in a conductor, it propagates, as a wave, along the conductor at a speed close to (but not equal to) the speed of light. When this change reaches one end of the wire, current can no longer flow there, so just like any wave hitting an obstacle it reflects back and reaches its starting point, and there are standing waves within the antenna.
You've indicated that you already understand the idea of sound waves and standing waves in a pipe, so I'll skip going into more detail there. Just note that the proper analogy purely from the perspective of analyzing standing waves is:
wire end: closed end of pipe — current node — voltage antinode
middle of dipole: middle of both-ends-closed pipe — current antinode — voltage node
There is no obvious direct analogy for the interaction of the EM wave since it is spread out along the entire length — it's like you have a series of fans in the pipe, not like an outside pressure wave passing through an opening.
To summarize: the two lengths are similar not because the extent of the wave in free space maps somehow to the extent of the wire despite being at right angles, but rather because there are two wave phenomena of the same frequency and almost the same propagation speed.
| {
"pile_set_name": "StackExchange"
} |
Q:
MailMessage AlternateView in outlook 2010
I used the following example for creating AlternateView for mail message: http://www.systemnetmail.com/faq/2.5.aspx
I set up my outlook 2010 for receiving only plain text messages: http://www.addictivetips.com/microsoft-office/read-email-as-plain-text-in-outlook-2010/
But when I received the email: I get the html message and not the plain text message.
What Am I doing wrong?
How can I test my development?
Tnx
A:
Use Windows Live or Thunderbird; you can't test this using Outlook.
| {
"pile_set_name": "StackExchange"
} |
Q:
GridView Export to Excel is exporting entire aspx page
I am trying to export a gridview's contents to excel. I have some code:
public void ExcelDownload(object sender, EventArgs e)
{
Response.Clear();
Response.Buffer = true;
Response.Charset = "UTF-8";
Response.AppendHeader("Content-Disposition", "attachment;filename=MailingList.xls");
Response.ContentEncoding = System.Text.Encoding.GetEncoding("UTF-8");
Response.ContentType = "application/ms-excel";
this.EnableViewState = false;
System.Globalization.CultureInfo myCItrad = new System.Globalization.CultureInfo("EN-US", true);
System.IO.StringWriter oStringWriter = new System.IO.StringWriter(myCItrad);
System.Web.UI.HtmlTextWriter oHtmlTextWriter = new System.Web.UI.HtmlTextWriter(oStringWriter);
MailingList.RenderControl(oHtmlTextWriter);
Response.Write(oStringWriter.ToString());
}
which runs when a link is clicked. This works fine, except that it is exporting the entire webpage, instead of just the gridview (MailingList). Does anyone know how to fix this?
A:
Technically speaking this is not export to Excel; you are sending HTML with misleading headers to trick the browser into opening the content with Excel. If you stick with this solution, you have to render just the GridView on a separate aspx page.
This solution has many problems. Excel itself will warn the user that the content is different from the extension, because you send HTML while your response headers say that this is an Excel file. And I can bet that some antimalware software on the client, or something similar on the server, will block this response, since serving different content than declared in the headers is known malware behaviour.
Better to use NPOI (xls) and/or EPPlus (xlsx) and fully control your Excel export.
| {
"pile_set_name": "StackExchange"
} |
Q:
Does the order in which prizes are allocated to raffle tickets matter?
I am running a raffle. There are 10 prizes numbered 1-10 and 100 tickets. The raffle is run in two different ways:
A) Prize 1 is allocated to the first ticket drawn, prize 2 to the second and so on until 10 tickets are drawn. Tickets are not replaced.
B) Prize 10 is allocated to the first ticket drawn, prize 9 to the second and so on until 10 tickets are drawn. Tickets are not replaced.
Question: Is the overall probability of a ticket winning Prize 1 different in either scenario A or B?
Please do not complicate the answer with discussion of the change in probabilities during the draw.
I know the answer - but this is to resolve a very real PTA dispute... please upvote any correct answers to help resolve this pressing issue! There are similar questions, but I can't find any exactly like this, and I want this exact phrasing so the PTA is able to follow without confusion!
A:
In case A, of course $p=\frac{1}{100}$. Once the first ticket is drawn, no other ticket can win 1st prize.
In case B, you can calculate the probability of not winning prizes 10 to 2:
$$
\frac{99}{100}\cdot\frac{98}{99}\cdot\cdots\cdot\frac{91}{92} = \frac{91}{100}
$$
Now there are 91 tickets left, so the probability of winning the remaining prize is $\frac{1}{91}$.
Of course, multiplying the probability that your ticket was still in the box, by the probability that it is extracted at that step is
$$
\frac{91}{100}\cdot\frac{1}{91} = \frac{1}{100}
$$
as desired.
Maybe this very explicit argument will help.
A:
No - there is no difference in the probability of a particular ticket winning Prize 1.
In scenario A the probability is obviously $\dfrac{1}{100}$.
In scenario B it is $\dfrac{99}{100}\cdot\dfrac{98}{99}\cdot\dfrac{97}{98}\cdot\dfrac{96}{97}\cdot\dfrac{95}{96}\cdot\dfrac{94}{95}\cdot\dfrac{93}{94}\cdot\dfrac{92}{93}\cdot\dfrac{91}{92}\cdot\dfrac{1}{91}=\dfrac{1}{100}$
From personal experience Scenario A allows the top prize winner to choose their prize from the set of ten. It also allows people with several tickets have who won earlier prizes to waive further prizes if they have already won one, leading to a redraw. But it reduces the tension in the draw compared with Scenario B.
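For anyone who prefers the scenario-B arithmetic done by machine, the telescoping product can be evaluated exactly with rational numbers (a sketch):

```python
from fractions import Fraction

# Scenario B: survive the draws for prizes 10 down to 2, then win prize 1.
p_survive = Fraction(1)
tickets = 100
for _ in range(9):                       # nine draws before the first prize
    p_survive *= Fraction(tickets - 1, tickets)
    tickets -= 1
p_win = p_survive * Fraction(1, tickets)  # tickets is now 91
print(p_win)  # 1/100
```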
| {
"pile_set_name": "StackExchange"
} |
Q:
Decipher the location of the pirate treasure
Long ago, the pirate ship Wild Goose, led by Captain Oliver, sailed the coast of Nova Scotia. One night, according to legend, the Wild Goose was caught in a storm. Before she sank, Captain Oliver had the ship's stash of treasure put into a life raft. The surviving crew members steered the raft to the nearby Brier Island, and hid the treasure there.
You, a world famous treasure hunter, are headed to Brier Island in search of the lost treasure. When you arrive on the island, there is no trace of the treasure. You do however find a journal which is clearly very old. Most of the pages are too faded to read, but one of them contains some mysterious writing:
What is the treasure, and where is it hidden?
A:
This really is an addendum to Quark's answer, which has all the individual words deciphered. As commented by Julian, there is still one word missing and Quark's word order is not correct.
I think the correct word order is:
Fifteen million golden loonies are buried beneath the ...
where ... is the missing word. That word can be found
by taking the first letters of the encodings used for each word in the order of the final sentence. The encodings are the Caesar cipher, ASCII, Morse code, the Pigpen cipher, the Semaphore alphabet, Italian, the Telephone keypad and plain English.
So the complete hint for the location of the treasure is:
Fifteen million golden loonies are buried beneath the campsite.
A:
EDIT 1:
Referring to this link (https://en.wikipedia.org/wiki/Brier_Island), Brier Island is actually in western Nova Scotia (which is part of Canada, never knew that before). Being in Canada, it makes sense that the coins are loonies (https://en.wikipedia.org/wiki/Loonie). According to the first wiki page, Brier Island is well known for their ship wrecks.With all of this making up an accurate story so far, it's possible Wild Goose and Captain Oliver have some special meaning. It may be a coincidence but there is a Geocache hidden in Oliver's cove on Brier's Island (http://www.geocaching.com/geocache/GC13ACV_olivers-cove-brier-island-series?guid=72a041dd-752a-4c0a-8bff-bbb0adf65f3c). I would go there myself to dig and check if there are any golden loonies but it may be a wild goose chase. heh.
Here's a potential answer (unsure of the last one):
The codes are as follows:
(1) 4D 49 4C 4C 49 4F 4E: this is ASCII for "million"
(2) ILIWHHQ: rotate by 23 for "fifteen"
(3) I think "the" is just "the".
(4) SEPOLTI is Italian for "buried"
(5) The Morse code translates to "golden"
(6) The icons are pigpen (https://en.wikipedia.org/wiki/Pigpen_cipher); this translates to "loonies"
(7) The symbols are flag symbols (https://en.wikipedia.org/wiki/Flag_semaphore) that translate to "are"
(8) 2363284: this is the one I'm unsure of. How can a 7-digit number be translated to a word? If this were a modern-day puzzle, I'd immediately think of a cell phone keypad. If you type it out, it does spell "beneath" (http://phonespell.org/combo.cgi?n=2363284). It doesn't mean anything meaningful in any base (base 36 is 1ENIS), it can't be broken up and converted to letters in any way, and I don't see any references online. Maybe "long ago" isn't so long?
Final answer when unscrambled:
The fifteen million golden loonies are buried beneath. (yes I know, anticlimactic)
A:
The treasure is
fifteen million golden loonies, and it is buried beneath the campsite.
Each line in the journal entry is a word.
This is encoded in ASCII. The text reads 'MILLION'.
This is encoded with a Caesar cipher. Shifting each letter three places back in the alphabet gives 'FIFTEEN'.
This is simply the English word 'THE'.
This is the Italian for 'BURIED'.
This is Morse code for 'GOLDEN'.
This is a Pigpen cipher. The text reads 'LOONIES'.
This is flag semaphore. The text reads 'ARE'.
When these numbers are entered on a phone keypad using T9 predictive text, the result is 'BENEATH'.
We can rearrange these words to get
FIFTEEN MILLION GOLDEN LOONIES ARE BURIED BENEATH THE...
There is a word missing, which would tell us what the golden loonies are buried beneath.
The names of the codes that were used (ASCII, Caesar, English, Italian, Morse, Pigpen, Semaphor, T9) are in alphabetical order. This is a clue that the names are important. If we write the names of the codes in the order the corresponding words are used in the sentence, we get
Caesar, ASCII, Morse, Pigpen, Semaphore, Italian, T9, English. The initial letters of the code names spell the final word of the message: 'CAMPSITE'.
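The purely textual parts of this decoding can be checked mechanically; a sketch (taking the hex for "million" to be 4D 49 4C 4C 49 4F 4E; the Morse, pigpen and semaphore symbols are not reproducible as text):

```python
# ASCII hex bytes decode to "MILLION"
million = bytes.fromhex("4D494C4C494F4E").decode("ascii")

# Caesar cipher: shift each letter of ILIWHHQ three places back
fifteen = "".join(chr((ord(c) - ord("A") - 3) % 26 + ord("A")) for c in "ILIWHHQ")

# Initial letters of the cipher names, in sentence order, spell the hidden word
ciphers = ["Caesar", "ASCII", "Morse", "Pigpen", "Semaphore", "Italian", "T9", "English"]
hidden = "".join(name[0] for name in ciphers)

print(million, fifteen, hidden)  # MILLION FIFTEEN CAMPSITE
```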
| {
"pile_set_name": "StackExchange"
} |
Q:
Prove that any rational can be expressed in the form $\sum\limits_{k=1}^n{\frac{1}{a_k}}$, $a_k\in\mathbb N^*$
Let $x\in\mathbb{Q}$ with $x>0$.
Prove that we can find $n\in\mathbb{N}^*$ and distinct $a_1,...,a_n \in \mathbb{N}^*$ such that $$x=\sum_{k=1}^n{\frac{1}{a_k}}$$
A:
Any rational number $0 < x < 1$ possesses a so-called Egyptian fraction decomposition, which writes $x$ as a sum of fractions with unit numerators. One constructive proof is to use a "greedy" algorithm, which keeps subtracting off the unit fractions of the form $\dfrac1{\lceil 1/x\rceil}$.
Here's some Mathematica code demonstrating the greedy approach for $\dfrac{21}{23}$:
f = 21/23; l = {};
While[f != 0, l = {l, p = Ceiling[1/f]}; f -= 1/p;];
Flatten[l]
{2, 3, 13, 359, 644046}
This says that
$$\frac{21}{23}=\frac12+\frac13+\frac1{13}+\frac1{359}+\frac1{644046}$$
This is a very naïve method, as there are other algorithms that yield unit fractions with tinier denominators. For the purposes of this question, it suffices to prove that the algorithm halts. As mentioned here (the proof is attributed to Fibonacci), if we let $m=\lceil 1/x \rceil$, then we always have the bracketing $\dfrac1{m} \leq x < \dfrac1{m-1}$. The numerator of $x-\dfrac1{m}$ can then be shown to be smaller than the numerator of $x$. Thus, the repeated application of the steps of reciprocation and subtraction results in numerators that go smaller and smaller, until the remainder after subtracting off unit fractions is zero.
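The same greedy algorithm, translated from the Mathematica snippet above into Python with exact rational arithmetic (a sketch):

```python
from fractions import Fraction
from math import ceil

def egyptian(x):
    """Greedy (Fibonacci) unit-fraction decomposition of a rational 0 < x < 1."""
    denominators = []
    while x > 0:
        d = ceil(1 / x)            # 1/d is the largest unit fraction not exceeding x
        denominators.append(d)
        x -= Fraction(1, d)
    return denominators

print(egyptian(Fraction(21, 23)))  # [2, 3, 13, 359, 644046]
```

Because Fraction keeps the arithmetic exact, each subtraction strictly decreases the numerator of the remainder, which is precisely the halting argument sketched above.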
See this for a nice bibliography of decomposition methods.
| {
"pile_set_name": "StackExchange"
} |
Q:
SSL - Null Prefix Attacks Still Possible?
At DEFCON 17 (2009) Moxie Marlinspike gave a talk were he was able to use a malformed certificate signing request to get SSL certs signed for domains he doesn't own. The gist of it was that for the common name put something like www.bank.com\0.iownthisdomain.com with \0 being a null byte.
Is this attack still generally possible, of course you can probably find CAs that are still vulnerable but are the clients, particularly Microsoft's SSL stack?
A:
Microsoft has issued the MS09-056 update, which patched this vulnerability in CryptoAPI on almost all of their operating systems; that automatically fixes it for Internet Explorer, Chrome, Safari, and any other application that relies on the CryptoAPI. Mozilla issued an update quickly after Marlinspike's demonstration, and Opera has patched it in version 10.00.
| {
"pile_set_name": "StackExchange"
} |
Q:
How to access my google drive from php script without manual authorisation
I want to write a server-side PHP script which will access my Google Drive and copy a file there.
The server has to save credentials for my Google Drive and not ask for authorisation.
All the examples I saw describe web applications where various users can perform actions on their drives. For example, here: https://developers.google.com/drive/v3/web/quickstart/php
How can I save all the needed credentials on my server?
A:
After a long research and reading google documentation and examples I found a way that works for me.
You need a service account. This account will access the google drive data.
I work with google domain, therefore I needed to grant domain wide authority to this service account.
Here you can find how to create a service account and grant it domain-wide authority; however, this link does not have PHP examples.
Here you can find the same instructions with PHP examples
Please note that you need a JSON-type key file when you create the service account.
I used Google Application Default Credentials.
Finally this is working code snippet:
<?php
require_once '/path/to/google-api-php-client/vendor/autoload.php';
putenv('GOOGLE_APPLICATION_CREDENTIALS=/path/to/service-account.json');
$client = new Google_Client();
$client->useApplicationDefaultCredentials();
$client->setScopes(['https://www.googleapis.com/auth/drive']);
$client->setSubject('email_of_account@you_want_to_work.for');
$service = new Google_Service_Drive($client);
//Create a new folder
$fileMetadata = new Google_Service_Drive_DriveFile(
array('name' => 'Invoices',
'mimeType' => 'application/vnd.google-apps.folder'));
$file = $service->files->create($fileMetadata, array('fields' => 'id'));
echo $file->id;
?>
A:
I recommend you look into using a service account. A service account is like a dummy user which is pre authenticated. This means that you can share a folder on your google drive account with the service account and the service account will be able to upload to it. I have an article on how Service accounts work. Google Developers console Service account
There are a few things you need to remember when working with service accounts though. Mainly permissions when the service account uploads a file by default it owns the file so you will need to grant your personal account permissions to the file after it is uploaded. Its just an extra step.
The PHP client library has a sample for using service account authentication but it doesn't have drive you will have to alter the books part to be google drive. Service-account.php
require_once __DIR__ . '/vendor/autoload.php';
// Use the developers console and download your service account
// credentials in JSON format. Place the file in this directory or
// change the key file location if necessary.
putenv('GOOGLE_APPLICATION_CREDENTIALS='.__DIR__.'/service-account.json');
/**
* Gets the Google client refreshing auth if needed.
* Documentation: https://developers.google.com/identity/protocols/OAuth2ServiceAccount
* Initializes a client object.
* @return A google client object.
*/
function getGoogleClient() {
return getServiceAccountClient();
}
/**
* Builds the Google client object.
* Documentation: https://developers.google.com/api-client-library/php/auth/service-accounts
* Scopes will need to be changed depending upon the API's being accessed.
* array(Google_Service_Analytics::ANALYTICS_READONLY, Google_Service_Analytics::ANALYTICS)
* List of Google Scopes: https://developers.google.com/identity/protocols/googlescopes
* @return A google client object.
*/
function getServiceAccountClient() {
try {
// Create and configure a new client object.
$client = new Google_Client();
$client->useApplicationDefaultCredentials();
$client->addScope([YOUR SCOPES HERE]);
return $client;
} catch (Exception $e) {
print "An error occurred: " . $e->getMessage();
}
}
This may also help my google drive samples
| {
"pile_set_name": "StackExchange"
} |
Q:
MongoDB - Rewind an $unwind nested array after $lookup using $group
MongoDB aggregation gets exponentially complicated by the minute!
I have gotten as far as $unwind-ing a nested array and then performing a $lookup by the _id of each object from the unwound nested array. My final attempt is to reverse the unwinding with $group. However, I am unable to reconstruct the original embedded array, with its original property name, along with the rest of the original immediate properties of each document.
Here is my attempt so far:
db.users.aggregate([
{
$unwind: "$profile",
$unwind: {
path: "$profile.universities",
preserveNullAndEmptyArrays: true
}
},
{
$lookup: {
from: "universities",
localField: "profile.universities._id",
foreignField: "_id",
as: "profile.universities"
}
},
{
$group: {
_id: "$_id",
emails: { "$first": "$emails" },
profile: { "$first": "$profile" },
universities: { "$push": "$profile.universities" }
}
}
]).pretty()
What I get is something like this:
{
"_id" : "A_USER_ID",
"emails" : [
{
"address" : "AN_EMAIL_ADDRESS",
"verified" : false
}
],
"profile" : {
"name" : "NAME",
"company" : "A COMPANY",
"title" : "A TITLE",
"phone" : "123-123-1234",
"disabled" : false,
"universities" : [
{
"_id" : "ID_1",
"name" : "UNIVERSITY_NAME_1",
"code" : "CODE_1",
"styles" : {AN_OBJECT}
}
]
},
"universities" : [
[
{
"_id" : "ID_1",
"name" : "UNIVERSITY_NAME_1",
"code" : "CODE_1",
"styles" : {AN_OBJECT}
}
],
[
{
"_id" : "ID_2",
"name" : "UNIVERSITY_NAME_2",
"code" : "CODE_2",
"styles" : {AN_OBJECT}
}
]
]
}
There are 2 issues with this result:
The resulting universities is an array of arrays of one object each, since the $lookup returned a single element array for the original $profile.universities nested array. It should be just an array of objects.
The resulting universities should take its original place as nested under profiles. I am aware why the original profile.universities is the way it is, because I am using the $first operator. My intent behind this is to retain all of the original properties of profile, in junction with retaining the original nested universities array.
Ultimately, what I need is something like this:
{
"_id" : "A_USER_ID",
"emails" : [
{
"address" : "AN_EMAIL_ADDRESS",
"verified" : false
}
],
"profile" : {
"name" : "NAME",
"company" : "A COMPANY",
"title" : "A TITLE",
"phone" : "123-123-1234",
"disabled" : false,
"universities" : [
{
"_id" : "ID_1",
"name" : "UNIVERSITY_NAME_1",
"code" : "CODE_1",
"styles" : {AN_OBJECT}
},
{
"_id" : "ID_2",
"name" : "UNIVERSITY_NAME_2",
"code" : "CODE_2",
"styles" : {AN_OBJECT}
}
]
}
}
Is there another operator that I can use instead of $group to achieve this? Or am I understanding the purpose of $group incorrectly?
Edit: This is the original post, for context:
If Mongo $lookup is a left outer join, then how come it excludes non-matching documents?
A:
Because the $lookup operator produces an array field, you need to $unwind the new field before the $group pipeline to get the desired result:
db.users.aggregate([
{ "$unwind": "$profile" },
{ "$unwind": {
"path": "$profile.universities",
"preserveNullAndEmptyArrays": true
} },
{ "$lookup": {
"from": "universities",
"localField": "profile.universities._id",
"foreignField": "_id",
"as": "universities"
} },
{ "$unwind": "$universities" },
{ "$group": {
"_id": "$_id",
"emails": { "$first": "$emails" },
"profile": { "$first": "$profile" },
"universities": { "$push": "$universities" }
} },
{ "$project": {
"emails": 1,
"profile.name" : 1,
"profile.company": 1,
"profile.title" : 1,
"profile.phone" : 1,
"profile.disabled": 1,
"profile.universities": "$universities"
} }
]).pretty()
Q:
Alamofire request with cookies
I'm a beginner and I can't figure out how to make a .GET request (one that requires authentication) with Alamofire. I managed to do this with another web service (login) because it takes a parameters argument:
parameters = [
"username" : username,
"password" : password
]
Then:
Alamofire.request(.POST, loginUrl, parameters: parameters).responseJSON { (request, response, data, error) -> Void in
//handling the response
}
In response header I get some information:
[Transfer-Encoding: Identity, Server: nginx/1.4.1, Content-Type: application/json, P3P: policyref="http://www.somewebpage.com", CP="NON DSP COR CURa TIA", Connection: keep-alive, Date: Sun, 08 Mar 2015 13:49:20 GMT, Vary: Accept-Encoding, Cookie, Set-Cookie: sessionid=5xeff47e65f674a4cc5b2d54f344304b; Domain=.somedomain.com; Path=/, tbauth=1; Domain=.somedomain.com; Path=/, Content-Encoding: gzip]
It's of type [NSObject : AnyObject]
What should I do with that information to store it in NSUserDefaults and prepare a valid request parameter (cookie)? Do I need all fields or just Set-Cookie?
I tried manually set parameter:
parameters = [
"Cookie" : "sessionid=5xeff47e65f674a4cc5b2d54f344304b; Domain=.somedomain.com; Path=/, tbauth=1; Domain=.somedomain.com; Path=/"
]
but it does return error NSURLErrorDomain -1017 (NSURLErrorCannotParseResponse)
Thanks for all responses.
A:
Ok, after 2 weeks I just found a solution:
let URL = NSURL(string: query)!
let mutableUrlRequest = NSMutableURLRequest(URL: URL)
mutableUrlRequest.HTTPMethod = "GET"
let prefs = NSUserDefaults.standardUserDefaults()
let cookie = prefs.valueForKey("COOKIE") as String
mutableUrlRequest.setValue(cookie, forHTTPHeaderField: "Cookie")
Alamofire.request(mutableUrlRequest).responseJSON { (request, response, data, error) -> Void in
//handling the response
}
Q:
Magento CSRF protection
I am looking at custom forms in Magento. I saw these tutorials
http://fastdivision.com/2012/03/29/diy-magento-create-ajax-login-registration-forms-for-your-magento-theme/
http://inchoo.net/ecommerce/magento/magento-email/magento-custom-email-contact-form-with-notification-system/
I did not see any mention of CSRF prevention, like checking a client token with one stored in a user session. I also looked in the Magento Contact Us form, and saw this but I do not think it relates to CSRF:
<input type="text" name="hideit" id="hideit" value="" style="display:none !important;">
Does Magento have any default code for preventing CSRF? Does the $this->getRequest()->getParams() method of Mage_Core_Controller_Front_Action do anything automatically to prevent CSRF that I may be missing?
A:
There's actually a frontend CSRF token validation method in Magento you can use to add a unique session-based form key to your custom form and validate it in the controller's action.
To send a CSRF form key with the request when submitting a form insert the <?php echo $this->getBlockHtml('formkey') ?> code into the form's body.
This will generate an input like this: <input type="hidden" value="unique16codehere" name="form_key">.
To validate the key use the _validateFormKey() method in the respective controller's action.
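A minimal sketch of how those two pieces fit together (controller and route names here are hypothetical; the template helper and validation method are the ones referenced above):

```php
<?php
// In the form template (.phtml), emit the session-bound key inside
// the <form> body with: echo $this->getBlockHtml('formkey');
// This renders: <input type="hidden" name="form_key" value="...">

// In the frontend controller, reject posts whose key doesn't match:
class My_Module_IndexController extends Mage_Core_Controller_Front_Action
{
    public function saveAction()
    {
        if (!$this->_validateFormKey()) {
            // Key missing or wrong: likely CSRF, bail out.
            $this->_redirect('*/*/');
            return;
        }
        // ... safe to process $this->getRequest()->getParams() here ...
    }
}
```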
A:
It's up to the end programmer to use their own CSRF/nonce protection scheme, unless they're creating a page/form in the backend admin console. The Magento admin console application has this protection for all its pages/urls by default.
Check out _validateSecretKey in app/code/core/Mage/Adminhtml/Controller/Action.php and the getSecretKey method in app/code/core/Mage/Adminhtml/Model/Url.php. This could easily be extended to your own forms on the frontend.
Q:
Proof that the gamma function is an extension of the factorial function
I've already proved that $$\Gamma (n)= (n-1)!$$ but I don't really know what else to do to verify that $\Gamma$ is an extension of the factorial function to the positive real numbers. Thank you! And I'm sorry for my language, I am Spanish, so thank you again for trying to understand me.
A:
Consider $$\Gamma (z)=\int_0^\infty e^{-t}t^{z}{dt\over t}.$$ We can rewrite this as $$\Gamma (z)=\int_0^1 e^{-t}t^{z}{dt\over t}+\int_1^\infty e^{-t}t^{z}{dt\over t}.$$ In the first term of this sum we see that the power series representation of $e^{-t}$ converges uniformly which implies that the series can be integrated term by term. So, $$\Gamma (z)=\int_0^1\sum_{n=0}^\infty {(-1)^n\over n!}t^{z+n}{dt\over t}+\int_1^\infty e^{-t}t^{z}{dt\over t}=\sum_{n=0}^\infty {(-1)^n\over n!(z+n)}+\int_1^\infty e^{-t}t^{z}{dt\over t}.$$ We can see that the series converges for $z\neq 0, -1, -2,...$ which is a meromorphic function. Its poles are simple poles at the non-positive integers. The residue at $-n$ is $(-1)^n\over n!$. The last integral extends as an entire function of z.
Thus $\Gamma (z)$ has been analytically continued to the entire complex plane except for $z = 0, -1, -2, \dots$.
A:
It is a result in Ahlfors, Complex Analysis, actually exercise 1 on page 196, that the factorial function can be extended in any way we like at a few non-integer points, and an entire holomorphic function can be constructed to fit those points. In particular, if real valued, we can make any real-analytic extension that we want. So, the fact that the gamma function is analytic is not the big restriction. It is something simpler:
this is the only log-convex extension of the factorial.
http://en.wikipedia.org/wiki/Logarithmic_convexity
http://en.wikipedia.org/wiki/Bohr%E2%80%93Mollerup_theorem
http://en.wikipedia.org/wiki/Gamma_function
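For reference, the uniqueness statement alluded to (the Bohr–Mollerup theorem) can be written as:

```latex
\textbf{Theorem (Bohr--Mollerup).}
Let $f\colon(0,\infty)\to(0,\infty)$ satisfy
(i) $f(1)=1$;
(ii) $f(x+1)=x\,f(x)$ for all $x>0$;
(iii) $\log f$ is convex.
Then $f(x)=\Gamma(x)$ for all $x>0$.
```

Conditions (i) and (ii) alone are satisfied by infinitely many extensions; it is log-convexity that pins down $\Gamma$.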
Q:
how to sort an array of string which contains special characters and white spaces
I want to sort an array of strings which contains special characters and white spaces. While sorting I want to ignore special characters, so that the sorting happens based only on letters and digits.
for example : array would be like:
["ibtp-17","personal (z)","personal (a)","(z)","yabcd","y(3)"]
I just need smart logic to implement this. Any help would be appreciated. Thanks.
So far I have tried using replace, which sometimes gives the correct result and sometimes not.
["ibtp-17","personal (z)","personal (a)","(z)","yabcd","y(3)"]
.sort(function(a,b){
return a.replace(/[^A-Z0-9]/ig, "") > b.replace(/[^A-Z0-9]/ig, "")
})
A:
Since I'm not supposed to just post code (the OP needs to attempt something by themselves first and all that jazz), here's some logic:
Array.sort() takes a comparison function. In said comparison function you can put some regex-based special character stripper like String.replace(/[^a-zA-Z0-9 ]/g, ""), and then compare the resultant strings. Cheers.
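Putting that logic together, a minimal sketch (the comparator below uses plain code-unit ordering on the stripped strings; swap in localeCompare or case-folding if you need different ordering):

```javascript
// Remove everything except letters, digits and spaces so only
// alphanumeric content decides the sort order.
function strip(s) {
  return s.replace(/[^a-zA-Z0-9 ]/g, "");
}

function sortIgnoringSpecials(arr) {
  // Copy first so the input array is not mutated.
  return [...arr].sort((a, b) => {
    const x = strip(a);
    const y = strip(b);
    return x < y ? -1 : x > y ? 1 : 0;
  });
}

const input = ["ibtp-17", "personal (z)", "personal (a)", "(z)", "yabcd", "y(3)"];
console.log(sortIgnoringSpecials(input));
// → [ 'ibtp-17', 'personal (a)', 'personal (z)', 'y(3)', 'yabcd', '(z)' ]
```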
Q:
Angular NVD3: How to access the chart object defined in HTML
I have defined a multibar chart using the <nvd3> directive and passing the data and options to it defined in my controller:
<nvd3 options="vm.options" data="vm.data"></nvd3>
Now I want to somehow access the chart object created to do some manipulations, for example, to obtain the xAxis scaling function.
If the chart is defined within JavaScript I have that object:
var chart = nv.models.multiBarChart()
.stacked(false)
.showControls(false);
// and I can get these scaling functions
var yValueScale = chart.yAxis.scale();
var xValueScale = chart.xAxis.scale();
Is it possible to also get them if the chart is defined in HTML?
Thanks in advance.
A:
I'm not sure if you need this anymore (it's been almost a year) but I think I may have found something that could be a solution? Or lead you (or anyone else) to one if it's not what you need?
After messing with the object, if you just need it at the beginning, you can use the 'on-ready' option within the nvd3 tag.
<nvd3 options="yourOptions" data="yourData" on-ready="callbackFunction">
The scope will then be passed into the function you set in your controller.
See also: https://github.com/krispo/angular-nvd3/issues/445
It's possible to use the callback option in your options, which allows you to use the chart variable. So it'd be something along the lines of
callback: function(chart) {
*use chart here*
}
See also: http://krispo.github.io/angular-nvd3/#/lineChart and look at the options on the side for callback. You may be able to set callback within the html tag, but I haven't tried that out yet.
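As a configuration fragment, the callback route might look like the following (illustrative chart type and handler body; this is not runnable on its own since it needs angular-nvd3 in scope):

```javascript
$scope.options = {
  chart: {
    type: 'multiBarChart',
    // Invoked once the chart object exists, giving access to the
    // same API as a hand-built nv.models chart.
    callback: function (chart) {
      var xValueScale = chart.xAxis.scale();
      var yValueScale = chart.yAxis.scale();
      console.log(xValueScale.domain(), yValueScale.domain());
    }
  }
};
```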
Q:
Filter values from Java stream on condition that a value is null
I am trying to get values out of a stream if they meet certain criteria OR if a certain value is null:
for (ProjectTask task : tasks) {
TaskEnrollmentUpdateRequest updateTask = updatedPhase.getTasks().stream()
.filter(t -> task.getId().equals(t.getId()) || t.getId() == null).findFirst().get();
if (updateTask != null) {
if (updateTask.getId() != null) {
mapper.map(updateTask, task);
} else {
TaskEnrollmentUpdateRequest ted = updateTask;
}
}
...
}
The idea is that I will update task with the values in updateTask if updateTask already exists in the tasks array i.e. its id value is present on an object in that array. If the id is null, which it will be if this a new task, then I want to create a new task to add to the tasks array. Problem is my filter function is filtering out all values that have a null Id so I can never add them to the tasks array.
Is this the correct function to filter out the values as I have described above?:
t -> task.getId().equals(t.getId()) || t.getId() == null
A:
Following along from what @JBNizet says, I think you want something like this:
for (ProjectTask task : tasks) {
Optional<TaskEnrollmentUpdateRequest> updateTaskOptional = updatedPhase.getTasks().stream()
.filter(t -> task.getId().equals(t.getId()) || t.getId() == null).findFirst();
if (updateTaskOptional.isPresent()) {
TaskEnrollmentUpdateRequest updateTask = updateTaskOptional.get();
if (updateTask.getId() != null) {
...
} else {
...
}
}
}
If your filter results in an empty List, then .findFirst() will return an empty Optional, and calling .get() on it will throw an exception since it has no value. So you have to check for that case specifically by checking if the Optional returned by .findFirst() contains a value rather than assuming that it always will.
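A small self-contained sketch of that behavior (the Task class here is hypothetical, standing in for the question's types), showing why checking the Optional matters:

```java
import java.util.List;
import java.util.Objects;
import java.util.Optional;

public class FindFirstDemo {

    static class Task {
        final Integer id;
        Task(Integer id) { this.id = id; }
    }

    // First task whose id equals wantedId, or whose id is null.
    static Optional<Task> findMatch(List<Task> tasks, Integer wantedId) {
        return tasks.stream()
                .filter(t -> Objects.equals(t.id, wantedId) || t.id == null)
                .findFirst();
    }

    public static void main(String[] args) {
        List<Task> tasks = List.of(new Task(1), new Task(2));

        // Nothing matches id 99 and no id is null, so the Optional is
        // empty; calling get() on it would throw NoSuchElementException.
        System.out.println(findMatch(tasks, 99).isPresent()); // false

        // A match exists, so it is safe to unwrap.
        Optional<Task> match = findMatch(tasks, 2);
        if (match.isPresent()) {
            System.out.println(match.get().id); // 2
        }
    }
}
```

Note the use of Objects.equals rather than == for the boxed ids, which also sidesteps accidental unboxing of a null id.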
Q:
How to get last values from arrays
I am checking one line of code from my project
if ( (test == 0) &&
lenght.Matches(new Version(Convert.ToInt64(version))) )
Whenever I debug I get currentVersion as a constant 18-digit number, but the result I want is the last existing data's version.
I am getting 'length' using the following code:
length.Version = (long)data.Version.Rows[0]["Version"];
So I suspect it is always taking the first value of the Rows array; how can I change this code so that it gives the last value of the array?
A:
Using LINQ-expression
currentVersion.Version = (long)m_spodata.DataVersion.Rows.Last()["Version"];
Using Rows.Count property
currentVersion.Version =
(long) m_spodata.DataVersion.Rows[m_spodata.DataVersion.Rows.Count - 1]["Version"];
Q:
Getting multiple counts from two joined tables in MySQL
So I have two tables, categories and designs. I want to construct a query that will fetch all categories, along with the count of any sub categories (categories.parent_id equal to the categories.id) AND the count of any designs (design.category_id equal to categories.id)
If I try to just get one of these counts, everything works fine, but when I try for both with the following code, the count for both is the same number (and not the correct number) for either.
$this->db->select('categories.id AS id, categories.parent_id AS parent_id, categories.title AS title,
categories.description AS description, categories.img_path AS img_path, COUNT(designs.id) AS design_count,
COUNT(sub_categories.id) as sub_category_count');
$this->db->from('categories');
$this->db->join('designs', 'categories.id = designs.category_id', 'left');
$this->db->join('categories as sub_categories', 'categories.id = sub_categories.parent_id', 'left');
$this->db->group_by('categories.id');
Any help will be much appreciated, cheers!
A:
Assuming that the root categories do not contain designs, here is the query that returns the necessary information:
SELECT category.id, category.title, subcategory.id, designs.id
FROM categories category
LEFT JOIN categories subcategory ON category.id = subcategory.parent_id
LEFT JOIN designs ON subcategory.id = designs.category_id
WHERE category.parent_id IS NULL
Now all you need to do is to apply grouping:
SELECT category.id, category.title, COUNT(DISTINCT subcategory.id), COUNT(designs.id)
FROM categories category
LEFT JOIN categories subcategory ON category.id = subcategory.parent_id
LEFT JOIN designs ON subcategory.id = designs.category_id
WHERE category.parent_id IS NULL
GROUP BY category.id, category.title
The key here is the use of COUNT(DISTINCT ...).
Q:
Various translations of "ticket"
The English word ticket (that is, a slip of paper used to grant access to something) can be translated several different ways in Spanish:
boleto
pasaje
billete
ticket
entrada
resguardo
What are the differences between these words? In what situations would each be used? Specifically, which are appropriate for a plane, bus, or train ticket?
A:
Pasaje and billete are usually used in the transportation sector (pasaje de tren, billete de avión, etc.). Boleto is commonly used in the lottery and gambling world (boleto de lotería), but can also be used in the same way as pasaje and billete.
Entrada refers to a ticket to a show or a generic event.
Resguardo is usually a paper that certifies something (a comercial transaction, a bureaucratic affair, a package delivery, the delivery of a document...).
Note I'm talking about the usage of these words in Spain.
A:
I'll give the colloquial meaning of these words in Chile, just for reference:
boleto/boleta
a slip of paper that attests to some event. It commonly refers to a payment receipt.
pasaje:
the right to board a means of transport. It also refers to the slip of paper that attests to this right.
billete:
reserved for paper money.
ticket:
the same as boleto, but not used for payment receipts. It is commonly used instead of entrada.
entrada:
the right to enter some place. Common for concerts, cinema, etc. It is also used to name the slip of paper that attests to this.
resguardo / recibo:
not used in Chile, but understood as a paper or voucher acknowledging receipt of a payment or a pledged item.
A:
Boleto, pasaje, billete and ticket (and tiquete) have different local precise meanings, but are usually understandable by almost anyone.
Entrada usually refers to a ticket to a show or a generic event.
Resguardo is not a usual word for a ticket, but I have heard it used for the part of the ticket stripped from the main body which allows you to exit from some place you could only enter with a ticket and then go back in (by showing the resguardo).
Q:
NAN when Normalize double values
I am trying to calculate tfidf values for files and save them into a matrix, and I want to normalize the tfidf values between 0 and 1 first.
But I have a problem: the first value calculated after normalization is NaN. How can I fix this?
This is what I did
double tf; //term frequency
double idf; //inverse document frequency
double tfidf = 0; //term frequency inverse document frequency
double minValue=0.0;
double maxValue=0;
File output = new File("E:/hsqldb-2.3.2/hsqldb-2.3.2/hsqldb/hsqldb/matrix.txt");
FileWriter out = new FileWriter(output);
mat= new String[termsDocsArray.size()][allTerms.size()];
int c=0; //for files
for (String[] docTermsArray : termsDocsArray) {
int count = 0;//for words
for (String terms : allTerms) {
tf = new TfIdf().tfCalculator(docTermsArray, terms);
idf = new TfIdf().idfCalculator(termsDocsArray, terms);
tfidf = tf * idf;
//System.out.print(terms+"\t"+tfidf+"\t");
//System.out.print(terms+"\t");
tfidf = Math.round(tfidf*10000)/10000.0d;
tfidfList.add(tfidf);
maxValue=Collections.max(tfidfList);
tfidf=(tfidf-minValue)/(maxValue-minValue); //Normalization here
mat[c][count]=Double.toString(tfidf);
count++;
}
c++;
}
This is the output I got
NaN 1.0 0.0 0.021
0.0 1.0 0.0 0.365 ... and others
Only the first number is NaN; this value is originally a number that is repeated many times in the matrix, and elsewhere its value is not NaN.
Please give me some ideas to fix this issue.
Thanks
A:
You're dividing by zero. This will happen when the first value that is added to the tfidflist is 0.0.
In order to perform a real normalization, you'll probably have to compute all possible values first, then compute the min/max of these values, and afterwards, normalize all values based on these min/max values. Roughly:
// First collect all values and compute min/max on the fly
double minValue=Double.MAX_VALUE;
double maxValue=-Double.MAX_VALUE;
double[][] values = new double[termsDocsArray.size()][allTerms.size()];
int c=0; //for files
for (String[] docTermsArray : termsDocsArray) {
int count = 0;//for words
for (String terms : allTerms) {
double tf = new TfIdf().tfCalculator(docTermsArray, terms);
double idf = new TfIdf().idfCalculator(termsDocsArray, terms);
double tfidf = tf * idf;
tfidf = Math.round(tfidf*10000)/10000.0d;
minValue = Math.min(minValue, tfidf);
maxValue = Math.max(maxValue, tfidf);
values[c][count]=tfidf;
count++;
}
c++;
}
// Then, create the matrix containing the strings of the normalized
// values (although using strings here seems like a bad idea)
c=0; //for files
for (String[] docTermsArray : termsDocsArray) {
int count = 0;//for words
for (String terms : allTerms) {
double tfidf = values[c][count];
tfidf=(tfidf-minValue)/(maxValue-minValue); //Normalization here
mat[c][count]=Double.toString(tfidf);
count++;
}
c++;
}
Q:
WebGL verbose mode in Chrome / Chromium
I wish to start in the web development using WebGL technology, but I have a minor issue.
Usually, I test my applications in Chrome. I love its console which is, as far as I'm concerned, better than Firebug.
However, even though a verbose mode is available in Firefox (with webgl.verbose set to true), I haven't found such a thing for Chrome. I know that there are some ways to avoid the problem by using some libraries (I've found webgl-debug.js, but some errors throw unreadable messages).
So my question is : do you know any builtin way to enable WebGL logging in Chrome / Chromium ?
A:
Chrome supports the "about:gpu" URL. It provides basic information on WebGL and a profiler.
There's also WebGL Inspector, which is "an advanced WebGL debugging toolkit". In short, this is a bit like the JavaScript console, but for WebGL.
Q:
Derived functors of torsion functor
Let $A$ be a domain. For every $A$-module $M$ consider its torsion submodule $M^{tor}$ made up of elements of $M$ which are annihilated by a non-zero element of $A$. If $f \colon M \to N$ is a homomorphism, then $f(M^{tor}) \subseteq N^{tor}$, so call $f^{tor} \colon M^{tor} \to N^{tor}$ the induced map. We have a covariant functor $^{tor}$ from the category of $A$-modules to itself. It is straightforward to verify that $^{tor}$ is an additive left-exact functor; so we can consider its right-derived functors $R^i$, for $i \geq 0$: if $Q^{\bullet}$ is an injective resolution of $M$, then $R^i(M) = H^i((Q^{\bullet})^{tor})$.
If $A$ is a PID then it is quite easy to compute $R^i(M)$ for finite $A$-module $M$, because it is easy to have injective resolutions. In fact it is well-known that $K$ and $K/A$ are injective $A$-modules, where $K$ is the field of fractions of $A$.
My questions are:
Can one compute $R^i(M)$ if $A$ is not a PID or $M$ is not finite? Have these functors been studied?
What is the relation between $R^i$ and $\mathrm{Tor}_j$?
In the category of $A$-modules, are there other derived functors that have been studied and that are not $\mathrm{Tor}$ nor $\mathrm{Ext}$?
A:
You may wish to read about local cohomology. The Wikipedia article is mostly about the sheaf theory, but over an affine scheme and for quasicoherent sheaves, you can think of it as follows: if $R$ is a ring (say, noetherian), $I \subset R$ an ideal, then for an $R$-module $M$, $H^i_I(M)$ is the right-derived functor of the functor $M \mapsto \varinjlim \hom_R(R/I^m, M)$; this sends a module $M$ to its collection of $I$-torsion elements. (The interpretation of $I$-torsion and global sections supported along the subscheme cut out by $I$ gives the connection between this discussion and the sheaf theoretic formulation.) So, since the derived functor of $\hom$ is $\mathrm{Ext}$, a little abstract nonsense shows that $H^i_I(M)$ can be described as $\varinjlim \mathrm{Ext}^i(R/I^m, M)$. (It seems more natural for these functors to be related to $\mathrm{Ext}$ than to $\mathrm{Tor}$, since to obtain the torsion in a module, you are looking for maps into the module, not elements of the tensor product.)
Let's say that $R$ is a local ring and $I$ the maximal ideal. Then, the local cohomology modules are exactly the torsion modules you ask about. If $R$ is regular, they can be computed using the local duality isomorphism $H^i_I(M) = (\mathrm{Ext}^{n-i}(M, k))^{\vee}$ (where $\vee$ denotes the Matlis dual, $n$ the dimension, and $k$ the residue field; see SGA 2, exp. V). So if you know a projective resolution for $M$, you can use that to compute the local cohomology groups (usually, a projective resolution is much easier to find than an injective one!).
A:
For 3: This is almost cheating, I know...
Mac Lane studied the functors $\mathrm{Trip}_n$ which arise from derivating (?) the functor $M\otimes N\otimes P$ of three variables. There are references in his book Homology. The interesting thing is, this cannot be expressed in terms of $\mathrm{Tor}$.
For 1: If $\mathcal I$ is the set of all non-zero ideals ordered by inclusion (which is directed), we have $$M^{tors}=\varinjlim_{I\in\mathcal I}\:\hom_A(A/I,M).$$ You should look up the relationship between right-deriving and directed direct limits like this one, and that should tell you what your $R^i$ functors are. (This will tell you that your $R^i$ are more related to $\mathrm{Ext}$ than to $\mathrm{Tor}$)
Q:
How to become a Game Artist?
If I understand correctly, there are two major kinds of artists involved in game development - 2D and 3D.
I think I somewhat understand, that if I want to go 3D, I should learn some 3D modeling package really well and that would be it. Please correct me if I am wrong.
With 2D I am completely lost. There are a lot of tutorials on the web describing so many different topics like general drawing, vector graphics, raster graphics, pixel art and a whole lot of other things.
So what tools and techniques are most commonly used by professional 2D artists, and if I wanted to become one, exactly where should I start (like, tomorrow) and what should my goals be?
A:
think I somewhat understand, that if I want to go 3D, I should learn some 3D modeling package really well and that would be it. Please correct me if I am wrong.
I'll correct you, since you are wrong. Let's assume you want to become a writer, a good one. Do you just need to learn to read and write and type fast on keyboard and that would be it? Of course not.
If you want to become an artist SERIOUSLY consider going through real, traditional art lessons. Drawing, figure drawing especially, doodle in your notebook, give yourself tasks etc. Constantly draw and paint whatever you want to make. Parallel to that you can teach yourself some poly modeling program along with detailing program - a combo of maya+zbrush or maya+mudbox or 3dsmax+mudbox or modo+zbrush or whatever.
You see, you need only a few weeks at most to learn any of these programs sufficiently to know pretty much everything you need to know. What then? You are an artist all of a sudden? Just like by learning Word you are a writer? You, hopefully, understand what I'm talking about.
I'd suggest a plan:
pick up a program or two and stick with it. I suggest Maya or Max along with Mudbox or Zbrush. You can't go wrong with either. You can do magic with any of them. See which you like more and stick to it.
Look at Gnomon or Digital-Tutors or 3dbuzz VMTs (they have a good intro to Maya) and go through the basics of the program. Doodle with it, play with it. Sing along with those tutorials.
Next step is to give yourself a task and go from start to finish with it. 'I want to make a car' and make it. Try to go after original, your own designs (this is more appreciated). Avoid knitting models over blueprints and sketches that are not your own. Unless you want to be a drone later on.
Specialize in something. Maybe you like robots, maybe you like animals... draw them, design them, make them alive - models, textures, lights. Maybe you like designing environments, make them.
Work work and work. You WILL suck hard at it first, but you will get better. Ask for criticism around and take it in and work on it more to make it better. Never get criticism as personal insults or any of that, it's not.
Take a drawing class and life drawing lessons.
Q:
Google Analytics using React Web / Cordova
I have made an app using React (Web) and bundled it using Cordova.
I am using a plugin called 'react-ga' for tracking Google Analytics.
I initialise react-ga when the app is run using:
ReactGA.initialize('my-ga-uid', { debug: true, cookieDomain: 'auto' })
And create an event using something like:
ReactGA.event({
category: 'Test',
action: 'Test button pressed event.'
})
or,
ReactGA.set({ location.pathname })
ReactGA.pageview(location.pathname)
The analytics work fine in the browser and on dev builds, however when I bundle a build for iOS or Android, the analytics don't seem to be tracked?
Is there something wrong with my code? Do I need to initialise something else? Do I need a cordova plugin instead (although I want analytics to still work in a web browser)?
A:
ReactGA.set({ checkProtocolTask: null }) // Disable file protocol checking.
Q:
What is the best way to delete all of a large table in t-sql?
We've run across a slightly odd situation. Basically there are two tables in one of our databases that are fed tons and tons of logging info we don't need or care about. Partially because of this we're running out of disk space.
I'm trying to clean out the tables, but it's taking forever (there are still 57,000,000+ records after letting this run through the weekend... and that's just the first table!)
Just using delete table is taking forever and eats up drive space (I believe because of the transaction log.) Right now I'm using a while loop to delete records X at a time, while playing around with X to determine what's actually fastest. For instance X=1000 takes 3 seconds, while X=100,000 takes 26 seconds... which doing the math is slightly faster.
But the question is whether or not there is a better way?
(Once this is done, I'm going to run a SQL Agent job to clean the table out once a day... but it needs to be cleared out first.)
A:
TRUNCATE the table or disable indexes before deleting
TRUNCATE TABLE [tablename]
Truncating will remove all records from the table without logging each deletion separately.
A:
To add to the other responses, if you want to hold onto the past day's data (or past month or year or whatever), then save that off, do the TRUNCATE TABLE, then insert it back into the original table:
SELECT
*
INTO
tmp_My_Table
FROM
My_Table
WHERE
<Some_Criteria>
TRUNCATE TABLE My_Table
INSERT INTO My_Table SELECT * FROM tmp_My_Table
The next thing to do is ask yourself why you're inserting all of this information into a log if no one cares about it. If you really don't need it at all then turn off the logging at the source.
A:
1) Truncate table
2) script out the table, drop and recreate the table
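If TRUNCATE isn't an option (for example you must keep some rows, foreign keys reference the table, or permissions are lacking), a batched DELETE like the one the question describes keeps each transaction small; a T-SQL sketch (table and column names hypothetical):

```sql
-- Delete in chunks so each transaction stays small and the log
-- space can be reused between batches.
WHILE 1 = 1
BEGIN
    DELETE TOP (10000) FROM dbo.LogTable
    WHERE LoggedAt < DATEADD(DAY, -1, GETDATE());

    IF @@ROWCOUNT = 0 BREAK;

    CHECKPOINT; -- allows log truncation under SIMPLE recovery
END
```

Tuning the TOP value trades per-batch overhead against lock duration and log growth, which matches the timing differences observed in the question.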
Q:
How to find the faces that appear on the screen?
In my application, a user is browsing a scene, and I'd like to be able to find the faces that appear on the screen, meaning the faces the user can see (so I'd like to exclude the faces that are not in the frustum of the camera, and the faces that are hidden by other faces).
An idea I had was to use the Raycaster class to throw rays on each pixel of the screen, but I'm afraid the performances will be low (I don't need it to be realtime but I'd like it not to be really slow).
I know that there is a z-buffer for knowing which faces are shown because they are not hidden, and I wanted to know if there is an easy way with Three.js to use the z-buffer to find those faces.
Thank you !
A:
My final solution is the following :
I use three.js server-side to render my model (people here, and there explain how to do it).
I use the color attribute of Face3 in order to set a specific color for each face. Each face has a number (the number of the face in the .obj file), and this number is encoded as the Face3 color.
I use only ambient light
I do the rendering
My render is in fact a set of pixels: if a certain color appears in the rendering, it means that the face corresponding to that color is visible on the screen.
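The color-per-face bookkeeping can be sketched as two small helper functions. The 24-bit packing below is one plausible way to do it and is an assumption, not code from the original setup:

```javascript
// Pack a face index (0 .. 2^24-1) into an RGB triple, and back.
// This mirrors the "one unique color per face" idea: any color
// found in the rendered pixels maps back to a visible face.
function indexToColor(i) {
  return [(i >> 16) & 0xff, (i >> 8) & 0xff, i & 0xff];
}

function colorToIndex(rgb) {
  return (rgb[0] << 16) | (rgb[1] << 8) | rgb[2];
}

console.log(colorToIndex(indexToColor(123456))); // 123456
```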
Q:
How to write JSON Reads for models without a matching constructor signature?
I have the following JSON object:
{"values": "123,456,789"}
I'd like to convert this JSON object to an instance of class Foo with the following signature, using the JSON library of the play framework:
case class Foo(value1: Double, value2: Double, value3: Double)
In the documentation about JSON combinators, there are just examples for conversions where the constructor signature of the class matches the extracted JSON values. If I had such a case, I'd have to write the following Reads function:
import play.api.libs.json._
import play.api.libs.functional.syntax._
implicit val fooReads: Reads[Foo] = (
(JsPath \ "values").read[String]
)(Foo.apply _)
However, first I have to split the string "123,456,789" into three separate strings and convert each of them to Double values before I can create an instance of class Foo. How can I do this with JSON combinators? I was not able to find examples for that. Trying to pass in a function literal as an argument does not work:
// this does not work
implicit val fooReads: Reads[Foo] = (
(JsPath \ "values").read[String]
)((values: String) => {
val Array(value1, value2, value3) = values.split(",").map(_.toDouble)
Foo(value1, value2, value3)
})
A:
The compiler is getting confused and thinks you are passing your function as the (usually implicit) parameter to the read method. You can get around this by explicitly using Reads.map instead of ApplicationOps.apply:
implicit val fooReads: Reads[Foo] = {
(JsPath \ "values").read[String] map { values =>
val Array(value1, value2, value3) = values.split(",").map(_.toDouble)
Foo(value1, value2, value3)
}
}
Q:
Calling SOAP in Jersey
I have a requirement from a client which wants to write a wrapper REST web service around a SOAP web service.
I am new to both SOAP and REST. Can anyone please let me know
If we can call SOAP web service inside a REST web service?
If yes, then how to do it in Jersey 2.0?
Thanks in advance.
A:
Yes
There is nothing special about calling a SOAP service from inside a JAX-RS Resource. Just write a JAX-WS client as described in the Java EE 7 Tutorial.
Q:
Javascript menu hover action: won't stop cascading
Here is the website I am talking about: http://benjaminpotter.org/clients/c3carlingford/
So I am building a menu that has popups appear when you hover over a menu item:
and so I have written a javascript (jQuery) function to animate it:
$(".info").css({"opacity": "0", "margin-top": "10px"}).hide(0);
$("#menu-item-51").mouseenter(function(){
$(".nav1").stop(0).show(0).delay(300).animate({"opacity": "1", "margin-top": "-3px"}, {"duration": (250)});
});
$("#menu-item-51").mouseleave(function(){
$(".nav1").stop(0).show(0).delay(0).animate({"opacity": "0", "margin-top": "10px"}, {"duration": (150)});
});
$("#menu-item-11").mouseenter(function(){
$(".nav2").stop(0).show(0).delay(300).animate({"opacity": "1", "margin-top": "-3px"}, {"duration": (250)});
});
$("#menu-item-11").mouseleave(function(){
$(".nav2").stop(0).show(0).delay(0).animate({"opacity": "0", "margin-top": "10px"}, {"duration": (150)});
});
$("#menu-item-12").mouseenter(function(){
$(".nav3").stop(0).show(0).delay(300).animate({"opacity": "1", "margin-top": "-3px"}, {"duration": (250)});
});
$("#menu-item-12").mouseleave(function(){
$(".nav3").stop(0).show(0).delay(0).animate({"opacity": "0", "margin-top": "10px"}, {"duration": (150)});
});
$("#menu-item-13").mouseenter(function(){
$(".nav4").stop(0).show(0).delay(300).animate({"opacity": "1", "margin-top": "-3px"}, {"duration": (250)});
});
$("#menu-item-13").mouseleave(function(){
$(".nav4").stop(0).show(0).delay(0).animate({"opacity": "0", "margin-top": "10px"}, {"duration": (150)});
});
$("#menu-item-14").mouseenter(function(){
$(".nav5").stop(0).show(0).delay(300).animate({"opacity": "1", "margin-top": "-3px"}, {"duration": (250)});
});
$("#menu-item-14").mouseleave(function(){
$(".nav5").stop(0).show(0).delay(0).animate({"opacity": "0", "margin-top": "10px"}, {"duration": (150)});
});
$("#menu-item-15").mouseenter(function(){
$(".nav6").stop(0).show(0).delay(300).animate({"opacity": "1", "margin-top": "-3px"}, {"duration": (250)});
});
$("#menu-item-15").mouseleave(function(){
$(".nav6").stop(0).show(0).delay(0).animate({"opacity": "0", "margin-top": "10px"}, {"duration": (150)});
});
so firstly there is the issue that it is a lot of code... but it works...
So what's the issue?
The issue is this:
when you mouse back and forth over all the links, it cascades. Cool, I know, but the client doesn't like it. Neither do I.
So how do I change it so that it behaves better?
I would love it to work like this:
where the dropdowns don't have the same mouse-over-everything cascade behavior...
You can check out their site here: http://thecity.org/
A:
Try changing stop(0) to stop(true, true) at all the places in your code. It should work as expected.
Passing true as both the arguments to stop method makes sure it clears the queue of previous animations and also forcefully completes them quickly if they are still animating.
stop(clearQueue, jumpToEnd) - Stops the currently-running animation on the matched elements.
Q:
How to upload a csv to a JSP page and send the data in it to a MySQL database?
I have a JSP where a user can upload a csv using input type="file". I want the content in that csv to go to a database. I searched for hours and only found methods in php for this, but I have zero experience in php, so any help in java is appreciated.
My JSP:
<span>Upload CSV : <input type="file"></span>
<table style="margin-left: 20px">
<tr>
<th class="table-custom">Employee Name</th>
<th class="table-custom">In Time</th>
<th class="table-custom">Out Time</th>
</tr>
</table>
A:
Try this code
document.getElementById("upload").addEventListener("change", upload, false);
var out = "";
function upload(e) {
document.getElementById('csvForm').innerHTML = "";
var data = null;
var file = e.target.files[0];
var reader = new FileReader();
reader.readAsText(file);
reader.onload = function(event) {
var csvData = event.target.result;
var parsedCSV = d3.csv.parseRows(csvData);
parsedCSV.forEach(function(d, i) {
if (i == 0) return true; // skip the header
if (d.constructor === Array) {
createForm(d);
}
});
}
}
function createForm(csv) {
out += '<input value="' + csv[0] + '" name="date" type="hidden">';
out += '<input value="' + csv[1] + '" name="name" type="hidden">';
out += '<input value="' + csv[2] + '" name="in-time" type="hidden">';
out += '<input value="' + csv[3] + '" name="out-time" type="hidden">';
document.getElementById('csvForm').innerHTML = out;
document.getElementById('csvForm').setAttribute("formmethod", "post");
document.getElementById('csvForm').setAttribute("formaction", "testBrain.jsp");
out += '<br>';
}
<input id="upload" type="file">
<form id="csvForm"></form>
<input type="submit" value="submit" form="csvForm" formmethod="post" formaction="testBrain.jsp">
<script src="https://d3js.org/d3.v3.js"></script>
testBrain.jsp
<%@ page contentType="text/html;charset=UTF-8" language="java" %>
<%@page import="java.sql.*,java.util.*"%>
<html>
<head>
<meta http-equiv="Refresh" content="5;url=test.jsp">
</head>
</html>
<%
String[] date=request.getParameterValues("date");
String[] name=request.getParameterValues("name");
String[] inTime=request.getParameterValues("in-time");
String[] outTime=request.getParameterValues("out-time");
try
{
Class.forName("com.mysql.jdbc.Driver");
Connection conn = DriverManager.getConnection("jdbc:mysql://localhost:3306/testDb", "root", "1234");
Statement st=conn.createStatement();
for(int x=0; x<name.length;x++) {
int i = st.executeUpdate("insert into csv(date,name,`in`,`out`)values('" + date[x] + "','" + name[x] + "','" + inTime[x] + "','" + outTime[x] + "')");
}
out.println("Data is successfully inserted! Redirecting.. Please wait");
// String redirectURL = "test.jsp";
// response.sendRedirect(redirectURL);
}
catch(Exception e)
{
System.out.print(e);
e.printStackTrace();
out.println("Error");
}
%>
I used this question and changed some stuff for it to work.
You can view the content before submitting it by removing the "type = hidden" in the createForm JS.
Q:
Python % or modulo
Following a Python manual, I don't understand this explanation:
"X divided by Y has remainder J"; 100 divided by 16 has remainder 4.
I understand what "remainder" means, etc., but I can't figure out how this operation is carried out:
100 - 25 * 3 % 4 = 97
Thanks in advance.
A:
The remainder of a division is what is left over when the division is not exact. For example, when you want to check whether a number is even in programming, you use
(num % 2) == 0
This code means: if the remainder of 'num' divided by 2 equals 0, the number is even. For example
(10 % 2) == 0
returns true, since the remainder of 10 divided by 2 is 0; the % refers in this case to that 0.
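The expression from the question is just operator precedence at work: `*` and `%` have equal precedence, bind tighter than binary `-`, and evaluate left to right. A quick check in Python:

```python
# * and % have equal precedence, evaluate left to right,
# and both bind tighter than binary subtraction.
step1 = 25 * 3        # 75
step2 = step1 % 4     # 75 mod 4 = 3
result = 100 - step2  # 97

print(result)            # 97
print(100 - 25 * 3 % 4)  # 97, same thing
```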
Q:
How to read the in-memory (kernel) partition table of /dev/sda?
I accidentally overwrote my /dev/sda partition table with GParted (full story on AskUbuntu). Since I haven't rebooted yet and my filesystem is still perfectly usable, I was told I might be able to recover the partition table from in-kernel memory. Is that possible? If so, how do I recover it and restore it?
A:
Yes, you can do this with the /sys filesystem.
/sys is a fake filesystem dynamically generated by the kernel & kernel drivers.
In this specific case you can go to /sys/block/sda and you will see a directory for each partition on the drive. There are 2 specific files in those folders you need, start and size. start contains the offset from the beginning of the drive, and size is the size of the partition. Just delete the partitions and recreate them with the exact same starts and sizes as found in /sys.
For example this is what my drive looks like:
Device Boot Start End Blocks Id System
/dev/sda1 * 2048 133119 65536 83 Linux
/dev/sda2 * 133120 134340607 67103744 7 HPFS/NTFS/exFAT
/dev/sda3 134340608 974675967 420167680 8e Linux LVM
/dev/sda4 974675968 976773167 1048600 82 Linux swap / Solaris
And this is what I have in /sys/block/sda:
sda1/
start: 2048
size: 131072
sda2/
start: 133120
size: 134207488
sda3/
start: 134340608
size: 840335360
sda4/
start: 974675968
size: 2097200
I have tested this to verify information is accurate after modifying the partition table on a running system
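For the record, the start/size bookkeeping from `/sys` can be turned back into sfdisk-style lines with a few lines of Python. This is a hypothetical helper; the device name and values below are just the example drive above:

```python
def sfdisk_lines(device, partitions):
    """partitions: list of (start, size) sector pairs as read from
    /sys/block/<dev>/<dev>N/{start,size}, in partition-number order."""
    lines = ["unit: sectors"]
    for n, (start, size) in enumerate(partitions, start=1):
        lines.append(f"/dev/{device}{n} : start={start}, size={size}, type=XX")
    return lines

# Values copied from the example drive in this answer:
for line in sfdisk_lines("sda", [(2048, 131072), (133120, 134207488)]):
    print(line)
```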
A:
I made a script to help solve this problem, with NO WARRANTY.
(but I tested on my virtual machine)
Run the following script with the damaged disk as its first parameter, as in:
user@host:~$ ./repart.sh sda
Content of repart.sh:
#!/bin/bash
echo "unit: sectors"
for i in /sys/block/$1/$1?/; do
printf '/dev/%s : start=%d, size=%d, type=XX\n' "$(basename $i)" "$(<$i/start)" "$(<$i/size)"
done
The output is in sfdisk format. But caution: this file has to be modified before it can be used. For the extended partition (type=5), increase the size so that it spans all the logical partitions plus the gap between the start of the extended partition and the start of the first logical partition.
unit: sectors
/dev/sda1 : start=63, size=2040192, type=XX
/dev/sda2 : start=2040255, size=20482875, type=XX
/dev/sda3 : start=22523130, size=19197675, type=XX
/dev/sda4 : start=41720805, size=2, type=XX
/dev/sda5 : start=41720868, size=208782, type=XX
You have to change each type from XX to the actual partition type number, and mark the boot partition with the bootable flag (the first line in the example below).
unit: sectors
/dev/sda1 : start=63, size=2040192, type=83, bootable
/dev/sda2 : start=2040255, size=20482875, type=83
/dev/sda3 : start=22523130, size=19197675, type=fd
/dev/sda4 : start=41720805, size=208845, type=5
/dev/sda5 : start=41720868, size=208782, type=82
Apply this changes
cat repart.sfdisk | sfdisk -f /dev/sda
Reread partition tables
partprobe
/sbin/blockdev --rereadpt
Reinstall grub
grub-install /dev/sda
A:
Have you tried testdisk? It can scan the disk and recover lost partition tables, even after you've rebooted.
It's available pre-packaged for Debian and presumably for Ubuntu too. Probably other distros.
If you're booting a gparted CD it's probably worth checking to see if it's pre-installed on that.
Q:
Dealing with an empty array when using .map() in React
I have a React.JS component that will map the notes variable to display.
However, I have run into the problem of having no notes and receiving an error. What is a proper way to approach this?
Here is the code:
import React, {Component} from 'react';
class List extends Component {
constructor(props){
super(props);
}
render(){
var notes = this.props.items.map((item, i)=>{
return(
<li className="listLink" key={i}>
<p>{item.title}</p>
<span>{item.content}</span>
</li>
)
});
return(
<div className='list'>
{notes}
</div>
);
}
}
export default List;
A:
If you want to render the notes when at least one note exists and a default view when there are no notes in the array, you can change your render function's return expression to this:
return(
<div className='list'>
{notes.length ? notes : <p>Default Markup</p>}
</div>
);
Since empty arrays in JavaScript are truthy, you need to check the array's length and not just the boolean value of an array.
Note that if your items prop is ever null, that would cause an exception because you'd be calling map on a null value. In this case, I'd recommend using Facebook's prop-types library to set items to an empty array by default. That way, if items doesn't get set, the component won't break.
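The truthiness point is easy to verify in isolation: an empty array is truthy, so it's the `.length` check that distinguishes the two cases. A small sketch (the `pickView` helper is hypothetical):

```javascript
// Empty arrays are truthy in JavaScript, so a plain `if (notes)`
// would always take the "have notes" branch.
console.log(Boolean([]));        // true
console.log(Boolean([].length)); // false, length 0 is falsy

function pickView(notes) {
  return notes.length ? "notes" : "default";
}

console.log(pickView([]));         // "default"
console.log(pickView(["a", "b"])); // "notes"
```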
Q:
How to create records of a custom object?
How to create records of a custom object?
I can create a record of any standard object type (for example Account) by going to App Launcher and selecting the standard object type (for example Account).
I can not do the same with the custom object type, there is just no option for the record of such type in App Launcher.
A:
You should create tabs to see the 'objects' in App Launcher.
Go to Setup > Tabs > Custom Object Tabs > New
Q:
How to pass parameters to a function in C
I don't know how to pass parameters to a function. I wrote a body, as you can see in my program below. Any answers and explanations are appreciated!
#include <avr/io.h>
#include <stdint.h>
// Ceramic Resonator
#ifndef F_CPU
#define F_CPU 3686400 // 4MHz
#endif
// UART
#define UART_BAUD_RATE 9600
#define UART_BAUD_CALC(UART_BAUD_RATE,F_OSC) ((F_CPU)/((UART_BAUD_RATE)*16L)-1)
int decode( int rcv[i], ... ){ !!!
int returnValue;
if ((rcv[0] == rcv[1]) && (rcv[0] == rcv[2]) && (rcv[1] == rcv[2])){
returnValue = 0;
//return UDR0;
}
else if (rcv[1] != rcv[2] && (rcv[0] == rcv[1])){
returnValue = 1;
//UDR0 = 01;
}
else if (rcv[1] != rcv[2] && (rcv[0] == rcv[2])){
returnValue = 2;
//UDR0 = 02;
}
else if (rcv[0] != rcv[1] && (rcv[1] == rcv[2])){
returnValue = 3;
//UDR0 = 03;
}
return returnValue;
}
int main(void){
// USART
UBRR0H =(uint8_t) (UART_BAUD_CALC(UART_BAUD_RATE,F_CPU) >>8);
UBRR0L =(uint8_t) UART_BAUD_CALC(UART_BAUD_RATE,F_CPU);
UCSR0B = (1<<RXEN0) | (1<<TXEN0); // enable receiver and transmitter,
UCSR0C = (3<<UCSZ00); // 8 bit (default: asynchronous, no parity, 1 stop-bit)
DDRC = (1<<5); // set data direction register bit 5 to one, this means PC5 is configured as output
PORTC = (1<<5); // set output value of PC5 to High-Level (Source Current, 5V to ground)
// VARIABLES
//uint8_t get;
// PROGRAM
unsigned char code[3] = {'x','y','z'}; // Here you need to write your code
unsigned char rcv[3]={'0','0','0'}; // received data
int i = 0;
int retVal;
while(1){
i = 0;
for(i=0;i<=2;i++){
// wait for empty transmit buffer
//while (!(UCSR0A & (1<<UDRE0)));
// wait for data to be received
while (!(UCSR0A & (1<<RXC0)));
/* put data into buffer, sends the data*/
{
code[i]= UDR0 ;
}
//while(1) // forever
//{
PORTC ^= (1<<5); //this is for LED
// get received data from buffer
rcv[i] = code[i];
}
retVal = decode(int rcv[i], ... ); !!!
// wait for empty transmit buffer
while (!(UCSR0A & (1<<UDRE0)));
// put data into buffer, sends the data
/*if ((rcv[0] == rcv[1]) && (rcv[0] == rcv[2]) && (rcv[1] == rcv[2]))*/
UDR0 = retVal;
}
}
A:
You should put pointer to an array and maybe size of it:
change:
int decode( int rcv[i], ...)
to
int decode( unsigned char* rcv)
and
retVal = decode(int rcv[i], ... ); !!!
to
retVal = decode(rcv); //rcv is a pointer
Q:
why isnt this regex returning my price value?
This is the tag I'm looking to find with the regex below: '<span itemprop="price">34.97</span>'
matches = re.findall(r'<span itemprop="price">\$(\d+)</span>', html)
The above has been tried as shown, as well as without the $.
I'm expecting to see the price 34.97 in this example, but when I run the code, these are the values that are returned (no results for the price):
Highest Price:$0
Lowest Price:$0
200
[]
A:
use this regex demo
<span itemprop=\"price\">(\d*\.?\d+)</span>
It considers decimals as well as numbers
If you really dont care about what is between the span the here is its regex demo2
<span itemprop=\"price\">([^<]+)</span>
Feel free to adjust it to what ever you require as re.findall will return the entire span so you might need a forward and a backward lookup in this regex if you want just the numbers only and not the entire span. But that is up to you.
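For completeness, here is the first pattern applied to the asker's snippet; the capture group keeps just the number:

```python
import re

html = '<span itemprop="price">34.97</span>'

# Decimal-aware pattern from the answer: no "$" in the HTML,
# so none in the regex either.
matches = re.findall(r'<span itemprop="price">(\d*\.?\d+)</span>', html)
print(matches)  # ['34.97']
```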
Q:
Is there an iPad PDF Reader app that has autoscroll?
I read stacks of PDFs on my iPad and I have downloaded (and paid for) a few PDF readers hoping to find an auto-scroll feature, without luck.
Does anyone know of any iPad app that can auto-scroll PDF's (preferably with Dropbox integration so I can get at my PDFs easily!).
Thanks
A:
To date, I have found only one PDF reader that has Dropbox connectivity and has autoscroll (after a fashion).
It is PDF Reader Pro iTunes Link. It is a universal app that works on both iPad and iPhone and ties to Box.net, Dropbox, Googledocs, and iDisk. It has an "Autoflow" feature that allows timed jumps (from 5 to 100 seconds) of a preset number of pixels (20 to 400.) I bought it at the low price of $0.99 and it performs, pretty much, as expected. It has some basic annotation capabilities and "scans" documents using the camera in the iPad 2. I have an original so I couldn't test that feature.
I haven't used it a lot as I like the annotative capabilities of iAnnotate, but the app is a good fall back if I need the autoflow feature.
For $0.99 it looks to be a solid reader that fits the requirements you laid out.
Q:
Setting xlim for rbokeh plot inside shiny application
I'm using the rbokeh package for R. I've been having some good results integrating into a shiny app. I want to now integrate a feature where a dateRangeInput will now select the date range for the chart(it is time series data).
##necessary packages
install.packages("shiny")
install.packages("devtools")
install.packages("dplyr")
library(devtools)
devtools::install_github("ramnathv/htmlwidgets")
devtools::install_github("bokeh/rbokeh")
library(rbokeh)
library(dplyr)
library(shiny)
#example data set
james<-mtcars[c("mpg")]
james$date<-seq(from=as.Date("2013-05-16"),to=as.Date("2013-06-16"),by="days")
james$index<-1:4
#shiny app
shiny_example <- function(chart_data = james){
date_minmax <- range(chart_data$date)
shinyApp(
ui=fluidPage(
titlePanel("a plot"),
sidebarLayout(
sidebarPanel(
textInput("index","Enter the index from 1 to 16",value=1),
uiOutput("date_range")
),
mainPanel(
rbokeh::rbokehOutput("plot_cars")
)
)
),
server=function(input,output,session)
{
current_data <- reactive({
current_df <- subset(james,index==input$index)
return(current_df)
})
output$date_range <- renderUI({
plot_data <- current_data()
current_id_range <- range(plot_data$date)
return(
dateRangeInput("date_range",
"Date Range(X Axis",
min=date_minmax[1],
max=date_minmax[2],
start=current_id_range[1],
end=current_id_range[2])
)
})
output$plot_cars <- rbokeh::renderRbokeh({
plot_data <- current_data()
g<-rbokeh::figure(title="Cars",
width=800,
heigh=400,
xlab="Date",
ylab="mpg",
xlim=input$date_range) %>%
rbokeh::ly_points(date,
mpg,
data=plot_data) %>%
rbokeh::ly_lines(date,
mpg,
data=plot_data,
alpha=0.3)
return(g)
})
}
)
}
##run the app
shiny_example()
The above is example data but it works without the xlim argument in rbokeh::figure, as in that typing in a number from 1 to 4 in the input subsets the data accordingly and produces a plot reactively. The xlim argument seems to produce errors in the plot. Could anyone perhaps point me in the right direction in trying to fix the xlim issue?
Let me know if you need any more details.
A:
It seems to be a date formatting issue in rbokeh: https://github.com/bokeh/rbokeh/issues/100 which should be fixed soon.
Q:
JFace Dialog disposes widgets when still in use
I have a class that extends jface.dialogs.Dialog. In that dialog is a save button. When the user pressed that button I need to read the values from some swt.widgets.Text fields, but the text fields are disposed already.
What am I doing wrong?
public class MyNewDialog extends Dialog {
private Text txt;
public MyNewDialog(Shell parentShell) {
super(parentShell);
}
@Override
protected Control createDialogArea(Composite parent) {
Composite container = (Composite) super.createDialogArea(parent);
container.setLayout(new GridLayout(2, false));
txt = new Text(container, SWT.BORDER);
txt.setLayoutData(new GridData(SWT.FILL, SWT.TOP, true, false, 1, 1));
return container;
}
@Override
protected void createButtonsForButtonBar(Composite parent) {
Button saveButton = createButton(parent, IDialogConstants.OK_ID, "Save", true);
saveButton.addSelectionListener(new SelectionAdapter() {
@Override
public void widgetSelected(SelectionEvent p_e) {
                String string = txt.getText(); // widget is disposed exception
            }
        });
    }
}
A:
Since you're using IDialogConstants.OK_ID for your button, you can use the okPressed() method. No need to add a specific listener.
@Override
protected void okPressed()
{
value = txt.getText();
super.okPressed();
}
Then create a getter method method to return the value variable:
public String getValue()
{
return value;
}
Q:
Group as the Union of Subgroups
We know that a group $G$ cannot be written as the set theoretic union of two of its proper subgroups. Also $G$ can be written as the union of 3 of its proper subgroups if and only if $G$ has a homomorphic image, a non-cyclic group of order 4.
In this paper http://www.jstor.org/stable/2695649 by M.Bhargava, it is shown that a group $G$ is the union of its proper normal subgroups if and only if its has a quotient that is isomorphic to $C_{p} \times C_{p}$ for some prime $p$.
I would like to make the condition on the subgroups more stringent. We know that characteristic subgroups are normal. So can we have a group $G$ such that $$G = \bigcup\limits_{i} H_{i}$$ where each $H_{i}$ is a characteristic subgroup of $G$?
A:
One way to ensure this happens is to have every maximal subgroup be characteristic. To get every maximal subgroup normal, it is a good idea to check p-groups first. To make sure the maximal subgroups are characteristic, it makes sense to make sure they are simply not isomorphic. To make sure there are not too many maximal subgroups, it makes sense to take p=2 and choose a rank 2 group.
In fact the quasi-dihedral groups have this property. Their three maximal subgroups are cyclic, dihedral, and quaternion, so each must be fixed by any automorphism.
So a specific example is QD16, the Sylow 2-subgroup of GL(2,3).
Another small example is 4×S3. It has three subgroups of index 2, a cyclic, a dihedral, and a 4 acting on a 3 with kernel 2. Since these are pairwise non-isomorphic, they are characteristic too. It also just so happens (not surprisingly, by looking in the quotient 2×2) that every element is contained in one of these maximal subgroups.
Q:
MSMQ problem: admin_queue$' cannot be initialized
We had during a planned failover of a cluster (Server 2003) an error:
The Message Queuing service cannot start. The internal private queue
'admin_queue$' cannot be initialized (Error: 0xc00e0001). If the
problem persists, reinstall Message Queuing.
We were not able to start the MSMQ cluster resource on the node(s).
Because of urgency we did a reinstall (removed the cluster MSMQ resource and added it again).
Does anybody have an ideas how the MSMQ data got corrupted, can we avoid this in the future or can we restore the MSMQ data?
Kind regards,
Jonathan
A:
In the ClusteredMSMQ\storage\LQS directory there are a bunch of files that hold the configuration of your queues. One of these is the admin_queue$ file and something happened to it. Either it's missing or corrupted. Easiest solution is to copy a file from another MSMQ machine. The admin_queue$ file should be the same from machine to machine as it's not user-generated and is non-configurable. Make sure you copy the file to the right place - not the local msmq\storage\lqs directory.
Q:
Span width Related issue
In the jsfiddle demo, please pay attention to the position of the word "needed": in IE9 it appears on the first line, while in other browsers it wraps to the second line. I want it to wrap to the second line in IE9 as well.
js fiddle link
A:
The difference seems to be caused by the new rendering features of IE 9. If you look at the page on IE 9 in different modes (use F12 and set document mode to Compatibility Mode vs. Standards Mode), you’ll see that the width of the text changes: letters are packed more tightly in Standards Mode. (It is not a letter spacing issue; rather, a matter of text rendering details such as sub-pixel rendering.)
If you just need to force a line break, use <br>. Otherwise, the approach depends on the goal. It is impossible to force browsers to render a piece of text exactly the same way, even though they obey you font family and font size settings.
Q:
Help with a PHP back button
Good afternoon,
A button to go back in PHP.
Folks, I use the function below to go to the next question in a quiz system I have, but I couldn't write the equivalent function to go back. Does anyone have a tip?
Button:
echo"<button id='button' type='next' name='next' class='btn btn-danger'>
<span class='glyphicon glyphicon-circle-arrow-right'></span>Próxima</button><br />";
Function:
if(isset($_GET['proxima'])){
$pergunta = (int)$_GET['proxima'];
header('location: comportamento.php?nro_pergunta='.$pergunta);
}
A:
Maybe the solution is something quite simple, as in the case below, using HTTP_REFERER:
<?php if (isset($_SERVER['HTTP_REFERER'])): ?>
<a href="<?php echo $_SERVER['HTTP_REFERER']; ?>">Voltar</a>
<?php endif ?>
Q:
Attempting to roll two dice randomly and add the sums till it reaches twentyone
I am trying to code a program that rolls two dice randomly, adds them together, and keeps doing that until it reaches 21. If it reaches 21 it wins but if it hits over 21 it loses.
This is what I have so far, it would be great if I could have some assistance on how to get the dice rolling properly. I am a beginner in java so still trying to understand the syntax.
import java.util.Random;
public class TwentyOne{
public static void main(String[] args) {
int dice1;
int dice2;
welcome();
rollingDice(int dice1,int dice2);
}
public static void welcome() {
System.out.println("Welcome to the game of Twenty-One! FEELING LUCKY?! goodluck!");
}
public static int rollingDice(int dice1, int dice2) {
dice1 = (int)(Math.random()*6 + 1);
dice2 = (int)(Math.random()*6 + 1);
int sum = dice1 + dice2;
return sum;
}
}
A:
As @KamalNayan stated above, you need to loop rollingDice until you are at or above 21, and there is no need to pass int agruments in to the rollingDice method since the rolled die values are generated within the scope of that method. Some printing of what's going on also helps to demonstrate what's going on during runtime:
public static void main(String[] args) {
welcome();
int total = 0;
while (total < 21) {
total += rollingDice();
};
System.out.println("Total for all rolls was: " + total);
if (total == 21) {
System.out.println("You win!");
}
else {
System.out.println("You lose.");
}
}
public static void welcome() {
System.out.println("Welcome to the game of Twenty-One! FEELING LUCKY?! goodluck!");
}
public static int rollingDice() {
int dice1 = (int) (Math.random() * 6 + 1);
int dice2 = (int) (Math.random() * 6 + 1);
int sum = dice1 + dice2;
System.out.println(String.format("dice1: %d dice2: %d for a total: %d", dice1, dice2, sum ));
return sum;
}
Here the output from a won game:
Welcome to the game of Twenty-One! FEELING LUCKY?! goodluck!
dice1: 4 dice2: 1 for a total: 5
dice1: 1 dice2: 4 for a total: 5
dice1: 1 dice2: 3 for a total: 4
dice1: 6 dice2: 1 for a total: 7
Total for all rolls was: 21
You win!
Process finished with exit code 0
Q:
Diagonal/Nilpotent vs ad-diagonal/nilpotent in Lie algebra
I am a bit confused about the connection between ad-diagonal/nilpotent and diagonal/nilpotent property of Lie algebra elements. Suppose $L$ is a complex Lie algebra and $x\in L$. If $L$ is semisimple complex Lie algebra, say $\mathfrak{sl}(2,\mathbb{C})$, then Jordan-Chevalley decomposition says we can always write this as $x=d+n$ where $d$ is diagonal (because $L$ is complex and hence the field is algebraically closed) and $n$ is nilpotent. It can then be shown that this also implies that $\text{ad}(x)=\text{ad}(d)+\text{ad}(n)$.
Should I understand this as saying that $\text{ad}(x)$ is diagonal iff $x$ is diagonal (same for nilpotent case)? If not, what are the conditions that relate $x$ and its adjoint representation $\text{ad}(x)$?
At least it looks to me that for arbitrary complex Lie algebra this is not true: take the identity matrix $I\in L=\mathfrak{gl}(2,\mathbb{C})$, and clearly $\text{ad}(I)\in \text{gl}(L)$ is a nilpotent matrix even though $I$ is not nilpotent matrix.
A:
The Jordan-Chevalley decompositions holds for semisimple Lie algebras. The Lie algebra $\mathfrak{gl}_n(\Bbb C)$ is not semisimple, since it has a non-trivial abelian ideal, its center.
Furthermore, $d$ need not be diagonal. It is only diagonalizable. And indeed, if $x$ is a non-nilpotent endomorphism in a linear Lie algebra, $ad(x)$ can be nilpotent. For $x=I$ in $\mathfrak{gl}_n(\Bbb C)$, we have $ad(I)=0$, the zero endomorphism, because $[I,B]=IB-BI=0$ for all $B\in \mathfrak{gl}_n(\Bbb C)$.
An element $x\in L$ is called ad-nilpotent, if $ad(x)$ is nilpotent.
"Should I understand this as saying that $ad(x)$ is nilpotent iff $x$ is nilpotent?" No, this is not true, as you have seen yourself.
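The observation above that $ad(I)=0$ is just the statement that the identity commutes with every matrix, which is easy to check numerically. A small pure-Python sketch with $2\times 2$ matrices, no libraries assumed:

```python
def matmul(A, B):
    # 2x2 matrix product
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def bracket(A, B):
    # ad(A) applied to B, i.e. [A, B] = AB - BA
    AB, BA = matmul(A, B), matmul(B, A)
    return [[AB[i][j] - BA[i][j] for j in range(2)] for i in range(2)]

I = [[1, 0], [0, 1]]
B = [[3, -1], [4, 7]]  # arbitrary test matrix
print(bracket(I, B))   # [[0, 0], [0, 0]]: ad(I) kills everything
```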
Q:
In Beautifulsoup4, Get All SubElements of an Element, but Not SubElements of the SubElements
I've got the following html:
<div class="what-im-after">
<p>
"content I want"
</p>
<p>
"content I want"
</p>
<p>
"content I want"
</p>
<div class='not-what-im-after">
<p>
"content I don't want"
</p>
</div>
<p>
"content I want"
</p><p>
"content I want"
</p>
</div>
I'm trying to extract all the content from the paragraph tags that are SubElements of the <div class="what-im-after"> container, but not the ones that are found within the <div class="not-what-im-after"> container.
when I do this:
soup = BeautifulSoup(html.text, 'lxml')
content = soup.find('div', class_='what-im-after').findAll('p')
I get back all the <p> tags, including those within the <div class="not-what-im-after"> container, which makes complete sense to me; that's what I'm asking it for.
My question is how do I instruct Python to get all the <p> tags, unless they are in another SubElement?
A:
What you want is to set recursive=False if you just want the p tags under the what-im-after div that are not inside any other tags:
soup = BeautifulSoup(html)
print(soup.find('div', class_='what-im-after').find_all("p", recursive=False))
This has the same effect as looping over all the results and checking each p tag's direct parent yourself.
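A complete run against the question's markup (quote typo fixed, whitespace trimmed; assumes beautifulsoup4 is installed):

```python
# recursive=False restricts find_all to direct children of the outer div,
# so the p inside not-what-im-after is skipped.
from bs4 import BeautifulSoup

html = """
<div class="what-im-after">
  <p>want 1</p>
  <p>want 2</p>
  <div class="not-what-im-after"><p>unwanted</p></div>
  <p>want 3</p>
</div>
"""
soup = BeautifulSoup(html, "html.parser")
outer = soup.find("div", class_="what-im-after")
texts = [p.get_text(strip=True) for p in outer.find_all("p", recursive=False)]
print(texts)  # ['want 1', 'want 2', 'want 3']
```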
Q:
Set a div at random position on a grid
I am trying to make a grid that takes up 100% of the width of the browser window. Firstly, I am not sure how to go about this grid, and secondly, I want a div to have a random position within that grid, but it should only fill a position if it is not occupied already.
I guess my question is, how would I go about it and if its even possible.
I'm guessing I would need a db to log all positions?
ps: When I say grid I don't mean the 960 grid or any of those framework grids; I just want a simple square grid, although I'm looking for each square to be 15px by 15px and the 'border' to be only 1px.
Thanks for your help.
EDIT: All answers were great and all were acceptable I have chosen the one I have because it is the one that works best for what I want to do and the one that I used, I'm not saying that the others didn't work because they worked just as well. My initial requirements were for a fluid grid but have since changed which has made the answer I picked to be easier to integrate within my project.
Thank you everyone for your help!
A:
You can set a <div>'s position with CSS:
#div1 {
position: absolute;
left: 100px;
top: 100px;
width: 15px;
height: 15px;
}
should work. Then, knowing each div's coordinates via their left/top (store those somewhere) as well as how big they are, you can check for "collisions" when placing a new one with some simple math.
For example, to check if a single div New collides with an Existing one, you can check if any of New's corners is within Existing's square; both of the following must hold at once:
if LeftNew >= LeftExisting AND LeftNew <= (LeftExisting + WidthExisting)
and TopNew >= TopExisting AND TopNew <= (TopExisting + HeightExisting)
then collides
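Since every square in the described grid has the same fixed size, you can skip corner checks entirely and just track which cells are taken. A sketch of that idea (names are illustrative, not from the question):

```javascript
// Keep occupied cells in a Set keyed by cell index, and pick a random
// free cell. Each cell is 15px plus the 1px border, i.e. 16px of pitch.
function createGrid(cols, rows) {
  return { cols, rows, cell: 16, occupied: new Set() };
}

// Returns {left, top} in pixels for a previously free cell, or null if full.
function placeRandom(grid) {
  const total = grid.cols * grid.rows;
  if (grid.occupied.size >= total) return null;
  let idx;
  do {
    idx = Math.floor(Math.random() * total);
  } while (grid.occupied.has(idx));
  grid.occupied.add(idx);
  return {
    left: (idx % grid.cols) * grid.cell,
    top: Math.floor(idx / grid.cols) * grid.cell,
  };
}

// In the browser you would then set div.style.left/top from the result.
const grid = createGrid(4, 4);
console.log(placeRandom(grid)); // a random free cell, e.g. { left: 32, top: 16 }
```

A server-side database is only needed if the occupied positions must survive page reloads or be shared between visitors; otherwise the in-memory Set is enough.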
Q:
Using Resharper to Identify Instances of IDisposable
Can Resharper (v 7.1.3) help identify code that does not apply the "using" keyword when instantiating objects that implement IDisposable (i.e. SqlConnection, StreamReader)?
A:
ReSharper alone cannot do this.
FXCop cannot do this either, unfortunately. FXCop can warn about types that contain fields of types that implement IDisposable, but the type that contains them does not implement IDisposable. This is not what is being asked for here.
What you need is Visual Studio 2012 and then enable the Code Analysis engine to work its magic on your code. Make sure to enable a ruleset that contains the rule.
Specifically, you want to enable the CA2000 warning (Microsoft.Reliability: "Dispose objects before losing scope").
After enabling this, and writing code like this:
using System.IO;

namespace ConsoleApplication1
{
class Program
{
static void Main(string[] args)
{
var stream = new MemoryStream();
}
}
}
you get this:
d:\Dev\VS.NET\ConsoleApplication1\ConsoleApplication1\Program.cs(14): warning : CA2000 : Microsoft.Reliability : In method 'Program.Main(string[])', call System.IDisposable.Dispose on object 'stream' before all references to it are out of scope.
Note: This will in some cases create both false negatives and false positives. First, the rule detects, and does not warn about the fact that you return such an object.
However, in the method where you get back the object, it will only call out the fact that you don't dispose it in some cases.
Specifically, this will create the warning:
static void Main(string[] args)
{
var stream = CreateStream(); // warning here
}
private static MemoryStream CreateStream()
{
return new MemoryStream();
}
whereas this will not:
static void Main(string[] args)
{
var stream = GetStream(); // NO warning here
}
private static MemoryStream GetStream()
{
return new MemoryStream();
}
The rule seems to detect that Create is a prefix for a factory method, so it falls upon the caller to dispose of the object, whereas Get is not such a prefix, so it falls upon the method being called to dispose of it, but since it returns the object, it doesn't have that responsibility either.
Q:
How can I extract html with fscanf
I have a file where each line holds a <div> element like the ones below:
<div style="random properties" id="keyword1:string id:int">text</div>
<div style="random properties" id="keyword1:string id:int">text</div>
<div style="random properties" id="keyword2:string id:int">text</div>
<div style="random properties" id="keyword2:string id:int">text</div>
Can I, with fscanf, return a list of text and id values for a matching keyword1 or keyword2?
A:
You can simply read it with regex:
std::string s;
std::regex r( "<div style=\"[^\"]*\" id=\".*?(\\d+)\">((?:(?!</div>).)*)</div>" ); // non-greedy .*? so (\d+) captures the whole number
while( std::getline(in, s) ) {
std::smatch m;
if( std::regex_match(s, m, r) ) {
std::cout << "id = " << m.str(1) << ", text = " << m.str(2) << std::endl;
} else {
std::cout << "invalid pattern" << std::endl;
}
}
But if you want to read more about regex please go to http://en.cppreference.com/w/cpp/regex
Q:
Ear Test using MATLAB
I'm trying to test my ear's frequency range using MATLAB. The thing I cannot understand is that I can hear frequencies above 20 kHz. My sampling frequency is enough to satisfy the Nyquist criterion (80 kHz, which is the limit for MATLAB), so there is no possibility of aliasing. What can be the problem? Thank you for any help.
Here is my code:
K = 1000; % kilo
sampling_freq = 80*K; % 80KHz sampling for sin wave
sampling_period = sampling_freq^(-1);
f = 20*K; %kilohertz
t = 0:sampling_period:5; % 5 second sin signal
testing_sound = sin(2*pi*f*t);
plot(t,testing_sound)
sound(testing_sound,sampling_freq);
A:
First of all - don't use 80kHz (I don't know why you came up with this number as a limit for MATLAB, whereas it is 1e6 in fact) but something that is more reasonable and standardized, i.e. 96kHz (or 88.2kHz if you wish...). Another thing is that your sound card might not be able to serve such high sampling frequency. You should check that in playback options --> Advanced tab and set it correctly. Last thing - you should drop using obsolete sound and use audioplayer - unless you use some ooold MATLAB.
Q:
unknown output in snakemake
I'm working on implementing a very simple pipeline in snakemake in hopes of replacing a chain of annoying bash scripts with one cohesive Snakefile.
I'm having trouble writing a rule that splits a file into smaller pieces (using GNU split), and then leads to a second rule where the output is concatenated together.
I don't know what to write for the input in the concat step, since I don't know how to define all the files fitting the pattern bam_files/test*. I tried with glob, but that decidedly doesn't seem to work (it seems like it's actually skipping split altogether with the glob included). Is there any better way that I could be doing this?
# test snakemake pipeline
import glob
SAMPLE_IDS = ["test"]
rule all:
input:
expand("bam_files/{FASTQ}.out", FASTQ=SAMPLE_IDS)
rule split:
input:
expand("{FASTQ}.txt", FASTQ=SAMPLE_IDS)
output:
"bam_files/{FASTQ}."
shell:
"cat {input} | split -l 1000 -d - {output}."
rule concat:
input:
split_files = glob.glob("bam_files/{FASTQ}.*")
output:
"bam_files/{FASTQ}.out"
shell:
"cat {input} > {output}"
A:
I think this should work:
SAMPLE_IDS = ["test"]
rule all:
input:
expand("bam_files/{FASTQ}.out", FASTQ=SAMPLE_IDS)
rule split:
input:
"{FASTQ}.txt"
output:
dynamic("bam_files/{FASTQ}.{PART}")
params:
length=1000
shell:
"cat {input} | split -l {params.length} -d - bam_files/{wildcards.FASTQ}."
rule concat:
input:
split_files = dynamic("bam_files/{FASTQ}.{PART}")
output:
"bam_files/{FASTQ}.out"
shell:
"cat {input} > {output}"
It looks like the split rule should be taking one file {FASTQ}.txt at a time and producing {FASTQ}.1, {FASTQ}.2, ... or something similar. Because you don't know ahead of time how many files it will produce, you need to use dynamic() for both split.output and concat.input.
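Outside of snakemake, the split/concat pair reduces to plain shell; a stand-alone sketch, with chunks of 3 lines instead of 1000 so the output stays small:

```shell
# Mirror of the split rule: chop test.txt into numbered pieces, then
# mirror of the concat rule: glue the pieces back together.
mkdir -p bam_files
printf '%s\n' a b c d e f g > test.txt
cat test.txt | split -l 3 -d - bam_files/test.
ls bam_files                                  # test.00 test.01 test.02
cat bam_files/test.0* > bam_files/test.out    # what the concat rule does
cmp test.txt bam_files/test.out && echo identical
```

GNU split's -d flag produces the numeric suffixes (test.00, test.01, ...) that the {PART} wildcard matches.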
Q:
Once a function is bound with `bind`, its `this` cannot be modified anymore?
Just playing around with JS a bit, I'm wondering why the following code outputs "foo" instead of "bar":
String.prototype.toLowerCase.bind("FOO").call("BAR")
In my understanding, .bind("FOO") returns a function that will have "FOO" for this, so calling .bind("FOO")() outputs "foo".
But then, .call("BAR") calls the function with "BAR" for this, so "bar" should have been outputted.
Where am I wrong?
A:
.bind("FOO") returns a function that will have "FOO" for this
Not quite. It returns a function which binds "FOO" for this of toLowerCase. It works like this:
function bind(func, thisArg) {
return function () {
return func.call(thisArg);
}
}
You can rebind the this of the returned function all you want; the call to func (here: toLowerCase) is already "hardcoded" to use thisArg (here: "FOO").
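You can verify this directly in Node or a browser console:

```javascript
// Once bound, the this value is locked in; later call() or bind()
// on the bound function has no effect.
const lower = String.prototype.toLowerCase.bind("FOO");

console.log(lower.call("BAR"));   // "foo"
console.log(lower.bind("BAR")()); // still "foo"
```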
Q:
construction of a chord that is trisected by a point
Find a construction of a chord through a point P such that
P divides the chord in the ratio 1:2 in any given circle.
Cleary not all points P work, so I'm trying to find the construction when it is possible. What is invariant about all such chords that would be useful in finding a construction?
A:
Assume $P$ is a point through which such a chord (labeled here as $\overline{QR}$) can be constructed. Draw a circle with center $O$ that passes through $P$. By symmetry, $A$ is the other chord trisector. Take $x=QP=PA=AR$.
Let $r$ be the radius of the larger circle and $r'$ the radius of the smaller circle. Also draw $\overline{QBO}$ and extend it to $C$. By the intersecting secant theorem: $$QB\cdot QC=QP\cdot QA\\(r-r')(r+r')=x(2x)\\r^2-r'^2=2x^2\\x=\sqrt{\frac{r^2-r'^2}{2}}$$
So $x$ is constructable from $r$ and $r'$ (I remember doing it for a level in Euclidea and it was gross but not impossible), and drawing a circle with radius $x$ centered at $P$ will identify $Q$ on the outer circle.
To find this point $Q$ in the plane we have to have the calculated distance $x$ match or exceed the gap from $P$ to the given circle. To wit,
$x=\sqrt{\frac{r^2-r'^2}{2}}\ge (r-r')$
Square both sides of the inequality and factor the difference of squares:
$\frac{(r+r')(r-r')}{2}\ge (r-r')^2$
$r+r'\ge2(r-r')$
Thence
$3r'\ge r$
This says that $P$ must be on or outside the circle concentric with the given one and having radius one-third as large, a constraint we expect on intuitive grounds.
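A quick numerical check of $x=\sqrt{(r^2-r'^2)/2}$, using illustrative values $r=3$, $r'=1.5$; the chord's distance $h$ from the center here comes from my own derivation of the trisection condition, not from the answer:

```python
# Requiring that P trisect the chord gives h^2 = (9 r'^2 - r^2)/8; we then
# check that the shorter segment PR equals the constructed length x and
# that P divides the chord 1:2.
import math

r, rp = 3.0, 1.5                      # outer radius, distance OP
x = math.sqrt((r**2 - rp**2) / 2)

h = math.sqrt((9 * rp**2 - r**2) / 8)  # chord's distance from the center
half_chord = math.sqrt(r**2 - h**2)
p_offset = math.sqrt(rp**2 - h**2)     # distance from chord midpoint to P

shorter = half_chord - p_offset        # PR
longer = half_chord + p_offset         # QP
print(round(shorter / longer, 6))      # 0.5, i.e. the 1:2 ratio
print(abs(shorter - x) < 1e-9)         # True: PR equals the constructed x
```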
ETA: https://www.youtube.com/watch?v=KxnrR_Dg8Tg is a walkthrough of the Euclidea level I mentioned. The goal of that is "backwards" in that they are trying to find $P$ given $Q$, but they do the same job of constructing $x$.
Q:
Using Select By Location in ArcPy?
How do you create a new feature class from the select by location tool in python?
This is my script that works but I just don't know what the next step is.
arcpy.SelectLayerByLocation_management("Buildings", "WITHIN_A_DISTANCE", "Hydrography", "50 feet", "NEW_SELECTION","NOT_INVERT")
A:
To do this with python you need to create some variables first so you can call the selection and with the arcpy.CopyFeatures tool copy the selection to a new feature class.
# Define output feature class location
fc = r"C:\Users\Documents\ArcGIS\Default.gdb\Testers"
# Define Selection criteria
Selection = arcpy.SelectLayerByLocation_management('Points', "WITHIN", 'Trajectory')
# Define output selection and fc
arcpy.CopyFeatures_management(Selection, fc)
This example was used within the Python interpreter in ArcMap. You can see that by using variables it makes everything easier to use and understand.
The example you provide should be something like this:
import arcpy
#Set geoprocessing environments
arcpy.env.workspace = "C:/Student/PythonBasics10_0/Westerville.gdb"
arcpy.env.overwriteOutput = True
# Set name of output fc and select buildings by location
Outputfc = "BuildingsWithin50ft"
Selection = arcpy.SelectLayerByLocation_management("Buildings", "WITHIN_A_DISTANCE", "Hydrography", "50 feet", "NEW_SELECTION","NOT_INVERT")
arcpy.CopyFeatures_management(Selection, Outputfc)
A:
An alternative approach to creating new fc from selected layer features would be:
Define environmental workspace
Define two selection fc's to use within select by location method
Use Make Feature Layer method (you will have to use this if performing selections via standalone script, outside of ArcGIS)
Use Select Layer by Location method
Use Feature Class to Feature Class method to create new layer from selected features
Q:
Can my network use Ubuntu Openstack?
I have a small network of computers,
I am migrating from Arch Linux to Ubuntu's OpenStack Platform for usability reasons.
The reason I am asking is because of the following:
Installing Ubuntu OpenStack requires at least seven machines with two disks, two of which have two network interfaces (NICs). Install Ubuntu Server on one of the machines with two interfaces.
I have two systems, one of which I can and hopefully will host six virtual machines and will form the cluster, and the other I am planning to be the cluster controller.
I am having numerous issues meeting the prerequisites of using the ubuntu openstack platform, namely:
Network Configuration and Setup
Actually, I don't know anything about OpenStack.
Could someone give me a walkthrough on how to meet the requirements of this situation with the following resources:
Computer 1 - The (Hopefully it works) Virtualised Cluster
Intel Core i7 - 3770K.
16GB of physical memory, 8GB virtual (swap).
6+ KVM Virtual Machines (Presumably using all available resources).
Two NIC's.
Ubuntu Server as VM Host.
Now is the controller.
Computer 2 - OpenStack Controller
4GB RAM and Swap
Two NIC's
A new-ish Intel Pentium (The one a bit better than a late Core 2 Duo)
I can't really afford any more systems, as I am on a high school student's income; is there a way to make this work? I am not deleting my Arch partition until I know for sure. I am really limited by my systems, and the documentation for the networking setup seems oddly scarce. Is anyone able to assist me with installation? Also, I don't know how to configure libvirt to allow me to network like this; will I need to host DHCP on one of my 'servers', or can I still rely on my good little Netgear router (DGND4000)?
A:
There are multiple ways you can build and deploy OpenStack on Ubuntu:
(easiest) use the OpenStack Autopilot. As you noticed, currently this requires 7 machines.
Juju. You can use Juju to deploy and configure OpenStack services. You need to install MAAS and Juju, but you do not need to know about OpenStack as much.
(hardest) Apt. Follow the installation guide for Ubuntu on OpenStack.org and build your configuration manually. You will need to learn quite a bit about OpenStack.
All three solutions require hardware, because OpenStack is infrastructure, and while it can be faked to test patches, OpenStack with almost no hardware is... hardly useful.
If you want to test OpenStack in the small scale, not for production, I recommend for now you use:
apt-get install openstack
and try the single-system cloud configuration you can build there. Alternatively, you can try DevStack, which also gives you a (hardly realistic) cloud packed into one box.
Finally, I hear the next beta of the OpenStack Autopilot may require less hardware resources for non-HA configurations. You may want to wait a few weeks until that's out and see if we managed to bring the bar low enough for you - you may still need a third machine, but that would be all.
Q:
Custom Comma Separators?
I am working on an application that presents data to both Indian and US managers.
I'd like to display numbers with western-style comma separation (000,000,000) or Indian style (00,00,00,000) based on the currency being displayed. I don't want to use localize as figures need to be displayed consistently, no matter which region it is being viewed in.
For my purposes, the final three zeroes in each case can be dropped, so that I will either need a comma every third or second character, working from the end.
I'm starting with something like this:
def commas(str, format="US"):
if format == "IN":
x = 2
elif format == "US":
x = 3
new_str = ','.join(str[i:i+x] for i in range(0, len(str), x))
return new_str
This outputs a number like so: 815,532,89. Is there a way to reverse this so that I can get it to render properly?
A:
You can apply the commas to the reversed string and then reverse it again:
def commas(str, format="US"):
if format == "IN":
x = 2
elif format == "US":
x = 3
rev = str[::-1]
new_str = ','.join(rev[i:i+x] for i in range(0, len(rev), x))[::-1]
return new_str
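To confirm the behaviour (the same function, with the parameter renamed so it no longer shadows the builtin str):

```python
# Reverse the digits, group every x characters, then reverse back so the
# grouping is anchored at the right-hand end of the number.
def commas(s, fmt="US"):
    x = 2 if fmt == "IN" else 3
    rev = s[::-1]
    return ','.join(rev[i:i+x] for i in range(0, len(rev), x))[::-1]

print(commas("81553289"))        # 81,553,289
print(commas("81553289", "IN"))  # 81,55,32,89
```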
Q:
Can I use another way to get the time from C++?
string GetTime()
{
    time_t Timev;
    struct tm * TimeInformation;
    time(&Timev);
    TimeInformation = localtime(&Timev);
    char Timec[100];
    strftime(Timec, 100, "[%X]:", TimeInformation);
    string GetTimex(Timec);
    return GetTimex;
}
Why do I get the warning
warning C4996: 'localtime': This function or variable may be unsafe. Consider using localtime_s instead. To disable deprecation, use _CRT_SECURE_NO_WARNINGS.
Is there any other way to get time in format [Hour:Minutes:Second], and is it possible to shorten the code like int abc; abc=123 to int abc{123}?
A:
If you are willing to install Howard Hinnant's free, open-source tz library, then GetTime() can be simplified down to:
#include "tz.h"
#include <string>
std::string
GetTime()
{
using namespace std::chrono;
using namespace date;
return format("[%X]:", make_zoned(current_zone(), system_clock::now()));
}
This just output for me:
[10:42:32]:
Here are the installation directions (including for Windows).