Q:
Do all rules and lore go out the window if ysalamiri are used by Jedi?
For those who don't know, ysalamiri are in the Legends part of Star Wars - creatures that repel the Force. They seem to blunt the powers of anyone using the Force. If one of these creatures was used by a Jedi, who then kills another Jedi in anger - does the creature nullify any desire to change to the dark side?
A:
NO
Anger is an emotion, not a Jedi power. The ysalamiri repel the Force.
So a Jedi who kills somebody in anger will still be considered to "have slipped towards the Dark side".
Jedi code:
There is no emotion, there is peace.
There is no ignorance, there is knowledge.
There is no passion, there is serenity.
There is no death, there is the Force.
So that particular Jedi would have broken the first and third lines of the code that is the core of Jedi teachings. He would be considered to be on the path of the Dark side.
Sith code:
Through passion, I gain strength.
Through strength, I gain power.
Through power, I gain victory.
Through victory, my chains are broken.
The Force shall free me.
As you can see, killing in anger fulfils the first line of the Sith code.
Wearing a pet won't change that.
If this person is remorseful after the act, he or she can be redeemed. If they continue doing "Sith-y" things, they will just slip further down the dark path.
A:
I disagree with the answers stating that it does not make a difference. Drawing upon the dark side (as an angry Jedi will) is both corrupting and addictive beyond the act you are committing with the Force. See "forever will it dominate your destiny". Using the Force while angry will make you more open to the dark side.
Ysalamiri will prevent you from touching the Force, and therefore anger will not open the Jedi to the dark side.
Of course, the anger might point to self-control issues, and those will haunt the Jedi in the ysalamiri-free future.
| {
"pile_set_name": "StackExchange"
} |
Q:
Can't access request in python flask
I am trying to access request headers when a user visits a particular page. It is a cron function and, according to this link, I should be able to identify whether the request comes from the Service Account by checking for the header X-Appengine-Cron. I've tried this, and I have been using request throughout my application with no problems; however, for this it returns the error:
local variable 'request' referenced before assignment
I have tried replicating other functions which utilise request, and there is no issue there. I am using a very basic test to check it is working:
from flask import (
Blueprint, Flask, flash, g, redirect, render_template, request, url_for
)
@app.route('/cloud-datastore-export', methods = ['GET'])
def cloud_datastore_export():
test = request.headers.get('X-Appengine-Cron')
I expect test to contain the X-Appengine-Cron value.
A:
The message:
local variable 'request' referenced before assignment
is a good hint: Python resolved request as a local variable instead of the flask.request import, which happens when request is not imported (or is shadowed by an assignment) in the module containing the handler.
request is a context-local proxy that Flask populates under the hood while it dispatches a request to a method decorated with @app.route.
Hence:
You will be able to access request properly only from within handler functions decorated with @app.route (where app is an instance of the flask.Flask class), and only if request is imported from flask in that module.
For example:
from flask import Flask, request
# (some code here)
app = Flask(__name__)
@app.route('/path/to/this/method')
def my_handler():
headers = request.headers
# (some more logic here)
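As a framework-free footnote, the header lookup itself can be sketched with a bare WSGI app (all names and values below are illustrative, not from the question). HTTP headers arrive in the WSGI environ as HTTP_* keys, and Flask's request.headers is essentially a convenient view over that same mapping, so request.headers.get('X-Appengine-Cron') reads the value shown here:

```python
# Minimal WSGI sketch: an incoming "X-Appengine-Cron" header shows up in
# the environ as HTTP_X_APPENGINE_CRON; App Engine sets it to "true" for
# requests issued by the cron service.
from wsgiref.util import setup_testing_defaults

def cron_only_app(environ, start_response):
    # Equivalent to Flask's request.headers.get('X-Appengine-Cron')
    is_cron = environ.get('HTTP_X_APPENGINE_CRON') == 'true'
    status = '200 OK' if is_cron else '403 Forbidden'
    start_response(status, [('Content-Type', 'text/plain')])
    return [b'cron ok' if is_cron else b'forbidden']

# Exercise the app without starting a real server:
environ = {}
setup_testing_defaults(environ)
environ['HTTP_X_APPENGINE_CRON'] = 'true'

captured = {}
def start_response(status, headers):
    captured['status'] = status

result = b''.join(cron_only_app(environ, start_response))
print(captured['status'], result.decode())  # 200 OK cron ok
```

Whether Flask or raw WSGI, the check is the same string comparison; only the spelling of the lookup differs.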
| {
"pile_set_name": "StackExchange"
} |
Q:
Textbox in Excel
I have Gridview like this.
Here is my last column Gridview code;
<EditItemTemplate>
<asp:TextBox ID="txtTNOT" runat="server" Height="35" TextMode="MultiLine" DataSourceID="SqlDataSource8"></asp:TextBox>
<asp:SqlDataSource ID="SqlDataSource8" runat="server"
ConnectionString="<%$ ConnectionStrings:SqlServerCstr %>"
SelectCommand="SELECT [T_NOT] FROM [TAKIP] WHERE T_HESAP_NO = @T_HESAP_NO ">
<SelectParameters>
<asp:Parameter Name="T_HESAP_NO" Type="String" />
</SelectParameters>
</asp:SqlDataSource>
</EditItemTemplate>
My last column has a Textbox.
When I export to Excel with this code:
protected void LinkButton1_Click(object sender, EventArgs e)
{
Response.Clear();
Response.AddHeader("content-disposition", "attachment;filename=TahTakip.xls");
Response.Charset = "";
Response.ContentType = "application/vnd.xls";
System.IO.StringWriter stringWrite = new System.IO.StringWriter();
System.Web.UI.HtmlTextWriter htmlWrite = new HtmlTextWriter(stringWrite);
GridView1.RenderControl(htmlWrite);
Response.Write(stringWrite.ToString());
Response.End();
}
I still have a textbox in my Excel file.
How can I remove the textbox (but not the value inside the column) when exporting to Excel?
Best Regards,
Soner
A:
Try the approach shown in this link (http://mattberseth.com/blog/2007/04/export_gridview_to_excel_1.html). I would suggest replacing the TextBox with a Label control to avoid this.
| {
"pile_set_name": "StackExchange"
} |
Q:
help understanding part of Wiener's attack on RSA w/ small d
In this paper, the author writes:
Now $k\phi(N) = ed - 1 < ed$. Since $e < \phi(N)$, we see that $k <d < \frac{1}{3}N^{\frac{1}{4}}$. Hence, we obtain:
$\mid \frac{e}{N} - \frac{k}{d} \mid \leq \frac{1}{dN^{\frac{1}{4}}} < \frac{1}{2d^2}$
I am having trouble seeing how this last chain of inequalities was found. I see how to get the following:
$\mid \frac{e}{N} - \frac{k}{d} \mid = \frac{k}{d} - \frac{e}{N} < \frac{3d}{dN^{\frac{1}{2}}}$
I do not see how to get from here to $\frac{1}{2d^2}$.
Is anyone willing to explain this to me?
A:
The previous part of the proof in that paper already shows that $\left|\frac{e}{N} - \frac{k}{d}\right| \le \frac{3k}{d\sqrt{N}}$. As $k < d$ we estimate this by $\frac{3d}{d\sqrt{N}}$, where the $d$'s cancel to get $\frac{3}{\sqrt{N}} = \frac{3}{N^{1/2}} = \frac{3}{N^{1/4}} \cdot \frac{3}{N^{1/4}} \cdot \frac{1}{3}$.
As $d < \frac{1}{3}N^{\frac{1}{4}}$ we also get that $\frac{1}{d} > \frac{1}{\frac{1}{3}N^{{1 \over 4}}} = \frac{3}{N^{{1 \over 4}}}$.
Combining, we get that $\left|\frac{e}{N} - \frac{k}{d}\right| \le \frac{1}{3d^2}$, which is an even better bound (the 2 is likely the constant in the continued-fraction convergents theorem, which would explain its presence).
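Chaining the estimates together makes the bookkeeping explicit:
$$\left|\frac{e}{N}-\frac{k}{d}\right| \le \frac{3k}{d\sqrt{N}} < \frac{3}{\sqrt{N}} = \frac{3}{N^{1/4}}\cdot\frac{1}{N^{1/4}} < \frac{1}{d}\cdot\frac{1}{3d} = \frac{1}{3d^2} < \frac{1}{2d^2}.$$
Since the bound is below $\frac{1}{2d^2}$, the continued-fraction convergents theorem guarantees that $\frac{k}{d}$ appears among the convergents of $\frac{e}{N}$, which is what makes $d$ recoverable. A toy numeric sketch (the parameters are my own illustrative choices, not from the paper; pow(d, -1, phi) needs Python 3.8+):

```python
from fractions import Fraction

p, q = 1013, 1019                  # toy-sized primes (insecure, illustrative)
N, phi = p * q, (p - 1) * (q - 1)
d = 7                              # deliberately tiny private exponent
e = pow(d, -1, phi)                # e*d = 1 (mod phi)
k = (e * d - 1) // phi             # the k from k*phi = e*d - 1

# The two bounds discussed in the question:
assert d < N ** 0.25 / 3
assert abs(Fraction(e, N) - Fraction(k, d)) < Fraction(1, 2 * d * d)

def convergents(num, den):
    """Yield the continued-fraction convergents (h, k) of num/den."""
    h0, h1, k0, k1 = 0, 1, 1, 0    # standard recurrence seeds
    while den:
        a, (num, den) = num // den, (den, num % den)
        h0, h1 = h1, a * h1 + h0
        k0, k1 = k1, a * k1 + k0
        yield h1, k1

# Because |e/N - k/d| < 1/(2d^2), k/d must be one of the convergents,
# which is exactly how Wiener's attack recovers d from (e, N) alone.
assert (k, d) in convergents(e, N)
print("recovered k/d =", k, "/", d)
```

With real key sizes the same loop over convergents is the whole attack: each convergent denominator is a candidate for $d$.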
| {
"pile_set_name": "StackExchange"
} |
Q:
Linear Dependence Using Gaussian Elimination
Let $$\left\{\begin{pmatrix} 1\\ 2 \end{pmatrix},\begin{pmatrix} 4\\ 1 \end{pmatrix},\begin{pmatrix} 1\\ 1 \end{pmatrix}\right\}$$
Determine whether these are linearly independent or not; if not, exhibit a vector that is linearly dependent on the others.
I know that the row rank is equal to the column rank so I can look on the matrix of the vectors as is meaning
\begin{pmatrix} 1& 4 &1\\ 2 &1&1 \end{pmatrix} or
\begin{pmatrix} 1& 2 \\ 4 &1 \\ 1&1 \end{pmatrix}
In the first case I am looking at $$\alpha \begin{pmatrix} 1\\ 4\\ 1 \end{pmatrix}+ \beta \begin{pmatrix} 2\\ 1\\ 1\end{pmatrix}= \begin{pmatrix} 0\\ 0\\ 0\end{pmatrix}$$
So the only solution is $\alpha=\beta=0$, so the rank is $2$ and the three original vectors are linearly dependent; but I do not know how to generate one vector from the two others.
Unlike
$$\alpha \begin{pmatrix} 1\\ 2 \end{pmatrix}+ \beta \begin{pmatrix} 4\\ 1\end{pmatrix}+\gamma \begin{pmatrix} 1\\ 1 \end{pmatrix}= \begin{pmatrix} 0\\ 0\end{pmatrix}$$
In one case I am looking at $\ker(T)$ and in the other at $\text{Im}(T)$?
A:
With the matrix
$$
\begin{pmatrix}1&2\\4&1\\1&1\end{pmatrix}
$$
you're looking at two vectors in $\mathbb{R}^3$; doing row reduction will only show that the original vectors in $\mathbb{R}^2$ are linearly dependent.
With row reduction on the other matrix
$$
\begin{pmatrix}
1 & 4 & 1 \\
2 & 1 & 1
\end{pmatrix}
\to
\begin{pmatrix}
1 & 4 & 1 \\
0 & -7 & -1
\end{pmatrix}
$$
you get that the third vector is in the span of the first two. If you also find the reduced row echelon form
$$
\to
\begin{pmatrix}
1 & 4 & 1 \\
0 & 1 & 1/7
\end{pmatrix}
\to
\begin{pmatrix}
1 & 0 & 3/7 \\
0 & 1 & 1/7
\end{pmatrix}
$$
you also prove that
$$
\begin{pmatrix} 1 \\ 1 \end{pmatrix}
=
\frac{3}{7}\begin{pmatrix} 1 \\ 2 \end{pmatrix}
+
\frac{1}{7}\begin{pmatrix} 4 \\ 1 \end{pmatrix}
$$
because row operations don't change linear relations between the columns.
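As a quick sanity check on that last identity, exact rational arithmetic confirms the coefficients $\frac{3}{7}$ and $\frac{1}{7}$ (a small Python sketch, just to verify the arithmetic):

```python
# Verify (1,1) = (3/7)*(1,2) + (1/7)*(4,1) with exact fractions.
from fractions import Fraction

v1 = (Fraction(1), Fraction(2))
v2 = (Fraction(4), Fraction(1))
v3 = (Fraction(1), Fraction(1))

a, b = Fraction(3, 7), Fraction(1, 7)
combo = tuple(a * x + b * y for x, y in zip(v1, v2))

print(combo == v3)  # True: the third vector lies in the span of the first two
```

Using Fraction instead of floats avoids any rounding doubt about the 1/7 and 3/7 entries from the reduced row echelon form.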
| {
"pile_set_name": "StackExchange"
} |
Q:
Force a new start of "Cassini" when I start debugging a new instance of my web application
In our ASP.NET application we perform some initializations upon the Application Start event.
When the application is started in Visual Studio 2010 with 'Debug -> Start new instance', the ASP.NET Development Server does not restart, and my application's Start event is not fired.
My workaround is to stop the development server manually; is there a setting to force this automatically?
A:
I think setting Project properties > Web > Enable Edit and Continue forces Cassini to restart when debugging is started.
| {
"pile_set_name": "StackExchange"
} |
Q:
Mapbox GL JS markers go outside map
When the map loads, all of the burgers / markers are visible (I intentionally set the zoom to account for all the burgers in the area.) For some reason, when I pan around on the map or zoom in/out, the burgers / markers follow the pan and escape the map's bounds / edges. I tried using default markers and removing the script that programmatically adds popups to the markers. I'll post some relevant code here.
You can see that the burgers not only show up outside the map, but stretch the width of the window as they move.
HTML
<div class="content">
<div class="story-list"></div>
<div class="story-map">
<div class="story-map-container" id="story-map-container"></div>
</div>
</div>
CSS
.content {
padding: 6.5%;
width: 87%;
background-image: url("../media/images/temp-gradient-low.jpg");
background-repeat: no-repeat;
background-size: cover;
background-position: center;
}
/* story-list */
.story-list {
display: inline-block;
position: relative;
width: 66%;
z-index: 1;
vertical-align: top;
font-size: 0;
padding-bottom: 1%;
}
/* story-map */
.story-map { /* using id='' in order to override the position set by mapbox*/
/*background-color: white;*/
display: inline-block;
position: sticky;
top: 0;
width: 33%;
height: 100vh;
/*padding-left: 2.5%;*/
z-index: 0;
vertical-align: top;
/*float: right;*/
}
#story-map-container {
background-color: lightgreen;
width: 100%;
/*margin-left: 2.5%;*/
height: 100%;
overflow: visible;
}
.mapboxgl-map {
position: absolute;
overflow: visible;
}
.mapboxgl-marker {
background-image: url("../media/icons/burger-marker.png");
background-size: cover;
width: 50px;
height: 50px;
border-radius: 50%;
cursor: pointer;
}
.mapboxgl-popup {
max-width: 200px;
}
.mapboxgl-popup-content {
text-align: center;
}
JS
var map = null;
function initMapbox() {
mapboxgl.accessToken = 'pk.eyJ1IjoiZGFua3NreSIsImEiOiJjanNmbTA0YWkwdWx5NDNtdG1idHpwNTE3In0.Y16huX7_p26tsDlcJTWWFQ';
map = new mapboxgl.Map({
container: 'story-map-container',
style: 'mapbox://styles/mapbox/streets-v11',
zoom: 10,
center: [-118.338604, 34.083480]
});
}
function parseStuff() {
const lorem = `Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat. Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur. Excepteur sint occaecat cupidatat non proident, sunt in culpa qui officia deserunt mollit anim id est laborum. Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat. Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur. Excepteur sint occaecat cupidatat non proident, sunt in culpa qui officia deserunt mollit anim id est laborum. Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat.`;
const list = Array.from({length: 10}, (x,i) => {
return {
name: 'The Burger Place',
address: '123 Yumyum Hwy',
coordinates: {lat: 34.083480 + Math.random() * 0.1, lng: -118.348604 + Math.random() * 0.1},
phoneNumber: '1-123-456-7890',
website: {
text: 'BURGERSITE',
url: 'http://google.com'
},
description: 'A happy place for people who eat meat.',
review: lorem.substring(0, lorem.length * 0.6)
};
});
console.log(list);
list.forEach((element, index) => {
var customMarker = document.createElement('div');
customMarker.className = 'mapboxgl-marker';
customMarker.onclick = (e) => {
map.panTo([element.coordinates.lng, element.coordinates.lat]);
window.location.hash = `burger-place-${index}`
};
var popupContent = `<a href="${element.website.url}">${element.name}</a><br /><a href="tel:${element.phoneNumber}">${element.phoneNumber}</a>`
var marker = new mapboxgl.Marker(customMarker)
.setLngLat([element.coordinates.lng, element.coordinates.lat]);
marker.setPopup(new mapboxgl.Popup({ offset: 25 }).setHTML(popupContent))
.addTo(map);
})
$('.story-list').html(componentList);
}
window.onload = () => {
initMapbox();
parseStuff();
};
A:
.story-map {
    overflow: hidden;
    /* ... */
}
fixes your issue. Though I'm also somewhat confused as to why Mapbox draws the markers beyond its canvas.
| {
"pile_set_name": "StackExchange"
} |
Q:
Odd array behaviour in JavaScript
I'm attempting to draw boxes onto a canvas using JavaScript; my code works, but I'm having trouble with my arrays. Say I have a multi-dimensional array called map, declared like so:
var map = [
[0,1,1],
[0,0,1],
[0,1,1],
];
Where 1 is a box and 0 is blank space, but when I run my code the output looks like the following:
0,0,0
1,0,1
1,1,1
Is there any way to fix this so the output matches map? My code looks like this:
var canvas = null;
var ctx = null;
var x,y,count,inc,ax,ay;
var map = [
[0,0,0],
[1,0,1],
[1,1,1],
];
window.onload = function () {
canvas = document.getElementById("gameArea");
ctx = canvas.getContext("2d");
y=0;
x=0;
ax=0;
ay=0;
count=0;
inc=0;
for(;count<3;count++){
if(count>0){
inc = inc + 40;
console.log("inc:"+inc);
console.log();
}
ay=count;
console.log("ay:"+ay);
console.log();
y = y + inc;
console.log("y:"+y);
console.log();
for(;ax<3;x=x+40,ax++){
if(map[ax][ay]==1){
console.log(ax+","+ay)
console.log(map[ax][ay]);
console.log();
ctx.strokeRect(x,y,40,40);
console.log("block:"+x+","+y);
}
}
console.log();
x=0;
y=0;
ax=0;
}
};
And the HTML is as follows:
<!DOCTYPE html>
<html>
<head>
<meta charset="utf-8">
<title>Single Stage</title>
<script src="js/game.js" type="text/javascript">
</script>
<style type="text/css">
#gameArea{
display:block;
margin:0 auto;
background-color:#FFFFFF;
border: 1px solid black;
}
</style>
</head>
<body>
<canvas id="gameArea" width='800' height='480'></canvas>
</body>
</html>
A:
You've just mixed up your rows and columns.
Try switching map[ax][ay]==1 to map[ay][ax]==1.
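The row/column mix-up is easy to demonstrate in any language with nested lists; here is a short sketch (in Python, purely illustrative) using the 3×3 map from the question, showing that indexing [x][y] produces exactly the transposed output the question reports:

```python
# A 2-D list literal is written row by row: the first index selects the
# row (y) and the second selects the column (x).
grid = [
    [0, 1, 1],
    [0, 0, 1],
    [0, 1, 1],
]

# Reading with [y][x] reproduces the literal's layout...
rows_yx = [[grid[y][x] for x in range(3)] for y in range(3)]
# ...while [x][y] transposes it, which is the bug in the question's loop.
rows_xy = [[grid[x][y] for x in range(3)] for y in range(3)]

print(rows_yx == grid)  # True
print(rows_xy)          # [[0, 0, 0], [1, 0, 1], [1, 1, 1]] (the reported output)
```

The transposed result is byte-for-byte the "wrong" output in the question, which is strong evidence the indices were simply swapped.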
| {
"pile_set_name": "StackExchange"
} |
Q:
PhotoPicker & FB GraphSharer issues in iOS 11
My code was working fine on iOS 10 but after updating to iOS 11 nothing seems to work.
This is my code for sharing a video on Facebook:
internal func imagePickerController(_ picker: UIImagePickerController, didFinishPickingMediaWithInfo info: [String : Any]){
self.dismiss(animated: true, completion: { () -> Void in
})
guard let videoURL = info[UIImagePickerControllerReferenceURL] as? NSURL else {
return // No video selected.
}
print(videoURL)
let video = Video(url: videoURL as URL)
var content = VideoShareContent(video: video)
content.hashtag = Hashtag.init("#Ojas")
if FBSDKAccessToken.current() != nil{
if FBSDKAccessToken.current().hasGranted("publish_actions") {
print("Have permission")
let sharer = GraphSharer(content: content)
sharer.failsOnInvalidData = true
sharer.message = "From #Ojas App"
sharer.completion = { result in
// Handle share results
print("Share results : \(result)")
}
do{
try sharer.share()
//try shareDialog.show()
}catch{
print("Facebook share error")
}
}
    }
}
But nothing is working as it did before.
Here is log I see for ImagePicker :
[discovery] errors encountered while discovering extensions: Error Domain=PlugInKit Code=13 "query cancelled" UserInfo={NSLocalizedDescription=query cancelled}
And now there is an alert saying "app_name" wants to use "facebook.com" to Sign In.
Links I referred to:
PhotoPicker discovery error: Error Domain=PlugInKit Code=13
Any idea why everything stopped working on iOS 11? Any help would be appreciated.
A:
Okay, this is about asking for permissions, which I had already done at the beginning of my app. Still, I needed to ask again; I don't know why, but it worked.
PHPhotoLibrary.requestAuthorization({ (status: PHAuthorizationStatus) -> Void in
    if PHPhotoLibrary.authorizationStatus() == PHAuthorizationStatus.authorized {
        print("creating 2")
        // Implement the UIImagePickerController presentation here
    }
})
| {
"pile_set_name": "StackExchange"
} |
Q:
Form to form with PHP
I am trying to create a multi-step form where the user fills in the form on page1.php and, by submitting, goes on to page2.php for the next form. What would be the easiest way?
Here is my code:
<?php
if ($_SERVER['REQUEST_METHOD'] == 'POST') {
?>
<form id="pdf" method="post">
New project name:<input type="text" name="pr_name" placeholder="new project name..."><br/>
New project end date:<input id="datepicker" type="text" name="pr_end" placeholder="yyyy-mm-dd..."><br/>
<textarea class="ckeditor" name="pagecontent" id="pagecontent"></textarea>
<?php
if ($_POST["pr_name"]!="")
{
// data collection
$prname = $_POST["pr_name"];
$prend = $_POST["pr_end"];
$prmenu = "pdf";
$prcontent = $_POST["pagecontent"];
//SQL INSERT with error checking for test
$stmt = $pdo->prepare("INSERT INTO projects (prname, enddate, sel, content) VALUES(?,?,?,?)");
if (!$stmt) echo "\nPDO::errorInfo():\n";
$stmt->execute(array($prname,$prend, $prmenu, $prcontent));
}
// somehow I need to check this
if (data inserted ok) {
header("Location: pr-pdf2.php");
}
}
$sbmt_caption = "continue ->";
?>
<input id="submitButton" name="submit_name" type="submit" value="<?php echo $sbmt_caption?>"/>
</form>
I have changed it following Marc's advice, but I don't know how to check whether the SQL INSERT was OK.
Could someone give me a hint on this?
Thanks in advance
Andras
The solution, as I could not answer my own question (timed out):
Here is my final code. It could be a little simpler, but it works, and there are possibilities to check and upgrade later. Thanks to everyone, especially Marc.
<form id="pdf" method="post" action="pr-pdf1.php">
New project name:<input type="text" name="pr_name" placeholder="new project name..."><br/>
Email subject:<input type="text" name="pr_subject" placeholder="must be filled..."><br/>
New project end date:<input id="datepicker" type="text" name="pr_end" placeholder="yyyy-mm-dd..."><br/>
<textarea class="ckeditor" name="pagecontent" id="pagecontent"></textarea>
<?php
include_once "ckeditor/ckeditor.php";
$CKEditor = new CKEditor();
$CKEditor->basePath = 'ckeditor/';
// Set global configuration (will be used by all instances of CKEditor).
$CKEditor->config['width'] = 600;
// Change default textarea attributes
$CKEditor->textareaAttributes = array("cols" => 80, "rows" => 10);
$CKEditor->replace("pagecontent");
if ($_SERVER['REQUEST_METHOD'] == 'POST')
{
// data collection
$prname = $_POST["pr_name"];
$prsubject = $_POST["pr_subject"];
$prend = $_POST["pr_end"];
$prmenu = "pdf";
$prcontent = $_POST["pagecontent"];
//SQL INSERT with error checking for test
$stmt = $pdo->prepare("INSERT INTO projects (prname, subject, enddate, sel, content) VALUES(?,?,?,?,?)");
// error checking
if (!$stmt) echo "\nPDO::errorInfo():\n";
// SQL command check...
if ($stmt->execute(array($prname, $prsubject, $prend, $prmenu, $prcontent))){
header("Location: pr-pdf2.php");
}
else{
echo"Try again because of the SQL INSERT failing...";
};
}
$sbmt_caption = "continue ->";
?>
<input id="submitButton" name="submit_name" type="submit" value="<?php echo $sbmt_caption?>"/>
</form>
A:
A basic structure like this will do it:
form1.php:
<?php
if ($_SERVER['REQUEST_METHOD'] == 'POST') {
... process form data here ...
if (form data ok) {
... insert into database ...
}
if (data inserted ok) {
header("Location: form2.php");
}
}
?>
... display page #1 form here ...
And then the same basic structure for each subsequent page. Always submit the form back to the page it came from, and redirect to the next page if everything's ok.
| {
"pile_set_name": "StackExchange"
} |
Q:
Use Attribute to create Request IP constraint
I would like to do the following (Pseudo Code):
[InternalOnly]
public ActionResult InternalMethod()
{ //magic }
The "InternalOnly" attribute is for methods that should check the HttpContext request IP for a known value before doing anything else.
How would I go about creating this "InternalOnly" attribute?
A:
You could create a custom filter attribute:
public class InternalOnlyAttribute : FilterAttribute, IAuthorizationFilter
{
public void OnAuthorization (AuthorizationContext filterContext)
{
if (!IsIntranet (filterContext.HttpContext.Request.UserHostAddress))
{
throw new HttpException ((int)HttpStatusCode.Forbidden, "Access forbidden.");
}
}
private bool IsIntranet (string userIP)
{
// match an internal IP (ex: 127.0.0.1)
return !string.IsNullOrEmpty (userIP) && Regex.IsMatch (userIP, "^127");
}
}
| {
"pile_set_name": "StackExchange"
} |
Q:
PhoneGap/jQuery Mobile on Android gets pushed down
I'm just starting to use Phone Gap with jQuery Mobile so I don't quite know how to debug and what sort of debugging tools are available on the emulators.
The header title bar gets pushed down a bit on the Android emulator (AVD_for_Nexus_7_by_Google). However, everything looks fine on the iOS simulator and web browsers.
Any ideas what might be happening or what I might try to debug this?
Here's the code:
<!-- Start of index page -->
<div data-role="page" id="index">
<div data-role="header">
<a href="index.html" data-icon="home" class="ui-btn-left" data-transition="slide" data-direction="reverse">Home</a>
<h1>Hello!</h1>
</div><!-- /header -->
<div data-role="content">
<ul data-role="listview" data-filter="true" data-inset="true">
<li><a href="" data-transition="slide">hihi</a>
<ul data-role="listview" data-filter="true" data-inset="true">
<li><a href="" data-transition="slide">hihi1</a>
<ul data-role="listview" data-filter="true" data-inset="true">
<li>examplea</li>
<li>exampleb</li>
<li>exampleb</li>
</ul>
</li>
<li><a href="" data-transition="slide">hihi2</a>
<ul data-role="listview" data-filter="true" data-inset="true">
<li>example2</li>
<li>example2</li>
<li>example2</li>
</ul>
</li>
<li><a href="" data-transition="slide">hihi3</a>
<ul data-role="listview" data-filter="true" data-inset="true">
<li>example3</li>
<li>example3</li>
<li>example3</li>
</ul>
</li>
</ul>
</li>
</ul>
</div><!-- /content -->
</div><!-- /page -->
Here's an image of what I'm seeing:
Thank you for your time!
A:
Add margin-top: 0 in your CSS. The code is:
.ui-shadow, .ui-btn-up-a, .ui-btn-hover-a, .ui-btn-down-a, .ui-body-b, .ui-btn-up-b, .ui-btn-hover-b, .ui-btn-down-b, .ui-bar-c, .ui-body-c, .ui-btn-up-c, .ui-btn-hover-c, .ui-btn-down-c, .ui-bar-c, .ui-body-d, .ui-btn-up-d, .ui-btn-hover-d, .ui-btn-down-d, .ui-bar-d, .ui-body-e, .ui-btn-up-e, .ui-btn-hover-e, .ui-btn-down-e, .ui-bar-e, .ui-overlay-shadow, .ui-shadow, .ui-btn-active, .ui-body-a, .ui-bar-a {
box-shadow: none;
margin-top: 0;
text-shadow: none;
}
Just add this to your CSS. Also remember PhoneGap is easy, just make sure you read the documentation properly.
| {
"pile_set_name": "StackExchange"
} |
Q:
Scheduled database polling with WSO2 Data Services Server is not working
I am working through this Scheduled database polling with WSO2 Data Services Server blog post, on Ubuntu Linux with WSO2 DSS 3.0.1 and ESB 4.7.
While I am inserting values into the student_registration table, nothing is displayed on either the WSO2 ESB terminal or the WSO2 DSS terminal.
The scheduling is not working; can someone please help me solve this?
A:
Please share your DB content; you need to have an initial timestamp set in the timestamp table.
| {
"pile_set_name": "StackExchange"
} |
Q:
Adding adjacent ranges to existing range
I have a rather uncommon issue.
I'm trying to create a named range with several areas in it.
Using that, I will use the [area] method of Index to retrieve my data.
What I want is a named range in Excel referring to several areas:
MyNamedRange RefersTo = "A1:A3,B1:B3,C1:C3" etc.
This named range is being built with VBA, so what I have is:
For x = 1 to 10
Set rngTemp = Range(cells(x,1), cells(x,3))
if x > 1 then
Set unionRange = union(unionRange, rngTemp)
else
Set unionRange = rngTemp
end if
Next x
MyWorkBook.Names.Add Name:="MyRange", RefersTo:=unionRange
However, this range is set to A1:C3 (and thus is not divided into several areas).
I know it is because the ranges are adjacent to each other, but is there any way I can override this and make sure that Excel splits them up into several areas?
Best regards,
A:
Is this what you are trying?
Sub Sample()
Dim MyWorkBook As Workbook
Dim ws As Worksheet
Dim x As Long
Dim rngTemp As Range
Dim refR1C1 As String
'~~> Change this to the relevant workbook
Set MyWorkBook = ThisWorkbook
'~~> Change this to the relevant Sheet
Set ws = MyWorkBook.Sheets("Sheet1")
With ws
For x = 1 To 10
    Set rngTemp = .Range(.Cells(x, 1), .Cells(x, 3))
    '~~> Start the reference on the first pass, then append every area
    If refR1C1 = "" Then refR1C1 = "="
    refR1C1 = refR1C1 & .Name & "!" & rngTemp.Address(ReferenceStyle:=xlR1C1) & ","
Next x
refR1C1 = Left(refR1C1, Len(refR1C1) - 1)
End With
Debug.Print refR1C1
MyWorkBook.Names.Add Name:="MyRange", RefersToR1C1:=refR1C1
Debug.Print Range("MyRange").Areas.Count
End Sub
Screenshot:
| {
"pile_set_name": "StackExchange"
} |
Q:
package problems in java
I wrote a source file and put the public class in a package:
package abc;
public class Employee
{
// Constructor
public Employee(String name, double salary)
{
this.name = name;
this.salary = salary;
}
// Methods
public String getName()
{
return this.name;
}
public double getSalary()
{
return this.salary;
}
// instance field
private String name;
private double salary;
}
Then I try to compile it using the command javac Employee.java; it generates a .class file in the same directory as the source file.
Now I try to use this package, so I write a source file :
import abc.*;
public class HelloWorld
{
public static void main(String args[]){
//System.out.println("hello world");
Employee aEmployee = new Employee("David",1000);
System.out.println(aEmployee.getName() + aEmployee.getSalary());
}
}
I try to compile it using javac HelloWorld.java, but it fails with an error that says: package abc doesn't exist.
I have the following questions:
1) Why did this error happen ?
2) How to solve this problem ?
3) Each time I package some classes, where can I find the package to use afterwards?
I've read some docs about this, but they're rather complex; can somebody explain it simply?
A:
1) Why did this error happen ?
This is because the Java compiler looks for a directory tree when it tries to load a package, either on the classpath or in a jar file. This means that for a package called abc.foo.bar, it will look for the directory tree /abc/foo/bar and expect classes that belong to that package to be there. You've compiled your Employee class, but when you import it, the compiler looks for a directory abc in your classpath, and it's not there.
2) How to solve this problem ?
You need to make sure that when you compile the Employee class, its class file is in a directory abc which is somewhere in your classpath. The simplest thing may be to create a directory called abc, move the Employee.java file into the abc directory, and then compile:
javac abc/Employee.java
This will create a Employee.class file in the abc directory. Then you can compile your HelloWorld:
javac HelloWorld.java
3) Each time when I package some classes, where can I find the package to use afterwards
In the directory tree that you've named your package. See the later part of the response to 1).
| {
"pile_set_name": "StackExchange"
} |
Q:
java swt jface TreeViewer expanding from node
How can I expand a node all the way up to the root node?
So far I have this method to expand its parent nodes recursively:
private void expand( Object object ) {
if ( object.getParent() != null ) {
tree.setExpandedState( object.getParent(), true );
expand( object.getParent() );
}
}
A:
Use the expandToLevel TreeViewer method:
viewer.expandToLevel(element, 1);
element can be your model element (the object your content provider provides) or it can be a TreePath. You may need to call setUseHashlookup(true) on the viewer to speed up element lookup.
| {
"pile_set_name": "StackExchange"
} |
Q:
Is there a 'correct frame' in general relativity?
The stress-energy-momentum tensor in GR has components that vary with velocity (e.g. the top-left energy-density component).
Depending on your frame of reference (i.e. your relative velocity with respect to some point in space), the energy density at that point could be wildly different, as I believe it is given by $ \frac{E_{rel}}{c^2} $.
This surprises me, since it means one observer could see a point in spacetime as wildly energy-dense, another not so much, and that could lead to either:
inconsistent curvatures
OR inconsistent constants (i.e. despite measuring different stress-energy-momentum tensors, they still ended up with the same curvature, but the constants' relationship must then be different)
This can't be avoided unless there is a notion of a proper frame of reference for each point in spacetime, which also feels wrong. Where am I going wrong here?
A:
It might be unsettling, but no, there is no such preferred frame of reference. However, you can still check which quantities are actually invariant under a change of frame.
| {
"pile_set_name": "StackExchange"
} |
Q:
It's easy to track down the etymology of the verb to calve. What is the origin of glacial calving?
Is this term recent and is it supposed to parallel birthing a calf?
A:
Google Books searches do not turn up such early matches as those mentioned in Josh61's answer from Etymonline (1837 and 1818). The Google Books matches go back only to the 1850s. From Henry Cheever, The Whale and His Captors, or, The Whaleman's Adventures and the Whale's Biography (1851):
One large American whaler, the Pacific, was lost in the year 1807, by mooring alongside of a lofty iceberg. Conscious of the danger, the seamen had taken the precaution to run out lines so as to ride at some distance from the ice. A boat was sent off with the mooring anchors, which are shaped like the letter S, and let into the ice by means of holes cut for them with a hatchet. On one of the seamen striking a projecting piece, a crackling noise was heard, and presently several large pieces, or calves, as they are technically called, fell off into the sea. Such occurrences, however, are sufficiently common to excite no unusual surprise, though indicating the unsound state of the iceberg; but before the boat could row back to the ship, the entire mass, of probably many millions of tons of weight, gave way with a tremendous crash, flinging an immense crag from its summit right in the direction of the ship, which stove in its side. ... An equally sudden, but less fatal accident, occurred to the Thomas of Hull, in 1812, while lying moored to an iceberg in Davis' Straits [between Labrador and Greenland]. The more common mode of the iceberg calving, as it is called in seaman's phrase, is by large masses tumbling from above; sometimes, however, they get detached at the base, by the grounding of the floating mass far beneath, and rise with great velocity to the surface. Such was the danger from which the crew of the Thomas narrowly escaped. A calf, detached from beneath, rose with such tremendous force, that the keel of the ship was lifted on a level with the water at the bow, and the stern was nearly immersed beneath the surface. Fortunately the blow was received on the keel, and the ship was not materially damaged; but had it struck the side of the vessel, as in the previous case, it would probably have stove it in and sunk it.
From H. Rink, "On the large Continental Ice of Greenland, and the Origin of the Icebergs in the Arctic Seas," in The Journal of the Royal Geographical Society of London (May 9, 1853):
In considering the manner in which the ice moves from the interior down into the ice-friths, and its breaking up there; and how the calving, or liberation of the floating icebergs, is effected, the following special remarks may help to explain and illustrate the earlier views formed on the subject in different records of travels. ... In the meanwhile this weighty plane body [of glacial ice now submerged as it is pushed into the ice-frith] preserves its continuity in progressive motion over the old beach at the bottom of the sea unchanged as when on shore, till the outer end has reached a depth in which the water begins to bear it up, where, still preserving its connection [to the parent glacier], it proceeds, thus borne up by the sea, till some exterior cause makes the connection cease, when the outer end breaks off and becomes a floating iceberg. This action is called calving, and such is the concussion, that it sometimes sets the sea in motion to a distance of 16 miles. ... Continental ice, or gletchers, which are exposed to much action of the sea, produce only small ice-calves, and no icebergs, or at best, of small size. It is uncertain whether the continental ice advances gradually and regularly, or periodically. Its breaking or calving is altogether independent of this, as it seems to depend on outward causes, in such wise that the station or limit, to which the outer end may attain, is uncertain, and may sometimes proceed much further without breaking than at other times; ...
And from John Towson, "Icebergs in the Southern Ocean; A Paper Read before the Historic Society of Lancashire and Cheshire, on the 19th of November 1857" (1859):
Icebergs are not the produce of one season; on the contrary, there is reason to believe that these masses commenced their formation at a period equally remote with that of the origin of some of our tertiary rocks. They are of the same nature as the glaciers of the warmer regions of the earth; but instead of being melted in the valleys, they are pressed forward into the ocean till at length the water is sufficient to float them, and immense blocks are broken off. This process has been termed by the Greenland whale fishermen, the "calving" of an iceberg.
All three of these early examples involve whalers working in the vicinity of Greenland, so the attribution in Towson's 1857 paper of the term calving to "the Greenland whale fishermen" may be as reasonable a conjecture as we can hope for. I note that very young whales are also termed calves, as are very young walruses (when they aren't called pups). It may be that whalers were disposed to view the birth of something so large as an iceberg (or a large fragment from an iceberg) as being comparable to the birth of a baby whale, and to give it the same name.
| {
"pile_set_name": "StackExchange"
} |
Q:
Unexpected result doing 64-bit negative integer arithmetic
The best way to illustrate my question is through this snippet from my actual code:
ULONG uiDelaySec = 5;
ULONGLONG iiDaysOver = 0;
LONGLONG ii = -10000000LL * (LONGLONG)uiDelaySec / (2 + iiDaysOver);
//Why do I get 'ii' equal to 9223372036829775808?
PS. I'm running this code in Visual Studio 2008.
A:
Let's consider how this statement
LONGLONG ii = -10000000LL * (LONGLONG)uiDelaySec / (2 + iiDaysOver);
is executed step by step.
At first there is executed subexpression
-10000000LL * (LONGLONG)uiDelaySec
It has type LONGLONG and its value is
-50000000
Then this result has to be divided by the operand (2 + iiDaysOver), which has type ULONGLONG because iiDaysOver is defined as
ULONGLONG iiDaysOver = 0;
To perform the operation the compiler has to bring both operands to a common type. The left operand has type LONGLONG while the right operand has type ULONGLONG. According to the rules of the usual arithmetic conversions, the signed type is converted to the unsigned type when both types have the same rank.
From the C Standard
6.3.1.1 Boolean, characters, and integers
— The rank of any unsigned integer type shall equal the rank of the
corresponding signed integer type, if any.
and
6.3.1.8 Usual arithmetic conversions
Otherwise, both operands are converted to the unsigned integer type
corresponding to the type of the operand with signed integer type.
Thus the left operand will be converted to type ULONGLONG, and the negative value of the left operand
-50000000
reinterpreted as a non-negative value becomes
18446744073659551616
To confirm that this is indeed what happens, you can insert the statement
printf( "%llu\n", ( unsigned long long )-50000000LL );
or, using your typedefs for the fundamental types,
printf( "%llu\n", ( ULONGLONG )-50000000LL );
Dividing this value by 2, you get the unsigned value
9223372036829775808
which can be represented as a non-negative value in both LONGLONG and ULONGLONG. So this value is assigned to ii with its value unchanged.
Q:
How to implement django-fluent-contents to Mezzanine existing project
I have an existing Mezzanine project with existing pages. Is it possible to add the fluent-contents feature to the page admin without the fluent-pages feature? I just want to keep Mezzanine's page creation as it is, but with fluent-contents in it. Is this possible, and can anybody show an example of adding it to the Mezzanine PageAdmin?
A:
Since nobody else had this problem and I have already figured it out, here is how I successfully added fluent-contents to an existing Mezzanine project.
It is quite simple, although the investigation required digging into the core sources of the Mezzanine CMS. The resulting solution is a simple app that extends Mezzanine pages on both the admin and the client side.
(DIFFICULTY: medium/expert)
SOLUTION:
(for this example I had used app with name "cms_coremodul")
PS: It was made with ver. Python 3.4 with virtual environment.
MEZZANINE SETUP AND INSTALLS:
-version of Mezzanine 4.0.1
-install fluent-contents with desired plugins what you need
(follow fluent-contents docs).
pip install django-fluent-contents
-you can also optionally install the powerful WYSIWYG editor CKEditor.
pip install django-ckeditor
-after everything is installed, let's set up settings.py and migrate everything up.
settings.py :
-fluent-contents has to be listed above your app and below the Mezzanine apps.
INSTALLED_APPS = (
...
"fluent_contents",
"django_wysiwyg",
"ckeditor",
# all working fluent-contents plugins
'fluent_contents.plugins.text', # requires django-wysiwyg
'fluent_contents.plugins.code', # requires pygments
'fluent_contents.plugins.gist',
'fluent_contents.plugins.iframe',
'fluent_contents.plugins.markup',
'fluent_contents.plugins.rawhtml',
'fluent_contents.plugins.picture',
'fluent_contents.plugins.oembeditem',
'fluent_contents.plugins.sharedcontent',
'fluent_contents.plugins.googledocsviewer',
...
'here_will_be_your_app',
)
-settings for django-ckeditor:
settings.py :
# CORE MODUL DEFAULT WYSIWYG EDITOR SETUP
RICHTEXT_WIDGET_CLASS = "ckeditor.widgets.CKEditorWidget"
RICHTEXT_FILTER_LEVEL = 3
DJANGO_WYSIWYG_FLAVOR = "ckeditor"
# CKEditor config
CKEDITOR_CONFIGS = {
'awesome_ckeditor': {
'toolbar': 'Full',
},
'default': {
'toolbar': 'Standard',
'width': '100%',
},
}
-after the settings.py setup for fluent-contents is complete, let's migrate everything up:
python manage.py migrate
-if there is any error about a missing fluent-contents dependency, install that dependency and migrate again.
CREATE NEW APP FOR FLUENT-CONTENTS:
Create a new app in the Mezzanine project (same as in Django):
python manage.py startapp nameofyourapp
models.py :
# -*- coding: utf-8 -*-
from __future__ import unicode_literals
from django.db import models
from mezzanine.pages.models import Page
from django.utils.translation import ugettext_lazy as _
from fluent_contents.models import PlaceholderRelation, ContentItemRelation
from mezzanine.core.fields import FileField
from . import appconfig
class CoreModulPage(Page):
template_name = models.CharField("Template choice", max_length=255, choices=appconfig.TEMPLATE_CHOICES, default=appconfig.COREMODUL_DEFAULT_TEMPLATE)
# Accessing the data of django-fluent-contents
placeholder_set = PlaceholderRelation()
contentitem_set = ContentItemRelation()
class Meta:
verbose_name = _("Core page")
verbose_name_plural = _("Core pages")
admin.py :
from django.contrib import admin
from django.http.response import HttpResponse
from mezzanine.pages.admin import PageAdmin
import json
# CORE MODUL IMPORT
from fluent_contents.admin import PlaceholderEditorAdmin
from fluent_contents.analyzer import get_template_placeholder_data
from django.template.loader import get_template
from .models import CoreModulPage
from . import appconfig
from fluent_contents.admin.placeholdereditor import PlaceholderEditorInline
class CoreModulAdmin(PlaceholderEditorAdmin, PageAdmin):
#################################
#CORE MODUL - PAGE LOGIC
#################################
corepage = CoreModulPage.objects.all()
# CORE FLUENT-CONTENTS
# This is where the magic happens.
# Tell the base class which tabs to create
def get_placeholder_data(self, request, obj):
# Tell the base class which tabs to create
template = self.get_page_template(obj)
return get_template_placeholder_data(template)
def get_page_template(self, obj):
# Simple example that uses the template selected for the page.
if not obj:
return get_template(appconfig.COREMODUL_DEFAULT_TEMPLATE)
else:
return get_template(obj.template_name or appconfig.COREMODUL_DEFAULT_TEMPLATE)
# Allow template layout changes in the client,
# showing more power of the JavaScript engine.
# THIS LINES ARE OPTIONAL
# It sets your own path to admin templates and static of fluent-contents
#
# START OPTIONAL LINES
# this "PlaceholderEditorInline.template" is in templates folder of your app
PlaceholderEditorInline.template = "cms_plugins/cms_coremodul/admin/placeholder/inline_tabs.html"
# this "PlaceholderEditorInline.Media.js"
# and "PlaceholderEditorInline.Media.css" is in static folder of your app
PlaceholderEditorInline.Media.js = (
'cms_plugins/cms_coremodul/admin/js/jquery.cookie.js',
'cms_plugins/cms_coremodul/admin/js/cp_admin.js',
'cms_plugins/cms_coremodul/admin/js/cp_data.js',
'cms_plugins/cms_coremodul/admin/js/cp_tabs.js',
'cms_plugins/cms_coremodul/admin/js/cp_plugins.js',
'cms_plugins/cms_coremodul/admin/js/cp_widgets.js',
'cms_plugins/cms_coremodul/admin/js/fluent_contents.js',
)
PlaceholderEditorInline.Media.css = {
'screen': (
'cms_plugins/cms_coremodul/admin/css/cp_admin.css',
),
}
PlaceholderEditorInline.extend = False # No need for the standard 'admin/js/inlines.min.js' here.
#
# END OPTIONAL LINES
# template to change rendering template for contents (combobox in page to choose desired template to render)
change_form_template = "cms_plugins/cms_coremodul/admin/page/change_form.html"
class Media:
js = (
'cms_plugins/cms_coremodul/admin/js/coremodul_layouts.js',
)
def get_layout_view(self, request):
"""
Return the metadata about a layout
"""
template_name = request.GET['name']
# Check if template is allowed, avoid parsing random templates
templates = dict(appconfig.TEMPLATE_CHOICES)
if template_name not in templates:  # dict.has_key() was removed in Python 3
jsondata = {'success': False, 'error': 'Template was not found!'}
status = 404
else:
# Extract placeholders from the template, and pass to the client.
template = get_template(template_name)
placeholders = get_template_placeholder_data(template)
jsondata = {
'placeholders': [p.as_dict() for p in placeholders],
}
status = 200
jsonstr = json.dumps(jsondata)
return HttpResponse(jsonstr, content_type='application/json', status=status)
admin.site.register(CoreModulPage, CoreModulAdmin)
appconfig.py :
-you have to create a new appconfig.py file in your app.
from django.conf import settings
from django.core.exceptions import ImproperlyConfigured
TEMPLATE_CHOICES = getattr(settings, "TEMPLATE_CHOICES", ())
COREMODUL_DEFAULT_TEMPLATE = getattr(settings, "COREMODUL_DEFAULT_TEMPLATE", TEMPLATE_CHOICES[0][0] if TEMPLATE_CHOICES else None)
if not TEMPLATE_CHOICES:
raise ImproperlyConfigured("Value of variable 'TEMPLATE_CHOICES' is not set!")
if not COREMODUL_DEFAULT_TEMPLATE:
raise ImproperlyConfigured("Value of variable 'COREMODUL_DEFAULT_TEMPLATE' is not set!")
settings.py :
-this lines add to settings.py of your Mezzanine project.
# CORE MODUL TEMPLATE LIST
TEMPLATE_CHOICES = (
("pages/coremodulpage.html", "CoreModulPage"),
("pages/coremodulpagetwo.html", "CoreModulPage2"),
)
# CORE MODUL default template setup (if drop-down not exist in admin interface)
COREMODUL_DEFAULT_TEMPLATE = TEMPLATE_CHOICES[0][0]
-append your app to INSTALLED_APPS (add your app to INSTALLED_APPS).
INSTALLED_APPS = (
...
"yourappname_with_fluentcontents",
)
create templates for your contents of your app:
-template with one placeholder:
coremodulpage.html:
{% extends "pages/page.html" %}
{% load mezzanine_tags fluent_contents_tags %}
{% block main %}{{ block.super }}
{% page_placeholder page.coremodulpage "main" role='m' %}
{% endblock %}
-template with two placeholders (one aside):
{% extends "pages/page.html" %}
{% load mezzanine_tags fluent_contents_tags %}
{% block main %}{{ block.super }}
{% page_placeholder page.coremodulpage "main" role='m' %}
<aside>
{% page_placeholder page.coremodulpage "sidepanel" role='s' %}
</aside>
{% endblock %}
-after your app is set up, let's make migrations:
-1.Make migration of your app:
python manage.py makemigrations yourappname
-2.Make migration of your app to database:
python manage.py migrate
FINALLY COMPLETE!
- Try your new type of admin page with the fluent-contents plugin installed.
- In the Page-type dropdown in the admin, select Core Page. If you have created render templates, the fluent-contents tab shows up with placeholders and a template dropdown in it. Now you can select the desired plugin and create the modular content of your page.
Q:
Wordpress + Custom Taxonomy + Permalinks
I know there has been similar questions before, but I can't find a solution and feel that mine might be slightly unique.
I have a few custom post types + taxonomies to go with.
Post Type = Product
Taxonomy = Product_Categories
My Test Site is: http://tech.stickystudios.ca/
If you are able to visit, Products -> Broadcast, click on a category on the left...
I am unable to get anything to show up in these pages, no matter how I play with the URL.
Some Extra Information on Plugins being used.
- Magic Fields 2
- Query Wrangler
- Woo Commerce (for the 'components' page)
It seems to be a trend on my entire site things with 'categories' just don't want to 'list' properly.
Any help or guidance would be greatly appreciated!
A:
From what I've understood, you're trying to upgrade your permalinks to utilize the custom post type and taxonomies in order to cross-reference them.
The simplest method to use this is...
example.com/?cat=1
OR
example.com/?cat=1,2,3&tag=tag1,tag2
This will only include the terms; it does not require specific category terms and tag slugs, but it does require posts to have at least one matching term and tag. The post type defaults to Posts. In order to utilize the permalinks with post types and taxonomies, you have to identify and use the slugs and IDs (category IDs only).
Categories = cat=1,2,3 (IDs)
Tags = tag=tag_slug1,tag_slug2,tag_slug3
(Custom) Post Types = post_type=post_type_slug
Custom Taxonomies = taxonomy_slug=term_slug1,term_slug2,term_slug4
More advanced methods of using this...
example.com/?post_type=posts&cat=21,32&tag=one&taxonomy_slug=term_slug1,term_slug2
OR
example.com/?post_type=foods&cat=12,43&tag=fruit,veg&color_taxonomy=red,white,purple
Using this method will allow you to search within a specific post type (which allows only one slug), include terms within the taxonomies, as well as require at least one of the IDs and slugs being used in each taxonomy. One known plugin that utilizes this and offers a dynamic navigation sidebar is Taxonomy Picker, which should allow you to experiment with the URL navigation. Another plugin that will allow you to create lists of posts and pages is Advanced Post List, which can list multiple post types and include/require taxonomies as well as add terms from the current post/page. It requires a little more work to create your lists, but it goes a step further than what WordPress has to offer.
Q:
Create signed URLs for Google Cloud Storage with node.js for direct upload from browser
actual testcase code: https://github.com/HenrikJoreteg/google-cloud-signedurl-test-case
I'm trying to add ability for my API to return signed URLs for direct upload to Google Cloud Storage from the client.
Serverside, I'm using the gcloud SDK for this:
const gcloud = require('gcloud')
const gcs = gcloud.storage({
projectId: 'my project',
keyFilename: __dirname + '/path/to/JSON/file.json'
})
const bucket = gcs.bucket('bucket-name')
bucket.file('IMG_2540.png').getSignedUrl({
action: 'write',
expires: Date.now() + 60000
}, (error, signedUrl) => {
if (error == null) {
console.log(signedUrl)
}
})
Then in the browser I've got an <input type='file'/> that I've selected a file with, then I attempt to post it to the URL generated from my server-side script like this:
function upload(blobOrFile, url) {
var xhr = new XMLHttpRequest();
xhr.open('PUT', url, true);
xhr.onload = function(e) {
console.log('DONE!')
};
xhr.upload.onprogress = function(e) {
if (e.lengthComputable) {
console.log((e.loaded / e.total) * 100)
}
};
xhr.send(blobOrFile);
}
// grab the `File` object dropped (which incidentally
// matches the file name used when generating the signed URL
upload($('[name=file]').files[0], 'URL GENERATED FROM SERVER-SIDE SCRIPT HERE');
What happens?
Response is:
<Error>
<Code>SignatureDoesNotMatch</Code>
<Message>The request signature we calculated does not match the signature you provided. Check your Google secret key and signing method.</Message>
<StringToSign>PUT
image/png
1476631908
/bucket-name/IMG_2540.png</StringToSign>
</Error>
I've re-downloaded the JSON key file to make sure it's current and has proper permissions to that bucket and I don't get any errors or anything when generating the signed URL.
The clientside code appears to properly initiate an upload (I see progress updates logged out) then I get the 403 error above. Filenames match, content-types seem to match expected values, expiration seems reasonable.
The official SDK generated the URL, so it seems like it'd be ok.
I'm stuck, any help appreciated.
A:
As was pointed out by Philip Roberts, aka @LatentFlip on my github repo containing this case, adding a content-type to the signature took care of it.
https://github.com/HenrikJoreteg/google-cloud-signedurl-test-case/pull/1/commits/84290918e7b82dd8c1f22ffcd2c7cdc06b08d334
Also, it sounds like the Google folks are going to update docs/error to be a bit more helpful: https://github.com/GoogleCloudPlatform/google-cloud-node/issues/1695
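For reference, here is a sketch of the fix. The contentType option name matches the google-cloud-node getSignedUrl documentation of that era, but treat the exact signature as an assumption against your SDK version:

```javascript
// Server side: include the content type in the signature, so it becomes
// part of the StringToSign that Google computes on its end.
function makeSignOptions(contentType) {
  return {
    action: 'write',
    expires: Date.now() + 60000,
    contentType: contentType // e.g. 'image/png'
  };
}

// bucket.file('IMG_2540.png').getSignedUrl(makeSignOptions('image/png'), cb);

// Client side: the PUT must send the exact same Content-Type header,
// otherwise the check fails with SignatureDoesNotMatch.
function upload(blobOrFile, url, contentType) {
  var xhr = new XMLHttpRequest();
  xhr.open('PUT', url, true);
  xhr.setRequestHeader('Content-Type', contentType);
  xhr.onload = function () { console.log('DONE!'); };
  xhr.send(blobOrFile);
}
```

The key point is that the signed string and the actual request must agree on the content type, either both specifying it or both omitting it.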
Q:
Best way to prevent duplicate rows in database
I am using Hibernate and looking for the best practice to avoid inserting the same row into the database twice. I wrote a program which saves results of an API search depending on user input; when the input is the same, I get duplicated rows (identical apart from the id) in the database. This is problematic for future use.
Each insert must be UNIQUE.
Part of entity model :
@Entity
@Component
public class Flight {
private Long id;
private String departure;
private String currency;
private String destination;
private BigDecimal price;
... etc //
Save code:
@RequestMapping(value = "/save", method = RequestMethod.GET)
public String save(
@ModelAttribute("FlightDTO") FlightDTO flightDTO,
@ModelAttribute("FlightOutput") Map<String, Map<String, FlightDeserialization>> flightOutputMap,
@ModelAttribute("HotelOutput") ArrayList<Comparison> hotelOutputList,
@ModelAttribute("HotelGooglePlaceList") Map<Integer, List<PlacesResults>> hotelGooglePlaceList,
@ModelAttribute("HotelGoogleImageList") Map<Integer, List<Imageresults>> hotelGoogleImageList) {
boolean isHotelGoogleListEqualToHotelOutputList = hotelGooglePlaceList.keySet().size() == hotelOutputList.size() & hotelGooglePlaceList.keySet().size() == hotelGoogleImageList.size();
Flight flight;
for (String keyOut : flightOutputMap.keySet()) {
for (String keyIn : flightOutputMap.get(keyOut).keySet()) {
flight = new Flight();
flight.setCurrency(flightDTO.getCurrency());
flight.setAirline(csvParser.airlineParser(flightOutputMap.get(keyOut).get(keyIn).getAirline()));
flight.setAirlineIata(flightOutputMap.get(keyOut).get(keyIn).getAirline());
flight.setDepartureTime(flightOutputMap.get(keyOut).get(keyIn).getDepartureTime());
flight.setReturnTime(flightOutputMap.get(keyOut).get(keyIn).getReturnTime());
flight.setFlightNumber(flightOutputMap.get(keyOut).get(keyIn).getFlightNumber());
flight.setPrice(flightOutputMap.get(keyOut).get(keyIn).getPrice());
flight.setExpiresAt(flightOutputMap.get(keyOut).get(keyIn).getExpiresAt());
flight.setDestination(flightDTO.getDestination());
flight.setDeparture(flightDTO.getDeparture());
flight.setUserName(CurrentUserName().getUsername());
if (isHotelGoogleListEqualToHotelOutputList) {
Hotel hotel;
BigDecimal exchangeRate = currencyRepository.findByName(flightDTO.getCurrency()).getExchangeRate();
for (int i = 0; i < hotelOutputList.size(); i++) {
Comparison array = hotelOutputList.get(i);
hotel = new Hotel();
hotel.setImage(hotelGoogleImageList.get(i).get(1).getImage());
hotel.setHotelLink(hotelGoogleImageList.get(i).get(0).getLink());
hotel.setLatitude(hotelGooglePlaceList.get(i).get(0).getGps_coordinates().getLatitude());
hotel.setLongitude(hotelGooglePlaceList.get(i).get(0).getGps_coordinates().getLongitude());
hotel.setCurrency(flightDTO.getCurrency());
hotel.setName(array.getHotel());
hotel.setSite(array.getVendor1());
hotel.setSite(array.getVendor2());
hotel.setPrice(hotelCurrencyService.amountCalculator(array.getVendor1Price(), exchangeRate));
hotel.setPrice(hotelCurrencyService.amountCalculator(array.getVendor2Price(), exchangeRate));
hotel.setPrice(hotelCurrencyService.amountCalculator(array.getVendor3Price(), exchangeRate));
flight.setHotel(hotel);
flightRepository.save(flight);
I have tried using @UniqueConstraint and @Unique, but I guess they are intended for something else.
Please Help me !
A:
First, you don't need @Component on your entity.
Second, add this annotation
@Table(name = "table_name", uniqueConstraints={@UniqueConstraint(columnNames ={"id", "departure", ...})})
with fields that should not be duplicated.
NB: you need to add spring.jpa.hibernate.ddl-auto=update to application.properties
Q:
An idiom for "tangential association"
Is there an idiom to replace 'tangential association' in this sentence?
" She pictured this man, Jared, with the woman, Lisa, but failed to connect the two in any way but tangential association. "
Not really, what I'm looking for is the phrase that describes connecting two things without any 'normal' or objective, clear association
Thanks.
A:
The phrase you may be looking for is
by association
which means there is an unspecified connection between two things, as in
He was guilty by association.
meaning he was guilty due to some unspecified involvement with whatever happened.
So, your sentence might be reworded as
She thought Jared was involved with Lisa by association.
meaning she thought they were in a relationship because they were together.
Q:
Reduce multiple columns into one using pandas
I have several columns in a DataFrame that I would like to combine into one column:
from functools import reduce # python 3.x
na=pd.np.nan
df1=pd.DataFrame({'a':[na,'B',na],'b':['A',na,na],'c':[na,na,'C']})
print(df1)
a b c
0 NaN A NaN
1 B NaN NaN
2 NaN NaN C
The output I am trying to get is supposed to look like (column name doesn't matter):
a
0 A
1 B
2 C
I get ValueError: cannot index with vector containing NA / NaN values when I run this line of code:
reduce(lambda c1,c2: df1[c1].fillna(df1[c2]),df1.loc[:,'a':'c'])
However, it seems to work when I change the sequence argument of reduce to just two columns df1.loc[:,'a':'b']:
reduce(lambda c1,c2: df1[c1].fillna(df1[c2]),df1.loc[:,'a':'b'])
0 A
1 B
2 NaN
Name: a, dtype: object
I've also tried to use the DataFrame/Series .combine method, but that produces the same error. I would like to try to get this working in case I ever want to fill non-nan values:
reduce(lambda c1,c2: df1[c1].combine(df1[c2],(lambda x,y: y if x==pd.np.nan else x)),df1.loc[:,'a':'c'])
I don't think this is working like I am hoping though, because when I again restrict to just two columns I get this output:
reduce(lambda c1,c2: df1[c1].combine(df1[c2],(lambda x,y: y if x==pd.np.nan else x)),df1.loc[:,'a':'b'])
0 NaN
1 B
2 NaN
dtype: object
A:
One way is to use sum over axis 1
df1.fillna('').sum(1)
0 A
1 B
2 C
Option2: use bfill and pick the first column
df1.bfill(axis = 1).iloc[:, 0]
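For completeness, the reduce approach from the question also works once it folds over the Series themselves instead of the column labels. After the first step the accumulator is already a Series, so indexing with df1[c1] no longer makes sense, which is what triggered the error:

```python
from functools import reduce

import numpy as np
import pandas as pd

df1 = pd.DataFrame({'a': [np.nan, 'B', np.nan],
                    'b': ['A', np.nan, np.nan],
                    'c': [np.nan, np.nan, 'C']})

# Fold fillna over the columns as Series objects, not as names.
combined = reduce(lambda s1, s2: s1.fillna(s2),
                  (df1[c] for c in df1.columns))
print(combined.tolist())  # ['A', 'B', 'C']
```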
Q:
Wordpress - Translate a string within javascript or jquery
I'm trying to translate this string in JavaScript but I can't seem to do it properly.
$(".search-overlay .s").attr("placeholder", "Type here to search");
I've tried the following but it gives errors. Any ideas?
$(".search-overlay .s").attr("placeholder", "<?php _e( '"Type here to search"', 'romeo' ); ?>");
Thanks.
A:
You should do this the proper WordPress way, by using the wp_localize_script() function.
Please check this codex page out:
https://codex.wordpress.org/Function_Reference/wp_localize_script
Basically in php:
// Register the script
wp_register_script( 'some_handle', 'path/to/myscript.js' );
// Localize the script with new data
$translation_array = array(
'some_string' => __( 'Some string to translate', 'plugin-domain' ),
'a_value' => '10'
);
wp_localize_script( 'some_handle', 'object_name', $translation_array );
// Enqueued script with localized data.
wp_enqueue_script( 'some_handle' );
And in javascript:
alert(object_name.some_string);
Q:
Smallest possible runnable Mach-O executable
What is the smallest possible runnable Mach-O executable on x86_64? The program can do nothing (not even returning a return code), but must be a valid executable (must run without errors).
My try:
GNU Assembler (null.s):
.text
.globl _main
_main:
retq
Compilation & Linking:
as -o null.o null.s
ld -e _main -macosx_version_min 10.12 -o null null.o -lSystem
Size: 4248 bytes
Looking at the hex values it seems there is a lot of zero padding which maybe can be removed, but I don't know how. Also I don't know if it is possible to make the executable run without linking libSystem...
A:
Smallest runnable Mach-O has to be at least 0x1000 bytes. Because of XNU limitation, file has to be at least of PAGE_SIZE.
See xnu-4570.1.46/bsd/kern/mach_loader.c, around line 1600.
However, if we don't count that padding, and only count meaningful payload, then minimal file size runnable on macOS is 0xA4 bytes.
It has to start with mach_header (or fat_header / mach_header_64, but those are bigger).
struct mach_header {
uint32_t magic; /* mach magic number identifier */
cpu_type_t cputype; /* cpu specifier */
cpu_subtype_t cpusubtype; /* machine specifier */
uint32_t filetype; /* type of file */
uint32_t ncmds; /* number of load commands */
uint32_t sizeofcmds; /* the size of all the load commands */
uint32_t flags; /* flags */
};
It's size is 0x1C bytes.
magic has to be MH_MAGIC.
I'll be using CPU_TYPE_X86 since it's an x86_32 executable.
filetype has to be MH_EXECUTE for an executable; ncmds and sizeofcmds depend on the commands, and have to be valid.
flags aren't that important and are too small to provide any other value.
Next are load commands.
Header has to be exactly in one mapping, with R-X rights -- again, XNU limitations.
We'd also need to place our code in some R-X mapping, so this is fine.
For that we need a segment_command.
Let's look at definition.
struct segment_command { /* for 32-bit architectures */
uint32_t cmd; /* LC_SEGMENT */
uint32_t cmdsize; /* includes sizeof section structs */
char segname[16]; /* segment name */
uint32_t vmaddr; /* memory address of this segment */
uint32_t vmsize; /* memory size of this segment */
uint32_t fileoff; /* file offset of this segment */
uint32_t filesize; /* amount to map from the file */
vm_prot_t maxprot; /* maximum VM protection */
vm_prot_t initprot; /* initial VM protection */
uint32_t nsects; /* number of sections in segment */
uint32_t flags; /* flags */
};
cmd has to be LC_SEGMENT, and cmdsize has to be sizeof(struct segment_command) => 0x38.
segname contents don't matter, and we'll use that later.
vmaddr has to be valid address (I'll use 0x1000), vmsize has to be valid & multiple of PAGE_SIZE, fileoff has to be 0, filesize has to be smaller than size of file, but larger than mach_header at least (sizeof(header) + header.sizeofcmds is what I've used).
maxprot and initprot have to be VM_PROT_READ | VM_PROT_EXECUTE. maxprot usually also has VM_PROT_WRITE.
nsects are 0, since we don't really need any sections and they'll add up to size.
I've set flags to 0.
Now, we need to execute some code. There are two load commands for that: entry_point_command and thread_command.
entry_point_command doesn't suit us: see xnu-4570.1.46/bsd/kern/mach_loader.c, around line 1977:
1977 /* kernel does *not* use entryoff from LC_MAIN. Dyld uses it. */
1978 result->needs_dynlinker = TRUE;
1979 result->using_lcmain = TRUE;
So, using it would require getting DYLD to work, and that means we'll need __LINKEDIT, empty symtab_command and dysymtab_command, dylinker_command and dyld_info_command. Overkill for "smallest" file.
So, we'll use thread_command, specifically LC_UNIXTHREAD since it also sets up stack which we'll need.
struct thread_command {
uint32_t cmd; /* LC_THREAD or LC_UNIXTHREAD */
uint32_t cmdsize; /* total size of this command */
/* uint32_t flavor flavor of thread state */
/* uint32_t count count of uint32_t's in thread state */
/* struct XXX_thread_state state thread state for this flavor */
/* ... */
};
cmd is going to be LC_UNIXTHREAD, cmdsize would be 0x50 (see below).
flavor is x86_THREAD_STATE32, and count is x86_THREAD_STATE32_COUNT (0x10).
Now the thread_state. We need x86_thread_state32_t aka _STRUCT_X86_THREAD_STATE32:
#define _STRUCT_X86_THREAD_STATE32 struct __darwin_i386_thread_state
_STRUCT_X86_THREAD_STATE32
{
unsigned int __eax;
unsigned int __ebx;
unsigned int __ecx;
unsigned int __edx;
unsigned int __edi;
unsigned int __esi;
unsigned int __ebp;
unsigned int __esp;
unsigned int __ss;
unsigned int __eflags;
unsigned int __eip;
unsigned int __cs;
unsigned int __ds;
unsigned int __es;
unsigned int __fs;
unsigned int __gs;
};
So, it is indeed 16 uint32_t's which would be loaded into corresponding registers before thread is started.
Adding header, segment command and thread command gives us 0xA4 bytes.
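The byte accounting can be sanity-checked with Python's struct module. This is a sketch; the format strings simply mirror the 32-bit struct layouts quoted above:

```python
import struct

mach_header = struct.calcsize('<7I')           # 7 uint32 fields -> 0x1C
segment_command = struct.calcsize('<2I16s8I')  # cmd..flags, with 16-byte segname -> 0x38
# thread_command: cmd, cmdsize, flavor, count, then 16 uint32 registers
thread_command = struct.calcsize('<4I16I')     # -> 0x50

total = mach_header + segment_command + thread_command
print(hex(total))  # 0xa4
```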
Now, time to craft the payload.
Let's say we want it to print Hi Frand and exit(0).
Syscall convention for macOS x86_32:
arguments passed on the stack, pushed right-to-left
stack 16-bytes aligned (note: 8-bytes aligned seems to be fine)
syscall number in the eax register
call by interrupt
See more about syscalls on macOS here.
So, knowing that, here's our payload in assembly:
push ebx #; push chars 5-8
push eax #; push chars 1-4
xor eax, eax #; zero eax
mov edi, esp #; preserve string address on stack
push 0x8 #; 3rd param for write -- length
push edi #; 2nd param for write -- address of bytes
push 0x1 #; 1st param for write -- fd (stdout)
push eax #; align stack
mov al, 0x4 #; write syscall number
#; --- 14 bytes at this point ---
int 0x80 #; syscall
push 0x0 #; 1st param for exit -- exit code
mov al, 0x1 #; exit syscall number
push eax #; align stack
int 0x80 #; syscall
Notice the line before first int 0x80.
segname can be anything, remember? So we can put our payload in it. However, it's only 16 bytes, and we need a bit more.
So, at 14 bytes we'll place a jmp.
Another "free" space is thread state registers.
We can set anything in most of them, and we'll put the rest of our payload there.
Also, we place our string in __eax and __ebx, since it's shorter than mov'ing them.
So, we can use __ecx, __edx, __edi to fit the rest of our payload.
Looking at difference between address of thread_cmd.state.__ecx and end of segment_cmd.segname we calculate that we need to put jmp 0x3a (or EB38) in last two bytes of segname.
So, our payload assembled is 53 50 31C0 89E7 6A08 57 6A01 50 B004 for first part, EB38 for jmp, and CD80 6A00 B001 50 CD80 for second part.
And last step -- setting the __eip. Our file is loaded at 0x1000 (remember vmaddr), and payload starts at offset 0x24.
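As a quick sanity check of those offsets, here is a short Python sketch (not part of the original write-up; the constants are taken from the walkthrough above):

```python
# Entry point: the file is mapped at vmaddr, payload starts inside segname at 0x24.
vmaddr = 0x1000
payload_offset = 0x24
eip = vmaddr + payload_offset
assert eip == 0x1024  # appears little-endian as "2410 0000" in the dump below

# The first payload part must fit in the 14 bytes before the jmp.
first_part = bytes.fromhex("535031c089e76a08576a0150b004")
assert len(first_part) == 14

# EB 38 is a short jmp: target = address after the jmp (0x34) + 0x38 = 0x6C,
# which is exactly where the "cd80" of the second part sits in the file.
jmp_target = 0x34 + 0x38
assert jmp_target == 0x6C
```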
Here's xxd of result file:
00000000: cefa edfe 0700 0000 0300 0000 0200 0000 ................
00000010: 0200 0000 8800 0000 0000 2001 0100 0000 .......... .....
00000020: 3800 0000 5350 31c0 89e7 6a08 576a 0150 8...SP1...j.Wj.P
00000030: b004 eb38 0010 0000 0010 0000 0000 0000 ...8............
00000040: a400 0000 0700 0000 0500 0000 0000 0000 ................
00000050: 0000 0000 0500 0000 5000 0000 0100 0000 ........P.......
00000060: 1000 0000 4869 2046 7261 6e64 cd80 6a00 ....Hi Frand..j.
00000070: b001 50cd 8000 0000 0000 0000 0000 0000 ..P.............
00000080: 0000 0000 0000 0000 0000 0000 2410 0000 ............$...
00000090: 0000 0000 0000 0000 0000 0000 0000 0000 ................
000000a0: 0000 0000 ....
Pad it with anything up to 0x1000 bytes, chmod +x and run :)
P.S. About x86_64 -- 64bit binaries are required to have __PAGEZERO (any mapping with VM_PROT_NONE protection covering page at 0x0). IIRC they [Apple] didn't make it required on 32bit mode only because some legacy software didn't have it and they're afraid to break it.
A:
28 Bytes, Pre-compiled.
Below is a formatted hex dump of the Mach-O binary.
00 00 00 00 FF FF FF FF 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
|---------| |---------| |---------| |---------| |---------| |---------| |---------/
| | | | | | +---------- uint32_t flags; // Once again redundant, no flags for safety.
| | | | | +---------------------- uint32_t sizeofcmds; // Size of the commands. Not sure the specifics for this, yet it doesn't particularly matter when there are 0 commands. 0 is used for safety.
| | | | +---------------------------------- uint32_t ncmds; // Number of commands this library provides. 0, this is a redundant library.
| | | +---------------------------------------------- uint32_t filetype; // Once again, documentation is lacking in this department, yet I don't think it particularly matters for our useless library.
| | +---------------------------------------------------------- cpu_subtype_t cpusubtype; // Like cputype, this suggests what systems this can run on. Here, 0 is ANY.
| +---------------------------------------------------------------------- cpu_type_t cputype; // Defines what cpus this can run on, I guess. -1 is ANY. This library is definitely cross system compatible.
+---------------------------------------------------------------------------------- uint32_t magic; // This number seems to be provided by the compiling system, as I lack a system to compile Mach-O, I can't retrieve the actual value for this. But it will always be 4 bytes. (On 32bit systems)
It consists entirely of the header, and needs neither the data nor the cmds. This is, by nature, the smallest Mach-O binary possible. It might not run correctly on any conceivable hardware, but it matches the specification.
I'd supply the actual file, but it entirely consists of unprintable characters.
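For reference, those 28 bytes can be generated with a few lines of Python using struct. The magic value 0xFEEDFACE (MH_MAGIC for little-endian 32-bit files) is my assumption, since the answer deliberately leaves it unspecified:

```python
import struct

# mach_header fields: magic, cputype, cpusubtype, filetype, ncmds, sizeofcmds, flags
MH_MAGIC = 0xFEEDFACE      # assumed 32-bit magic; the answer leaves this open
CPU_TYPE_ANY = 0xFFFFFFFF  # -1 (ANY) as an unsigned 32-bit value

header = struct.pack("<7I", MH_MAGIC, CPU_TYPE_ANY, 0, 0, 0, 0, 0)
assert len(header) == 28   # matches the 28-byte dump above

with open("tiny.macho", "wb") as f:
    f.write(header)
```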
| {
"pile_set_name": "StackExchange"
} |
Q:
Largest interval between adjacent points in $\{ 0, q^n, q^{n-1}, \dots, q^2, q, 1 \}$
Given $q \in (0,1)$ and $n \in \mathbb{N}$, define the points $\{ 0, q^n, q^{n-1}, \dots, q^2, q, 1 \}$. What is the largest interval between two adjacent points?
So far I've got the following. The interval lengths are $q^{k-1}-q^k$ for all intervals, except for the leftmost, which has length $q^n$. The largest interval is never a middle interval (if $n>1$), since $q^{k-2}-q^{k-1} < q^{k-1}-q^k$ by a factor of $q$. Thus the largest interval is either the leftmost or the rightmost interval, with length $q^n$ or $1-q$, respectively. But this depends on the choice of $q$ and $n$, so to come up with an answer, I'm trying to determine when $1-q > q^n$, but here I am stuck (no luck on Wolfram etc). The best answer I have right now is $max(1-q,q^n)$.
Is my work so far correct? How can I get any further? Thanks!
A:
You can shorten your argument a little by noting that $q^{k-1}-q^k=q^{k-1}(1-q)$; since the $q^{k-1}$ factor decreases with increasing $k$, this difference is clearly largest when $k$ is minimal, i.e., when $k=1$ and the difference is $1-q$. This of course immediately yields your $\max\{1-q,q^n\}$. In the case $n=1$ this boils down to $\max\{1-q,q\}$, which clearly cannot be improved, though it can of course be expressed differently, e.g., as
$$\begin{cases}
q,&\text{if }q\ge\frac12\\
1-q,&\text{otherwise}\;.
\end{cases}$$
It seems unlikely that the cases with $n>1$ are any better behaved, so I doubt that you can get rid of the maximum.
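A quick numerical check (a small Python sketch, not part of the original answer) confirms the $\max\{1-q,q^n\}$ result by brute force:

```python
def largest_gap(q, n):
    # Points {0, q^n, ..., q, 1}; note q^0 = 1.
    pts = sorted([0.0] + [q**k for k in range(n + 1)])
    return max(b - a for a, b in zip(pts, pts[1:]))

# Compare the brute-force maximum gap with max(1-q, q^n).
for q in (0.2, 0.5, 0.9):
    for n in (1, 2, 5, 10):
        assert abs(largest_gap(q, n) - max(1 - q, q**n)) < 1e-12
```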
| {
"pile_set_name": "StackExchange"
} |
Q:
When should I reduce my training intensity before a competition?
I'm on a more or less intense training regime in the last few weeks. As I have a bike race coming up in two weeks I need to find the time to begin slowing down my training and start a short resting phase before the race.
Mo: Strength
Tu: Cardio
We: Strength
Th: Cardio
Fr: Strength
Sa: Party
Su: Pizza
Cardio primarily means biking for some hours (50-100km) in the afternoon. Depending on my mood and the weather I sometimes add half an hour of swimming in the morning.
Strength means Stronglifts 5x5, as I just restarted it the weight ranges are in the lower area (around 30-40kg depending on the exercise).
I add some bodyweight exercises (push ups, pull ups, headstands, planks etc), yoga and other cardio every now and then when I am bored.
The race is two weeks from now on a Saturday, how should I reduce the training intensity?
When to stop adding weight to 5x5?
On which day should I stop lifting completely?
Note: I know this workout program smells like overtraining, please ignore that fact. I know it myself, I can currently keep up to it and am more interested in the fun of exercising than gains. Additionally I can't keep it up for much longer as soon as I have to study again.
Note2: Ignore that this question is about a specific event, simply answer like the event is still 2 weeks from now, as there will always be another race two weeks from now.
A:
Unfortunately, tapering is one of those things that is far from an exact science, and varies from individual to individual. Generally it's learned through long trial and error.
That being said, my taper period is 2-3 weeks depending on the intensity of the event and how much I am targeting it as an 'A' race. I generally cut 30% from my workouts in week 1, another 20% in week two, and light workouts/refresher only in week 3.
I would try cutting by 20-30% this week, and light on the cardio next week as well as any lower body stuff. Other than making sure you replenish glycogen stores, you can probably keep the upper body, but if you start feeling fatigued don't be afraid to bag it. Then, you have a baseline of what to work from the next time you need to taper.
| {
"pile_set_name": "StackExchange"
} |
Q:
Result returning zero
Not sure what I am missing here.
I am using the following code
DECLARE @sqlText nvarchar(4000)
SET @sqlText = N'SELECT InitialComment, DATEDIFF(d, InitialComment, GETDATE() ) AS Duration FROM dbo.SocialManagementTracker;'
DECLARE @newVal nvarchar(4000)
SET @newVal = ''
exec sp_executesql @sqlText, @newVal out
UPDATE dbo.SocialManagementTracker
SET DaysToResolve = @newVal
WHERE SocialID = 2
The dates being compared are 2018/07/08 and 2018/08/31. My result should be 23. Any reason why this returns 0 instead?
A:
I don't see any reason for dynamic sql here... this should work fine:
UPDATE dbo.SocialManagementTracker
SET DaysToResolve = DATEDIFF(day, InitialComment, GETDATE() )
WHERE SocialID = 2
| {
"pile_set_name": "StackExchange"
} |
Q:
Pass vector of card into function to print and use?
I'm struggling through a card program and want to pass my single card vector into my function so it can be used. Right now, I'd just like to test to see if it can print the cards from the deck, but passing it into a player hand is the ultimate goal. What's the best way to pass this vector for use in functions?
Thanks in advance!!
function to create deck:
void Deck::createDeck() {
deck.clear();
static const char suits[] = {'C','D','H','S'};
for (int suit=0; suit < 4; suit++)
for (int val=1; val <=13; val++)
deck.push_back(Card(val,suits[suit]));
}
function to pass the card to:
void Card::printCard(Card& drawnCard) const { //for debugging purposes
cout << value << " of " << suit << endl;
}
prototypes have been declared in header as follows:
class Card{
public:
int value;
char suit;
string drawnCard;
Card(int value, char suit) : value(value), suit(suit) {}
void printCard(Card& drawnCard) const;
};
class Deck {
public:
void createDeck();
void shuffleDeck(Card);
Card drawRandomCard();
Deck();
vector<Card> deck;
};
Thanks again!
A:
There's a lot to critique here. You probably don't want to pass the list of cards directly, but probably want to pass the deck, instead, like:
void DoSomething(const Deck& deck) {
// ...
}
However, assuming you do pass the list, the way to pass it would be as a const reference:
void DoSomething(const std::vector<Card>& cards) {
// ...
}
There are many other areas, though, where your sample code could be improved. For example, the data fields of Card and Deck should probably be "private" and only accessed through accessor functions as appropriate. In addition printCard does not need to take a Card as input, since it operates on this (and, if it did take a parameter, a function that simply prints an object should take its parameter by a const, not a mutable reference).
See this gist for an example.
| {
"pile_set_name": "StackExchange"
} |
Q:
AKS loadbalancer expose different external IP address
Hi, I have been deploying and exposing my web API container images to Azure Kubernetes Service using Azure DevOps. I have created multiple applications. Why does each application get a new external IP address? Is that the default when the service type is LoadBalancer?
I also tried using NodePort, but in the NodePort case the external IP is none.
deployment deployment-name --type=LoadBalancer --port 80 --name=service-name
A:
yep, this behaviour is by default, if you want them all have the same external IP you should use the ingress resource:
Ingress exposes HTTP and HTTPS routes from outside the cluster to
services within the cluster. Traffic routing is controlled by rules
defined on the Ingress resource.
It's a bit of a learning curve, but it's pretty much mandatory if you want to use Kubernetes for anything serious.
https://kubernetes.io/docs/concepts/services-networking/ingress/
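As a rough illustration, a single Ingress can route several of those applications through one external IP. A minimal manifest might look like this (hostnames and service names are placeholders):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: apps-ingress
spec:
  rules:
  - host: app1.example.com        # placeholder hostname
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: app1-service    # placeholder service name
            port:
              number: 80
  - host: app2.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: app2-service
            port:
              number: 80
```

Traffic to both hostnames then enters through the ingress controller's single external IP.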
| {
"pile_set_name": "StackExchange"
} |
Q:
Testing dependent variables
I'm trying to test whether or not I can simplify a function using Mathematica. At first glance it seems there is a dependency of $w$ on $R_0$, but I'm wondering if Mathematica can help me remove this.
My code is:
nu0 = (rho0 - (k^2*mu0)/w^2);
nu1 = (rho1 - (k^2*c66)/w^2 + (k^2*c64^2)/(w^2*c44));
rhohat0 = -I*w*nu0;
rhohat1 = -I*w*nu1;
zeta0 = w*(nu0^.5/mu0^.5);
zeta1 = w*(nu1^.5/c44^.5);
R0 = (rhohat1*zeta0 - rhohat0*zeta1)/(rhohat0*zeta1 + rhohat1*zeta0)
where I'm trying to remove the $w$ dependancy in $R_0.$ Are there any commands in Mathematica which could help me?
A:
nu0 = (rho0 - (k^2*mu0)/w^2);
nu1 = (rho1 - (k^2*c66)/w^2 + (k^2*c64^2)/(w^2*c44));
rhohat0 = -I*w*nu0;
rhohat1 = -I*w*nu1;
zeta0 = w*(nu0^.5/mu0^.5);
zeta1 = w*(nu1^.5/c44^.5);
R0 = (rhohat1*zeta0 - rhohat0*zeta1)/(rhohat0*zeta1 + rhohat1*zeta0);
Using
FullSimplify[R0, w > 0]
gives
which makes it clear that the dependency on w is not removable.
| {
"pile_set_name": "StackExchange"
} |
Q:
Get value of scope variable in controller AngularJS
I have to fetch the value of a scope variable defined in a directive. I have to get the value of that scope variable in a controller using AngularJS. How can I fetch the value of the scope variable?
Directive
app.directive('checkToggle', function() {
return {
scope: true,
link: function ($scope, element, attrs) {
$(element).on('click', function() {
$(element).find('i').toggleClass('icon-check icon-check-empty');
if ($(element).find('i').hasClass('icon-check')) {
$scope.isChecked = 'true';
} else {
$scope.isChecked = 'false';
}
});
}
}
});
I have to get $scope.isChecked value in controller.
A:
If I understand your use-case correctly you would like to toggle an icon on click. If so you don't need to write any directive for this. And provided that you would like to write a directive your shouldn't go about it as you've started. Your code is very imperative, jQuery-like while AngularJS power is in driving declarative UI based on model changes.
Anyway, toggling an icon can be easily done with standard AngularJS directives:
<i ng-class="{'icon-star' : isChecked, 'icon-star-empty': !isChecked}" ng-click="isChecked = !isChecked"></i>
Here is a working plunk: http://plnkr.co/edit/nXXQA41w00Cpeo6tTibg?p=preview
| {
"pile_set_name": "StackExchange"
} |
Q:
Elegant Dataframe Operations in Pandas
What is the most pythonic/elegant way to approach the following problem?
I have a dataframe df:
Group Start Date End Date
A 8/15/2017 8/30/2017
B 8/20/2017 NaT
C 8/07/2017 8/14/2017
A 9/07/2017 NaT
Group is a string and Start Date and End Date are datetimes
I need to perform some operations with the Groups that have no End Date each day. If these operations dictate that the group's end date is on that day, I replace the NaT with the date.
The only way I can figure out doing this is as follows:
import pandas as pd
df_closed = df[pd.notnull(df['End_Date'])]
df_open = df[pd.isnull(df['End_Date'])]
Which gives me:
df_closed
Group Start Date End Date
A 8/15/2017 8/30/2017
C 8/07/2017 8/14/2017
and:
df_open
Group Start Date End Date
B 8/20/2017 NaT
A 9/07/2017 NaT
Then I perform my operations. If, say, I determine that Group A's End Date should be 'today' (let's say 'today' is 9/10/2017), I do
df_open.loc['A','End Date'] = 9/10/2017
so I have the following:
df_open
Group Start Date End Date
B 8/20/2017 NaT
A 9/07/2017 9/10/2017
At the end of these operations I want my original dataframe to show all original rows but with updated end dates. so I do the following:
df = df_closed.append(df_open)
which gives me:
Group Start Date End Date
A 8/15/2017 8/30/2017
B 8/20/2017 NaT
C 8/07/2017 8/14/2017
A 9/07/2017 9/10/2017
This gets the job done but I have to think there is a less 'clunky' way to do this.
Insights?
Thanks in advance.
A:
You can locate null values and return them for assignment in the same step:
df.loc[df['End Date'].isnull(), 'End Date'] = <<val>>
If you need to locate the group as well:
df.loc[(df['End Date'].isnull()) & (df['Group']==<<group>>), 'End Date'] = <<val>>
This way you can keep everything in the same dataframe, which is less messy than separating your df and re-merging.
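A runnable sketch of the single-dataframe approach from the answer (column names shortened, and "today" pinned to a fixed date so the example is reproducible):

```python
import pandas as pd

df = pd.DataFrame({
    "Group": ["A", "B", "C", "A"],
    "Start": pd.to_datetime(["2017-08-15", "2017-08-20", "2017-08-07", "2017-09-07"]),
    "End":   pd.to_datetime(["2017-08-30", None, "2017-08-14", None]),
})

today = pd.Timestamp("2017-09-10")
# Close only the open rows belonging to group "A", in place:
df.loc[df["End"].isnull() & (df["Group"] == "A"), "End"] = today

assert df.loc[3, "End"] == today    # the open A row is now closed
assert pd.isnull(df.loc[1, "End"])  # the open B row is untouched
```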
| {
"pile_set_name": "StackExchange"
} |
Q:
Specifying Postfix queue lifetimes on a per-message basis
Is it possible to specify the lifetime of a message on the Postfix deferred queue on a per-message basis, or using some rules based on, for example, the sender address?
In our outgoing mail queue, we have a mixture of different classes of email, and I would like some of those to have a fairly short lifetime (promotional emails) but still have a long lifetime for most emails (operational messages, supplier notifications).
The only controls I can find are bounce_queue_lifetime and maximal_queue_lifetime which affect all messages.
The alternative approach, I suppose, is to simply have two Postfix instances with different parameters serving two queues. I was hoping to avoid the complexity but there may be no other way?
A:
I believe this is all there is. You might consider using a different mailer which supports either different lifetimes (like sendmail) or per-message queue management (like Exim) as a smtp_fallback_relay. This way everything that is going to end up in the queue as temporary undeliverable under normal conditions, will be sent to the defined mailer where it will be handled according to your predefined rules.
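If you do end up running two Postfix instances, the per-instance lifetimes are plain main.cf settings, e.g. (values purely illustrative):

```
# main.cf of the promotional instance -- give up quickly
maximal_queue_lifetime = 4h
bounce_queue_lifetime = 4h

# main.cf of the default instance -- keep retrying for days
maximal_queue_lifetime = 5d
bounce_queue_lifetime = 5d
```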
| {
"pile_set_name": "StackExchange"
} |
Q:
How to display more than 100 as a search result?
When doing a search, and getting a result of more than 100, it is not possible to display all at the same page. It is only limited to 25, 50 or 100.
How can I display more than 100 search results on the same page?
A:
In CiviCRM version 4.5+ you can control exactly how many results are shown per page. At the bottom-right corner of the search results, click the up/down arrows to increment the count by 25, or type in whatever number you wish:
| {
"pile_set_name": "StackExchange"
} |
Q:
Development environment setup and configuration for web development
I know this is a stupid question. I am a newbie. My friend and I want to work on a website project. We are located a few miles from each other. So I want information and steps towards making a smooth working environment for both of us, in which we can both see updates, results and whatever changes we make to the website. I wanted to know how we can use git (as this will be used for version control), Zend Framework (we decided to use this one) and phpDesigner (our IDE) collectively in developing this site. I also wanted to know the steps and information on how we work locally and push our changes to the final product using git. Right now I have all scattered information about git and Zend. So if someone would please align all these scattered things and let me know how we can set up our first development environment.
Also, could someone tell me how to set up development, test, pre-production and production environments?
Dude, I'm learning, man :)
A:
Here are the steps I use to work collectively. For this you have to use NetBeans as your IDE and need a GitHub account, since you are using git as your SCM.
create account at Github
Create Repo
Copy your repo url
Create Branch
Clone it from netbeans
Now push, pull or fetch
Create a pull request before merge
Merge pull request
These are the steps I follow in my personal work when working on a team. I hope this helps you.
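The push/pull mechanics from the list above can be rehearsed entirely locally before involving GitHub; a rough shell sketch (paths and names are made up):

```shell
set -e
rm -rf /tmp/shared.git /tmp/dev1 /tmp/dev2

# Simulate the shared GitHub repo with a local bare repository.
git init --bare /tmp/shared.git

# Developer 1 clones, commits, and pushes.
git clone /tmp/shared.git /tmp/dev1
cd /tmp/dev1
echo "hello" > index.php
git add index.php
git -c user.email=dev1@example.com -c user.name="Dev One" commit -m "first page"
git push origin HEAD:master

# Developer 2 clones the same branch and sees the change.
git clone -b master /tmp/shared.git /tmp/dev2
cat /tmp/dev2/index.php   # prints "hello"
```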
| {
"pile_set_name": "StackExchange"
} |
Q:
Parsing embedded expressions in Roslyn
I am trying to write a parser for a QML-like markup language and I would like to allow C# expressions in the markup. So an example might look like this:
ClassName {
Property1: 10
Property2: Math.Sqrt(123)
Property3: string.Format("{0} {1}", "Hello", "World")
}
(This is also somewhat like ASP.NET's Razor engine but afaics Razor doesn't use Roslyn?)
How would I do this? I want to parse only a single expression, whether that be a literal, a method call, a lambda etc. I've tried using CSharpSyntaxTree.ParseText but that expects a whole file and I can't find any documentation that seems to relate to this use-case.
A:
You need to call CSharpSyntaxTree.ParseText(), and pass a CSharpParseOptions with SourceCodeKind.Interactive, which allows top-level expressions.
A:
SyntaxFactory.ParseExpression() worked for me.
| {
"pile_set_name": "StackExchange"
} |
Q:
XeLaTeX, xy, and dejavu-otf
The following document renders fine with XeLaTeX...
\documentclass[12pt]{standalone}
%\usepackage{dejavu-otf}
\usepackage[all]{xy}
\newcommand*{\point}[1]{*+[F.]{\makebox[2.8em]{$#1\mathstrut$}}}
\newcommand*{\dotsitem}{*+[F.]{\makebox[2em]{\ldots\mathstrut}}}
\begin{document}
\xy
\xymatrix @C=0pt @R=0pt{
*++{\textbf{mz\mathstrut}} &
\point{0} & \point{1} & \dotsitem & \point{n - 1} &
\point{n} & \point{n + 1} & \dotsitem & \point{2n - 1} &
\dotsitem & \dotsitem & \point{l - 2} & \point{l - 1}\\
*++{\textbf{scan\mathstrut}} &
\point{0} & \point{1} & \dotsitem & \point{n - 1} &
\point{n} & \point{n + 1} & \dotsitem & \point{2n - 1} &
\dotsitem & \dotsitem & \point{l - 2} & \point{l - 1}\\
*++{\textbf{intens\mathstrut}} &
\point{0} & \point{1} & \dotsitem & \point{n - 1} &
\point{n} & \point{n + 1} & \dotsitem & \point{2n - 1} &
\dotsitem & \dotsitem & \point{l - 2} & \point{l - 1}}
\save "1,2"."3,5"="chunk1" \restore
\save "1,6"."3,9"="chunk2" \restore
\save "1,11"."3,13"="chunkN" \restore
\POS"chunk1"!CD!<0pt,-2\jot>*\frm{_\}} *++!U\txt<6em>{Chunk $1$}
\POS"chunk2"!CD!<0pt,-2\jot>*\frm{_\}} *++!U\txt<6em>{Chunk $2$}
\POS"chunkN"!CD!<0pt,-2\jot>*\frm{_\}} *++!U\txt<6em>{Chunk $N$}
\save "chunk1"*\frm{-} \restore
\save "chunk2"*\frm{-} \restore
\save "chunkN"*\frm{-} \restore
\endxy
\end{document}
... unless I uncomment the
\usepackage{dejavu-otf}
line (I do want DejaVu fonts):
Is it a problem with xy or dejavu-otf, and can it be somehow worked around?
A:
This happens as soon as you load unicode-math, which dejavu-otf does internally.
You have to restore some legacy symbols, namely those for the underbraces and overbraces, otherwise the commands point to the wrong symbol in the Unicode math font.
\documentclass[12pt]{standalone}
\usepackage{dejavu-otf}
\usepackage[all]{xy}
% restore the legacy brace pieces using cmex
\DeclareSymbolFont{oldlargesymbols}{OMX}{cmex}{m}{n}
% for horizontal braces
\DeclareMathSymbol{\braceld}{\mathord}{oldlargesymbols}{"7A}
\DeclareMathSymbol{\bracerd}{\mathord}{oldlargesymbols}{"7B}
\DeclareMathSymbol{\bracelu}{\mathord}{oldlargesymbols}{"7C}
\DeclareMathSymbol{\braceru}{\mathord}{oldlargesymbols}{"7D}
% for vertical braces
\DeclareMathSymbol{\braceur}{\mathord}{oldlargesymbols}{"38}
\DeclareMathSymbol{\braceul}{\mathord}{oldlargesymbols}{"39}
\DeclareMathSymbol{\bracedr}{\mathord}{oldlargesymbols}{"3A}
\DeclareMathSymbol{\bracedl}{\mathord}{oldlargesymbols}{"3B}
\DeclareMathSymbol{\bracecl}{\mathord}{oldlargesymbols}{"3C}
\DeclareMathSymbol{\bracecr}{\mathord}{oldlargesymbols}{"3D}
\DeclareMathSymbol{\bracec}{\mathord}{oldlargesymbols}{"3E}
%%% end of fix
\newcommand*{\point}[1]{*+[F.]{\makebox[2.8em]{$#1\mathstrut$}}}
\newcommand*{\dotsitem}{*+[F.]{\makebox[2em]{\ldots\mathstrut}}}
\begin{document}
\begin{xy}
\xymatrix @C=0pt @R=0pt{
*++{\textbf{mz\mathstrut}} &
\point{0} & \point{1} & \dotsitem & \point{n - 1} &
\point{n} & \point{n + 1} & \dotsitem & \point{2n - 1} &
\dotsitem & \dotsitem & \point{l - 2} & \point{l - 1}\\
*++{\textbf{scan\mathstrut}} &
\point{0} & \point{1} & \dotsitem & \point{n - 1} &
\point{n} & \point{n + 1} & \dotsitem & \point{2n - 1} &
\dotsitem & \dotsitem & \point{l - 2} & \point{l - 1}\\
*++{\textbf{intens\mathstrut}} &
\point{0} & \point{1} & \dotsitem & \point{n - 1} &
\point{n} & \point{n + 1} & \dotsitem & \point{2n - 1} &
\dotsitem & \dotsitem & \point{l - 2} & \point{l - 1}}
\save "1,2"."3,5"="chunk1" \restore
\save "1,6"."3,9"="chunk2" \restore
\save "1,11"."3,13"="chunkN" \restore
\POS"chunk1"!CD!<0pt,-2\jot>*\frm{_\}} *++!U\txt<6em>{Chunk $1$}
\POS"chunk2"!CD!<0pt,-2\jot>*\frm{_\}} *++!U\txt<6em>{Chunk $2$}
\POS"chunkN"!CD!<0pt,-2\jot>*\frm{_\}} *++!U\txt<6em>{Chunk $N$}
\save "chunk1"*\frm{-} \restore
\save "chunk2"*\frm{-} \restore
\save "chunkN"*\frm{-} \restore
\end{xy}
\end{document}
What's the problem? Xy-pic needs the \braceld command and the companions one for drawing the braces and it uses glyphs in the standard OMX encoded font for math extensions. Unfortunately, when unicode-math is loaded, the font used for the math extensions is the same as the main math font and this has different glyphs at the slots shown above. For instance, "7A is z, which is why you see a “z” in your picture. Those pieces the braces are made do not exist in Unicode, but we can easily use the ones in the standard math extension font, with the definitions above.
This might be a feature request for Xy-pic.
| {
"pile_set_name": "StackExchange"
} |
Q:
How can I change properties for the reboot task?
After the latest Windows 10 update, the procedure described in Conclusively stop wake timers from waking Windows 10 desktop no longer works. When attempting to save changes to the reboot task conditions, the user is asked to supply a password for account S-1-5-18, which is unknown. Taking ownership of the "reboot" file doesn't help. Can someone please supply an updated procedure? Thanks.
A:
S-1-5-18 is the Local System account, so being an administrator is not sufficient to change the reboot task of UpdateOrchestrator in Task Scheduler.
You can solve it by running as local system using tools like PsExec from SysInternals. Download sysinternals PSTools and run this command in an elevated command prompt (as administrator) to launch Task Scheduler: psexec -i -d -s mmc taskschd.msc
| {
"pile_set_name": "StackExchange"
} |
Q:
Rails and Phusion Passenger - can only access public folder
I just installed Phusion Passenger on my Apache Server to host a Rails 3 App.
My vhost file looks like that:
<VirtualHost *:80>
ServerName markusdanek.com
DocumentRoot /var/www/loremipsum/n22/public
<Directory /var/www/loremipsum/n22>
AllowOverride all
Options -Multiviews
Options -Indexes
</Directory>
</VirtualHost>
So when I try to open loremipsum/n22 - I only get to the 404 Page (not even the index.html)
So how can I get to my app folder (localhost:3000/ or localhost:3000/posts)?
Is there anything else, I have to add to the vhost?
My route.rb:
get "home/index"
root :to => 'home#index'
A:
Are you sure Phusion Passenger is running properly? Set PassengerLogLevel to 1, access example.com/n22, then look in your global Apache error log to check whether you see any errors.
| {
"pile_set_name": "StackExchange"
} |
Q:
Find Largest Cell Values in Multiple Columns
I am trying to figure out a formula to find the largest values of a column, using the values of a second and third column to break ties, and then display the person's name associated with that data in a different cell.
I've provided an image with test data to try to illustrate what I need:
To the right of the orange boxes, the image shows the top 5 people based on the criteria I want to use. Basically, I want Box 1 to display whichever person has the highest value in column K, followed by the second highest in Box 2, etc. If the values in column K are identical, I want the value in the Total column to act as a tie-breaker; if that still doesn't break the tie, I want to use column I as the final tie-breaker.
Obviously I want to leave the sorting in the table as is and the values within the table will change regularly (so copying all the data to a secondary data sheet manually to use sort functions won't work unless that process can be automated).
I've tried variations of VLOOKUP, INDEX, and MAX functions without any luck.
A:
Assuming all values are non-negative integers, you can make a new column with the score you want to maximize, one that is a formula involving K, J, and I:
L1:
=(K1*(MAX(J:J)+1)+J1)*(MAX(I:I)+1)+I1
Repeat that down column L, then use RANK() and VLOOKUP() as normal to select the winner and runners-up.
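The formula packs the three ranking columns into one sortable number. The same composite-key idea, sketched in Python with made-up data, shows it is equivalent to a lexicographic sort on (K, Total, I):

```python
# Each row: (name, K, Total, I) -- K ranked first, Total and I as tie-breakers.
rows = [
    ("Alice", 10, 50, 3),
    ("Bob",   10, 50, 7),   # ties Alice on K and Total, wins on I
    ("Carol", 12, 20, 1),
    ("Dave",  10, 60, 2),   # ties Alice/Bob on K, wins on Total
]

# Lexicographic ordering on the three columns:
top = sorted(rows, key=lambda r: (r[1], r[2], r[3]), reverse=True)
assert [r[0] for r in top] == ["Carol", "Dave", "Bob", "Alice"]

# Equivalent single score, as in (K*(MAX(J:J)+1)+J)*(MAX(I:I)+1)+I:
max_t = max(r[2] for r in rows)
max_i = max(r[3] for r in rows)
score = lambda r: (r[1] * (max_t + 1) + r[2]) * (max_i + 1) + r[3]
assert sorted(rows, key=score, reverse=True) == top
```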
| {
"pile_set_name": "StackExchange"
} |
Q:
Merge polygons and plot using spplot()
I would like to merge some regions in gadm data and then plot the map. So far I have the following:
#install.packages("sp",dependencies=TRUE)
#install.packages("RColorBrewer",dependencies=TRUE)
#install.packages("maptools",dependencies=TRUE)
library(sp)
library(maptools)
#library(RColorBrewer)
# get spatial data
con <- url("http://gadm.org/data/rda/CZE_adm2.RData")
print(load(con))
close(con)
IDs <- gadm$ID_2
IDs[IDs %in% c(11500:11521)] <- "11500"
gadm_new <- unionSpatialPolygons(gadm, IDs)
# plot map
spplot(gadm_new, "NAME_2", col.regions=col, main="Test",colorkey = FALSE, lwd=.4, col="white")
However this results in error:
Error in function (classes, fdef, mtable) :
unable to find an inherited method for function "spplot", for signature "SpatialPolygons"
Now I have no idea what can possibly fix this error.
A:
I'm not sure about what you're trying to do here.
The error is due to the fact that spplot is used to draw spatial objects with attributes, ie with associated data. Your gadm object is of class SpatialPolygonsDataFrame, so it defines polygons and associated data that can be accessed via the slot gadm@data. When you use UnionSpatialPolygons, you only get a SpatialPolygons class object, which can be plotted with plot, but not with spplot :
IDs <- gadm$ID_2
IDs[IDs %in% c(11500:11521)] <- "11500"
gadm_new <- unionSpatialPolygons(gadm, IDs)
plot(gadm_new)
If you want to use spplot, you have to merge your associated data manually, the same way you merged your polygons, and then build back a SpatialPolygonsDataFrame. One way to do it is the following :
gadm_new <- gadm
## Change IDs
gadm_new$ID_2[gadm_new$ID_2 %in% c(11500:11521)] <- "11500"
## Merge Polygons
gadm_new.sp <- unionSpatialPolygons(gadm_new, gadm_new$ID_2)
## Merge data
gadm_new.data <- unique(gadm_new@data[,c("ID_2", "ENGTYPE_2")])
## Rownames of the associated data frame must be the same as polygons IDs
rownames(gadm_new.data) <- gadm_new.data$ID_2
## Build the new SpatialPolygonsDataFrame
gadm_new <- SpatialPolygonsDataFrame(gadm_new.sp, gadm_new.data)
Then you can use spplot to plot a map with an associated attribute :
spplot(gadm_new, "ENGTYPE_2", main="Test", lwd=.4, col="white")
Note that here I only used the ENGTYPE_2 variable of your data, not the NAME_2 variable, as I don't see the point to represent a variable where each value seems unique for each polygon.
| {
"pile_set_name": "StackExchange"
} |
Q:
Set widget initial size
How can I set widget initial size in GTK+3?
I tried gtk_widget_set_size_request(widget,w,h) before the widget had been realized, and then gtk_widget_set_size_request(widget,-1,-1) to release the constraint (after the widget had been realized). This results in a window that keeps the larger size, but the widget itself was minimized (it did not remember my initial size).
MCVE:
//@{"targets":[{"name":"initsize","type":"application","pkgconfig_libs":["gtk+-3.0"]}]}
#include <gtk/gtk.h>
int main()
{
gtk_init(NULL,NULL);
auto window=gtk_window_new(GTK_WINDOW_TOPLEVEL);
auto paned=gtk_paned_new(GTK_ORIENTATION_HORIZONTAL);
gtk_container_add(GTK_CONTAINER(window),paned);
auto scrollbox=gtk_scrolled_window_new(NULL,NULL);
gtk_paned_add1(GTK_PANED(paned),scrollbox);
auto other=gtk_label_new("Right panel");
gtk_paned_add2(GTK_PANED(paned),other);
auto tv=gtk_text_view_new();
gtk_container_add(GTK_CONTAINER(scrollbox),tv);
//Make the widget large
gtk_widget_set_size_request(scrollbox,500,300);
gtk_widget_show_all(window);
//Remove constraint. The new (larger) size of `window` is preserved as
//desired, but `scrollbox` shrinks as a consequence of the constraint
//removal
gtk_widget_set_size_request(scrollbox,-1,-1);
gtk_main();
return 0;
}
Hint: While creating this example, the problem appeared when I added the paned widget.
Here is a screenshot of how the desired initial layout.
I achieved this by requesting sizes for the ScrolledWindow to the right, and for the GLArea (without the latter, everything collapses to almost zero). After the UI is configured, it should be possible to shrink any of these panels, so the constraint must be removed without affecting any sizes. I also tried to preserve the paned position (get its value, remove the constraint, and restore the old position), but that did not work.
A:
The closest solution is probably to reverse the problem and set the size of the main window to the sum of the desired sizes, by using gtk_window_set_default_size(). Then use gtk_paned_set_position () with the value for the leftmost widget. While this is only an approximate solution, it should be sufficient for most applications.
| {
"pile_set_name": "StackExchange"
} |
Q:
Determine the dimensions of image from dataURI
I'm allowing users to load their own image file from file system according to this post (last part - Images from the local file system)
https://stackoverflow.com/a/20285053
My Code
handleFiles: function (evt) {
var file = evt.target.files[0];
var reader = new FileReader();
reader.readAsDataURL(file);
reader.onloadend = function (e) {
var contents = e.target.result;
console.log(contents);
//apply contents to src of img element
};
},
After getting the dataUri, I want to display the image on screen; however, some images are bigger than the area in which I'm displaying them. Is there a way to check the size of the image (width and height) given the dataUri, and resize it accordingly?
I am placing the dataUri in the 'src' attribute of image tag.
A:
You can simply create an element in memory and read the values from it, assuming "contents" is the dataURI of the image:
handleFiles: function (evt) {
var file = evt.target.files[0];
var reader = new FileReader();
reader.readAsDataURL(file);
reader.onloadend = function (e) {
var contents = e.target.result;
console.log(contents);
var memoryImg = document.createElement('img');
memoryImg.src = contents;
var width = memoryImg.width;
var height = memoryImg.height;
};
},
The element will be cleaned up as soon as the interpreter finishes your function.
Q:
Arguments.callee is deprecated - what should be used instead?
For doing things like
setTimeout(function () {
...
setTimeout(arguments.callee, 100);
}, 100);
I need something like arguments.callee. I found information at javascript.info that arguments.callee is deprecated:
This property is deprecated by ECMA-262 in favor of named function
expressions and for better performance.
But what should be then used instead? Something like this?
setTimeout(function myhandler() {
...
setTimeout(myhandler, 100);
}, 100);
// has a big advantage that myhandler cannot be seen here!!!
// so it doesn't spoil namespace
BTW, is arguments.callee cross-browser compatible?
A:
Yes, that's what, theoretically, should be used. You're right. However, it doesn't work in some versions of Internet Explorer, as always. So be careful. You may need to fall back on arguments.callee, or, rather, a simple:
function callback() {
// ...
setTimeout(callback, 100);
}
setTimeout(callback, 100);
Which does work on IE.
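For reference, here is a minimal, self-contained sketch of why the named-function-expression form keeps the namespace clean (the recursion example is illustrative, not taken from the question):

```javascript
// A named function expression: the name "fact" is bound only inside the
// function body, so the enclosing scope stays clean.
var factorial = function fact(n) {
  return n <= 1 ? 1 : n * fact(n - 1); // recurse via the inner name
};

console.log(factorial(5)); // 120
console.log(typeof fact);  // "undefined" -- the inner name did not leak out
```

Note that the IE quirk mentioned above was precisely that older IE versions leaked the inner name (`fact` here) into the enclosing scope.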
A:
But what should be then used instead? Something like this?
Yes, you answered your own question. For more information, see here:
Why was the arguments.callee.caller property deprecated in JavaScript?
It has a pretty good discussion about why this change was made.
Q:
Typical throughput on switched network
I recently got around to measuring effective throughputs on a pretty big, switched network. I measured the throughputs by having two laptops, both running Iperf; one being the server, the other a client. I made sure to measure both the up- and downlink throughputs. My problem comes from the fact that some 100 Mbit/s paths were measured as everything from 50 to 80 Mbit/s. That seems kind of low, even taking overhead and plenty of active users into account.
Some useful information:
The network uses RSTP and has no routed hops.
There are at least 100 active users along the path from the Iperf client to the server.
The throughputs were measured on paths with at least three 100 Mbit/s switches between the two laptops.
I measured using TCP.
So my question can be summarized as: are these values to be expected?
Another question: I also got about 250-300 Mbit/s on a Gigabit switch while having the two nodes plugged into it. I used regular straight-through cables for both nodes. This can't be expected, even though the switch is in use by other machines, right?
A:
Throughput is defined as the amount of data transferred from point A to point B in a period of time. The significant variables in throughput are latency, packet size, and retransmissions (quality). This was proven with the Mathis equation.
http://www.slac.stanford.edu/comp/net/wan-mon/thru-vs-loss.html
TCP basics: TCP connections establish a session with a SYN, the receiving machine sends an ACK, then data flows. When the window (a host setting saying how much data to collect before asking for validation/acknowledgement of receipt) fills up, another set of acknowledgements is sent to start the next flow of data.
If your environment has the default MTU of 1500, and you introduce a device that has an MTU of 1460, this will slow down your network because all packets flowing through the device will be fragmented; when the 1500 packet hits the device with MTU of 1460 it will fragment the packet into two packets and transmit it.
If you have Jumbo frames enabled and your MTU is 9220 you will get much higher throughput, 5 times higher. Each payload is larger even though the packet has the same latency.
In short:
Throughput is conditional and dependent on both host and network device settings. Use the Mathis equation as a guide for what you could expect with the given data points for your network. Validate your findings and verify units; Megabit/sec and Megabyte, or Kilobit and Kilobyte, are not the same.
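To get a feel for the numbers the Mathis equation predicts, here is a small sketch (the MSS, RTT, and loss figures below are illustrative assumptions, not measurements from the question):

```python
import math

def mathis_throughput_bps(mss_bytes, rtt_s, loss_rate):
    """Upper bound on steady-state TCP throughput in bits/s, per the
    Mathis equation: rate < (MSS / RTT) * (1 / sqrt(p))."""
    return (mss_bytes * 8 / rtt_s) / math.sqrt(loss_rate)

# Assumed values: 1460-byte MSS, 10 ms RTT, 0.1% packet loss.
print(round(mathis_throughput_bps(1460, 0.010, 0.001) / 1e6, 1), "Mbit/s")
```

Even a fraction of a percent of loss caps a single TCP flow well below line rate, which is consistent with measuring 50-80 Mbit/s on a nominally 100 Mbit/s path.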
Q:
Powershell, sum totals from text files
I have a script that creates a monthly total of daily use listed in text files.
Each daily stats file is in this format:
location,Feeder,total,endpoints
Oaklake,1,11153,310
oaklake,2,26214,291
oaklake,3,4593,147
oaklake,4,5279,145
Here is the relevant portion of the script I'm having trouble with:
#create list of last month's days
if ((get-date).day -eq 2) {
$fullmonth = "location,Feeder,Usage,endpoints`n"
$year = (get-date).addmonths(-1).year
$month = (get-date).addmonths(-1).month
$days = 1..[datetime]::daysinmonth($year,$month) |
%{(get-date -day $_ -month $month -year $year).toshortdatestring()}
#select the file for each particular day and add it's content to $fullmonth
foreach ($day in $days) {
$dayfile = ls "c:\powershell\locationUse\summaries" |
?{$_.creationtime -gt $day -and $_.creationtime -lt $(get-date $day).adddays(1).toshortdatestring()} |
sort creationtime | select -last 1 | gc | select -skip 1
$fullmonth += $($dayfile | out-string)
}
$fullmonth | export-csv ./fullmonthtest.csv -notypeinformation
My problem is the final output of fullmonthtest.csv is a repetition of the last day of summary
files, like so:
location,Feeder,total,endpoints
Oaklake,1,11153,310
oaklake,2,26214,291
oaklake,3,4593,147
oaklake,4,5279,145
Oaklake,1,11153,310
oaklake,2,26214,291
oaklake,3,4593,147
oaklake,4,5279,145
Oaklake,1,11153,310
oaklake,2,26214,291
oaklake,3,4593,147
oaklake,4,5279,145
Oaklake,1,11153,310
oaklake,2,26214,291
oaklake,3,4593,147
oaklake,4,5279,145
Am I using the "+=" method incorrectly or something? Thanks for any help you can provide!
A:
Nevermind. I discovered the content of the month's worth of files was identical in each one. My script was working correctly; I just didn't verify the sources of my data. That's what I get for working with data created from an old version of my current script. Oops! Thanks for the response though, Mathias.
Q:
I can't get the marquee effect in Android even after trying all things
public class Crunk extends Activity {
TextView textView = null;
@Override
public void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
setContentView(R.layout.activity_crunk);
Button button = null;
button = (Button)findViewById(R.id.button1);
button.setOnClickListener(new View.OnClickListener() {
public void onClick(View v) {
Log.e("It's Clicked", "Don't Worry");
textView.setText("SeeMe");
textView.setEllipsize(TruncateAt.MARQUEE);
}
});
textView = (TextView)this.findViewById(R.id.textView1);
textView.setSelected(true);
textView.setText("Waitin'....");
    }
}
This code just shows the text, not the marquee effect.
Help me out getting the effect.
I want the text to start scrolling when the button is tapped.
I still can't get the desired effect even though I have tried most of the usual suggestions....
<RelativeLayout xmlns:android="http://schemas.android.com/apk/res/android"
xmlns:tools="http://schemas.android.com/tools"
android:layout_width="match_parent"
android:layout_height="match_parent" >
<Button
android:id="@+id/button1"
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:layout_alignParentTop="true"
android:layout_centerHorizontal="true"
android:layout_marginTop="180dp"
android:text="@string/TapMe" />
<TextView
android:text="@string/SeeMe"
android:id="@+id/textView1"
android:layout_width="fill_parent"
android:layout_height="wrap_content"
android:singleLine="true"
android:ellipsize="marquee"
android:marqueeRepeatLimit="marquee_forever"
android:scrollHorizontally="true"
android:layout_alignParentTop="true"
android:layout_centerHorizontal="true"
android:layout_marginTop="51dp"
android:textAppearance="?android:attr/textAppearanceLarge" />
</RelativeLayout>
A:
Try following code in your onClick() of Button:
button = (Button)findViewById(R.id.button1);
button.setOnClickListener(new View.OnClickListener() {
public void onClick(View v) {
Log.e("It's Clicked", "Don't Worry");
textView.setText("Simple application that shows how to use marquee, with a long ");
textView.setEllipsize(TruncateAt.MARQUEE);
textView.setSelected(true);
textView.setSingleLine(true);
}
});
This is because the text you set with textView.setText("SeeMe") is shorter than the TextView's width, so there is nothing to scroll.
Q:
Associated Classes Definition
Quick question(s)! According to my Java computer science textbook, "If a class C1 is associated with another class, C2, then C1 depends on C2 for its implementation..." is a true statement in the T/F section of a practice test. Is the 'associated' relationship one-directional, i.e. does 'C1 is associated with C2' have a different meaning than 'C2 is associated with C1'? Also, when is this the case besides abstract classes and interfaces, and what would 'associated' mean in those contexts?
Thanks
A:
Association is a relationship between two separate classes, established through their objects. An association can be one-to-one, one-to-many, many-to-one, or many-to-many.
e.g. Car and Driver
Both can live independently of each other.
Aggregation is a special case of association. It expresses a has-a relationship and is one-directional.
e.g. Wallet and Money classes.
A wallet has money.
Composition is a restrictive case of aggregation, in which one object requires the other in order to exist.
e.g. Car and Engine
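A minimal sketch of the three relationships (the class bodies are illustrative and only mirror the examples above):

```java
// Engine: composition -- created and owned by its Car, dies with it.
class Engine { }
// Driver: association -- exists independently of any Car.
class Driver { }
// Money: aggregation -- held by a Wallet, but can outlive it.
class Money { }

class Car {
    private final Engine engine = new Engine(); // composition
    Driver driver;                              // association (may be null)
}

class Wallet {
    private final java.util.List<Money> notes = new java.util.ArrayList<>();
    void add(Money m) { notes.add(m); } // aggregation: has-a
    int count() { return notes.size(); }
}

public class Main {
    public static void main(String[] args) {
        Driver d = new Driver();  // the driver exists before any car
        Car car = new Car();
        car.driver = d;           // association wired up between two objects
        Wallet w = new Wallet();
        w.add(new Money());
        System.out.println(w.count());
    }
}
```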
Q:
add button to only one row in android listView
I am trying to add a button in the middle of my ListView. Ideally the button will split the ListView and the list will continue afterward, but if this is not possible I will be OK with a button inside a row of the ListView.
For example, my list view will have line one (image + text), line two (image + text), a button, and then the list continues.
I have written the following code. It adds a button to a row in the ListView, but along the way it also adds an empty button (a button with no text) to every row in my ListView. In addition, the gravity setting for center is not working.
My xml
<?xml version="1.0" encoding="utf-8"?>
<LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
android:layout_width="fill_parent"
android:layout_height="fill_parent"
android:orientation="horizontal" >
<ImageView android:id="@+id/imgUserIcon"
android:layout_width="wrap_content"
android:layout_height="fill_parent"
android:layout_marginRight="10dp"
android:layout_marginTop="5dp"
android:gravity="center_vertical"
android:scaleType="fitStart" />
<Button
android:id="@+id/buttonShowHide"
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:gravity="center"
android:text="@string/showHide" />
<TextView android:id="@+id/txtTitle"
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:layout_marginLeft="10dp"
android:layout_marginTop="5dp"
/>
</LinearLayout>
my adapter
public class UserAdapter extends ArrayAdapter<UserAccountData> {
Context context;
int layoutResourceId;
    UserAccountData data[] = null;
public UserAdapter(Context context, int layoutResourceId,
UserAccountData[] data) {
super(context, layoutResourceId, data);
this.layoutResourceId = layoutResourceId;
this.context = context;
this.data = data;
}
@Override
public View getView(int position, View convertView, ViewGroup parent) {
View row = convertView;
        UserHolder holder = null;
if (row == null) {
LayoutInflater inflater = ((Activity) context).getLayoutInflater();
row = inflater.inflate(layoutResourceId, parent, false);
            holder = new UserHolder();
holder.imgIcon = (ImageView) row.findViewById(R.id.imgUserIcon);
holder.txtTitle = (TextView) row.findViewById(R.id.txtTitle);
holder.showHide = (Button) row.findViewById(R.id.buttonShowHide);
row.setTag(holder);
} else {
holder = (UserHolder) row.getTag();
}
        UserAccountData user = data[position];
holder.txtTitle.setText(user.title);
holder.imgIcon.setImageResource(user.icon);
holder.showHide.setText(user.buttonName);
return row;
}
static class UserHolder {
ImageView imgIcon;
TextView txtTitle;
Button showHide;
}
}
My Java object for the row. I have created two constructors one for the button and one for the image and text.
public class UserAccountData {
public int icon;
public String type;
public String title;
public CharSequence buttonName;
public UserAccountData(){
super();
}
// for image and text
public UserAccountData(int icon, String title, String type) {
super();
this.icon = icon;
this.title = title;
this.type = type;
}
// for button
    public UserAccountData(CharSequence buttonName, String type) {
super();
this.buttonName = buttonName;
this.type = type;
}
public void setType(String type){
this.type = type;
}
}
In my activity I am adding the following two rows to the array , that later my adapter will use to create the listView ( I am passing it an ArrayList that being changed into an Array)
user_data.add(new UserAccountData(icon, "title,"type"));
user_data.add(new UserAccountData("show Password","button"));
a) Is there a way to split the ListView in the middle and just add a button, then continue the same ListView? My current solution tries to add a button to a row.
b) Any ideas why I am also adding an empty button to the icon + title rows?
I am getting icon, title, and an empty button on my actual ListView.
Thank you very much
UPDATE:
I found two blog posts:
http://logc.at/2011/10/10/handling-listviews-with-multiple-row-types/
and
http://android.amberfog.com/?p=296
but still don't have any luck. I would appreciate some more in-depth help.
A:
is there a way to split the listView and the middle and just add a button? and continue the same listView? Because my current solution tries to add a button to a row.
If I understand your question you want something like:
My list view will have:
image + text
image + text
button
image + text
etc...
You can have more than one type of row layout if you override getViewTypeCount() and getItemViewType().
getViewTypeCount() should return the number of types, in this case 2.
getItemViewType(int position) will return which type the row at position is, in this case either 0 or 1.
Addition
I don't really know how to make the distinction between the image text row and the button. I tried to find a way to see if my image is null (using the 2nd constructor) , but this does not seems to work
This sounds like a good approach, but since icon is an int it will never be null, the default value for an uninitialized integer is 0. Try:
@Override
public int getItemViewType(int position) {
UserAccountData data = getItem(position);
if(data.icon == 0)
return 1;
return 0;
// The same thing in one line:
//return getItem(position).icon == 0 ? 1 : 0;
}
Q:
Ionic - App works fine on browser and iOS simulator but not in android simulator
I am really new to the mobile development world and trying my hands on it using IonicFramework.
I am creating a login form; on successful login the user gets taken to another state called viewMyList. Everything seems to work fine when I run the command ionic serve: I am able to log in and proceed to the next state, and all seems fine on the iOS simulator as well. But on the Android simulator, nothing happens on clicking the login button, and I don't see any error either.
My attempt
login.html
<ion-view title="Login">
<ion-content class="has-header" padding="true">
<form class="list">
<h2 id="login-heading3" style="color:#000000;text-align:center;">Welcome back!</h2>
<div class="spacer" style="width: 300px; height: 32px;"></div>
<ion-list>
<label class="item item-input">
<span class="input-label">Email</span>
<input type="text" placeholder="" ng-model="credentials.username">
</label>
<label class="item item-input">
<span class="input-label">Password</span>
<input type="text" placeholder="" ng-model="credentials.password">
</label>
</ion-list>
<div class="spacer" style="width: 300px; height: 18px;"></div>
<a class="button button-positive button-block" ng-click="login()">Sign In</a>
</form>
</ion-content>
</ion-view>
ng-click is linked with login()
Here is my loginCtrl which contains the login() function
.controller('loginCtrl', function ($scope, $state, $ionicHistory, User) {
$scope.credentials = {
username: '',
password: ''
};
$scope.login = function () {
User.login($scope.credentials)
.then(function (response) {
console.log(JSON.stringify(response));
//Login should not keep any history
$ionicHistory.nextViewOptions({historyRoot: true});
$state.go('app.viewMyList');
})
};
$scope.message = "this is a message loginCtrl";
})
Here is my User service that takes care of the login logic
angular.module('app.user', [])
.factory('User', function ($http) {
var apiUrl = 'http://127.0.0.1:8000/api';
var loggedIn = false;
return {
login: function (credentials) {
console.log(JSON.stringify('inside login function'));
console.log(JSON.stringify(credentials));
return $http.post(apiUrl + '/tokens', credentials)
.success(function (response) {
console.log(JSON.stringify('inside .then of login function'));
var token = response.data.token;
console.log(JSON.stringify(token));
$http.defaults.headers.common.Authorization = 'Bearer ' + token;
persist(token);
})
.error(function (response) {
console.log('inside error of login function');
console.log(JSON.stringify(response));
})
;
},
isLoggedIn: function () {
if (localStorage.getItem("token") != null) {
return loggedIn = true;
}
}
};
function persist(token) {
window.localStorage['token'] = angular.toJson(token);
}
});
Here is the route behind the login
.state('login', {
url: '/login',
templateUrl: 'templates/login.html',
controller: 'loginCtrl'
})
I am really clueless at the moment, as I can't seem to figure out why nothing happens on Android. From my troubleshooting, all I could find was that when I click the login button, the code does not seem to get inside the following function.
$scope.login = function () {
User.login($scope.credentials)
.then(function (response) {
console.log(JSON.stringify(response));
//Login should not keep any history
$ionicHistory.nextViewOptions({historyRoot: true});
$state.go('app.viewMyList');
})
};
Any help will really be appreciated.
A:
Install the whitelist plugin first.
cordova plugin add cordova-plugin-whitelist
Then add the following code in your config.xml file in the root directory of your project:
<allow-navigation href="http://example.com/*" />
or:
<allow-navigation href="http://*/*" />
If you are still facing an issue, you can check the console while running on an Android device using Chrome remote debugging:
Connect your device to your machine (make sure USB debugging is enabled on your phone).
Open chrome://inspect in desktop Chrome.
You will see the connected device; select inspect and check the console for logs.
Q:
Laurentian Litany - How pray it?
During a prayer service in our Church, we sang the Laurentian Litany to Our Lady. Does anyone know if there are any rules about how the litany should be prayed (e.g., standing up during a liturgical holiday)?
This litany is also known as the Litany of Loreto and the Litany of the Blessed Virgin Mary.
A:
There are six approved litanies within the Roman Rite of the Catholic Church. There are no official rules as to how to pray any of these litanies. That said, however, there are a few traditional ways of praying them.
For example the Litany of the Saints is usually chanted while kneeling during the Mass of Ordination or a solemn profession of a religious. It may be chanted or recited while making a procession as on Rogation Days.
The Litany of Loreto may be said privately or in public as a novena either recited, sung or chanted as the circumstances call.
I know of several monastic communities that will make a procession on the Feast of the Assumption and will chant the Litany of Loreto (in Latin) and end the procession at a statue of Mary (all the time remaining either walking or standing). If remaining in a church, these same communities will take the traditional posture of kneeling while reciting this litany. Once again this is only a traditional way of praying any of the litanies and the faithful are quite free to choose otherwise.
One last note: Please make sure to conclude the Litaniae Lauretanae with the appropriate closing prayer according to the liturgical season (Christmas, Easter, Advent, etc.).
Q:
Is the entire reducer step in hadoop map reduce thread safe?
If I have a member variable in the reducer class, and the reduce function mutates that member variable, do I have to take extra caution to make it thread-safe?
A:
If I have a member variable in the reducer class, and the reduce function mutates that member variable, do I have to take extra caution to make it thread-safe?
No, you don't have to take any extra caution - the member variable will be mutated inside the current Reducer but that would be isolated from any other instances of the same reducer class.
Q:
What is the reason of clearing of thread's interrupt status by Thread.interrupted()?
In many sources I found that the Thread.interrupted() method clears the interrupt status of the thread, but none of them explained why this method works exactly this way.
I still feel a little confused, because I don't understand the reason, or what problem the designers of Java were trying to solve by clearing the interrupt status.
I would appreciate it very much if someone could explain that and show an example.
A:
The idea behind thread interruption is that one thread may signal another to request that it interrupt its regular processing and divert its attention to some thread-specific special action. What a thread actually does in response depends entirely on the code running in that thread.
There are two main ways in which a Thread can determine whether it has been interrupted:
Several Thread and Object methods will throw an InterruptedException if invoked in a thread whose interrupted status is set, or if a thread is interrupted while the method is executing. The interrupted status is cleared in this event, presumably because the exception is considered adequate notice of the interruption.
Code running in the thread can invoke Thread.interrupted() or Thread.currentThread().isInterrupted() to proactively test for an interrupt. The former also resets the interrupted status; the latter does not, likely because it is an instance method -- interrupts must not be lost in the event that one thread calls the isInterrupted() method of a different one.
The techniques that cause the interrupt status to be reset do so in order that the thread is able to handle subsequent interruptions. The key point here is perhaps that thread interruption is not intended to necessarily cause the interrupted thread to shut down (although that is indeed one response that a thread can make). It is a more general mechanism.
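The difference between the two test methods can be seen in a small, self-contained demo (interrupting the main thread is used here only so the example needs no second thread):

```java
public class Main {
    public static void main(String[] args) {
        Thread.currentThread().interrupt();       // set the interrupt flag
        System.out.println(Thread.interrupted()); // true  -- and clears the flag
        System.out.println(Thread.interrupted()); // false -- already cleared

        Thread.currentThread().interrupt();       // set it again
        // isInterrupted() only observes the flag, so repeated calls agree:
        System.out.println(Thread.currentThread().isInterrupted()); // true
        System.out.println(Thread.currentThread().isInterrupted()); // true
    }
}
```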
Q:
Starling - get touch event under another
I have 2 circles, both of which have touch listeners.
Sometimes one circle overlaps the other on the stage, and I want to trigger both listeners, but Starling triggers only the circle in front.
How can I do this?
My code
circle1.addEventListener(TouchEvent.TOUCH, touched1)
circle2.addEventListener(TouchEvent.TOUCH, touched2)
function touched1(e:TouchEvent):void{
trace("hi1")
}
function touched2(e:TouchEvent):void{
trace("hi2")
}
A:
Instead of having two separate listeners, have only one listener on a single parent object, and add the two circles to that parent. To detect which target was touched, use e.target.name like so:
var parentClip:Sprite = new Sprite();
parentClip.x = 150; // x position
parentClip.y = 150; // y position
parentClip.name = "parentClip"; //This is not required in your case
parentClip.addChild(circle1);
parentClip.addChild(circle2);
parentClip.addEventListener(TouchEvent.TOUCH, onTouched);
function onTouched(e:TouchEvent):void
{
trace("circles parent is touched");
trace("Hit: " + e.currentTarget.name);
trace("Hit: " + e.target.name);
}
Q:
А что(,) если...
А что(,) если полным семьям педагогов дать возможность внеочередного вступления в жилищно-строительный кооператив?
According to the rules, a comma seems to be required, yet there are cases where it is omitted. In which cases?
A:
Correct: «А что если…»
A. The interrogative combination «а что если» expresses a supposition in an interrogative sentence; all of its words are used here as particles. It can be replaced by «а если бы…», i.e. the word «что» plays the role of the modal particle «бы», giving the statement a suppositional meaning. In this case no comma is placed before «если».
B. Similar variants from the dictionary: «Что если?» (а что если, а если): Что если опоздаем? «Что если бы?» expresses a hesitant request or a hesitant wish: Что если бы передохнуть?
C. NOTE (a related topic)
Russian has interrogative-exclamatory words formed by combining «что» and «как» with pronouns, particles, and conjunctions. Such a word is pronounced with stress and set off by a pause in speech; in writing it is followed by a comma. These words should be distinguished from similar-looking variants, which are not set off.
FOR EXAMPLE:
Что ж, я готов. Что же мне, хвалить его за это? Что же теперь будет?
Ну что ж, пошли. Ну что ж, ведь ты при шпаге! Ну что же ты так долго!
А что, разве я не так сделал? А что, ты разве не знал? А что ты на это скажешь?
Q:
Using Messengers and Binders for Inter Process Communication
I am facing the following problem.
My application needs communication between 2 services: one for networking, one for rudimentary phone services.
Now, I did the following to achieve IPC:
I extended the Binder class as usual and return my own Binder in my onBind() method. That works just fine. But now I also want the possibility to send Messages between my Service and my Activity, and here lies the problem, since I return my own Binder in the onBind() method like this:
@Override
public IBinder onBind(Intent intent){
Log.d(this.getClass().getName(), "BIND");
return binderToThisProcess;
}
I can not return an additional Messenger like this:
...
return outMessenger.getBinder();
I mean, of course, this is obvious because the return statement allows only one object to be returned.
My Question is: Is there any way I can append the Messenger onto my own Binder? Or is there a similar way to achieve what I am looking for? Or did I miss something?
A:
The solution seems very obvious to me - just write your own class that extends Binder. You can add any custom fields or methods to your CustomBinder class. Simply instantiate your CustomBinder and return it in the onBind() method.
Q:
Is there something like: Try Until?
I am trying to get a client that tries every 5 seconds to connect to a server which doesn't need to be online; it should connect only if the server is online. If the server is already online when the client starts, the message is sent without any problem, but if the client starts first, it waits a certain time until a timeout and then stops trying to connect. So I am trying to build a loop around the command:
Client = New TCPControl2(ip,64555)
I tried to do this:
Try
Client = New TCPControl2(ip, 64555)
Catch ex As Exception
MsgBox(ex.Message)
End Try
The MsgBox tells me about a timeout, but I don't know how to do a kind of "Try Until it is connected", and I don't know how to set the timeout duration either.
Private Client As TCPControl2
A:
I think what you are trying to achieve can be done with a do while loop. You can read more here: https://msdn.microsoft.com/en-us/library/eked04a7.aspx
Dim isConnected As Boolean = False
Do
    Try
        Client = New TCPControl2(ip, 64555)
        ' Condition changing here.
        If Client.IsConnected = True Then ' <-- example!
            ' it's connected
            isConnected = True
        End If
    Catch ex As Exception
        MsgBox(ex.Message)
    End Try
Loop Until isConnected = True
Q:
When to use MessageQueueTransaction for simple cases?
I'm new to MSMQ and trying to understand when to use the MessageQueueTransaction class. For example, is there any value in creating a transaction just for putting a single message in the MSMQ queue, like this?
using (MessageQueueTransaction t = new MessageQueueTransaction())
{
t.Begin();
Message m = new Message(myString, formatter);
queue.Send(m, t);
t.Commit();
}
I can't think of any and am tempted to reduce this code to....
Message m = new Message(myString, formatter);
queue.Send(m);
Am I losing anything? Any chance this is going to end up in a partially sent corrupted state?
-MSMQnfused
A:
That is fine, since you only have a single step. A single Send either succeeds or fails as a whole, so there is no partially-sent state to worry about; a transaction becomes useful when several operations (for example, a receive followed by a send) must succeed or fail together.
Q:
How to properly mock a struct with member functions in Golang?
I have two structs: FunctionalityClient and TestClient, both implementing Interface. I have a global variable Client of type Interface. I assign to Client either the actual client, or the mock client, depending on whether it's a test or a normal run.
Interface has a method Request that I want to mock in tests. That is, I want to:
record what were the argument passed to the function
return some arbitrarily defined return value from the function
So the struct looks like this:
type TestClient struct {
recordedArgs []interface{}
returnValues []interface{}
}
func (c *TestClient) Request(body io.Reader, method string, endpoint string, headers []Header) ([]byte, error) {
c.recordedArgs = append(c.recordedArgs, []interface{}{body, method, endpoint, headers}) // this can't be typed if I want the code to be reusable
if len(c.returnValues) != 0 {
last := c.returnValues[0]
c.returnValues = c.returnValues[1:]
return last.([]byte), nil
}
return nil, nil
}
And I use it like so:
testClient := TestClient{
returnValues: []interface{}{
[]byte("arbitrarily defined return value"),
[]byte("this will be returned after calling Request a second time"),
},
}
Client = &testClient
// run the test
// now let's check the results
r1 := testClient.recordedArgs[1].([]interface{}) // because I append untyped lists to recordedArgs
assert.Equal(t, "POST", r1[1].(string))
assert.Equal(t, "/file", r1[2].(string))
// and so on
Now the question.
I have a few structs that I want to mock like this. Currently I just copy and paste the code above for each struct. But that really sucks, I would like the mock logic to be abstracted away somehow. I would also accept something like Mockito's when: when the mocked function is called with specific arguments, return a specific value and record the call.
How can I properly mock a struct with member functions in Golang?
A:
If you're mocking out clients for HTTP APIs, you might want to just use httptest.Server, which would simplify this tremendously. Rather than mocking out the client, mock out the server the client connects to. It's really easy to use, and you can still record the request method, path, body, etc., as well as returning arbitrary response values the same way you're doing with the mock client.
If that's not an option, you can abstract out your mock method to make it reusable:
type TestClient struct {
recordedArgs [][]interface{}
returnValues []interface{}
}
func (c *TestClient) mock(args ...interface{}) interface{} {
c.recordedArgs = append(c.recordedArgs, args)
if len(c.returnValues) != 0 {
last := c.returnValues[0]
c.returnValues = c.returnValues[1:]
return last
}
return nil
}
func (c *TestClient) Request(body io.Reader, method string, endpoint string, headers []Header) ([]byte, error) {
	// comma-ok assertion: avoids a panic when mock() returns nil (no queued return values)
	ret, _ := c.mock(body, method, endpoint, headers).([]byte)
	return ret, nil
}
This cuts your usage-specific method down to one line.
| {
"pile_set_name": "StackExchange"
} |
Q:
scrapy itemloaders return list of items
def parse(self, response):
for link in LinkExtractor(restrict_xpaths="BLAH",).extract_links(response)[:-1]:
yield Request(link.url)
l = MytemsLoader()
l.add_value('main1', some xpath)
l.add_value('main2', some xpath)
l.add_value('main3', some xpath)
rows = response.xpath("table[@id='BLAH']/tbody[contains(@id, 'BLOB')]")
for row in rows:
l.add_value('table1', some xpath based on rows)
l.add_value('table2', some xpath based on rows)
l.add_value('main3', some xpath based on rows)
yield l.load_item()
I am using an itemloader because I want to preprocess these fields and deal with any null values easily.
Each row of the table is supposed to be an entity which has the main1, 2, 3...etc fields plus its own fields.
However, the above code keeps overwriting the single l itemloader, so only the last row is returned for each main page.
Question:
how can I combine the main page data with each table row entry using an itemloader? If I used 2 item loaders one for each section, how could they be combined?
For future reference:
def newparse(self, response):
for link in LinkExtractor(restrict_xpaths="BLAH",).extract_links(response)[:-1]:
yield Request(link.url)
ml = MyitemLoader()
ml.add_value('main1', some xpath)
ml.add_value('main2', some xpath)
ml.add_value('main3', some xpath)
main_item = ml.load_item()
rows = response.xpath("table[@id='BLAH']/tbody[contains(@id, 'BLOB')]")
for row in rows:
bl = MyitemLoader(item=main_item, selector=row)
bl.add_value('table1', some xpath based on row)
bl.add_value('table2', some xpath based on row)
bl.add_value('main3', some xpath based on row)
yield bl.load_item()
A:
You need to instantiate a new ItemLoader in the loop providing an item argument:
l = MytemsLoader()
l.add_value('main1', some xpath)
l.add_value('main2', some xpath)
l.add_value('main3', some xpath)
item = l.load_item()
rows = response.xpath("table[@id='BLAH']/tbody[contains(@id, 'BLOB')]")
for row in rows:
l = MytemsLoader(item=item)
l.add_value('table1', some xpath based on rows)
l.add_value('table2', some xpath based on rows)
l.add_value('main3', some xpath based on rows)
yield l.load_item()
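For reference, the reason passing item= works: the new loader is seeded with the fields already loaded from the main page, and each row's values are added on top. The merging idea can be sketched with plain dicts, no Scrapy needed (field names here are illustrative):

```python
main_item = {"main1": "v1", "main2": "v2"}  # fields loaded once per page

rows = [{"table1": "a"}, {"table1": "b"}]  # one dict per table row

items = []
for row in rows:
    merged = dict(main_item)  # start from the page-level fields
    merged.update(row)        # add the row-level fields
    items.append(merged)

print(items)
```

Note that in real Scrapy, `ItemLoader(item=item)` wraps the same item object, so collected field values can accumulate across iterations; making a fresh copy per row, as sketched above, is the intent.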
Q:
Text animation alignment issue with mobile browser
I have tried a text sliding animation on my website www.vaatasmart.com
It plays fine in tablet and desktop browsers, but it shifts to the left corner in mobile browsers and I can't see the animation there.
HTML:
<link href='http://fonts.googleapis.com/css?family=Muli' rel='stylesheet' type='text/css'>
<div class="content">
<div class="visible">
<p>
SM
</p>
<ul>
<li>ART</li>
<li>allART</li>
</ul>
</div>
</div>
CSS:
#collection-54800bb4e4b0ff750b0f782c{
body {
width:50%;
height:50%;
position:fixed;
background-color:#F2F2F2;
}
.content {
width:537px;
font-size:62px;
line-height:80px;
font-family:'Muli';
color:#FE642E;
height:100px;
position:absolute;
top:50%;
left:50%;
margin-top:-40px;
margin-left:130px;
}
.visible {
float:left;
font-weight:800;
overflow:hidden;
height:80px;
}
p {
display:inline;
float:left;
margin:0;
}
ul {
margin-top:0;
padding-left:101px;
text-align:left;
list-style:none;
animation:6s linear 0s normal none infinite change;
-webkit-animation:6s linear 0s normal none infinite change;
-moz-animation:6s linear 0s normal none infinite change;
-o-animation:6s linear 0s normal none infinite change;
}
ul li {
line-height:80px;
margin:0;
}
@-webkit-keyframes opacity {
0% {opacity:0;}
50% {opacity:1;}
100% {opacity:0;}
}
@keyframes opacity {
0% {opacity:0;}
50% {opacity:1;}
100% {opacity:0;}
}
@-webkit-keyframes change {
0% {margin-top:0;}
15% {margin-top:0;}
25% {margin-top:-40px;}
35% {margin-top:-60px;}
45% {margin-top:-80px;}
55% {margin-top:-80px;}
65% {margin-top:-80px;}
75% {margin-top:-60px;}
85% {margin-top:-40px;}
100% {margin-top:0;}
}
@keyframes change {
0% {margin-top:0;}
15% {margin-top:0;}
25% {margin-top:-40px;}
35% {margin-top:-60px;}
45% {margin-top:-80px;}
55% {margin-top:-80px;}
65% {margin-top:-80px;}
75% {margin-top:-60px;}
85% {margin-top:-40px;}
100% {margin-top:0;}
}
}
A:
I've checked your code; you can fix that by using media queries. Apply the CSS below and it will display well:
@media all and (max-width: 658px) {
.content { margin-left: -150px; }
}
Try applying it for other dislocated elements you may have on mobile devices.
Hope it helped.
Q:
Very weird behavior showing a YouTube video on iPad
I have an array of YouTube video links, and I put them in a tableview. When the user clicks on one row a WebView is pushed in, and I point it to the video URL like this:
[web loadRequest:[NSURLRequest requestWithURL:[NSURL URLWithString:@"https://www.youtube.com/watch?v=wQXIuYVNM9Q"]]];
this was working perfectly until yesterday, and the result was
but since today the behavior is different! What happens is that the first time I click on a row, the video is displayed as always. But if I go back and click on the same video again, it doesn't appear anymore, and instead I get the following screen
This is very weird! If I choose another video from the list, it loads the first time; from the second time on it doesn't, and I get the same useless screen with the video thumbnails.
Even if I uninstall the app and start it again, the videos that were already clicked don't work, while the others work just once. It looks like a cache problem or something similar...
Please help me, this is driving me mad!
A:
Found a weird workaround using arc4random:
NSString *s = @"https://www.youtube.com/watch?v=DLl92XBsYmc&feature=youtube_gdata";
s = [s stringByAppendingFormat:@"%u", arc4random()]; // %u: arc4random() returns an unsigned 32-bit integer
[web loadRequest:[NSURLRequest requestWithURL:[NSURL URLWithString:s]]];
so every time the address is different, and the video gets displayed.
Q:
TestContext property in MSTest giving null reference exception
I am trying to create a unit test project in Visual Studio 2017. I want to use TestContext class properties like TestName in my test class and test methods. But when I run the project in debug mode I get a null reference for the TestContext object.
Below is the code :
using System;
using Microsoft.VisualStudio.TestTools.UnitTesting;
namespace UnitTestProject2
{
[TestClass]
public class UnitTest1
{
private TestContext _testcontext;
public TestContext Testcontext
{
get { return _testcontext; }
set { _testcontext = value; }
}
[TestMethod]
public void TestMethod2()
{
Console.WriteLine(Testcontext.TestName);
}
}
}
I am not able to find out how to fix this problem; with a Coded UI project it works fine.
the exception
A:
You need to change the definition for TestContext property.
using System;
using Microsoft.VisualStudio.TestTools.UnitTesting;
namespace UnitTestProject2
{
[TestClass]
public class UnitTest1
{
public TestContext TestContext { get; set; }
[TestMethod]
public void TestMethod2()
{
Console.WriteLine(TestContext.TestName);
}
}
}
Q:
How to remove configure volumes in docker images
I use docker run -it -v /xx1 -v /xx2 [image] /bin/bash to create a container.
Then commit to image and push to docker hub.
use docker inspect [image]
The Volumes detail is
"Volumes": {
"/xx1": {},
"/xx2": {}
},
Now I want to remove volume /xx1 in this image.
What should I do?
A:
I don't think this is possible with the Docker tools right now. You can't remove a volume from a running container, or if you were to use your image as the base in a new Dockerfile you can't remove the inherited volumes.
Possibly you could use Jérôme Petazzoni's nsenter tool to manually remove the mount inside the container and then commit. You can use that approach to attach a volume to a running container, but there's some fairly low-level hacking needed to do it.
A:
There is a workaround: you can docker save image1 -o archive.tar, edit the metadata json file, and docker load -i archive.tar. That way the history and all the other metadata are preserved.
To help with save/unpack/edit/load I have created a little script; have a look at docker-copyedit. Specifically for your question you would execute
./docker-copyedit.py from [image] into [image2] remove volume /xx1
Q:
Meteor - Angular 2 Flex Layout Mobile Issue
I'm developing with Meteor and Angular 2, using Angular 2's Flex Layout.
I want a responsive result when the screen size goes below 520px.
It works well when I simply resize my browser window, but on a real device or in Chrome's device toolbar it breaks.
Here is my screen shot of result, and codes
import { Component } from '@angular/core';
@Component({
selector: 'demo-responsive-layout-direction',
template: `
<div class="containerX">
<div fxLayout="row" fxLayout.xs="column" fxLayout.sm="column" fxFlex class="coloredContainerX box" >
<div fxFlex> I'm above on mobile, and to the left on larger devices. </div>
<div fxFlex> I'm below on mobile, and to the right on larger devices. </div>
</div>
</div>
`
})
export class DemoResponsiveLayoutDirection { }
Result Images
Working with Screen Resizing
Not Working with Mobile mode
A:
I solved this issue by myself!!
You should add this meta tag to your main.html (or index.html) so the browser detects mobile devices properly (not just the screen size):
<head>
<meta name="viewport" content="width=device-width, initial-scale=1">
<title>Your Site Title</title>
</head>
Q:
Reading and understanding AspectJ pointcuts?
/* 0 */ pointcut services(Server s): target(s) && call(public * *(..))
This pointcut, named services, picks out those points in the execution
of the program when Server objects have their public methods called.
It also allows anyone using the services pointcut to access the Server
object whose method is being called.
(taken from https://eclipse.org/aspectj/doc/released/progguide/language-anatomy.html)
I'm trying to understand AspectJ's pointcuts, and am quite a bit confused at the moment. My main question is: how do you read the above pointcut, and how do you "puzzle" its meaning together?
To illustrate my confusion, let's try to build things up from scratch:
The following pointcut would intercept all public method calls to any object, right?
/* 1 */ pointcut services() : call(public * *(..))
Now, what about this:
/* 2 */ pointcut services() : call(public * Server.*(..))
I assume that would intercept any points when public methods of the Server object are called.
Now, how do I get from here to the initial example 0? And how do I read it?
Would you first provide the parameter list when building the pointcut up?
/* 3a */ pointcut services(Server s) : call(public * *(..))
Is that the same as number 2 above? (I have a feeling it wouldn't work, and if it did, it would "intercept" every public method call, just like number 1.) Anyway, would the following be the same? (I'm not "capturing" s with a native pointcut yet, so I can't really define it, can I?)
/* 4a */ pointcut services(Server /* only type, no variable */) : call(public * *(..))
Or would you start by specifying a native pointcut, to "capture" the target object, like so:
/* 3b */ pointcut services() : target(s) && call(public * *(..))
I suppose that would still intercept all public method calls on any object?
Would the following work to only intercept calls on the Server object, and to "capture" that object (without making it available to be passed on later, e.g. to an advice)?
/* 5 */ pointcut services(/*nothing here*/) : target(s) && call(public * Server.*(..))
Now, going back to the original pointcut:
/* 0 */ pointcut services(Server s): target(s) && call(public * *(..))
Is that the same as
/* 6 */ pointcut services(Server s): target(s) && call(public * Server.*(..))
So, to summarize:
How do you start deciphering 0?
Do you first look at the target pointcut, then at the paramter type of the services pointcut and read it "inside out"/"from right to left"?
Or do you look at the parameter list first, and then look into the services pointcut to see where the argument came from (i.e. target(s))?
Or am I making this far too complicated? Am I missing an important bit somewhere to help me understand this?
Edit: the manual explains it left to right - but where does the argument to parameter Server s come from if I haven't "executed" target(s) yet?
A:
1: Yes, it intercepts any public method call.
2: It intercepts any public method call on an object declared as a Server, whereas 0 intercepts any public call on an object which is an instance of a Server. See the semantics.
3a: As s is not bound, it doesn't compile:
[ERROR] formal unbound in pointcut
.../src/main/aspect/MyAspect.aj:18
pointcut services(Server s): call(public * *(..));
4a: The syntax is not valid, just like you need to name parameters when declaring methods in an interface:
[ERROR] Syntax error, insert "... VariableDeclaratorId" to complete FormalParameterList
.../src/main/aspect/MyAspect.aj:18
pointcut services(Server): call(public * *(..));
^
3b: It's not valid either, s hasn't been declared:
[WARNING] no match for this type name: s [Xlint:invalidAbsoluteTypeName]
.../src/main/aspect/MyAspect.aj:18
pointcut services(): target(s) && call(public * *(..));
5: Like 3b, s hasn't been declared.
6: It's not the same as 0, it only matches public Server method calls (i.e. declared in Server) to a Server instance.
I have illustrated the different cases in a Github repository: switch between the branches to try them. There's an extra case in the aspect7 branch, based on 6, where I override hashCode() in Server.
You can (and should) try yourself, to get a better understanding.
To answer your final question, the argument to the pointcut comes from the fact that we want to (be able to) access the target of the call in the advice, by having it supplied as a parameter to the advice. The signature of the advice needs to contain parameters for all the referenced pointcuts, and the pointcut parameters need to reference parameters in the advice.
So, to have a Server parameter in the advice, I need it in the pointcut, and it needs to be bound to something in the pointcut definition.
Q:
Predefined values for user selection in custom post type (metadata vs taxonomy)
I'm building a plugin that allows visitors to submit software configurations to share with others. They input several bits of info (their name, the software and the machine) and then upload their XML profile, which is ultimately converted into a custom post type.
As of right now, I am storing everything they input like their name, the software, the machine type, etc. as metadata. I want to have predefined options for software/machine types though, allowing them to choose from these options when submitting.
What would be a good way to achieve this in WordPress? Should I just keep these as pre-defined values in a select box via the form and then save the data as text in metadata, or is there a better alternative?
function slicer_profile_form()
{
echo '<form action="' . esc_url( $_SERVER['REQUEST_URI'] ) . '" method="post" enctype="multipart/form-data">';
echo '<p>';
echo 'Your Name<br />';
echo '<input type="text" name="slicer-profile-author" pattern="[a-zA-Z0-9 ]+" value="' . ( isset( $_POST["slicer-profile-author"] ) ? esc_attr( $_POST["slicer-profile-author"] ) : '' ) . '" size="48" />';
echo '</p>';
echo '<p>';
echo 'Profile Name<br />';
echo '<input type="text" name="slicer-profile-name" pattern="[a-zA-Z0-9 ]+" value="' . ( isset( $_POST["slicer-profile-name"] ) ? esc_attr( $_POST["slicer-profile-name"] ) : '' ) . '" size="48" />';
echo '</p>';
echo '<p>';
echo 'Profile Description<br />';
echo '<textarea name="slicer-profile-description" pattern="[a-zA-Z0-9 ]+" value="' . ( isset( $_POST["slicer-profile-description"] ) ? esc_attr( $_POST["slicer-profile-description"] ) : '' ) . '" rows="4"></textarea>';
echo '</p>';
echo '<p>';
echo '3D Printer Model<br />';
echo '<select name="slicer-profile-model">';
echo '<option value="a8">Anet A8</option>';
echo '<option value="cr10">Creality CR-10</option>';
echo '<option value="mini">Monoprice Select Mini</option>';
echo '<option value="makerselect">Monoprice Maker Select</option>';
echo '<option value="ultimate">Monoprice Ultimate</option>';
echo '<option value="prusamk2">Prusa MK2/MK2S/MK3</option>';
echo '</select>';
echo '</p>';
echo '<p>';
echo 'Slicer Software<br />';
echo '<select name="slicer-profile-software">';
echo '<option value="cura">Cura</option>';
echo '<option value="s3d">Simplify3D</option>';
echo '<option value="slic3r">Slic3r</option>';
echo '</select>';
echo '</p>';
echo '<p>';
echo 'Slicer Profile<br />';
echo '<input type="file" name="slicer-profile" accept=".fff,.ini,.curaprofile">';
echo '</p>';
echo '<p><input type="submit" name="slicer-profile-submitted" value="Submit"/></p>';
echo '</form>';
}
A:
If you want to group items together, use a taxonomy. Aside from that being the literal definition of the word, it makes it easy to pull in all posts for the same software and keep those grouped. That's what a Taxonomy excels at.
If you just have a more overall CPT, that just need to have a bit of arbitrary information attached to them, that's what Custom Fields excel at. This is mainly for arbitrary information that's not categorically relatable, like Price, or Event Start Date, or Facebook Group/Page URL.
It sounds like you would be better suited with a taxonomy/term relationship for Software and Machine Type, though ultimately it's up to you. You can query posts based on custom fields, but categorically definable information is better suited for a taxonomy.
As an unrelated aside, is there any particular reason you're using an echo statement per line instead of just closing your PHP tag and echoing the few PHP variables you have inside standard HTML?
function slicer_profile_form(){ ?>
<form action="<?= esc_url( $_SERVER['REQUEST_URI'] ); ?>" method="post" enctype="multipart/form-data">
<label>
Your Name<br>
<input type="text" name="slicer-profile-author" pattern="[a-zA-Z0-9 ]+" value="<?= ( isset( $_POST["slicer-profile-author"] ) ? esc_attr( $_POST["slicer-profile-author"] ) : '' ); ?>" size="48" />
</label>
<label>
Profile Name<br>
<input type="text" name="slicer-profile-name" pattern="[a-zA-Z0-9 ]+" value="<?= ( isset( $_POST["slicer-profile-name"] ) ? esc_attr( $_POST["slicer-profile-name"] ) : '' ); ?>" size="48" />
</label>
<label>
Profile Description<br>
<textarea name="slicer-profile-description" rows="4"><?= ( isset( $_POST["slicer-profile-description"] ) ? esc_textarea( $_POST["slicer-profile-description"] ) : '' ); ?></textarea>
</label>
<label>
3D Printer Model<br>
<select name="slicer-profile-model">
<option value="a8">Anet A8</option>
<option value="cr10">Creality CR-10</option>
<option value="mini">Monoprice Select Mini</option>
<option value="makerselect">Monoprice Maker Select</option>
<option value="ultimate">Monoprice Ultimate</option>
<option value="prusamk2">Prusa MK2/MK2S/MK3</option>
</select>
</label>
<label>
Slicer Software<br>
<select name="slicer-profile-software">
<option value="cura">Cura</option>
<option value="s3d">Simplify3D</option>
<option value="slic3r">Slic3r</option>
</select>
</label>
<label>
Slicer Profile<br>
<input type="file" name="slicer-profile" accept=".fff,.ini,.curaprofile">
</label>
<p>
<input type="submit" name="slicer-profile-submitted" value="Submit" />
</p>
</form>
<?php } ?>
Q:
Joining a transaction in Hibernate
I'm migrating the code from EJB to Spring-Hibernate. How do I join the transaction and rollback if failure occurs?
Below is the code in EJB :
entityManager.joinTransaction();
entityManager.persist(xyz);
entityManager.flush();
UPDATE 1:
How do we join two transactions happening on different databases?
There are 2 transactions which needs to performed atomically. If the second transaction fails, 1st transaction must be rollbacked. How to implement this?
A:
The purpose of entityManager.joinTransaction(); is to notify the persistence context to synchronize itself with the current transaction (reference)
Since the code is being migrated to Spring, consider leveraging the out-of-the-box transaction abstraction available via @Transactional. This will make the call to joinTransaction() redundant, and the rollback / commit will be taken care of by Spring.
Note - Ensure that the transaction settings are chosen appropriately so as to be inline with current implementation.
Q:
Split/slicing three different string arrays in Python 3
Code
def clouds_function():
"""
Extracts Cloud Height and Type from the data
Returns: Cloud Height and Type CCCXXX
"""
clouds1 = content[1]
clouds1 = clouds1[15:len(clouds1)]
clouds1 = clouds1.split()
clouds2 = content[2]
clouds2 = clouds2 + " "
clouds2=[clouds2[y-8:y] for y in range(8, len(clouds2)+8,8)]
clouds3 = content[3]
clouds3 = clouds3 + " "
print(clouds3)
clouds3=[clouds3[y-8:y] for y in range(8, len(clouds3)+8,8)]
return(clouds3)
print(clouds_function())
Sample Data
content[1] = 'OVC018 BKN006 OVC006 OVC006 OVC017 OVC005 OVC005 OVC016 OVC029 OVC003 OVC002 OVC001 OVC100'
content[2] =' OVC025 OVC010 OVC009 OVC200'
content[3] =' OVC100 '
I tried
def split(s, n):
if len(s) < n:
return []
else:
return [s[:n]] + split(s[n:], n)
It returns ['OVC100 '] for content[3]
I need
['','OVC100','','','','','','','','','','','']
The results
(['OVC018', 'BKN006', 'OVC006', 'OVC006', 'OVC017', 'OVC005', 'OVC005', 'OVC016', 'OVC029', 'OVC003', 'OVC002', 'OVC001', 'OVC100'], ['OVC025 ', ' ', ' ', ' ', 'OVC010 ', 'OVC009 ', ' ', ' ', ' ', ' ', ' ', 'OVC200 '], ['OVC100 '])
I need homogeneous arrays
It might be a problem because each string is an uneven length to begin with.
A:
Your data has length problems and different gap sizes (2 or 1 characters):
c[1] = 'OVC018 BKN006 OVC006 OVC006 OVC017 OVC005 OVC005 OVC016 OVC029 OVC003 OVC002 OVC001 OVC100'
c[2] =' OVC025 OVC010 OVC009 OVC200'
c[3] =' OVC100 '
c[2] and c[3] use 9 characters to the start of the 2nd value, c[1] only 8
between 'OVC005 OVC016' is only 1 space, normally 2
c[3] is much shorter than the others
Slicing is good if you have constant or predictable lengths (you haven't) - this can be better solved using simple string addition and replacements of space-streches by a character used to split it afterwards:
make all strings equally long - filling up with spaces
replace all [8,7,6,2,1] long stretches of spaces by '-' - a (new) artificial splitter character
split at '-'
content= ['OVC018 BKN006 OVC006 OVC006 OVC017 OVC005 OVC005 OVC016 OVC029 OVC003 OVC002 OVC001 OVC100',
' OVC025 OVC010 OVC009 OVC200',
' OVC100 ']
# extend data
max_len = max(len(data) for data in content)
for i,c in enumerate(content):
# fix legths
content[i] = c + " " * (max_len-len(c))
# replace stretches of spaces by a splitter character
content[i] = content[i].replace(" "*8,"-").replace(" "*7,"-").replace(" "*6,"-").replace(" "*2,"-").replace(" ","-")
hom = [c.split("-") for c in content]
for c in hom:
print(c,"\n")
Output:
['OVC018', 'BKN006', 'OVC006', 'OVC006', 'OVC017', 'OVC005', 'OVC005', 'OVC016', 'OVC029', 'OVC003', 'OVC002', 'OVC001', 'OVC100']
['', 'OVC025', '', '', '', 'OVC010', 'OVC009', '', '', '', '', '', 'OVC200']
['', 'OVC100', '', '', '', '', '', '', '', '', '', '', '']
Q:
I have written code to find the reverse of array elements but it does not give the required output
# include <iostream>
using namespace std;
const int size=5;
void inputdata(int arr[], int n); //function prototype
void display(int arr[],int n); //function prototype
void Reverse(int arr[],int n); //function prototype
int main() //start of main function
{
int list[size]; //array declaration
inputdata(list ,size); //fuction call
display(list,size); //fuction call
Reverse(list,size); //fuction call
}
void inputdata(int arr[], int n) //function definition that takes input from user
{
int index;
for(index=0;index<n;index++) //loop to take input from user
{
cout<<"Enter element ["<<index<<"]"<<endl;
cin>>arr[index];
}
}
void display(int arr[],int n) //displays the input
{
int index;
for(index=0;index<n;index++) //loop to display output
{
cout<<"Element on ["<<index<<"] is:"<<arr[index]<<endl;
}
}
void Reverse(int arr[],int n) //function to find reverse
{
int i,temp; //here i have taken a variable temp of integer type for swapping
for(i=0;i<n/2;i++)
{
temp=arr[i];
arr[i]=arr[n-i-1];
arr[n-i-1]=arr[i];
}
cout<<"the reverse order array is:"<<endl;
for(i=0;i<n;i++) //this loop is used to display the reverse order
{
cout<<arr[i]<<endl;
}
}
The above C++ code is meant to reverse the elements of an array taken as input from the user. The inputdata function reads the input, the display function shows it, and the Reverse function computes the reverse.
But it does not give the proper reverse output: e.g. if I enter the 5 array elements 1,2,3,4,5 the output should be 5,4,3,2,1, but instead I get 5,4,3,4,5.
A:
Your swap code looks like:
temp=arr[i];
arr[i]=arr[n-i-1];
arr[n-i-1]=arr[i];
But it should be:
temp=arr[i];
arr[i]=arr[n-i-1];
arr[n-i-1]=temp;
A cleaner and simpler option would be to use the swap function in the algorithm library.
Q:
Push notification not working - Android
I am trying to send a push notification on a button click event, but I don't know why it is not working. It doesn't show any error though. Can anybody find an error in the code?
public class Verify extends AppCompatActivity {
public static String TAG=Verify.class.getSimpleName();
NotificationCompat.Builder notification;
private static final int uniqueID=12345;
@Override
protected void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
setContentView(R.layout.activity_verify);
notification=new NotificationCompat.Builder(this);
notification.setAutoCancel(true);
}
public void onVerify(View v)
{
this.defineNotification();
//other code follows
}
public void defineNotification()
{
notification.setContentTitle("Successfully Signed Up");
notification.setContentText("Hi, you just Signed Up as a Vendor");
notification.setWhen(System.currentTimeMillis());
Intent intent=new Intent(this,OtherActivity.class);
PendingIntent pendingIntent=PendingIntent.getActivity(this,0,intent,PendingIntent.FLAG_UPDATE_CURRENT);
notification.setContentIntent(pendingIntent);
NotificationManager nm=(NotificationManager) getSystemService(Context.NOTIFICATION_SERVICE);
nm.notify(uniqueID,notification.build());
//Log.i(TAG, "coming here in notification");
}
}
Here is the Android Manifest Code :
<?xml version="1.0" encoding="utf-8"?>
<manifest xmlns:android="http://schemas.android.com/apk/res/android"
package="com.example.sachinparashar.xyz">
<uses-permission android:name="android.permission.ACCESS_NETWORK_STATE"/>
<uses-permission android:name="android.permission.INTERNET"/>
<application
android:allowBackup="true"
android:icon="@mipmap/ic_launcher"
android:label="@string/app_name"
android:supportsRtl="true"
android:theme="@style/AppTheme">
<activity
android:name=".SignUp"
android:label="SignUp"
android:theme="@style/AppTheme"
>
<intent-filter>
<action android:name="android.intent.action.MAIN" />
<category android:name="android.intent.category.LAUNCHER" />
</intent-filter>
</activity>
<activity
android:name=".Verify"
android:label="@string/title_activity_verify"
android:theme="@style/AppTheme"/>
<activity
android:name=".VendorDetails"
android:label="Vendor Details"
android:theme="@style/AppTheme"/>
<activity
android:name=".OrderTypes"
android:label="Orders"
android:theme="@style/AppTheme" />
</application>
</manifest>
A:
change this:
PendingIntent pendingIntent = PendingIntent.getActivity(this, 0, intent, PendingIntent.FLAG_UPDATE_CURRENT);
with this:
PendingIntent pendingIntent = PendingIntent.getActivity(this, 0, intent, 0);
Q:
How could I get the count of a user's friends who also like a Facebook page?
I need only the count of a user's friends who also like a Facebook page. I have already made a Facebook app for that, and the user has to go to that app so that we can receive their details.
And then using FQL we run a query
SELECT uid FROM page_fan WHERE page_id = 'Page Id' AND uid IN (SELECT uid2 FROM friend WHERE uid1 = me())
But it only shows the list of friends who use the app and also like that page. I also need the friends who like the page but don't use the app.
Is that possible?
A:
If you're using an app with Graph API v2.0 or greater, this is no longer possible via FQL.
What you can use is the new Social Context API, which should provide the info (the plain user count) you desire. Have a look at https://developers.facebook.com/docs/graph-api/reference/v2.0/page.context/friends_who_like
Sample request:
GET /{Page_ID}?fields=context{friends_who_like}
will generate an output like this:
{
"context": {
"friends_who_like": {
"data": [
{
"id": "8490902464564645645",
"name": "Friend 1"
}
],
"paging": {
"cursors": {
"before": "ODQ5MDkwMjQ2",
"after": "ODQ5MDkwMjQ2"
}
},
"summary": {
"social_sentence": "4 of your friends like this.",
"total_count": 4
}
}
},
"id": "40796308305"
}
The property context.friends_who_like.summary.total_count will contain the overall count of friends liking the sample CocaCola page (ID 40796308305).
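To pull the plain count out of that response, read context.friends_who_like.summary.total_count. A small sketch parsing the sample payload above with Python's json module (no live API call is made; the JSON is the sample response):

```python
import json

# The sample response body from GET /{Page_ID}?fields=context{friends_who_like}
response_body = """
{
  "context": {
    "friends_who_like": {
      "data": [{"id": "8490902464564645645", "name": "Friend 1"}],
      "summary": {"social_sentence": "4 of your friends like this.", "total_count": 4}
    }
  },
  "id": "40796308305"
}
"""

payload = json.loads(response_body)
count = payload["context"]["friends_who_like"]["summary"]["total_count"]
print(count)  # -> 4
```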
Q:
store the incoming messages in sorted order (as per the incoming numbers)
I am designing an app in which I store the incoming messages in my own database, and from this database I display the messages in my application.
For this I am using this code:
To insert the data,
mydb = openOrCreateDatabase("db",MODE_PRIVATE, null);
mydb.execSQL("CREATE TABLE IF NOT EXISTS sms_inbox (SMS_TEXT varchar , SENDER_NUMBER varchar );");
mydb.execSQL("INSERT INTO sms_inbox VALUES('" + stBody + "', '" + stPhoneNumber + "');");
To read the data,
Cursor c = mydb.rawQuery("SELECT * from sms_inbox", null);
and then I display these messages.
So now my problem is that I want to show these messages grouped according to the phone numbers.
For example:
phoneno.-12345
"ALL the messages of 12345 will be shown here"
phone no.-23456
"ALL the messages of 23456 will be shown here"
A:
Well, you could store those messages as more entries in the sms_inbox table, but it would be helpful to store a timestamp as well, to be able to distinguish each message.
When you query, you can pass the phone number you're interested in as a parameter: SELECT * FROM sms_inbox WHERE SENDER_NUMBER = ?
Or you could make a select ordered by sender: SELECT * FROM sms_inbox ORDER BY SENDER_NUMBER DESC;
In the end it all depends on the use-cases you have. You could also use another table for keeping only the messages for a number and map a one-to-many relationship.
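The same queries can be tried outside Android, e.g. with Python's sqlite3 module (table and column names follow the question; the TS timestamp column is the suggested addition for ordering messages):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sms_inbox (SMS_TEXT TEXT, SENDER_NUMBER TEXT, TS INTEGER)")
conn.executemany(
    "INSERT INTO sms_inbox VALUES (?, ?, ?)",
    [("hi", "12345", 1), ("yo", "23456", 2), ("again", "12345", 3)],
)

# All messages from one sender, oldest first:
for_sender = conn.execute(
    "SELECT SMS_TEXT FROM sms_inbox WHERE SENDER_NUMBER = ? ORDER BY TS",
    ("12345",),
).fetchall()
print(for_sender)  # -> [('hi',), ('again',)]

# Or everything grouped by sender:
all_rows = conn.execute(
    "SELECT SENDER_NUMBER, SMS_TEXT FROM sms_inbox ORDER BY SENDER_NUMBER, TS"
).fetchall()
print(all_rows)  # -> [('12345', 'hi'), ('12345', 'again'), ('23456', 'yo')]
```

On Android the equivalent would be a rawQuery with the selection arguments array, but the SQL itself is identical.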
Q:
How do i use SetPixel on a new Bitmap?
The code:
Bitmap newbmp = new Bitmap(512, 512);
foreach (Point s in CommonList)
{
w.WriteLine("The following points are the same" + s);
newbmp.SetPixel(s.X, s.Y, Color.Red);
}
w.Close();
newbmp.Save(@"c:\newbmp\newbmp.bmp", ImageFormat.Bmp);
newbmp.Dispose();
When it tries to save the new bmp I'm getting an exception on the line:
newbmp.Save(@"c:\newbmp\newbmp.bmp", ImageFormat.Bmp);
The exception is:
ExternalException: A generic error occurred in GDI+
A:
Generally, when you have such save errors, you must make sure that the directory you are saving to (here c:\newbmp) exists and that you have the necessary privileges to write to it.
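For illustration, here is a sketch of the same pre-flight check in Python (the directory and file names are made up for the demo); in C# the equivalent would be calling Directory.CreateDirectory before Bitmap.Save:

```python
import os
import tempfile

# Illustrative sketch: create the target directory before saving,
# so the save call cannot fail on a missing path.
target_dir = os.path.join(tempfile.gettempdir(), "newbmp_demo")
os.makedirs(target_dir, exist_ok=True)  # no error if it already exists

path = os.path.join(target_dir, "newbmp.bmp")
with open(path, "wb") as f:
    f.write(b"BM")  # stand-in bytes; a real image library would write here

saved = os.path.exists(path)
print(saved)
```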
| {
"pile_set_name": "StackExchange"
} |
Q:
Change colors of x-axis labels in dotplot from Lattice package R
I was wondering if there was a way to change the color of certain x-axis labels on a dotplot that I created using the lattice package in R.
This is an example of my data/code:
State <- factor(c("AZ", "AR", "NJ"))
Value <- c(1.2, 4.5, 2.0, 1.5, 4.0, 1.4)
Year <- c(2000, 2000, 2000, 2005, 2005, 2005)
p <- dotplot(Value ~ State, groups = Year, main = "Test Data",
xlab = "State/Territory", ylab = "Data", auto.key = TRUE)
I would like AZ and AR to be grouped together (be the same color text in the x-axis, we can make it blue)
I would like NJ to be its own group (be a different color in the x-axis text, we can make it pink)
Could I also draw a vertical line in the graph to better separate the groups?
Thank you for your help!
A:
Controlling this on legacy plotting systems is extremely complicated as can be seen here. One approach might be to use a modern plotting system like ggplot2.
library(ggplot2)
data <- data.frame(State = factor(c("AZ", "AR", "NJ")),Value = c(1.2, 4.5, 2.0, 1.5, 4.0, 1.4), Year = c(2000, 2000, 2000, 2005, 2005, 2005))
ggplot(data,aes(x=State,y=Value,color = State, shape = as.factor(Year))) +
geom_point(size = 3) + scale_color_manual(values=c("blue","blue","pink")) + scale_shape_discrete(name = "Year") +
theme(axis.text.x = element_text(colour = c("blue","blue","pink")), plot.title = element_text(hjust = 0.5)) +
geom_vline(xintercept = 2.5) + labs(title = "Test Data", x = "State/Territory", y = "Data")
| {
"pile_set_name": "StackExchange"
} |
Q:
nsuserdefaults synchronize method slows down the application
I am doing a calculation-intensive operation in loops (hundreds of iterations for iterative formulas). In each loop the values are fetched directly from NSUserDefaults, calculated, and saved back. My question is: should I use the -synchronize method each time I write into NSUserDefaults? I think my application runs much faster without using this method. Does using synchronize slow down the calculations?
A:
Does using synchronize slow down the calculations?
Yes, absolutely. synchronize writes the current user default values to the disk.
Should I use the -synchronize method each time I write into NSUserDefaults?
No, absolutely not. If you have a long loop where you are changing user defaults, the values are kept in memory; it won't mess up your calculations. It is only necessary to save to disk after the loop is done.
synchronize is usually done:
manually, before the app is terminated or sent to background
automatically by the system every few minutes
manually by the program after some important changes are made that you don't want to risk losing in the event of a crash or sudden power off.
In your case, after the long loop, you want to do it for reason 3.
By doing it every time within the loop, you are just unnecessarily writing values to flash, which you likely immediately overwrite.
| {
"pile_set_name": "StackExchange"
} |
Q:
Why is my old Nvidia card failing to start?
I'd like to add a third monitor to my computer.
Unfortunately, my primary Nvidia graphics card only supports two DVI monitors, so I decided to take an old Nvidia graphics card from another computer and placed it in my motherboard's conventional PCI slot.
Once I restarted my computer, Windows was not able to find a driver from Windows Update, so it installed a "Standard VGA Graphics Adapter" driver with a warning icon on it. The error is a code 10 error (This device cannot start). I attempted to install an old driver that is compatible with the secondary card, but this simply ended up with a bugcheck (IRQL_NOT_LESS_OR_EQUAL) since the old drivers seemed to replace the newer ones. I had to reinstall the new drivers in Safe Mode in order to get the system operational.
However, this doesn't solve the problem. I suspected at first that the cause was an IRQ/resource conflict, but Device Manager does not quite explain the IRQs of nonworking devices. Using HWiNFO, the card name and rudimentary data is shown for the older card, but there is no indication that it is operational.
Is my card supposed to run correctly under generic drivers? Is this a WDDM problem? Or is this, as I had suspected, an IRQ conflict that cannot be resolved through normal means?
Specs:
Windows 7 64-bit
8 GB RAM
GeForce GTS 240 (primary)
PNY Technologies GeForce FX 5200 (problem card)
Other notes:
The old PC used 32-bit Windows Vista; the card handled it perfectly.
My motherboard only has one PCIe port, so I cannot add a more modern secondary video card.
I am only interested in a three-monitor setup. I am not interested in any gaming with the secondary card.
A:
The Verdict
The card won't work alongside my primary one. Okay, sure, just because it's old, right? No.
The last driver that supports the GeForce FX 5200 is, in fact, WDDM 1.0-compatible. This is why people report having working Aero, but only when the drivers are installed. As an aside, the 5200 was one of the first cards to support DirectX 9, at least at a rudimentary level. Because this final driver cannot be installed alongside the primary card (as the primary card's oldest compatible driver is many versions newer), the primary card tries to load these old drivers and triggers a bugcheck.
I tried to splice the new and old drivers together, but it did something awkward that led to another type of BSoD that appeared for such a brief moment that I could not even read it.
The "Standard VGA Graphics Adapter" (vgapnp.sys) that Windows resorts to when it can't find a driver is apparently not compatible with WDDM. A non-WDDM driver cannot be loaded with a WDDM driver, so the driver fails to start, showing a Code 10 failure. This is the reason I will never be able to use the cards together. This is also the reason why the 5200 can start by itself, but not when the 240 is around.
The IRQ was a red herring: although ACPI gives Windows full control in assigning IRQs to devices, there is a feature in the PCI bus called IRQ steering, which means that multiple devices can occupy the same IRQ with no conflict. (If you run two PCIe x16 cards together, the two cards only use x8 bus width.)
While the card is capable (in performance) of working with my computer, there is no driver that is new enough to support it.
Sadly, I will have to scrap the whole three-monitor proposition. I have many spare monitors, but the PSU doesn't appear to be able to handle two middle-end cards simultaneously. Moreover, my motherboard only has one PCIe x16 port. There is physically no way to fit a cheap, new card into another slot.
| {
"pile_set_name": "StackExchange"
} |
Q:
Write Excel data directly to OutputStream (limit memory consumption)
I’m looking for a – simple – solution to output big excel files. Input data comes from the database and I’d like to output the Excel file directly to disk to keep memory consumption as low as possible. I had a look to things like Apache POI or jxls but found no way to solve my problem.
And as additional information I need to generate .xls files for pre 2007 Excel, not the new .xlsx xml format. I also know I could generate CSV files but I’d prefer to generate plain Excel…
Any ideas ?
I realize my question isn't so clear, I really want to be able to write the excel file without having to keep the whole in memory...
A:
The only way to do this efficiently is to use a character-based CSV or XML (XLSX) format, because these can be written to the output line by line, so that you only ever need to hold one line in memory at a time. The binary-based XLS format must first be populated completely in memory before it can be written to the output, which of course hogs memory for a large number of records.
I would recommend using CSV for this, as it may be more efficient than XML; plus you have the advantage that any decent database server has export capabilities for it, so you don't need to program/include anything new in Java. I don't know which DB you're using, but if it were for example MySQL, then you could have used LOAD DATA INFILE for this.
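To illustrate the streaming idea in language-neutral terms, here is a minimal Python sketch (the fetch_rows generator stands in for a database cursor): each row is written and discarded immediately, so memory use stays flat regardless of the number of records.

```python
import csv
import io

# Stand-in for a database cursor that yields one record at a time
def fetch_rows():
    for i in range(100_000):
        yield (i, f"name-{i}")

out = io.StringIO()               # in a servlet this would be the response writer
writer = csv.writer(out)
writer.writerow(["id", "name"])   # header
count = 0
for row in fetch_rows():
    writer.writerow(row)          # one row in memory at a time
    count += 1
print(count)
```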
| {
"pile_set_name": "StackExchange"
} |
Q:
Turning a python dict. to an excel sheet
I am having an issue with the below code.
import urllib2
import csv
from bs4 import BeautifulSoup
soup = BeautifulSoup(urllib2.urlopen('http://www.ny.com/clubs/nightclubs/index.html').read())
clubs = []
trains = ["A","C","E","1","2","3","4","5","6","7","N","Q","R","L","B","D","F"]
for club in soup.find_all("dt"):
clubD = {}
clubD["name"] = club.b.get_text()
clubD["address"] = club.i.get_text()
text = club.dd.get_text()
nIndex = text.find("(")
if(text[nIndex+1]=="2"):
clubD["number"] = text[nIndex:nIndex+15]
sIndex = text.find("Subway")
sIndexEnd = text.find(".",sIndex)
if(text[sIndexEnd-1] == "W" or text[sIndexEnd -1] == "E"):
sIndexEnd2 = text.find(".",sIndexEnd+1)
clubD["Subway"] = text[sIndex:sIndexEnd2]
else:
clubD["Subway"] = text[sIndex:sIndexEnd]
try:
cool = clubD["number"]
except (ValueError,KeyError):
clubD["number"] = "N/A"
clubs.append(clubD)
keys = [u"name", u"address",u"number",u"Subway"]
f = open('club.csv', 'wb')
dict_writer = csv.DictWriter(f, keys)
dict_writer.writerow([unicode(s).encode("utf-8") for s in clubs])
I get the error ValueError: dict contains fields not in fieldnames. I don't understand how this can be. Any assistance would be great. I am trying to turn the dictionary into an Excel-readable file.
A:
clubs is a list of dictionaries, where each dictionary has four fields: name, address, number, and Subway. You will need to encode each of the fields:
# Instead of:
#dict_writer.writerow([unicode(s).encode("utf-8") for s in clubs])
# Do this:
for c in clubs:
# Encode each field: name, address, ...
for k in c.keys():
c[k] = c[k].encode('utf-8').strip()
# Write to file
dict_writer.writerow(c)
Update
I looked at your data and some of the fields have ending new line \n, so I updated the code to encode and strip white spaces at the same time.
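For reference, here is a minimal self-contained DictWriter example in Python 3 syntax, where the file object handles encoding so no manual .encode calls are needed (the sample club data below is made up):

```python
import csv
import io

# DictWriter expects one dict per row whose keys all appear in fieldnames.
keys = ["name", "address", "number", "Subway"]
clubs = [
    {"name": "Club A", "address": "1 Main St", "number": "(212) 555-0100", "Subway": "Subway: A, C"},
    {"name": "Club B", "address": "2 Main St", "number": "N/A", "Subway": "Subway: L"},
]

out = io.StringIO()  # use open("club.csv", "w", newline="", encoding="utf-8") for a real file
writer = csv.DictWriter(out, fieldnames=keys)
writer.writeheader()
for club in clubs:
    writer.writerow(club)

lines = out.getvalue().splitlines()
print(lines[0])
```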
| {
"pile_set_name": "StackExchange"
} |
Q:
How should I show that for each $t>0$, $P(|X| \ge t) \le E[\phi(X)] / \phi(t)$?
Suppose $X$ is a random variable and $\phi:(-\infty,\infty) \to(0,\infty)$ satisfies $\phi(-t)=\phi(t)$. Assume that $\phi(\cdot)$ is an increasing function on $(0,\infty)$. Show that for each $t>0$, $P(|X| \ge t) \le E[\phi(X)] / \phi(t)$.
My work:
I first identified that $\phi(\cdot)$ is an even function. Since $\phi(\cdot)$ is increasing on $(0,\infty)$, $E[\phi(X)] / \phi(t)$ shrinks as $t \to \infty$, since the denominator grows while the numerator is a constant.
Working on the LHS of the inequality:
$P(|X| \ge t)=P(X \le -t)+P(X \ge t)$. However, I do not know where to go from here.
A:
This is the general approach used in what are called the Cramér–Chernoff bounds:
\begin{align}
\Bbb P(|X|\ge t) &\le \Bbb P(\phi(X)\ge\phi(t)) \; \; \;\text{(Because $\phi$ is increasing and even)}\\
&\le \frac{\Bbb E(\phi(X))}{\phi(t)} \, \; \; \text{(Markov's inequality)} \\
\end{align}
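The Markov step used above can be spelled out: since $\phi \ge 0$, restricting the expectation to the event $\{\phi(X)\ge\phi(t)\}$ gives

```latex
\Bbb E(\phi(X)) \;\ge\; \Bbb E\left(\phi(X)\,\mathbf{1}_{\{\phi(X)\ge\phi(t)\}}\right) \;\ge\; \phi(t)\,\Bbb P(\phi(X)\ge\phi(t)),
```

and dividing both sides by $\phi(t) > 0$ yields the bound.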
| {
"pile_set_name": "StackExchange"
} |
Q:
javax.jms.InvalidClientIDException: Broker: localhost - Client: FS_Proceduer already connected from /127.0.0.1:port
How do you resolve this JMSException? Thanks!
Broker: localhost - Client: FS_Proceduer already connected
javax.jms.InvalidClientIDException: Broker: localhost - Client: FS_Proceduer already connected from /127.0.0.1:56556
This is triggered by this method:
private void connectAndInitActiveMQ() throws JMSException{
logger.debug("It's ready to connect to jms service");
if(null != connection){
try{
logger.debug("Closing connection");
connection.close();
}catch(Exception e){
logger.error(e.getMessage(), e);
}
}
logger.debug("Creating a new connection");
logger.debug("Is queueConnectionFactory null? "+(queueConnectionFactory==null));
connection = queueConnectionFactory.createConnection();
logger.debug("Is the new connection null? "+(connection==null));
logger.debug("Starting the new connection");
connection.start();
logger.debug("Connected successfully: " + connection);
session = connection.createSession(true, Session.AUTO_ACKNOWLEDGE);
queue = session.createQueue(queueName);
messageProducer = session.createProducer(queue);
}
Is it the factory problem? Or some other source?
A:
You would get this error if you configured your connections to have the same client ID. The JMS spec is explicit that only a single connection can connect to the remote broker with a given client ID at any one time. Fix your configuration so each connection uses a unique client ID, and things should work just fine.
| {
"pile_set_name": "StackExchange"
} |
Q:
Laravel database connection: Selecting from database name in snake case
I'm starting to learn Laravel. I've run through the example instructions from the site successfully and now I'm trying a second run through and I'm running into an issue.
I'm trying to connect to a database called zipCodes and has one table called zipCodeDetails.
In my Laravel project I have a model containing the following code:
<?php
class ZipCodeDetails extends Eloquent {}
And in my routes.php file I have the following code:
Route::get('zipCodes', function (){
$zipCodes = ZipCodeDetails::all();
return View::make('zipCodes')->with('zipCodes', $zipCodes);
});
The error I'm running into is when I try to load the URL:
http://localhost:8888/zipCodes
In my browser I'm getting the error code:
SQLSTATE[42S02]: Base table or view not found: 1146 Table 'zipcodes.zip_code_details' doesn't exist (SQL: select * from `zip_code_details`)
There's nothing written in my code where I define the database zipCodes as zipcodes or the table zipCodeDetails as zip_code_details. Something in Laravel is changing the database and table names.
Does anyone know why this is happening and how I can prevent it? I don't want to just rename the database or table names because while that may get me by in testing it's not a viable solution in practice.
Thanks!
A:
This is Eloquent's default behaviour when no table name is explicitly defined: it assumes the snake_case form of the class name (here zip_code_details). In your ZipCodeDetails class, you can set the table name that this model will use.
class ZipCodeDetails extends Eloquent
{
    protected $table = 'zipCodeDetails';
}
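For illustration, Eloquent's default naming rule (CamelCase class name to snake_case table name) can be sketched in Python; the regex here is an approximation of the real inflector, and Laravel additionally pluralizes the result, which is a no-op for a name already ending in "Details":

```python
import re

# Approximate sketch of Eloquent's default table-name derivation:
# insert an underscore before each interior capital, then lowercase.
def snake_case(class_name: str) -> str:
    return re.sub(r"(?<!^)(?=[A-Z])", "_", class_name).lower()

print(snake_case("ZipCodeDetails"))
```

This is why the error message mentions zip_code_details even though that table name appears nowhere in the code.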
| {
"pile_set_name": "StackExchange"
} |
Q:
What's special about currying or partial application?
I've been reading articles on functional programming every day and have been trying to apply some practices as much as possible. But I don't understand what is unique about currying or partial application.
Take this Groovy code as an example:
def mul = { a, b -> a * b }
def tripler1 = mul.curry(3)
def tripler2 = { mul(3, it) }
I do not understand what the difference is between tripler1 and tripler2. Aren't they both the same? Currying is supported in fully or partially functional languages like Groovy, Scala, Haskell, etc. But I can do the same thing (left-curry, right-curry, n-curry, or partial application) by simply creating another named or anonymous function or closure that forwards the parameters to the original function (like tripler2) in most languages (even C).
Am I missing something here? There are places where I can use currying and partial application in my Grails application but I am hesitating to do so because I'm asking myself "How's that different?"
Please enlighten me.
EDIT:
Are you guys saying that partial application/currying is simply more efficient than creating/calling another function that forwards default parameters to original function?
A:
Currying is about turning/representing a function which takes n inputs into n functions that each take 1 input. Partial application is about fixing some of the inputs to a function.
The motivation for partial application is primarily that it makes it easier to write higher-order function libraries. For instance, the algorithms in the C++ STL largely take predicates or unary functions; bind1st allows the library user to hook in non-unary functions with one value bound. The library writer therefore does not need to provide overloaded versions of every algorithm that takes a unary function just to support binary ones.
Currying itself is useful because it gives you partial application anywhere you want it for free i.e. you no longer need a function like bind1st to partially apply.
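A minimal Python sketch of the distinction (using functools.partial, which plays the role Groovy's curry method plays in the question):

```python
from functools import partial

# Partial application fixes some arguments of an existing function;
# a curried function takes its arguments one at a time instead.
def mul(a, b):
    return a * b

tripler1 = partial(mul, 3)      # partial application: bind a = 3
tripler2 = lambda b: mul(3, b)  # the hand-written equivalent from the question

def curried_mul(a):             # curried form: one argument per call
    return lambda b: a * b

print(tripler1(4), tripler2(4), curried_mul(3)(4))
```

The two triplers behave identically at the call site; the point of a curried language is that forms like curried_mul come for free everywhere, without wrappers.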
A:
But I can do the same thing (left-curry, right-curry, n-curry or partial application) by simply creating another named or anonymous function or closure that will forward the parameters to the original function (like tripler2) in most languages (even C.)
And the optimizer will look at that and promptly go on to something it can understand. Currying is a nice little trick for the end user, but has much better benefits from a language design standpoint. It's really nice to handle all methods as unary A -> B where B may be another method.
It simplifies what methods you have to write to handle higher order functions. Your static analysis and optimization in the language only has one path to work with that behaves in a known manner. Parameter binding just falls out of the design rather than requiring hoops to do this common behavior.
A:
As @jk. alluded to, currying can help make code more general.
For example, suppose you had these three functions (in Haskell):
> let q a b = (2 + a) * b
> let r g = g 3
> let f a b = b (a 1)
The function f here takes two functions as arguments, passes 1 to the first function and passes the result of the first call to the second function.
If we were to call f using q and r as the arguments, it'd effectively be doing:
> r (q 1)
where q would be applied to 1 and return another function (as q is curried); this returned function would then be passed to r as its argument to be given an argument of 3. The result of this would be a value of 9.
Now, let's say we had two other functions:
> let s a = 3 * a
> let t a = 4 + a
we could pass these to f as well and get a value of 7 or 15, depending on whether our arguments were s t or t s. Since these functions both return a value rather than a function, no partial application would take place in f s t or f t s.
If we had written f with q and r in mind we might have used a lambda (anonymous function) instead of partial application, e.g.:
> let f' a b = b (\x -> a 1 x)
but this would have restricted the generality of f'. f can be called with arguments q and r or s and t, but f' can only be called with q and r -- f' s t and f' t s both result in an error.
MORE
If f' were called with a q'/r' pair where the q' took more than two arguments, the q' would still end up being partially applied in f'.
Alternatively, you could wrap q outside of f instead of inside, but that'd leave you with a nasty nested lambda:
f (\x -> (\y -> q x y)) r
which is essentially what the curried q was in the first place!
| {
"pile_set_name": "StackExchange"
} |
Q:
Random forest error: Error in `[.data.frame`(data, , all.vars(Terms), drop = FALSE) : undefined columns selected
I am trying to build a time-series model using a random forest. However, I get the same error every time I run the code, which is:
Error in [.data.frame(data, , all.vars(Terms), drop = FALSE) :
undefined columns selected
I know most of the theory behind random forests pretty well, but haven't really run much code using it.
Here is my code:
library(randomForest)
library(caret)
fitControl <- trainControl(
method = "repeatedcv",
number = 10,
repeats = 1,
classProbs = FALSE,
verboseIter = TRUE,
preProcOptions=list(thresh=0.95,na.remove=TRUE,verbose=TRUE))
set.seed(1234)
rf_grid <- expand.grid(mtry = c(1:6))
fit <- train(df.ts[,1]~.,
data=df.ts[,2:6],
method="rf",
preProcess=c("center","scale"),
tuneGrid = rf_grid,
trControl=fitControl,
ntree = 200,
metric="RMSE")
For a reproducible example, you can run the code on the following dataset:
df.ts <- structure(list(ts.t = c(315246, 219908, 193014, 231970, 248246,
247112, 268218, 263637, 264306, 245730, 256548, 227525, 304468,
229614, 202985), ts1 = c(233913, 315246, 219908, 193014, 231970,
248246, 247112, 268218, 263637, 264306, 245730, 256548, 227525,
304468, 229614), ts2 = c(253534, 233913, 315246, 219908, 193014,
231970, 248246, 247112, 268218, 263637, 264306, 245730, 256548,
227525, 304468), ts3 = c(226650, 253534, 233913, 315246, 219908,
193014, 231970, 248246, 247112, 268218, 263637, 264306, 245730,
256548, 227525), ts6 = c(213268, 242558, 250554, 226650, 253534,
233913, 315246, 219908, 193014, 231970, 248246, 247112, 268218,
263637, 264306), ts12 = c(333842, 210279, 193051, 174262, 216712,
144327, 213268, 242558, 250554, 226650, 253534, 233913, 315246,
219908, 193014)), .Names = c("ts.t", "ts1", "ts2", "ts3", "ts6", "ts12"), row.names = 13:27, class = "data.frame")
I hope someone can spot my error(s)
Thanks,
A:
The formula should correspond to the names of the variables in data. E.g. y ~ . predicts y using all other variables in data. Alternatively you could use y = df.ts[,1], x = df.ts[, -1] instead of formula and data.
Thus the correct syntax would be:
fit <- train(ts.t ~ .,
data=df.ts,
method="rf",
preProcess=c("center","scale"),
tuneGrid = rf_grid,
trControl=fitControl,
ntree = 200,
metric="RMSE")
| {
"pile_set_name": "StackExchange"
} |
Q:
How do you put 4 buttons side by side (2 on top, 2 on bottom) for Android?
I am trying to create 4 image buttons so that their positioning is like this:
top left --- top right
bottom left --- bottom right
I am using a LinearLayout if it matters. Here is what I got so far (all 4 have the same code):
<ImageButton
android:background="@drawable/pic"
android:id="@+id/multiButton"
android:layout_width="150dip"
android:layout_height="150dip"
android:text="cool"
android:textSize="50sp"
android:textStyle="bold"/>
I have also tried setting the layout_width to "fill_parent" and the layout_height to "wrap_content" but that still did not help.
A:
Just copy this xml into your layout and you'll see all the buttons aligned in 4 different corners.
<?xml version="1.0" encoding="utf-8"?>
<RelativeLayout xmlns:android="http://schemas.android.com/apk/res/android"
android:layout_width="fill_parent"
android:layout_height="fill_parent" >
<Button
android:id="@+id/button1"
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:text="Button1" />
<Button
android:id="@+id/button2"
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:text="Button2"
android:layout_alignParentRight="true"/>
<Button
android:id="@+id/button3"
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:text="Button3"
android:layout_alignParentBottom="true" />
<Button
android:id="@+id/button4"
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:text="Button4"
android:layout_alignParentBottom="true"
android:layout_alignParentRight="true" />
</RelativeLayout>
| {
"pile_set_name": "StackExchange"
} |
Q:
PHP Converting Integer to Date, reverse of strtotime
<?php
echo strtotime("2014-01-01 00:00:01")."<hr>";
// output is 1388516401
?>
I am wondering if it can be reversed. I mean, can I convert 1388516401 back to 2014-01-01 00:00:01?
What I actually want to know is: what's the logic behind this conversion? How does PHP convert a date to a specific integer?
A:
Yes you can convert it back. You can try:
date("Y-m-d H:i:s", 1388516401);
The logic behind this conversion from date to an integer is explained in strtotime in PHP:
The function expects to be given a string containing an English date format and will try to parse that format into a Unix timestamp (the number of seconds since January 1 1970 00:00:00 UTC), relative to the timestamp given in now, or the current time if now is not supplied.
For example, strtotime("1970-01-01 00:00:00") gives you 0 and strtotime("1970-01-01 00:00:01") gives you 1.
This means that if you are printing strtotime("2014-01-01 00:00:01") which will give you output 1388516401, so the date 2014-01-01 00:00:01 is 1,388,516,401 seconds after January 1 1970 00:00:00 UTC.
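For illustration, here is the same epoch arithmetic in Python. Note that PHP's strtotime interprets the string in the server's local timezone, which is why the 1388516401 in the question differs from the UTC value computed here:

```python
from datetime import datetime, timezone

# Seconds elapsed since 1970-01-01 00:00:00 UTC, and the round trip back.
dt = datetime(2014, 1, 1, 0, 0, 1, tzinfo=timezone.utc)
ts = int(dt.timestamp())                      # forward: date -> integer
back = datetime.fromtimestamp(ts, tz=timezone.utc)  # reverse: integer -> date
print(ts, back.isoformat())
```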
A:
Can you try this,
echo date("Y-m-d H:i:s", 1388516401);
As noted by theGame,
This means that you pass in a string value for the time, and optionally a value for the current time, which is a UNIX timestamp. The value that is returned is an integer which is a UNIX timestamp.
echo strtotime("2014-01-01 00:00:01");
This returns the value 1388516401, which is the UNIX timestamp for the date 2014-01-01 00:00:01 (in the server's timezone). This can be confirmed using the date() function as below:
echo date('Y-m-d H:i:s', 1388516401); // echoes 2014-01-01 00:00:01
A:
I guess you are asking why is 1388516401 equal to 2014-01-01...?
There is a historical reason for that. There is a 32-bit integer type, called time_t, that keeps the count of the time elapsed since 1970-01-01 00:00:00. Its value expresses time in seconds. This means that at 2014-01-01 00:00:01, time_t equals 1388516401.
This leads us to another interesting fact... At 2038-01-19 03:14:07, time_t will reach 2147483647, the maximum value for a signed 32-bit number. Ever heard about John Titor and the Year 2038 problem? :D
| {
"pile_set_name": "StackExchange"
} |
Q:
Java thread as argument to a class
Can I pass a Thread (which runs an instance of a class) to another class which then runs as a Thread too and handle the first from the second?
This is some sample/explain code:
Sender sender = new Sender(client, topic, qos,frequency);
Thread t1;
t1= new Thread(sender);
t1.start();
Receiver receiver = new Receiver(frequency,client, qos, topic,t1);
Thread t2;
t2 = new Thread(receiver);
t2.start();
Both classes implement Runnable, and I want the sender to call wait on itself and the receiver to notify it. I tried it, but nothing happens; the sender is still in the waiting state.
I can provide the whole code if needed.
A:
Here's some stripped down code that does what I think you are asking:
public class WaitTest {
static class Waiter implements Runnable{
@Override
public void run() {
System.out.println("Waiting");
try {
synchronized(this){
this.wait();
}
} catch (InterruptedException e) {
e.printStackTrace();
}
System.out.println("Running");
}
}
static class Notifier implements Runnable{
Object locked;
public Notifier(Object locked){
this.locked = locked;
}
@Override
public void run() {
try {
                // Sleep so the waiter has time to reach wait(); production
                // code should guard with a shared condition flag instead,
                // or the notification could be missed.
                Thread.sleep(2000);
} catch (InterruptedException e) {
e.printStackTrace();
}
synchronized(locked){
locked.notifyAll();
}
}
}
public static void main(String[] args){
Waiter waiter = new Waiter();
Notifier notifier = new Notifier(waiter);
Thread t1 = new Thread(waiter);
Thread t2 = new Thread(notifier);
t1.start();
t2.start();
}
}
| {
"pile_set_name": "StackExchange"
} |
Q:
Send an HTTPS request to TLS1.0-only server in Alpine linux
I'm writing a simple web crawler inside Docker Alpine image. However I cannot send HTTPS requests to servers that support only TLS1.0 . How can I configure Alpine linux to allow obsolete TLS versions?
I tried adding MinProtocol to /etc/ssl/openssl.cnf with no luck.
Example Dockerfile:
FROM node:12.0-alpine
RUN printf "[system_default_sect]\nMinProtocol = TLSv1.0\nCipherString = DEFAULT@SECLEVEL=1" >> /etc/ssl/openssl.cnf
CMD ["/usr/bin/wget", "https://www.restauracesalanda.cz/"]
When I build and run this container, I get
Connecting to www.restauracesalanda.cz (93.185.102.124:443)
ssl_client: www.restauracesalanda.cz: handshake failed: error:1425F102:SSL routines:ssl_choose_client_version:unsupported protocol
wget: error getting response: Connection reset by peer
A:
I can reproduce your issue using the built-in BusyBox wget. However, using the "regular" wget package works:
root@a:~# docker run --rm -it node:12.0-alpine /bin/ash
/ # wget -q https://www.restauracesalanda.cz/; echo $?
ssl_client: www.restauracesalanda.cz: handshake failed: error:1425F102:SSL routines:ssl_choose_client_version:unsupported protocol
wget: error getting response: Connection reset by peer
1
/ # apk add wget
fetch http://dl-cdn.alpinelinux.org/alpine/v3.9/main/x86_64/APKINDEX.tar.gz
fetch http://dl-cdn.alpinelinux.org/alpine/v3.9/community/x86_64/APKINDEX.tar.gz
(1/1) Installing wget (1.20.3-r0)
Executing busybox-1.29.3-r10.trigger
OK: 7 MiB in 17 packages
/ # wget -q https://www.restauracesalanda.cz/; echo $?
0
/ #
I'm not sure, but maybe you should post an issue at https://bugs.alpinelinux.org
| {
"pile_set_name": "StackExchange"
} |
Q:
Matlab Preallocation
I'm running a simulation of a diffusion-reaction equation in MATLAB, and I pre-allocate the memory for all of my vectors beforehand, however, during the loop, in which I solve a system of equations using BICG, the amount of memory that MATLAB uses is increasing.
For example:
concentration = zeros(N, iterations);
for t = 1:iterations
concentration(:,t+1) = bicg(matrix, concentration(:,t));
end
As the program runs, the amount of memory MATLAB is using increases, which seems to suggest that the matrix, concentration, is increasing in size as the program continues, even though I pre-allocated the space. Is this because the elements in the matrix are becoming doubles instead of zeros? Is there a better way to pre-allocate the memory for this matrix, so that all of the memory the program requires will be pre-allocated at the start? It would be easier for me that way, because then I would know from the start how much memory the program will require and if the simulation will crash the computer or not.
Thanks for all your help, guys. I did some searching around and didn't find an answer, so I hope I'm not repeating a question.
EDIT:
Thanks Amro and stardt for your help guys. I tried running 'memory' in MATLAB, but the interpreter said that command is not supported for my system type. I re-ran the simulation though with 'whos concentration' displayed every 10 iterations, and the allocation size of the matrix wasn't changing with time. However, I did notice that the size of the matrix was about 1.5 GB. Even though that was the case, system monitor was only showing MATLAB as using 300 MB (but it increased steadily to reach a little over 1 GB by the end of the simulation). So I'm guessing that MATLAB pre-allocated the memory just fine and there are no memory leaks, but system monitor doesn't count the memory as in use until MATLAB starts writing values to it in the loop. I don't know why that would be, as I would imagine that writing zeros would trigger the system monitor to see that memory as 'in use,' but I guess that's not the case here.
Anyway, I appreciate your help with this. I would vote both of your answers up as I found them both helpful, but I don't have enough reputation points to do that. Thanks guys!
A:
I really doubt it's a memory leak, since most "objects" in MATLAB clean up after themselves once they go out of scope. AFAIK, MATLAB does not use a GC per se, but a deterministic approach to managing memory.
Therefore I suspect the issue is more likely to be caused by memory fragmentation: when MATLAB allocates memory for a matrix, it has to be contiguous. Thus when the function is repeatedly called, creating and deleting matrices, the fragmentation becomes a noticeable problem over time...
One thing that might help you debug is using the undocumented: profile on -memory which will track allocation in the MATLAB profiler. Check out the monitoring tool by Joe Conti as well. Also this page has some useful information.
| {
"pile_set_name": "StackExchange"
} |
Q:
How to prevent a row from copying into another workbook when the range has no data while using 'LastRow' 'xlUp'
I have a Workbook called "INVOICE.xls" with Sheet "INVOICE" and another workbook called "DATABASE.xls" with sheet "DATABASE".
I have two ranges of data in workbook "INVOICE.xls", sheet "INVOICE", namely rngA (A13 to I29) and rngB (B23 to I29), both of which have headers above them, which I transfer to workbook "DATABASE.xls", sheet "DATABASE", using VBA code. The range rngB only has data occasionally. The code I have now transfers successfully only if there is a row with data in rngB. On occasions when there is no data in rngB, it copies the row above the specified range, i.e. the header labels. Pasting the code below. I'm not an expert; I have just pasted code from various forums to get it to work until now. Screenshot-Invoice.xls Screenshot of Database.xls
EDIT - There's another error where I need some help. When both the ranges rngA & rngB are full of data, it doesn't paste that range. Instead, it pastes the range A3:I3 from the "INVOICE.xls" sheet "INVOICE" onto the "DATABASE.xls" sheet "DATABASE" column ranging J:R. Please help.
Sub SavingData()
Dim rngA As Range
Dim rngB As Range
Dim i As Long
Dim a As Long
Dim b As Long
Dim rng_dest As Range
Application.ScreenUpdating = False
Windows("DATABASE.xls").Activate
'Check if invoice # is found on sheet "DATABASE"
i = 2
Do Until Sheets("DATABASE").Range("A" & i).Value = ""
If ActiveWorkbook.Sheets("DATABASE").Range("A" & i).Value = Workbooks("INVOICE").Sheets("INVOICE").Range("H8").Value Then
'Ask overwrite invoice #?
If MsgBox("Invoice Number Already Exists - Do you want to overwrite?", vbYesNo) = vbNo Then
Exit Sub
Else
Exit Do
End If
End If
i = i + 1
Loop
i = 1
Windows("INVOICE.xls").Activate
Windows("DATABASE.xls").Activate
Set rng_dest = Sheets("DATABASE").Range("J:R")
'Delete rows if invoice # is found
Do Until Sheets("DATABASE").Range("A" & i).Value = ""
If Workbooks("DATABASE").Sheets("DATABASE").Range("A" & i).Value = Workbooks("INVOICE").Sheets("INVOICE").Range("H8").Value Then
Workbooks("DATABASE").Sheets("DATABASE").Range("A" & i).EntireRow.Delete
i = 1
End If
i = i + 1
Loop
' Find first empty row in columns B:I on sheet Sales
Windows("INVOICE").Activate
Do Until WorksheetFunction.CountA(rng_dest.Rows(i)) = 0
i = i + 1
Loop
'Copy range A13:I20 on sheet Invoice
With Sheets("INVOICE")
Dim lastRowA As Long
Dim lastRowB As Long
lastRowA = .Cells(20, 1).End(xlUp).Row
lastRowB = .Cells(29, 1).End(xlUp).Row
Set rngA = .Range(.Cells(13, 1), .Cells(lastRowA, 9))
Set rngB = .Range(.Cells(23, 1), .Cells(lastRowB, 9))
End With
' Copy rows containing values to sheet Sales
For a = 1 To rngA.Rows.Count
If WorksheetFunction.CountA(rngA.Rows(a)) <> 0 Then
rng_dest.Rows(i).Value = rngA.Rows(a).Value
'Copy Field 1
Workbooks("DATABASE.xls").Sheets("DATABASE").Range("A" & i).Value = Workbooks("INVOICE.xls").Sheets("INVOICE").Range("H8").Value
'Copy Field 2
Workbooks("DATABASE.xls").Sheets("DATABASE").Range("B" & i).Value = Workbooks("INVOICE.xls").Sheets("INVOICE").Range("C9").Value
'Copy Field 3
Workbooks("DATABASE.xls").Sheets("DATABASE").Range("C" & i).Value = Workbooks("INVOICE.xls").Sheets("INVOICE").Range("B10").Value
'Copy Field 4
Workbooks("DATABASE.xls").Sheets("DATABASE").Range("D" & i).Value = Workbooks("INVOICE.xls").Sheets("INVOICE").Range("E8").Value
'Copy Field 5
Workbooks("DATABASE.xls").Sheets("DATABASE").Range("E" & i).Value = Workbooks("INVOICE.xls").Sheets("INVOICE").Range("G10").Value
'Copy Field 6
Workbooks("DATABASE.xls").Sheets("DATABASE").Range("F" & i).Value = Workbooks("INVOICE.xls").Sheets("INVOICE").Range("C11").Value
'Copy Field 7
Workbooks("DATABASE.xls").Sheets("DATABASE").Range("G" & i).Value = Workbooks("INVOICE.xls").Sheets("INVOICE").Range("E11").Value
'Copy Field 8
Workbooks("DATABASE.xls").Sheets("DATABASE").Range("H" & i).Value = Workbooks("INVOICE.xls").Sheets("INVOICE").Range("H11").Value
'Copy Field 9
Workbooks("DATABASE.xls").Sheets("DATABASE").Range("I" & i).Value = Workbooks("INVOICE.xls").Sheets("INVOICE").Range("I11").Value
i = i + 1
End If
Next a
For b = 1 To rngB.Rows.Count
If WorksheetFunction.CountA(rngB.Rows(b)) <> 0 Then
rng_dest.Rows(i).Value = rngB.Rows(b).Value
'Copy Field 1
Workbooks("DATABASE.xls").Sheets("DATABASE").Range("A" & i).Value = Workbooks("INVOICE.xls").Sheets("INVOICE").Range("H8").Value
'Copy Field 2
Workbooks("DATABASE.xls").Sheets("DATABASE").Range("B" & i).Value = Workbooks("INVOICE.xls").Sheets("INVOICE").Range("C9").Value
'Copy Field 3
Workbooks("DATABASE.xls").Sheets("DATABASE").Range("C" & i).Value = Workbooks("INVOICE.xls").Sheets("INVOICE").Range("B10").Value
'Copy Field 4
Workbooks("DATABASE.xls").Sheets("DATABASE").Range("D" & i).Value = Workbooks("INVOICE.xls").Sheets("INVOICE").Range("E8").Value
'Copy Field 5
Workbooks("DATABASE.xls").Sheets("DATABASE").Range("E" & i).Value = Workbooks("INVOICE.xls").Sheets("INVOICE").Range("G10").Value
'Copy Field 6
Workbooks("DATABASE.xls").Sheets("DATABASE").Range("F" & i).Value = Workbooks("INVOICE.xls").Sheets("INVOICE").Range("C11").Value
'Copy Field 7
Workbooks("DATABASE.xls").Sheets("DATABASE").Range("G" & i).Value = Workbooks("INVOICE.xls").Sheets("INVOICE").Range("E11").Value
'Copy Field 8
Workbooks("DATABASE.xls").Sheets("DATABASE").Range("H" & i).Value = Workbooks("INVOICE.xls").Sheets("INVOICE").Range("H11").Value
'Copy Field 9
Workbooks("DATABASE.xls").Sheets("DATABASE").Range("I" & i).Value = Workbooks("INVOICE.xls").Sheets("INVOICE").Range("I11").Value
i = i + 1
End If
Next b
Application.ScreenUpdating = True
End Sub
A:
You could check whether lastRowB is greater than 23 before starting the rngB copy/pasting:
If lastRowB > 23 Then
For b = 1 To rngB.Rows.Count
' your code
Next b
End If
Q:
Event handler exists
How can I determine, in a web server control (inherited from LinkButton), whether an event handler is set for OnClick or OnCommand?
I'd rather not override the events and set variables... etc.
Thanks all in advance
A:
Inside Page_Load or after:
if (ctrl.OnClick != null)
{
// ...
}
Q:
dart timer goes wrong?
It's my first program in Dart, and I just wanted to see its asynchronous capabilities. Knowing JavaScript, I wrote the following code:
import 'dart:async' show Timer;
import 'dart:math';
void main() {
//Recursion
fib1(x) => x > 1 ? fib1(x-1) + fib1(x-2) : x;
//Mathematical
num fi = (1 + sqrt(5)) / 2;
fib2(x) => x > 1 ? ((pow(fi, x) + pow(1 - fi, x)) / sqrt(5)).round() : x;
//Linear
fib3(x) {
if(x < 2) return x;
int a1 = 0;
int a2 = 1;
int sum = 0;
for(int i = 1; i < x; i++) {
sum = a2 + a1;
a1 = a2;
a2 = sum;
}
return sum;
}
Timer.run(() => print('Fib1:' + fib1(41).toString()));
Timer.run(() => print('Fib2:' + fib2(41).toString()));
Timer.run(() => print('Fib3:' + fib3(41).toString()));
}
and the output on the dart editor is:
Fib1:165580141
Fib2:165580141
Fib3:165580141
All 3 outputs are printed at the same time. Isn't that wrong? fib3 is much faster and should be printed first.
A:
Running asynchronously doesn't mean multithreaded. Dart runs single-threaded. You can spawn isolates to run code in parallel.
When you add a print statement
{
//...
Timer.run(() => print('Fib1:' + fib1(41).toString()));
Timer.run(() => print('Fib2:' + fib2(41).toString()));
Timer.run(() => print('Fib3:' + fib3(41).toString()));
print('exit');
}
after your three Timer.run(...) statements, you get a glimpse of what async is about.
The closure you provide to Timer.run(...) gets scheduled for later execution, and the next statement of your main is executed.
As soon as the event loop has time to process scheduled tasks, your closures are executed one by one.
You can find more in-depth information here: The Event Loop and Dart
** EDIT **
When you run it this way the output may make more sense for you
Timer.run(() => print('Fib1: ${new DateTime.now()} - result: ${fib1(41)}'));
Timer.run(() => print('Fib2: ${new DateTime.now()} - result: ${fib2(41)}'));
Timer.run(() => print('Fib3: ${new DateTime.now()} - result: ${fib3(41)}'));
print('exit');
** output **
exit
Fib1: 2014-01-07 12:00:46.953 - result: 165580141
Fib2: 2014-01-07 12:00:56.208 - result: 165580141
Fib3: 2014-01-07 12:00:56.210 - result: 165580141
It's not the case that the faster task finishes first. Timer.run() schedules the closure for later execution, and the execution of main() continues. When the event loop gets back control of the program flow, it executes the scheduled tasks one at a time and one after the other.
Maybe the output is buffered somehow by the DartEditor output window or the shell and shown in batches.
This may lead to the impression that the results are printed all at once.
** EDIT 2 **
I just saw that the results are written one by one.
It's easy to verify if you move the slow Fib1 to the last position (after Fib3).
Q:
Does Cloud Firestore support React Native
I currently use Firebase real-time database for my react-native mobile apps and I have been looking for some alternatives that provide better querying capabilities. One of the strong points for Firestore is its querying capabilities. I wanted to check if Firestore supports React Native out of the box.
A:
The Oct 3rd release of React Native Firebase includes Cloud Firestore support.
First pass at support for the newly release Cloud Firestore beta, see
http://invertase.link/firestore for supported api's.
import firebase from 'react-native-firebase';
firebase.firestore()
.collection('posts')
.add({
title: 'Amazing post',
})
.then(() => {
// Document added to collection and ID generated
// Will have path: `posts/{generatedId}`
})
Q:
Variable mapping in IDA hotkey change
Is there a way to change hotkey for variable mapping ('=' by default)? For example: I'd like to bind it to 'Shift+Q'.
A:
Please have a look at the second part of this blog post.
You can either edit your shortcuts.cfg or use the Options->Shortcuts GUI, available since version 6.2.
Q:
How to version control PostgreSQL schema with comments?
I version control most of my work with Git: code, documentation, system configuration.
I am able to do that because all my valuable work is stored as text files.
I have also been writing and dealing with lot of SQL schema for our Postgres database. The schema includes views, SQL functions, and we will be writing Postgres functions in R programing language (via PL/R).
I was trying to copy and paste the chunks of schema that I and my collaborators write, but I keep forgetting to do it. The copy-and-paste approach is repetitive and error-prone.
The pg_dump / pg_restore method will not work because it loses comments.
Ideally I would like to have some way to extract my current schema into a file or files and preserve the comments so that I can do version control.
What is the best practice to version control schema with comments?
A:
Why don't you COMMENT ON the various schema components? That way your comments are in the schema itself, and will get dumped.
COMMENT stores a comment about a database object.
To modify a comment, issue a new COMMENT command for the same object. Only one comment string is stored for each object. To remove a comment, write NULL in place of the text string. Comments are automatically dropped when the object is dropped.
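For example (a sketch — the table, column, and function names here are made up for illustration, not from the question):

```sql
-- Attach comments to schema objects so they live in the database itself
COMMENT ON TABLE accounts IS 'One row per customer account';
COMMENT ON COLUMN accounts.balance IS 'Current balance in cents';
COMMENT ON FUNCTION compute_interest(numeric) IS 'Interest calculation, implemented in PL/R';

-- Remove a comment by setting it to NULL
COMMENT ON TABLE accounts IS NULL;
```

A plain-text schema dump (e.g. pg_dump --schema-only) then includes these as COMMENT statements, so the comments survive in the file you put under version control.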
Q:
Can't find the percentage - Java
So I have this problem where finding the percentage doesn't work, and I really don't know why. My assignment is to read the number of candidates for an election and the number of electors, and at the end show the percentage of the votes. For example, if there are 3 candidates and 6 electors, and the 1st candidate gets 3 votes, the 2nd gets 2 votes, and the 3rd gets 1 vote, it should show: 50.00%, 33.33%, 16.67%.
Below is my code. It gets the number of votes right, but when it comes to the percentage it just shows 0.0% in all cases. I hope you guys can help me out.
import java.util.Scanner;
public class ElectionPercentage {
public static void main(String[]args){
//https://acm.timus.ru/problem.aspx?space=1&num=1263
Scanner sc = new Scanner(System.in);
System.out.println("Enter how many candidates are : ");
int candidates = sc.nextInt();
int [] allCandidates = new int[candidates];
int startingCandidate = 1;
for(int i = 0; i < candidates;i++){
allCandidates[i] = startingCandidate++; //now value of the first element will be 1 and so on.
}
//for testing System.out.println(Arrays.toString(allCandidates));
System.out.println("enter the number of electors : ");
int electors = sc.nextInt();
int [] allVotes = new int[electors];
for(int i =0;i < electors;i++){
System.out.println("for which candidate has the elector voted for :");
int vote = sc.nextInt();
allVotes[i] = vote; //storing all electors in array
}
System.out.println();
int countVotes = 0;
double percentage;
for(int i = 0;i<allCandidates.length;i++){
for(int k = 0; k < allVotes.length;k++){
if(allCandidates[i]==allVotes[k]){
countVotes++;
}
}
System.out.println("Candidate "+allCandidates[i]+" has : "+countVotes+" votes.");
percentage = ((double)(countVotes/6)*100);
System.out.println(percentage+"%");
countVotes = 0;
}
}
}
A:
countVotes is an int, and 6 is also an int. Thus, (countVotes/6), which appears near the end of your code, is integer division. 11/6 in integer division is 1. 5/6 is 0. It rounds by lopping off all decimals. That's probably not what you want, especially because you try to cast the result to double afterwards.
You're casting the wrong thing. But you don't even need the cast at all; if either side is double, the whole thing becomes double division. So, instead of: percentage = ((double)(countVotes/6)*100); try percentage = 100.0 * countVotes / 6.0;
Also, presumably, that 6 should really be a variable that counts total # of votes, no? i.e. electors, so: percentage = 100.0 * countVotes / electors;
The fact that we kick off the math with 100.0 means it'll be double math all the way down.
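To see the difference concretely, here is a minimal, self-contained sketch using the numbers from the question (1 vote out of 6 electors):

```java
public class DivisionDemo {
    public static void main(String[] args) {
        int countVotes = 1;
        int electors = 6;

        // Integer division truncates first, so the cast comes too late:
        double wrong = ((double) (countVotes / electors) * 100); // 1/6 == 0, then 0.0 * 100

        // Starting with a double literal makes the whole expression double division:
        double right = 100.0 * countVotes / electors;

        System.out.println(wrong);            // prints 0.0
        System.out.printf("%.2f%%%n", right); // prints 16.67%
    }
}
```

The same pattern applied to your loop (percentage = 100.0 * countVotes / electors;) gives the 50.00 / 33.33 / 16.67 breakdown you expect.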
Q:
Deployment project uninstall - deleting files?
I have a project and deployment project that installs it. The software installed generates several files on the target PC (while used by the user). I was wondering if there was a way to instruct the Deployment Project to delete those files when uninstalling?
All the files are in the users Application Data folder. Can I instruct the uninstaller to delete the folder (inside Application Data) and all the files in it (recursively)?
A:
You can delete files during uninstall using a custom action. One of the easiest ways to set this up is with an Installer Class.
This article by Arnaldo Sandoval is a little outdated, but it is still a pretty good overview of how to use installer classes to implement custom actions. It even includes a section about cleaning up files on uninstall.
However, instead of overriding methods in the Installer class, it is better to add event listeners. Where you get the "saved state" is also a little different. For example, where the article describes overriding the Install method to capture the TargetDir, instead of:
public override void Install(System.Collections.IDictionary stateSaver)
{
base.Install(stateSaver);
stateSaver.Add("TargetDir", Context.Parameters["DP_TargetDir"].ToString());
}
You would create a method similar to:
private void onBeforeInstall(object sender, InstallEventArgs args)
{
// The saved state dictionary is a property on the event args
args.SavedState.Add("TargetDir", Context.Parameters["DP_TargetDir"].ToString());
}
And register it in the constructor:
public InstallerClass() : base()
{
this.BeforeInstall += new InstallEventHandler(onBeforeInstall);
}
You could also register the events via the Visual Studio Property editor, if that's more your thing.
The rest of the article is excellent, particularly the sections that discuss all the undocumented "features" of the various install events.