Q:
Prove that there exists a basis of V that contains no vector from a subspace
Let $W$ be an $m$-dimensional subspace of an $n$-dimensional space $V$, where $m<n$. Prove that there exists a basis of $V$ that contains no vector from the subspace $W$.
Can someone help me? I know that the trivial subspace contains no vector that belongs to a basis of the space, but what do you think?
A:
Let $(w_1,\ldots,w_m)$ be a basis of $W$.
Since $m<n$, you can find a non-empty list of vectors $(v_1,\ldots,v_{n-m})$ such that $(w_1,\ldots,w_m,v_1,\ldots,v_{n-m})$ is a basis of $V$ and no $v_i$ is in $W$.
Now, verify that $(w_1+v_1,\ldots,w_i+v_1,\ldots,w_m+v_1,v_1,\ldots,v_{n-m})$ is also a basis of $V$, and that none of the vectors of this basis belongs to $W$.
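To sketch that verification: if
$$\sum_{i=1}^m a_i(w_i+v_1)+\sum_{j=1}^{n-m} b_j v_j = \sum_{i=1}^m a_i w_i + \Big(b_1+\sum_{i=1}^m a_i\Big)v_1+\sum_{j=2}^{n-m} b_j v_j = 0,$$
then every coefficient in the right-hand expansion vanishes because $(w_1,\ldots,w_m,v_1,\ldots,v_{n-m})$ is a basis, which forces all the $a_i$ and $b_j$ to be zero; so the $n$ new vectors are linearly independent and hence a basis of $V$. And if some $w_i+v_1$ belonged to $W$, then $v_1=(w_i+v_1)-w_i$ would belong to $W$ as well, contradicting the choice of the $v_j$.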
For a more geometrical insight, consider a plane $P$ in the usual space $\mathbb R^3$. If your plane is spanned by $\{(1,0,0),(0,1,0)\}$, the space can be spanned by $\{(1,0,1),(0,1,1),(0,0,1)\}$, and no vector of this basis belongs to $P$.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Alfresco Share Aikau PathTree to show documents not just folders
I am working on an Aikau Share page with a sidebar that uses the Alfresco Share document library tree picker. The picker allows me to publish the nodeRef to another widget which will display information. I would like to use the tree view, but I'm having trouble showing the documents: it only shows the containers/folders. Does anyone have any idea what I need to do to solve this?
Here is the Aikau code I am using:
{
align: "sidebar",
name: "alfresco/layout/Twister",
config: {
label: "twister.library.label",
additionalCssClasses: "no-borders",
widgets: [
{
name: "alfresco/navigation/PathTree",
config: {
showRoot: true,
rootLabel: "Repository",
rootNode: "/app:company_home",
publishTopic: "ALF_ITEM_SELECTED"
}
}
]
}
}
I am wondering if I need to write an extension to the CoreXhr or what the steps would be in order to make this work.
Any help would be appreciated
A:
I was able to figure this out. The problem comes from the repository script on the Alfresco Explorer side, "treenode.get.js". The solution was to do the following:
Create a new webscript in Alfresco Explorer and copy the treenode.get.js code into the new webscript. I ended up calling mine customtreenode.get.js.
Remove the logic check for IsContainer in the newly created webscript.
Create a new Aikau file that extends PathTree. Here is the code:
define(["dojo/_base/declare",
"alfresco/navigation/PathTree",
"alfresco/documentlibrary/_AlfDocumentListTopicMixin",
"service/constants/Default",
"dojo/_base/lang",
"dojo/_base/array",
"dojo/dom-class",
"dojo/query",
"dojo/NodeList-dom"],
function(declare, PathTree, _AlfDocumentListTopicMixin, AlfConstants, lang, array, domClass, query) {
return declare([PathTree, _AlfDocumentListTopicMixin], {
useHash: true,
getTargetUrl: function alfresco_navigation_Tree__getTargetUrl() {
var url = null;
if (this.siteId && this.containerId)
{
url = AlfConstants.PROXY_URI + "slingshot/doclib/treenodeCustom/site/" + this.siteId + "/documentlibrary";
}
else if (this.rootNode)
{
url = AlfConstants.PROXY_URI + "slingshot/doclib/treenodeCustom/node/alfresco/company/home";
}
else if (!this.childRequestPublishTopic)
{
this.alfLog("error", "Cannot create a tree without 'siteId' and 'containerId' or 'rootNode' attributes", this);
}
return url;
}
});
});
Change your code to use the new CustomPathTree
{
align: "sidebar",
name: "alfresco/layout/Twister",
config: {
label: "twister.library.label",
additionalCssClasses: "no-borders",
widgets: [
{
name: "CustomWidgets/widgets/CustomTreeNode",
config: {
showRoot: true,
rootLabel: "Repository",
rootNode: "/app:company_home",
publishTopic: "ALF_ITEM_SELECTED"
}
}
]
}
}
It works after that. I still need to change the icons from folders to documents.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Excel VBA copying rows that do not appear in two arrays and do not have a certain value
I would like to copy only those rows whose value in column 1 does not appear in two separate arrays and whose value in column 3 = 0.
For example, my data looks like this:
Name ID flag
Alice 1232 0
Alice 885 0
Alice 8332 1
Bob 993 1
Dan 9932 0
Chet 12 1
Fiona 993 0
Array1 = (Bob, Fiona)
Array2 = (Dan)
So I don't want to copy Dan, Fiona and Bob. Of the remaining rows, only the first two entries of Alice have 0 in the third column, so I want to copy and paste these to a new sheet:
Name ID flag
Alice 1232 0
Alice 885 0
I would like to do an AutoFilter, but my arrays have 2000 to 4000 elements in them and I cannot do an Array1 = (<>Bob, <>Fiona) and so on.
I have an array of all names, say ArrayAll = (Alice, Bob, Chet, Dan, Fiona), but I don't know how to do the set operations that would subtract Array1 and Array2 from ArrayAll, short of doing two long loops, which will be super slow.
Right now, instead of filtering and copying, I am copying everything and then autofiltering and deleting based on the two arrays and the third condition. The problem is that my code is extremely slow.
Set wb1 = ActiveWorkbook
Set ws1 = wb1.Worksheets("SCL_FL")
' strSearch = "SCL_FL"
With ws1
.AutoFilterMode = False
lRow = .Range("A" & .Rows.Count).End(xlUp).Row
With .UsedRange
'Set copyFrom = .Offset(1, 0).SpecialCells(xlCellTypeVisible).EntireRow
Set copyFrom = .EntireRow
End With
.AutoFilterMode = False
End With
Set ws3 = ActiveWorkbook.Worksheets.Add
With ws3
If Application.WorksheetFunction.CountA(.Cells) <> 0 Then
lRow = .Cells.Find(What:="*", _
After:=.Range("A1"), _
Lookat:=xlPart, _
LookIn:=xlFormulas, _
SearchOrder:=xlByRows, _
SearchDirection:=xlPrevious, _
MatchCase:=False).Row
Else
lRow = 1
End If
copyFrom.Copy .Range("A1")
.Name = "Rest"
.AutoFilterMode = False
.UsedRange.AutoFilter Field:=2, Criteria1:=Array2()
.UsedRange.Offset(1, 0).SpecialCells(xlCellTypeVisible).EntireRow.Delete
.AutoFilterMode = False
.AutoFilterMode = False
.UsedRange.AutoFilter Field:=2, Criteria1:=Array1()
.UsedRange.Offset(1, 0).SpecialCells(xlCellTypeVisible).EntireRow.Delete
.AutoFilterMode = False
.AutoFilterMode = False
.UsedRange.AutoFilter Field:=Exceptions_Column, Criteria1:="1", Operator:=xlFilterValues
.UsedRange.Offset(1, 0).SpecialCells(xlCellTypeVisible).EntireRow.Delete
.AutoFilterMode = False
End With
Can someone please recommend a fast way to do this. Thank you!!
A:
Amatya, if I understand correctly you want to improve the part which gets you all the results with 0s only.
In that case I suggest first copying the sheet and then filtering it like this
Option Explicit
Sub Main()
Application.ScreenUpdating = False
AddWorksheet
FilterResults
Application.ScreenUpdating = True
End Sub
Private Sub AddWorksheet()
Sheets(1).Copy After:=Sheets(Sheets.Count)
Sheets(Sheets.Count).Name = "Working Copy"
End Sub
Private Sub FilterResults()
Dim ws As Worksheet
Dim rng As Range
Dim lastRow As Long
Set ws = ThisWorkbook.Sheets(Sheets.Count)
lastRow = ws.Range("C" & ws.Rows.Count).End(xlUp).Row
Set rng = ws.Range("A1:C" & lastRow)
' filter and delete all but header row
With rng
.AutoFilter Field:=3, Criteria1:="<>0"
.Offset(1, 0).SpecialCells(xlCellTypeVisible).EntireRow.Delete
End With
ws.AutoFilterMode = False
End Sub
In a new workbook I set up a sample data like this (on a Sheet1)
then ran the code and got a new sheet at the end called "Working Copy" with the following results
Note: to be fair, I do not know of a faster way to filter results. I created a new sheet instead of copy-pasting ranges, which is much slower. Then I applied the filter, which removed all rows whose 3rd column (column C) is not equal to 0.
This runs super fast (under 1 second), but if you have more data then I understand it may be a bit slower - not much though :)
It took 1 second with 10,000 rows so it's still pretty fast
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Show Selection Highlight on UITableView Section Header When Touched
By default on a UITableView, when you touch a cell, it goes into selected mode, and is highlighted grey.
In our app, you can touch a header and it goes on to another page.
We want users to get the same kind of visual feedback when they touch a section as when they touch a cell. That is, it responds by going into a "selected" mode and making the background go a bit darker to indicate selection.
This does not occur when you touch a UITableView section. The section does not go into selection mode. It does not turn grey.
We've tried putting in a transparent button that spans a custom UIView with a background image and enabling "Reverses on Highlight", but the result is not very responsive and sucks.
What's the best strategy for implementing such functionality?
A:
I would suggest subclassing UIGestureRecognizer to get the touchDown event, as pointed out in this post: Touch Down Gesture
You can then add the gesture recognizer to your UITableViewHeaderFooterView in viewForHeaderInSection, and when it fires, grab the index path of the "selected cell." It will be nil for the first section for some reason, but will have the correct indexPath.section for all other sections:
CGPoint point = [touchDown locationInView:self.tableView];
NSIndexPath *indexPath = [self.tableView indexPathForRowAtPoint:point];
Then just change the background color of your header cell. You'll probably want to wipe out the color on your header when tapping another header or cell or whatever, so hold on to the cell whenever you change its background:
// declare an instance variable, UITableViewHeaderFooterView *header, earlier
if(header)
[[header contentView] setBackgroundColor:[UIColor lightGrayColor]]; // or whatever your default is
if(indexPath)
{
header = [self.tableView headerViewForSection:indexPath.section];
[[[self.tableView headerViewForSection:indexPath.section] contentView] setBackgroundColor:[UIColor greenColor]]; // or whatever color you want
}
else
{
header = [self.tableView headerViewForSection:0];
[[[self.tableView headerViewForSection:0] contentView] setBackgroundColor:[UIColor greenColor]];
}
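For reference, the attachment point described above might look roughly like this (just a sketch - the recognizer subclass name, action selector and reuse identifier are assumed names, not from the original code):
- (UIView *)tableView:(UITableView *)tableView viewForHeaderInSection:(NSInteger)section
{
    // create the header view and attach the touch-down recognizer to it
    UITableViewHeaderFooterView *headerView = [[UITableViewHeaderFooterView alloc] initWithReuseIdentifier:@"SectionHeader"];
    TouchDownGestureRecognizer *touchDown = [[TouchDownGestureRecognizer alloc] initWithTarget:self action:@selector(sectionHeaderTouched:)];
    [headerView addGestureRecognizer:touchDown];
    return headerView;
}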
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Target host must not be null, or set in parameters
I get this error "Target host must not be null, or set in parameters". My manifest file has internet permission set, and I have put 'http://' before my Url. It still gives the same error. My URL does not have a 'www.' attached to it.
Part of my Code:
HttpPost post = new HttpPost("http://infocreation.something_something1.xml");
Part of my manifest is like below:
<uses-permission android:name="android.permission.INTERNET" />
What do I do now?
A:
It should be
HttpPost post = new HttpPost("http://www.infocreation.something.xml");
A:
So I replaced the URL with almost the same URL, except without the underscore, and that worked. I realized from further searches (for example here) that URLs with _ (underscore) are not valid, although a particular URL with one may still work. Thanks for all your help.
A:
Are you putting a real and working URL into the HttpPost constructor?
Anyway this is your solution:
If you have the following code failing:
HttpGet httpget = new HttpGet("www.host.com");
Then the error is pretty easy to solve:
The problem is that you have not added a protocol to the URL, so change it to:
HttpGet httpget = new HttpGet("http://www.host.com");
And then it will work as wanted.
Source: h3x.no
|
{
"pile_set_name": "StackExchange"
}
|
Q:
In CouchDB, are there ways to improve performance of the View index process?
I have some basic views and some map/reduce views with logic. Nothing too complex. Not too many documents. I've tried with 250k, 75k, and 10k documents. Seems like I'm always waiting for view indexing.
Does better, more efficient code in the view help? I'm assuming it's basically processing the view at all levels of aggregation. So there must be some improvement there.
Does emit()-ing less data help? emit(doc.id, doc) vs specifying fewer fields?
Do more or less complex keys impact view indexing?
Or is it all about memory, CPU cores, and processor speed?
There must be some documentation out there, but I can't find anything referencing ways to improve performance.
A:
I would take a deeper look into the reduce function. Try to use the built-in Erlang functions like _sum and _count instead of writing JavaScript.
Complex views can take hours and more, that's normal.
Maybe post that not-too-complex map/reduce.
And don't forget: indexing all docs is only done once after changing the view (or pushing a whole bunch of new docs). Subsequent new docs are indexed incrementally.
Use a view with &stale=ok to retrieve the "old" data instantly, so you don't have to wait. (But pay attention: you always have to call a view without stale=ok at least once to trigger the indexing process). Or better: use stale=update_after.
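For example, a design-document view that uses a built-in reduce might look like this (a sketch - the field and view names are placeholders):
{
  "views": {
    "by_type": {
      "map": "function (doc) { emit(doc.type, 1); }",
      "reduce": "_count"
    }
  }
}
and a request that returns whatever is already indexed without waiting, updating the index afterwards:
GET /mydb/_design/mydesign/_view/by_type?group=true&stale=update_after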
|
{
"pile_set_name": "StackExchange"
}
|
Q:
When does spring mvc decode the query string?
I have a Spring MVC application which is started by Jetty, and there is a controller like this:
@RequestMapping(value = "/users/byIds", method = RequestMethod.GET)
public ResponseEntity<String> findUsersWithIds(@RequestParam("ids") String idsJson) throws IOException {
    System.out.println(idsJson);
    return ResponseEntity.ok(idsJson);
}
When I issue this url in browser:
http://localhost:8080/users/byIds?ids={%22userIds%22:[%22123456%22]}
I found the idsJson in the method is already decoded:
{"uerIds":["123456"]}
I just wondered: when is the query string decoded? Is that done by Spring or Jetty? In some filter?
A:
The servlet container (here Jetty) does that.
When you call request.getParameter("x") (which Spring MVC is bound to do) it will already have been decoded for you.
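A quick way to see the difference (a sketch, assuming you add an HttpServletRequest parameter to the controller method):
System.out.println(request.getQueryString());    // raw, still percent-encoded: ids={%22userIds%22:[%22123456%22]}
System.out.println(request.getParameter("ids")); // already decoded by the container: {"userIds":["123456"]}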
|
{
"pile_set_name": "StackExchange"
}
|
Q:
How can I trigger AMP development mode without a URL
I'm building some tooling for the design of AMP pages. According to the docs, one should add #development=1 to the URL of a page to trigger validations.
The tooling I've written generates an AMP HTML document and uses an iframe srcdoc attribute to render it on the fly. Thus there is no URL. Is there another way to trigger validations and development mode?
A:
Per the comment at https://github.com/ampproject/amphtml/issues/999#issuecomment-171787638, the validator can be used programmatically. A demo: https://jsfiddle.net/4yoxog13/5/
I believe this validator is the official one, source code here.
Some code because stackoverflow wants there to be code.
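For instance, from Node-based tooling the same validator can be called like this (a sketch using the amphtml-validator npm package; whether that package fits your build is an assumption on my part, and generatedAmpHtml stands for whatever document string your tooling produced):
var amphtmlValidator = require('amphtml-validator');
amphtmlValidator.getInstance().then(function (validator) {
  var result = validator.validateString(generatedAmpHtml); // your generated AMP HTML string
  console.log(result.status); // 'PASS' or 'FAIL'
  result.errors.forEach(function (error) {
    console.log(error.line + ':' + error.col + ' ' + error.message);
  });
});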
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Where is the Heap created by JIT (C#)
I have always had this question: where (RAM / hard disk / somewhere else?) is the heap located?
If I load a 2 GB file into memory via code, then where will it go?
Also, where is this "stack" physically located? RAM?
Can someone from the actual implementation team let us know? Most people say it's RAM, but I wanted to really know where and how, for both.
Please share some good articles if it's difficult to answer here.
A:
When you run a .NET exe, it loads MSCorEE.dll, which will host the CLR. The CLR will create the stack and heap in the process memory. It will take care of asking for more memory for the process if it needs it.
They will be located in RAM, although the operating system will abstract it from you (e.g. it could be in a swap file).
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Quad voltage controlled resistance
In order to have a quad volume control, I thought of having a single log pot between a negative DC source and the 4 gates of 4 matched JFETs, as shown above. We're talking audio here; I haven't calculated the level of the input signal yet, but I assume it's going to be around 500 mV pk-pk. Will it work? I know I could just buy a 4-gang log pot, but they are hella expensive.
A:
First, the \$R_{DS(ON)}\$ vs. \$V_{GS}\$ of a FET depends on a number of quantities which are not normally defined in the datasheet, and which vary from part to part and over temperature. There is not a nice linear, unchanging relationship between \$V_{GS}\$ and \$R_{DS(ON)}\$.
Second, your FETs will never be matched close enough, probably not even if they all came on the same die.
Third, you're not really implementing an audio pot, because the resistance across the assembly won't be constant.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
How to write a cron for every 30 minutes between 2AM to 4AM?
I was trying to figure out a cron schedule for every 30 minutes between 2 AM and 4 AM.
So the cron run time will be: 2:00 2:30 3:00 3:30 4:00
Every hour will be something like this:
0 2,3,4 * * * command
Thank You.
A:
I'd just write it as two distinct rules:
0,30 2-3 * * * /run/this/command
0 4 * * * /run/this/command
If you're the sort that worries about this sort of thing (I'm not), you can use a conditional to get it onto one line:
0,30 2-4 * * * [[ "$(date +%H%M)" != "0430" ]] && /run/this/command
This will run the command given at 4:30 as well, but not actually call your script unless the time is something other than 4:30.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
How to convert Canvas element to string
In general we can convert HTML elements to a string and then insert it into the DOM later when needed. Similarly, I want to convert the "CANVAS" element to a string along with its context properties.
In the following example, I am getting the string value of the span tag with the outerHTML property. Likewise I want to get the "CANVAS" element along with its context properties.
Is there any method or property that supports this?
Example code snippets:
var sp=document.createElement("span");
sp.innerHTML = "E2"
var e2 = sp.outerHTML;
$("#test1").append(e2);
var c=document.createElement("CANVAS");
var ctx=c.getContext("2d");
ctx.beginPath();
ctx.moveTo(20,20);
ctx.lineTo(100,20);
ctx.arcTo(150,20,150,70,50);
ctx.lineTo(150,120);
ctx.stroke();
var cn = c.outerHTML;
$("#test2").append(cn);
<script src="https://ajax.googleapis.com/ajax/libs/jquery/2.1.1/jquery.min.js"></script>
<div id="test1">
<span>E1</span>
</div>
<div id="test2">
</div>
A:
Seems like you already know how to get the DOM properties of the canvas object.
Now you only need the "context" info (the image data, as I understand it).
You can get the image data as a base64 string like this:
function CreateDrawing(canvasId) {
let canvas = document.getElementById(canvasId);
let ctx = canvas.getContext('2d');
ctx.beginPath();
ctx.moveTo(20,20);
ctx.lineTo(100,20);
ctx.arcTo(150,20,150,70,50);
ctx.lineTo(150,120);
ctx.stroke();
}
function GetDrawingAsString(canvasId) {
let canvas = document.getElementById(canvasId);
let pngUrl = canvas.toDataURL(); // PNG is the default
// or as jpeg for eg
// var jpegUrl = canvas.toDataURL("image/jpeg");
return pngUrl;
}
function ReuseCanvasString(canvasId, url) {
let img = new Image();
img.onload = () => {
// Note: here img.naturalHeight & img.naturalWidth will be your original canvas size
let canvas = document.getElementById(canvasId);
let ctx = canvas.getContext('2d');
ctx.drawImage(img, 0, 0);
}
img.src = url;
}
// Create something
CreateDrawing("mycanvas");
// save the image data somewhere
var url = GetDrawingAsString("mycanvas");
// re use it later
ReuseCanvasString("replicate", url);
<canvas id="mycanvas"></canvas>
<canvas id="replicate"></canvas>
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Do we need both the [comparative-linguistics] and [typology] tags?
As of right now, we have both the typology and comparative-linguistics tags. Currently typology lacks usage recommendations, which I think we see reflected in the questions that have been tagged with it. The comparative-linguistics tag does have usage recommendations, which describe the tag as being for "questions about the similarities and/or differences between two conlangs or a conlang and a natlang." However, what this describes is usually considered a subset of linguistic typology, which is concerned with classifying languages based on their features and with describing common structural and featural properties of language and their distribution throughout the world's languages.
Based on this, is it necessary to have both of these tags?
A:
I believe these tags should be merged into a single tag and that typology should be the master tag, since the way these tags have been used on this site, which has so far been to compare languages with similar features or to discuss feature-based classifications like "analytic" and "synthetic", is pretty much exactly what would be described as "linguistic typology."
Linguists typically use "comparative linguistics" to describe the subfield of historical linguistics that evaluates the relatedness of languages using the comparative method, referring more to the genetic relatedness of two languages rather than possession of similar features. Comparative linguistics under this definition isn't really particularly relevant to conlanging, so I don't think having comparative-linguistics as a tag synonym for typology is a bad thing, but because "typology" is more technically accurate I believe it should be the master tag.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
How do usb-c connectors carry 100W at 20V without melting?
USB-C connectors are tiny. How is it possible for them to carry 100 W at 20 V without melting? I understand that power = V·A, and with V = 20 V then A only needs to be 5. But 5 A through most tiny wires makes them melt. So how does USB-C pull off this trick?
Wikipedia has several articles on USB-C power delivery, including https://en.wikipedia.org/wiki/USB_hardware#PD, but I still don't understand how the standard avoids this problem. It's not like the cable is made from room-temperature superconductors...
A:
If you look at the American Wire Gauge reference you can see that 24 AWG wire of 1.5 mm diameter can withstand 5 amps. However, because of I²R loss they will likely use a bigger wire. Based on the teardown picture below, I'd say they probably use 20 AWG.
You only need 2 of these wires, one for Vcc and the other for ground. They can easily fit in a USB-C cable.
https://www.multicable.com/resources/reference-data/current-carrying-capacity-of-copper-conductors/
https://www.mvdesignlabs.com/tear-down-tuesday/teardowntuesday-of-a-usb-c-cable/
The 4 VCC pins are connected to the red wire, which seems good enough to withstand 5 A (probably more). The 4 GND pins are connected to the black wire.
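As a rough sanity check on the heating (assuming about 1 m of AWG20 copper at roughly 33 mΩ per metre): P = I²R = 5² × 0.033 ≈ 0.8 W dissipated along each conductor, so well under 2 W for VBUS and ground together - the cable gets warm, but nowhere near melting.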
A:
There are two sides in this question.
One side is the Type-C connector itself, and the other is about wires along the cable.
First, the Type-C connector dedicates 4 contact groups for VBUS and 4 for return GND. Therefore each contact carries only 1.25 A, which is reasonable, and this is the solution on the connector side.
The second concern is about the cable. As specified (Section 3.3.3 Wire Gauges and Cable Diameters (Informative)), the power and ground cables can have AWG from 20 to 28. The standard ampacity of an AWG20 copper wire is 5 A. So the standard allows multiple gauges to be used, resulting in cables that have different ampacity ratings.
It is obvious that two AWG20 wires will make the C-C cable quite thick and stiff, so a thicker-looking cable is more likely to be capable of 5 A. However, it is impossible to determine what kind of wires are used inside a cable from its external look, even if something is imprinted on the cable jacket. To solve the problem of insufficient wire gauges and the possibility of melting, the Type-C standard defines and mandates the use of "electronic markers", see Section 4.9 "Electronically Marked Cables" of the Type-C Standard.
An electronic marker is a tiny special microprocessor that should be embedded into the cable's overmold and be connected to one of the CC lines. The marker is supposed to communicate with the Type-C port electronics via a defined set of serial messages, so the power provider and power consumer know the connected cable's capability, and negotiate their "power contract" accordingly.
The markers are designated as " SOP' " in drawings. If the cable doesn't report that it is 5A capable during "Discover Identity" inquiry, the Type-C PD system won't engage the 5-A mode, and power consumer won't consume this current.
Every full-featured Type-C-Type-C cable must have the electronic marker to be USB Type-C/PD compliant. This is how the standard avoids the melting problem.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Printing OpenXml Documents in C#
I have searched for this but can't find anything to help.
I have a WordprocessingDocument - using DocumentFormat.OpenXml.Wordprocessing.
Is there a way to print this straight to a printer? I don't want to save it at all.
Thanks
A:
No, you'll need some sort of program to take the Open XML and translate it into how it should be displayed for the printer. In its raw form it's just XML, and without the translation the printer won't print it how you would expect.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Python String into 8 byte Array
My problem is to convert a string in Python into an array in the following way: I have to split the string into 8-byte parts. I didn't find anything like this online. Basically, I want to recreate the following PHP code in Python:
$eight_byte_packages_array=str_split($data, 8);
A:
In PHP, the function str_split splits into bytes rather than characters when dealing with a multi-byte encoded string.
If you want to simulate this function in Python 3, you first have to convert the string to bytes.
def str_split(string, length):
byte_string = string.encode('utf-8')
return [byte_string[i:i+length] for i in range(0, len(byte_string), length)]
str_split("This is a test.", 8)
>>> [b'This is ', b'a test.']
str_split("これはテストです。", 8)
>>> [b'\xe3\x81\x93\xe3\x82\x8c\xe3\x81',
b'\xaf\xe3\x83\x86\xe3\x82\xb9\xe3',
b'\x83\x88\xe3\x81\xa7\xe3\x81\x99',
b'\xe3\x80\x82']
|
{
"pile_set_name": "StackExchange"
}
|
Q:
trying to understand jsonp with the flickr example
I'm trying to get my head around how I can make a JSON request to a JSON file stored on my server from jsfiddle.
html:
<!DOCTYPE html>
<html>
<head>
</head>
<body>
<div id="images">
</div>
</body>
</html>
jquery:
$.getJSON("http://www.shopsheep.com/groupon/json/test.json?jsoncallback=?", function(data) {
$.each(data.items, function(i, item) {
$("<img/>").attr("src", item.media.m).appendTo("#images");
if (i == 0) {
return false;
}
});
});
$.getJSON("http://api.flickr.com/services/feeds/photos_public.gne?format=json&jsoncallback=?", function(data) {
$.each(data.items, function(i, item) {
$("<img/>").attr("src", item.media.m).appendTo("#images");
if (i == 0) return false;
});
});
I downloaded the Flickr JSON file and uploaded it to my server as test.json. If I paste its URL in the browser it returns just like the Flickr file.
However, when I try to display the image, only the original Flickr example is working. Any idea why this is the case?
http://jsfiddle.net/stofke/DJQ5g
OK, I have found out how to do this. The getJSON function substitutes a randomly named callback function name for jsoncallback=?, something like this:
jQuery160188050875203142_1309437718540&_=1309437718551
In order to wrap your JSON file with this callback function you of course need to know the name of this function, so if you convert your JSON file into a PHP file then you can get the callback function name like this:
<?php echo $_GET["jsoncallback"];?>(
ADD JSON CONTENT HERE
)
This PHP file will get the name of the callback function via a GET variable and wrap the JSON content with it. This way it works fine.
A:
Your JSON file is missing the function name. It should start with the function name.
If you see here http://api.flickr.com/services/feeds/photos_public.gne?format=json, it starts with jsonFlickrFeed.
Your JSON should be like this:
callback({
"title": "Uploads from everyone",
"link": "http://www.flickr.com/photos/",
"description": "",
"modified": "2011-06-29T21:43:16Z",
"generator": "http://www.flickr.com/",
"items": [
{....
Maybe you need to understand more about JSONP. http://en.wikipedia.org/wiki/JSONP
|
{
"pile_set_name": "StackExchange"
}
|
Q:
BigQuery - Grant Access to Other Google Cloud Platform Projects
I'm trying to setup customer access to some of my BigQuery data. I'll start off with my requirements, then what I think the solution needs to be, though I'm not sure how to execute.
Requirements
Separate billing per customer for queries
I don't want to make my dataset public
Read only access to specific datasets
Accessible via Excel connector
No access rights to my main project
They manage their own access privileges, I don't want to have to add and remove individual users from direct dataset access on behalf of all our clients.
Nice to have - Web UI access
What I've Done
Created a new Google Developer Project
Added a view-only user on that project
Added a service account
Granted access to my BigQuery dataset to the service account
Here are the options for granting dataset access from the documentation:
I imagine that I need to setup some sort of special group, but I can't figure out how to do it.
Thanks in advance!
A:
In BigQuery there are two different concepts:
The first one is billing (for queries and any other billable
activity) that is linked with a Google Cloud Project.
The second one is access to a dataset.
Having said that, to fulfil your requirements you'd create a separate project for each of the customers, and grant access to the datasets in the granularity that you would want.
That way you would have the costs for each of the projects separated but billed to you. Be careful to give them only read access to the project, unless you want them to be able to create other services like VM or deploy GAE apps, as they'd be billed to you as well.
For example dataset [MyDatasetA] to users X and Y in projects Project1 and Project2, but access to [MyDatasetB] to users Y and Z in projects Project2 and Project3.
Thus, each project is accountable for the queries their users run, and you have your access control on each dataset without it being public.
Separate billing per customer for queries. Done with the independent projects.
I don't want to make my dataset public. Done with fine grained control access.
Read only access to specific datasets. Same as above.
Accessible via Excel connector. It should work without problems as they'd be first class BQ users.
No access rights to my main project. Again possible if they are restricted to their own projects.
They manage their own access privileges. This is trickier. I think they'd need more than read access to the datasets or more than read access to the projects to be able to add new users, if you use the project groups as access control.
Nice to have - Web UI access. Check out https://bigquery.cloud.google.com/
The project groups are groups that allow to select members with Viewer, Developer or Owner roles in one click, without the hassle of adding each member manually.
You get already three groups set-up for you to use: Viewers, Editors and Owners of the original project.
But you may create your own Google Groups and give those groups the permission you want.
The hint when doing so, is that new users will usually need to Display your project so that it appears in the BQ online browser. This is done by clicking on the arrow to the side of the project name in the BQ online browser followed by Switch to project then Display project with the project name that the Dataset belongs to.
Edit: Improved the explanation about Group access
|
{
"pile_set_name": "StackExchange"
}
|
Q:
DPSK vs PSK Error Probability
BACKGROUND
The bit error probability for BPSK under AWGN is easily derived from tail probabilities of Gaussian distributions and results in
$$P_e = Q\biggr(\sqrt{\frac{2E_b}{N_o}}\biggl)$$
The equivalent bit error probability for DBPSK is given as follows but much more complicated to derive:
$$P_e = \frac{1}{2}e^{-E_b/N_o}$$
A complete derivation for the DBPSK case is here:
http://staff.ustc.edu.cn/~jingxi/Lecture%209_10.pdf
With the same formula and plotted in comparison to BPSK on Wikipedia (https://en.wikipedia.org/wiki/Phase-shift_keying#/media/File:DPSK_BER_curves.svg):
MY QUESTION
I incorrectly thought I could simplify this derivation by extending the simpler BPSK $P_e$ result through understanding what occurs when you multiply two signals with independent noise (Matt L has provided that here: SNR After Multiplying Two Noisy Signals), since such a product results when performing non-coherent demodulation for DBPSK.
I show this in the block diagram below:
This is the non-coherent structure for DBPSK demodulation. The transmitter is also differentially encoded to minimize error propagation (so that errors always occur in pairs rather than propagate until the next transition).
Here we can see that given an input DBPSK signal with $SNR = SNR_1$, the signal after being delayed one bit period $T$ will also have $SNR = SNR_1$, but the noise component will be independent (assuming AWGN, the noise in one symbol period is independent of the noise in the next symbol period). With reference to Matt L's result linked above, the predicted SNR at the output of the multiplier would be:
$$SNR_2 = \frac{SNR_1 SNR_1}{SNR_1+SNR_1+1}$$
For real signals, the frequency at the output of the multiplier is the sum and the difference of the input frequencies, so in this case the difference is the baseband signal of interest while the sum is the double of the carrier that we filter out with the low pass filter (LPF). The power of both the signal and noise components would be affected the same way in this process, so the SNR at the output of the LPF would still be $SNR_2$.
Note that for SNR >> 1, $SNR_2$ approaches $SNR_1/2$, or 3 dB worse.
Given this, combined with the double-error property that a single bit error always results in 2 errors (assuming we use differential encoding in the transmitter), I can convince myself that the predicted bit error rate for DBPSK would be as follows (reducing the SNR by 2 when SNR >> 1 and doubling the resulting $P_e$), but from the detailed derivation this is clearly incorrect. I understand the detailed derivation; my question is not with that, but what is the flaw with this alternate approach?
$$P_e = 2Q\biggr(\sqrt{\frac{x}{2x+1}}\biggl)$$
where $x = \frac{2E_b}{N_o}$
It is interesting to note that for higher order M-PSK, this 3 dB result does match (note the difference between QPSK and DQPSK in the plot above). Perhaps this is a clue that real versus complex is a factor?
A:
The flaw in the problem posed starts right from the block diagram that you have posted: it is not the case in DPSK receiver that the RF signal is multiplied with a delayed version of itself and then passed through a low-pass filter followed by slicing etc.
Instead, the RF signal is "demodulated" through a variant of a standard coherent QPSK receiver whose local oscillator is assumed to be synchronized in frequency to the incoming carrier signal but not necessarily synchronized in phase. Thus, the incoming $$r(t) = g(t)\cos(2\pi f_c t+\theta) + n(t), 0 \leq t < T,$$ where $g(t)$ is the baseband pulse, $T$ the bit duration, $\theta$ the unknown RF phase, and $n(t)$ the AWGN (or band limited AWGN if you like to think that the incoming signal has been bandpass-filtered, e.g. as in a tuned amplifier in an IF stage of a superheterodyne system), is multiplied by $2\cos(2\pi f_c t)$ in the I branch and separately by $-2\sin(2\pi f_c t)$ in the Q branch. Since
\begin{align}
2\cos A \cos B &= \cos(A+B) + \cos(A-B)\\
2\sin A \cos B &= \sin(A+B) + \sin(A-B)
\end{align}
we get that the signal parts of the two mixer outputs are
\begin{align}
2\cos(2\pi f_c t) g(t)\cos(2\pi f_c t+\theta) &= g(t)\cos(4\pi f_c t+\theta) + g(t)\cos(-\theta)\\
-2\sin(2\pi f_c t) g(t) \cos(2\pi f_c t+\theta)&= -g(t)\sin(4\pi f_c t+\theta) - g(t)\sin(-\theta).
\end{align}
The double-frequency terms in the mixer outputs can be eliminated explicitly by low-pass filtering the results before doing anything else, or they get eliminated implicitly when we do matched filtering on the two mixer outputs. Now, matched filtering is typically done by via correlation
(multiply by $g(t)$ and integrate from $0$ to $T$) and thus we effectively get a complex-valued decision variable $Xe^{j\theta} + \mathcal CN(0,\sigma^2)$ at the end of the integration period. We can't make a decision just on this decision variable because we don't know the value of $\theta$; depending on what $\theta$ is, the decision might be wholly bass ackwards from what it should be! But what we can do is save the decision variable and use it $T$ seconds later to answer the question
Did the incoming signal change phase from the unknown $\theta$ during $[0,T)$ to (also unknown) $\theta+\pi$ during $[T,2T)$, or did the phase stay the same unknown $\theta$ during $[T,2T)$?
The final expression for the error probability of DBPSK is quite simple:
$$P_{\text{e, DBPSK}} = \frac 12 e^{-E_b/N_0}$$
but, as you say, is harder to derive than the $Q\left(\sqrt{\frac{2E_b}{N_0}}\right)$ error probability for coherent BPSK.
Turning to the question of why the trick of using the coherent BPSK error formula as a guide and simply replacing the SNR for BPSK with the SNR for DBPSK in the error probability formula doesn't give the right answer for the DBPSK error probability, the issue is that the two systems make decisions differently and there is no obvious reason why the error probabilities of the two systems should be given by the same formula in terms of SNR. Note, for example, that $P_e = Q(\sqrt{2E_b/N_0})$ for coherent BPSK but only $Q(\sqrt{E_b/N_0})$ for coherent FSK.
Finally, note that for $x > 0$,
$Q(x) < \frac 12 \exp\left(-\frac{x^2}{2}\right)$
and so, setting $x=\sqrt{2E_b/N_0}$, we get that
$$P_{\text{e, BPSK}} = Q(\sqrt{2E_b/N_0}) < P_{\text{e, DBPSK}} = \frac 12 e^{-E_b/N_0}$$
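As a quick numerical illustration of that inequality (a sketch in Python, assuming SciPy is available), at $E_b/N_0 = 10$ dB:
import numpy as np
from scipy.special import erfc

def qfunc(x):
    return 0.5 * erfc(x / np.sqrt(2))   # Gaussian tail probability Q(x)

ebno_db = 10.0
ebno = 10 ** (ebno_db / 10)             # linear Eb/N0
pe_bpsk = qfunc(np.sqrt(2 * ebno))      # coherent BPSK, ~3.9e-6
pe_dbpsk = 0.5 * np.exp(-ebno)          # DBPSK, ~2.3e-5
print(pe_bpsk, pe_dbpsk)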
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Fatal error: Call to a member function query() on a non-object in admin.php on line 219
I've searched a lot of basically the same questions on SO, which haven't seemed to help. It's been a while since I've touched PHP, so I'm guessing there's a simple solution but I really can't figure it out.
config.php: (included into admin.php)
$mysqli = new mysqli($mHost, $mUser, $mPass, $db);
admin.php:
$sqlQuery = "INSERT INTO `category` (`id`, `name`) VALUES ('', '$_POST[name]')";
$result = $mysqli->query($sqlQuery);
var_dump($result) returns:
NULL
and gives error:
Fatal error: Call to a member function query() on a non-object in
A:
You are not checking the result of the call to new mysqli. If that fails, then $mysqli will be null and not a valid object you can query against.
Also, by building SQL statements with outside variables, you are leaving yourself open to SQL injection attacks. In addition, any input data with single quotes in it, like a name of "O'Malley", will blow up your SQL query. Please learn about using parametrized queries, preferably with the PDO module, to protect your web app. My site http://bobby-tables.com/php has examples to get you started, and this question has many examples in detail.
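For example, a parametrized version of the insert above might look like this (a sketch - the connection details are whatever your config.php already defines, and id is assumed to be auto-increment):
$pdo = new PDO("mysql:host=$mHost;dbname=$db;charset=utf8", $mUser, $mPass);
$pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION); // fail loudly instead of silently
$stmt = $pdo->prepare('INSERT INTO `category` (`name`) VALUES (:name)');
$stmt->execute(array(':name' => $_POST['name']));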
|
{
"pile_set_name": "StackExchange"
}
|
Q:
How do I find the marginal probability density function when the interval is dependent of one of the variables?
I'm trying to find $f_x$ and $f_y$ given a joint probability distribution $$f(x,y) = \frac18 (y^2 -x^2)e^{-y}$$ defined on the interval $0 \leq y \leq \infty$, $-y \leq x \leq y$
Naturally I've tried integrating on the intervals and found:
$$ f_x(x) = \int_0^\infty \mathrm{d}y\ f(x,y)= \frac18 (2 -x^2)$$
$$ f_y(y) = \int_{-y}^y \mathrm{d}x\ f(x,y)= \frac16 e^{-y}y^3$$
But $f_x$ isn't normalized to one.
I believe the error is in the integration interval, but I don't know exactly what is wrong. Can anybody point me in the correct direction?
A:
It always helps to sketch the support of the joint distribution; i.e., the region of $(X,Y)$ such that the density is positive. In your case, you can see that this region must be in the upper half plane ($y \ge 0$), and for a given value of $y$, $x$ must be between $-y$ and $y$; so this region is triangular and is bounded to the right by $y = x$ and to the left by $y = -x$.
Thus, the marginal density of $X$ is given by the integral $$f_X(x) = \int_{y=|x|}^\infty f_{X,Y}(x,y) \, dy$$ since we require $y$ to be at least as large as $|x|$ (otherwise the joint density is zero). This corresponds to picking an $x$-value, drawing a vertical line, and integrating over a $y$-interval that corresponds to the intersection of this vertical line and the support of the joint distribution; in a sense, it is like "collapsing" the support onto the $x$-axis, where the marginal density at each $x$-value is the integral of the joint density for all corresponding $y$-values.
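For this particular joint density the integral can be carried out explicitly, as a check: $$f_X(x) = \frac18\int_{|x|}^\infty (y^2 -x^2)e^{-y} \, dy = \frac18 e^{-|x|}\left(2|x|+2\right) = \frac14\left(1+|x|\right)e^{-|x|},$$ which does integrate to $1$ over $-\infty<x<\infty$.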
The marginal density of $Y$ is easier to see: $$f_Y(y) = \int_{x=-y}^y f_{X,Y}(x,y) \, dx$$ as you wrote. This is analogous to "collapsing" the support onto the $y$-axis, "summing" up the joint density in horizontal rather than vertical lines.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Is it possible to export both an animated object and camera from Maya to Blender together?
Is it possible to export both an animated object and camera from Maya to Blender together? .mmd can export the object and collada can do the camera. Is there anything that can export both?
A:
Nope, the FBX importer isn't up to scratch yet. Your best bet is to keep everything in Collada and bake your camera animation to a cube within Maya first. Pain in the ass, but it always seems to be a struggle to get things between software packages, not just Blender.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
performSelectorOnMainThread Embedded in didUpdateToLocation
I am trying to call a method from the didUpdateToLocation method like below. In my buttonUpdate method, I am updating the interface, and am trying to avoid the lag that would be caused if I were to put the block of code directly in the didUpdateToLocation method. For some reason, the code below is causing my app to crash. Does anyone know why? Thank you!
- (void)locationManager:(CLLocationManager *)manager didUpdateToLocation:(CLLocation
*)newLocation fromLocation:(CLLocation *)oldLocation {
NSLog(@"didUpdateToLocation: %@", newLocation);
CLLocation *currentLocation = newLocation;
if (currentLocation != nil) {
[self performSelectorOnMainThread:@selector(buttonUpdate:) withObject:nil
waitUntilDone:NO];
}
}
A:
One thing I see right away is that you're calling your method via this selector:
"buttonUpdate:"
The colon in that method signature implies there's some object that's supposed to be passed along (e.g. "- (void) buttonUpdate: (NSString *) maybeAString"). And you're passing nil, which may be the problem (if the method is expecting something real - and not nil - to be passed along).
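For example, if buttonUpdate takes no argument, both the selector and the method should drop the colon (a sketch):
[self performSelectorOnMainThread:@selector(buttonUpdate) withObject:nil waitUntilDone:NO];
// elsewhere: - (void)buttonUpdate { ... update the interface ... }
Otherwise, keep the colon and pass a real object instead of nil.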
|
{
"pile_set_name": "StackExchange"
}
|
Q:
The body in the HTTP response only arrives after many requests
#include "iostream"
#include "unistd.h"
#include "netinet/in.h"
#include "sys/types.h"
#include "sys/socket.h"
#include "fcntl.h"
#include "tools.h"//мой файл. там strlen() и ip2int()
using namespace std;
int main(int argc, char** argv){
struct sockaddr_in addr;
addr.sin_family = AF_INET;
addr.sin_port = htons(80);
addr.sin_addr.s_addr = htonl(ip2int(192,168,0,1));
int sockfd = socket(AF_INET, SOCK_STREAM, 0);
connect(sockfd, (struct sockaddr*) &addr, sizeof(addr));
char result[1024];
char request_buffer[] = "GET / HTTP/1.1\r\n\r\n"; // this has been fixed
for(int i = 0; i < 20; i++){
write(sockfd,request_buffer,strlen(request_buffer));
}
read(sockfd, result, 1024);
cout << result << endl;
close(sockfd);
return 0;
}
The body only arrives if I send the request several times. On the first attempt only the headers come back, with an empty body.
A:
Send the newline characters twice:
"GET / HTTP/1.1\r\n\r\n"
The point is that, according to the HTTP 1.1 specification, a request ends with \r\n after an empty line, i.e. \r\n\r\n. This is done to make it possible to send request headers. For example:
(request)
GET / HTTP/1.1
Host: example.com
(response)
HTTP/1.1 200 OK
Server: server version
Date: date
...
<HTML>
...
Try telnet to port 80 - it will become completely clear.
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <arpa/inet.h>
int main(int argc, char** argv){
struct sockaddr_in addr;
int status;
status = 0;
addr.sin_family = AF_INET;
addr.sin_port = htons(80);
addr.sin_addr.s_addr = inet_addr ("127.0.0.1");
int sockfd = socket(AF_INET, SOCK_STREAM, 0);
connect(sockfd, (struct sockaddr*) &addr, sizeof(addr));
char result[1025];
char request_buffer[] = "GET / HTTP/1.1\r\nHost: localhost\r\n\r\n";
write(sockfd,request_buffer,strlen(request_buffer));
while (status = read (sockfd, result, 1024)) {
printf ("=============== READ STATUS %d =============\n", status);
printf ("%s\n", result);
memset (&result, 0, sizeof (result));
}
close(sockfd);
return 0;
}
I wouldn't drag in all the C++ heaviness just for a single cout.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
JPA criteria query, order on class
Is there a way with JPA criteria queries to order on class? Imagine the following domain objects:
abstract class Hobby { ... }
class Coding extends Hobby { ... }
class Gaming extends Hobby { ... }
Using regular QL I was able to do
from Hobby h order by h.class
But when I apply the same logic to a criteria query, the runtime exception "unknown attribute" occurs.
CriteriaQuery<Hobby> criteriaQuery = builder.createQuery(Hobby.class);
Root<Hobby> hobbyRoot = criteriaQuery.from(Hobby.class);
criteriaQuery.orderBy(builder.asc(hobbyRoot.get("class")));
List<Hobby> hobbies = entityManager.createQuery(criteriaQuery).getResultList();
JPA implementation used: Hibernate-EntityManager v3.5.5-Final
A:
JPA 2.0 introduces a new TYPE expression that allow a query to restrict results based on class types.
You can use a type expression with the Criteria API using Path#type(). So you could try:
CriteriaQuery criteriaQuery = builder.createQuery(Hobby.class);
Root hobbyRoot = criteriaQuery.from(Hobby.class);
criteriaQuery.orderBy(builder.asc(hobbyRoot.type()));
List hobbies = entityManager.createQuery(criteriaQuery).getResultList();
While this code compiles, I didn't test it (I'll give it a try tomorrow).
Actually, I wonder if this is legal or if the type() should be part of the select in order to order by it (maybe that's what the criteria query is supposed to generate). Need to check that.
References
JPA 2.0 specification
Section 4.6.17.4 "Entity Type Expressions"
More resources
Java Persistence 2.0 Proposed Final Draft
|
{
"pile_set_name": "StackExchange"
}
|
Q:
JSTL ForEachTag contains no methods named setVar
Situation:
Making a simple exercise, reading countries from a two-dimensional array.
Objective:
Loop over a String[ ][ ] in a JSP file and print its content.
Problem:
Type org.apache.taglibs.standard.tag.rt.core.ForEachTag contains no methods named setVar.
<c:forEach items="${array}" var="country">
^-------^
Tests:
I saw a few examples from Stackoverflow and around the net, and all have the same syntax. Don't know what's going on.
Code:
<%@ page import="service.CountryFinderSoapBindingStub"%>
<%@ page language="java" contentType="text/html; charset=ISO-8859-1" pageEncoding="ISO-8859-1"%>
<%@ page import="java.net.URL" %>
<%@ page import="java.util.ArrayList" %>
<%@ taglib prefix="c" uri="http://java.sun.com/jsp/jstl/core" %>
<!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN" "http://www.w3.org/TR/html4/loose.dtd">
<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=ISO-8859-1">
<title>Insert title here</title>
</head>
<body>
<%
CountryFinderSoapBindingStub c = new CountryFinderSoapBindingStub(new URL("http://localhost:7001/World_Countries/services/CountryFinder"),null);
String[][] array = c.findAllCountries();
%>
The Countries are:
<br>
<c:forEach items="${array}" var="country">
<p>City: <c:out value="${array[country][0]}"></c:out></p> <br>
<p>Country: <c:out value="${array[country][1]}"></c:out></p>
</c:forEach>
</body>
</html>
A:
Solved. I replaced the jstl-1.2.jar with a heavier one (405 KB). It seems I had downloaded an older one.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
No repository found under this GitHub path
I want to store an Android library on jcenter.
Before this, I created a repository on Bintray. After creating the repository, I had to fill in the package details. In the package details there was a text field GitHub repo (user/repo), in which I had given the GitHub link of my ID, https://github.com/kishlayk.
But after updating the package, it shows No repository found under this GitHub path. What should I do to correct it?
A:
The GitHub repo (user/repo) text field should be filled with the GitHub "user/repo" (example: myGithubUser/myGithubRepository).
This will import the GitHub README file and RELEASE notes to Bintray under the Readme and Release Notes tabs.
You can also provide the full github url path in the VCS field located in the package details.
I am with JFrog, the company behind bintray and artifactory.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Facebooker2 how save a new register?
I am using Facebooker2 (Rails 2.3 and RESTful Authentication) for Facebook Connect functionality. In my controller I use this action:
def create_facebook_user
if current_facebook_user
@user = User.find_by_fb_user_id(current_facebook_user.id.to_i)
end
if @user.blank?
@facebook_user = current_facebook_user.fetch
@user = User.new :login => @facebook_user.email, :email => @facebook_user.email, :name => @facebook_user.name
@user.fb_user_id = @facebook_user.id.to_i
@user.state = "active"
if @user.save(:validate=> false)
@user.profile = Profile.create(:benefactor_id => nil, :benefactor_invites => Setting.find_by_identifier("benefactor_invites").value.to_i)
redirect_to :controller => "profiles", :action => "show", :id => @user.profile.id
else
render "new"
end
elsif @user.fb_user_id.nil?
@user.update_attribute :fb_user_id, current_facebook_user.id
redirect_to :controller => "dashboard", :url => "index"
else
redirect_to :controller => "dashboard", :url => "index"
end
My problem is with assigning the state to the user. When the user is saved with @user.save(:validate => false), it doesn't skip the validations. I also modified the "ByPassword" module of RESTful Authentication (the password_required? method), but the save method returns false.
Here is my code:
Controller
class UsersController < ApplicationController
skip_before_filter :verify_authenticity_token, :only => :create
before_filter :find_user,
:only => [:profile,
:destroy,
:edit_password, :update_password,
:edit_email, :update_email]
layout 'application'
def create_facebook_user
if current_facebook_user
@user = User.find_by_fb_user_id(current_facebook_user.id.to_i)
end
if @user.blank?
@facebook_user = current_facebook_user.fetch
@user = User.new :login => @facebook_user.email, :email => @facebook_user.email, :name => @facebook_user.name
@user.fb_user_id = @facebook_user.id.to_i
@user.state = "active"
if @user.save(:validate=> false)
@user.profile = Profile.create(:benefactor_id => nil, :benefactor_invites => Setting.find_by_identifier("benefactor_invites").value.to_i)
redirect_to :controller => "profiles", :action => "show", :id => @user.profile.id
else
render "new"
end
elsif @user.fb_user_id.nil?
@user.update_attribute :fb_user_id, current_facebook_user.id
redirect_to :controller => "dashboard", :url => "index"
else
redirect_to :controller => "dashboard", :url => "index"
end
end
end
View (important Fragment)
o ingresa con Facebook Connect
<%= fb_login_and_redirect("/users/create_facebook_user") %>
<%#= fb_login_and_redirect('/users/link_user_accounts', :perms => 'email,user_birthday') %>
<%#= fb_login_button("window.location = '/users/link_user_accounts'") %>
User model
require 'digest/sha1'
class User < ActiveRecord::Base
include Authentication
include Authentication::ByPassword
include Authentication::ByCookieToken
include Authorization::AasmRoles
...
end
ByPassword module
module Authentication
module ByPassword
# Stuff directives into including module
def self.included(recipient)
recipient.extend(ModelClassMethods)
recipient.class_eval do
include ModelInstanceMethods
# Virtual attribute for the unencrypted password
attr_accessor :password
validates_presence_of :password, :message => :"user.password.blank", :if => :password_required?
validates_presence_of :password_confirmation, :message => :"user.password_confirmation.blank", :if => :password_required?
validates_confirmation_of :password, :message => :"user.password.confirmation", :if => :password_required?
validates_length_of :password, :within => 5..40, :message => :"user.password.too_short", :if => :password_required?
before_save :encrypt_password
end
end
# #included directives
#
# Class Methods
#
module ModelClassMethods
# This provides a modest increased defense against a dictionary attack if
# your db were ever compromised, but will invalidate existing passwords.
# See the README and the file config/initializers/site_keys.rb
#
# It may not be obvious, but if you set REST_AUTH_SITE_KEY to nil and
# REST_AUTH_DIGEST_STRETCHES to 1 you'll have backwards compatibility with
# older versions of restful-authentication.
def password_digest(password, salt)
digest = REST_AUTH_SITE_KEY
REST_AUTH_DIGEST_STRETCHES.times do
digest = secure_digest(digest, salt, password, REST_AUTH_SITE_KEY)
end
digest
end
end # class methods
#
# Instance Methods
#
module ModelInstanceMethods
# Encrypts the password with the user salt
def encrypt(password)
self.class.password_digest(password, salt)
end
def authenticated?(password)
crypted_password == encrypt(password)
end
# before filter
def encrypt_password
return if password.blank?
self.salt = self.class.make_token if new_record?
self.crypted_password = encrypt(password)
end
def password_required?
if fb_user_id.blank?
crypted_password.blank? || !password.blank?
else
return false
end
end
def has_fb_user_id?
fb_user_id.nil?
end
end # instance methods
end
end
A:
First I used ActiveRecord's save_with_validation(false) method, and then I used the normal save method.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Unable to connect Sql server using knex
I am using Knex.js to connect to SQL Server using Node.js. Here is what I have tried so far:
npm init
npm install knex --save
npm install mssql -g
nodemon index.js
and index.js file have
var knex = require('knex')({
client: 'mssql',
connection: {
user: 'sa',
password: 'Password@123',
server: 'manikant/SQLEXPRESS',
database: 'myDb' ,
options: {
port: 1433
}
}
});
app.get('/getData',(req,res)=>{
const data = {};
knex.select("*").from("usr")
.then(function (depts){
depts.forEach((dept)=>{ //use of Arrow Function
console.log({...dept});
data = {dept}
});
}).catch(function(err) {
console.log(err);
}).finally(function() {
knex.destroy();
});
res.json({data})
})
But I am getting this error:
message: 'Failed to connect to manikant/SQLEXPRESS:1433 - getaddrinfo ENOTFOUND manikant/SQLEXPRESS',
code: 'ESOCKET'
},
The TCP/IP protocol is enabled on port 1433 and SQL Server Browser is running.
I followed a similar question on SO.
Note: I am able to connect with a Java application, so it seems like a problem with Knex only.
A:
My bad, I was setting the wrong server name. When I execute
select @@SERVERNAME
as a query in SQL Server Management Studio it gives me manikant/SQLEXPRESS, but SQLEXPRESS is not required here. So now the config will look like:
var knex = require('knex')({
client: 'mssql',
connection: {
user: 'sa',
password: 'Password@123',
server: 'manikant',
database: 'myDb' ,
options: {
port: 1433
}
}
});
Hope someone else avoids this mistake.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Passing parameters to a Visualforce email template within Apex class
I am working on setting up a scheduled Apex class to send out emails to me every day. I have a Visualforce template that takes a user and an opportunity as parameters that I would like to use. The scheduler works great and an email gets sent, but the message body is blank. Here is my code that builds the email. Is there something I am doing wrong? I have tested the Visualforce template by sending it to myself through the Salesforce UI and it works fine.
public static void mail60(Opportunity op)
{
Messaging.SingleEmailMessage email = new Messaging.SingleEmailMessage();
string [] toaddress= New string[]{'my email address'};
email.settemplateid(' template ID here'); //In the real code I have the correct IDs in place
email.setSubject( 'Blah - Maintenance and Support Renewal');
email.setToAddresses(toaddress);
email.setTargetObjectId(' contact ID here '); //In the real code I have the correct IDs in place
email.setWhatId(' opportunity ID here '); //In the real code I have the correct IDs in place
email.saveAsActivity = false;
Messaging.sendEmail(New Messaging.SingleEmailMessage[]{email});
}
A:
There are a few things wrong:
When using a template you don't have to set the subject.
When you use setTargetObjectId, your email is sent to that contact/lead or user. You don't have to set toAddresses.
Instead of such complex code, you can do it like this:
Messaging.SingleEmailMessage email =
Messaging.renderStoredEmailTemplate(templateId, whoId, whatId);
So your code will be like
public static void mail60(Opportunity op){
Messaging.SingleEmailMessage email =
Messaging.renderStoredEmailTemplate('templateId', ' contact ID here ', op.Id);
string [] ccAddress= new string[]{'my email address'};
email.setccAddresses(ccAddress);
email.saveAsActivity = false;
Messaging.sendEmail(New Messaging.SingleEmailMessage[]{email});
}
SRC: https://developer.salesforce.com/docs/atlas.en-us.apexcode.meta/apexcode/apex_classes_email_outbound_single.htm
SRC:https://developer.salesforce.com/docs/atlas.en-us.apexcode.meta/apexcode/apex_classes_email_outbound_messaging.htm
|
{
"pile_set_name": "StackExchange"
}
|
Q:
How to get MATLAB xlsread to read until the last row of a contiguous range?
I want to use xlsread in MATLAB to read an Excel file.
While I know which columns I want to read from, and which row I want to start reading from, the file could contain any number of rows.
Is there a way to do something like:
array = xlsread( 'filename', 'D4:F*end*' ); %% OR ANY SIMILAR SYNTAX
Where F*end* is the last row in column F?
A:
Yes. Try this:
FileFormat = '.xls';          % or '.xlsx' - choose one
                              % ( by default MATLAB
                              %   imports only '.xls' )
filename = strcat( 'Filename you desire', FileFormat );
array = xlsread( filename ) % This will read all
% the Matrix ( by default
% MATLAB will import all
% numerical data from
% file with this syntax )
Then you can look at the size of the matrix to refine the search/import.
[nRows,nCols] = size( array );
Then, if you want to import just part of the matrix, you can do this:
NewArray = xlsread( filename, strcat( 'initial cell',
':',
'ColumnLetter',
num2str( nRows )
)
);
% for your case:
NewArray = xlsread( filename, strcat( 'D3', ':', 'F', num2str( nRows ) ) );
Hope this helps.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Can an upside-down USB3 header damage the motherboard?
So I bought an Asus ROG Strix X470-F Gaming motherboard, routed my cables, and realized I could do better cable management, so I went to pull out the USB3 cable from the case front panel from the motherboard, and the plastic socket around the pins came out with it. And so I returned it to the store, expecting to get a new one. However, after about a week of "testing", they tell me that it boots into Windows with no problems, and that the socket might fall off and this is totally normal, so after telling me it's totally normal that new products fall apart, they offered me a $50 gift card as compensation. I was thinking I'd take it and just deal with the socket being loose, but today I received the motherboard, and well, they put the socket on upside-down as seen in these three pictures.
As you can also see in the last picture, they didn't put it all the way down, and it's not on straight. This is a snippet of the manual, showing the USB3 header (item 14) with the notch for the plug pointing upwards, not downwards like they put it on.
They said they tested it and that it works. But couldn't it be damaged if they tested it plugging the USB3 cable upside-down?
I can definitely fix it myself, so it's not a huge deal, really, but I'm worried about the board being damaged if they tested the USB3 port with it connected upside-down.
A:
Unfortunately these plastic shrouds are kept in place by friction alone... given that some of the shrouds are slightly under-sized and some of the connectors going into them are slightly over-sized, it's possible that the friction between connector and shroud will be higher than the shroud to the pins - and the shroud will come away with the connector...
As you've already identified, the shroud should have the key oriented correctly ("upwards" in your case). But there is also a pin-key too:
It should be impossible to attach a compliant connector the wrong way round, even with the missing or inverted shroud, so I'd guess (if there are no bent or damaged pins) that you're fine. Note: it is possible to attach the connector off-by-one pin (or more) without the shroud.
I would also guess that their "testing" didn't include functional test of these USB 3.x ports - if it did, they'd discover a big problem and replace the board (i.e: the USB devices don't enumerate), or they'd realise their mistake and re-fit the shroud.
Either way, if you were to connect this backwards, I don't think that catastrophic failure would occur. GND and Vbus won't be swapped (potentially causing a "short" via protection diodes), and the signal lines should be tolerant of 5v anyway.
TL;DR: It's quite likely that you're fine. Re-fit the shroud the correct way and make sure it works. If it doesn't work, send it back... if it does, then keep it.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Flask CORS - no Access-control-allow-origin header present on a redirect()
I am implementing OAuth Twitter User-sign in (Flask API and Angular)
I keep getting the following error when I click the sign in with twitter button and a pop up window opens:
XMLHttpRequest cannot load https://api.twitter.com/oauth/authenticate?oauth_token=r-euFwAAAAAAgJsmAAABTp8VCiE. No 'Access-Control-Allow-Origin' header is present on the requested resource. Origin 'null' is therefore not allowed access.
I am using the python-Cors packages to handle CORS, and I already have instagram sign in working correctly.
I believe it has something to do with the response being a redirect but have not been able to correct the problem.
My flask code looks like this:
app = Flask(__name__, static_url_path='', static_folder=client_path)
cors = CORS(app, allow_headers='Content-Type', CORS_SEND_WILDCARD=True)
app.config.from_object('config')
@app.route('/auth/twitter', methods=['POST','OPTIONS'])
@cross_origin(origins='*', send_wildcard=True)
#@crossdomain(origin='')
def twitter():
request_token_url = 'https://api.twitter.com/oauth/request_token'
access_token_url = 'https://api.twitter.com/oauth/access_token'
authenticate_url = 'https://api.twitter.com/oauth/authenticate'
# print request.headers
if request.args.get('oauth_token') and request.args.get('oauth_verifier'):
-- omitted for brevity --
else:
oauth = OAuth1(app.config['TWITTER_CONSUMER_KEY'],
client_secret=app.config['TWITTER_CONSUMER_SECRET'],
callback_uri=app.config['TWITTER_CALLBACK_URL'])
r = requests.post(request_token_url, auth=oauth)
oauth_token = dict(parse_qsl(r.text))
qs = urlencode(dict(oauth_token=oauth_token['oauth_token']))
return redirect(authenticate_url + '?' + qs)
A:
The problem is not yours. Your client-side application is sending requests to Twitter, so it isn't you that needs to support CORS, it is Twitter. But the Twitter API does not currently support CORS, which effectively means that you cannot talk to it directly from the browser.
A common practice to avoid this problem is to have your client-side app send the authentication requests to a server of your own (such as this same Flask application that you have), and in turn the server connects to the Twitter API. Since the server side isn't bound to the CORS requirements there is no problem.
In case you want some ideas, I have written a blog article on doing this type of authentication flow for Facebook and Twitter: http://blog.miguelgrinberg.com/post/oauth-authentication-with-flask
|
{
"pile_set_name": "StackExchange"
}
|
Q:
It's 2011 - why do I still have to use tables for email?
This might seem like a rant, but I am curious.
CSS replaced tables for layout a long time ago. But we still have to use tables for layout when creating rich emails.
Why is this? Are there any other options?
Are there really technical constraints that prevent CSS from working in an email? What are they? I can see how linked or embedded style sheets might be a problem, but not even inline styles work.
Is this ever going to change?
A:
Outlook 2007 switched from Internet Explorer’s HTML rendering engine to Word’s HTML rendering engine. I wish I was kidding.
Lots of people use webmail clients. If webmail clients fully rendered CSS, bad things could happen (Internet Explorer, for example, lets JavaScript run in CSS files).
Lotus Notes is inexplicably popular in businesses, and Lotus Notes, including its HTML rendering, is terrible.
A:
CSS replaced tables for layout a long time ago. But we still have to use tables for layout when creating rich emails.
Why is this?
Lotus Notes and Outlook, mainly.
Not sure how Outlook 2010 is, but I believe Outlook 2007 still lacks support for CSS in many ways.
Are there really technical constraints that prevent CSS from working in an email? What are they?
Yes. The constraint is the rendering engine in a couple email clients that are still popular enough to have to be concerned with.
Is this ever going to change?
Probably. But, also probably, it will be a while.
Outlook should be catching up, but we may still be a couple years from a new version .. and if that still has lackluster support we'll be looking at another 2-3 years of this.
And then there's Lotus. Who knows if they'll ever support CSS or if they'll ever die off enough to ignore.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Running activity from remote service
I am trying to run an activity from a different package from my remote service.
This is how I implement the service (service.java):
public class CurrencyService extends Service
{
public class CurrencyServiceImpl extends ICurrencyService.Stub
{
int CALL_PUSH_SERVICE_ACTIVITY=10;
@Override
public void callSomeActivity(int activityId) throws RemoteException
{
Intent pushActivity=new Intent("com.pushservice.PushActivity");
pushActivity.setFlags(Intent.FLAG_ACTIVITY_NEW_TASK);
startActivity(pushActivity);
}
.....
}
I've also added a line in the manifest of the service.
The service works fine, but I can't run the activity (PushActivity), which is in a different package of a different application.
This is the error:
Activity not found Exception: No Activity found to handle Intent {act=com.pushservice.PushServiceActivity flq=0x10
...
thanks.
A:
You are attempting to start an Activity with an Intent whose action is "com.pushservice.PushActivity", but no Activity on the device declares an intent-filter with that action.
The best answer is to not display an activity from a service, since users will be very irritated with you if you interrupt them when they are using the device.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Copy method fail due to memory
In my workbook, I copy the current sheet to keep as a record of a sale. Eventually, the workbook fills up with sales and at some point throws an error when I try to copy another sheet. After saving, then completely exiting Excel, then reloading the file, I can continue without problems. I'm guessing it's a memory issue, but I'm not quite sure how to solve it without restarting Excel. I can't remember the wording of the error exactly, but it went along the lines of "Copy method of worksheet failed". FWIW I use "Application.CutCopyMode = False" at the end of the macro that copies the sheet.
1st edit:
I'd like to post all of the code, but there's just so much of it (mostly not related to updating values, input verification, etc. etc.); if I post everything, I'd have to post all of the other functions for it to make sense. Suffice it to say, here's what I think is applicable:
ActiveSheet.Copy After:=Sheets(3)
...(more code)...
Call resetInterface(True, True, (wasScreenUpdating), (wasProtected))
and for the "resetInterface" function:
' Final operations for a typical function/sub '
Function resetInterface(Optional calc As Boolean = False, Optional ccmode As Boolean = False, Optional scrUpdate As Boolean = True, Optional protectWS As Boolean = False)
With Application
If calc Then
.Calculation = xlCalculationAutomatic
.Calculate
End If
If ccmode Then .CutCopyMode = False
.ScreenUpdating = scrUpdate
End With
If protectWS Then ActiveSheet.Protect
End Function
A:
There used to be a problem when copying sheets in Excel that the CodeName property of the Worksheet object would be appended with a 1 and get to be too long. I think it's been fixed, but it would depend on what version you're using.
Open the VBA editor (Alt+F11) and show the Project Explorer (Ctrl+R). Look at the CodeNames of your copied sheets. Are they Sheet1, Sheet2, etc..? Or are they Sheet1, Sheet11, Sheet111, etc...? If the latter, this may be causing the problem. See http://support.microsoft.com/kb/177634
Or it could be that you have a workbook level name, see http://support.microsoft.com/?kbid=210684
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Div to fill 100% height of available space
I am working on a webpage which has 2 main rows; the top row has a fixed height of say 300px and does not change.
The bottom row needs to fill the height of the viewport/screen (the rest of the available space).
Please see this wireframe as a basic example: https://wireframe.cc/aUePUH
I have tried setting the body/html to 100%, then the bottom row container to 100%, and making the 3 cols in that bottom row 100% height too, but they only seem to go to 100% of the height of their content.
Ideally I would like to set a minimum height on the bottom row and then have it expand to fill the viewport if more vertical space is available.
I also had a go with height: 100vh but that didn't seem to do it.
<div class="container">
<div class="row top-row">
<p>Top Row with a fixed heigh - 300px</p>
</div>
<div class="row bottom-row">
<div class="col-xs-4">
<p>Col 1</p>
<p>
Should fill viewport/avaialable screen height
</p>
</div>
<div class="col-xs-4">
<p>Col 2</p>
<p>
Should fill viewport/avaialable screen height
</p>
</div>
<div class="col-xs-4">
<p>Col 3</p>
<p>
Should fill viewport/avaialable screen height
</p>
</div>
</div>
</div>
.top-row {
height: 300px;
background-color: black;
color: white;
margin-bottom: 15px;
}
.bottom-row {
height: 100%;
}
.col-xs-4 {
background-color: red;
color: white;
}
Here is a JS FIDDLE example: https://jsfiddle.net/DTcHh/16054/
A:
Here's a solution using height:calc(); https://jsfiddle.net/DTcHh/16058/
It was kind of hard to set due to your body padding but it works.
.bottom-row {
height:calc(100vh - 335px);
}
.col-xs-4 {
height:100%;
background-color: red;
color: white;
}
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Is it OK to replace a method by a plain function?
This works as expected, but I am somehow unsure about this approach. Is it safe? Is it pythonic?
class Example:
def __init__(self, parameter):
if parameter == 0:
# trivial case, the result is always zero
self.calc = lambda x: 0.0 # <== replacing a method
self._parameter = parameter
def calc(self, x):
# ... long calculation of result ...
return result
(If there is any difference between Python2 and Python3, I'm using Python3 only.)
A:
This is very confusing. If someone else reads it, they won't understand what is going on. Just put an if statement at the beginning of your method.
def calc(self, x):
if self._parameter == 0:
return 0
# ... long calculation of result ...
return result
Also, if you change self._parameter after it was initialized with 0, your function wouldn't work anymore.
A:
You'll have a problem should parameter ever change, so I don't consider it good practice.
Instead, I think you should do this:
class Example:
def __init__(self, parameter):
self._parameter = parameter
def calc(self, x):
if not self._parameter:
return 0.0
# ... long calculation of result ...
return result
|
{
"pile_set_name": "StackExchange"
}
|
Q:
How to define a default handler in Spring Boot 2
How can I define a default handler, or controller action, that will be invoked when no other handler is found for a given URL?
I tried by using a catch-all pattern /** (syntax is Kotlin):
@Controller
class DefaultController {
@RequestMapping("/**")
fun default(...) {
...
}
}
But this gets matched with higher precedence than Spring's own handlers; for example, the static file path configured in spring.mvc.static-path-pattern is no longer available. I need my default handler to have the lowest precedence.
A:
You would have to add your own implementation of HandlerMapping and add it to the list of handlers. You need to specify the order of the handlers taking care of the request as well:
@Bean
public SimpleUrlHandlerMapping simpleUrlHandlerMapping() {
SimpleUrlHandlerMapping simpleUrlHandlerMapping
= new SimpleUrlHandlerMapping();
Map<String, Object> urlMap = new HashMap<>();
urlMap.put("/**", defaultController());
simpleUrlHandlerMapping.setUrlMap(urlMap);
simpleUrlHandlerMapping.setOrder(1);
return simpleUrlHandlerMapping;
}
Here the defaultController() method returns the @Controller that you have defined for the given mapping, i.e. DefaultController. The setOrder method defines the priority (order) of handlers, starting from 0. Of course some default HandlerMapping must be defined as a @Bean as well. You can find more about such configuration here.
Edit with some thoughts from @Tobia:
You need to remove the @RequestMapping annotation so that the controller is not picked up by RequestMappingHandlerMapping, and extend AbstractController, overriding handleRequestInternal() to implement the controller's logic.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
XML file to start a tcp/ip connection
I am working on client-server communication which is to be established using XML. I am relatively new to the programming world and don't know how to go about it. Can anyone please help with how to write the XML file and what its contents should be? The connection is a TCP/IP one and the port number can be anything.
A:
XML is pure data syntax. It isn't a programming language. It isn't even a data format, by itself; it's a base for creating data formats.
So XML will not start a TCP/IP connection. However, a program which does so might read an XML file to obtain its configuration information.
You need to find or create that program.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Run Javascript before Page_Load event
I have the following JS, currently it's just before my closing body tag
<script type='text/javascript' src='https://www.mydomain.com/jtrack.js'></script>
<script type='text/javascript' >
if (typeof trackPage == 'function') trackPage();
</script>
Part of this script sets a cookie, which I need to read in the Page_Load event of my asp.net page. But that event is firing before the cookie is set. How can I change this so the script runs first?
Thanks
A:
Simple answer, you can't. It is technically impossible.
If you look at the ASP.NET page lifecycle overview then you'll see that the page load event occurs before the page begins to render - which makes it completely impossible for the client to have executed JavaScript on the page at this point; the user agent (browser) hasn't even begun to receive the page. With the sole exception of unload, all of the ASP.NET page lifecycle events happen on the server and before any response has been sent to the user.
The unload event is highly unlikely to ever execute after your JavaScript, unless you are streaming a response to the user, had the JavaScript (without dependencies) at the first possible point on the page, and were building a really complicated page response. Even if the JavaScript did somehow execute before the unload event fired, it wouldn't matter, as cookies are sent with the page request and that has already happened. The cookie will not be sent until the next request from that domain (although it doesn't have to be a page that is requested, images, scripts or stylesheet requests will all include the cookie in their request headers).
You can have your JavaScript set the cookie on the first request which will then be available to all subsequent requests in the Load event handler, and use an Ajax request (if necessary) to log that initial page and the assigned cookie value - which will be sent (assuming it has been set at that point) in the headers for the Ajax request.
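A minimal client-side sketch of that idea (the cookie name and the /TrackVisit.ashx endpoint are made up for illustration and are not part of the original answer):
<script type="text/javascript">
    // Set the cookie as soon as the script runs; it accompanies every later request,
    // so Page_Load can read it on the next postback or page view.
    document.cookie = "firstVisit=" + encodeURIComponent(new Date().toUTCString()) + "; path=/";
    // Optionally report this first page view to the server straight away via Ajax;
    // the cookie set above is included in this request's headers.
    var xhr = new XMLHttpRequest();
    xhr.open("POST", "/TrackVisit.ashx", true);
    xhr.send();
</script>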
What you can't do is set a cookie on the browser of a user before the user has visited your site, execute JavaScript before it has been sent to the user, or send new request data part way through a response.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Network operation no longer completes when placed in an operation queue
I have some code I want to add to an operation queue; the problem is that the code works when it's not in the queue, but once added to the queue nothing happens.
Here's the code I want to add to the queue:
NSString* graphRequest = @"https://graph.facebook.com/redacted/picture?type=square";
FBRequest *fbRequest = [FBRequest requestForGraphPath: graphRequest];
[fbRequest startWithCompletionHandler:
^(FBRequestConnection *connection, id result, NSError *theError)
{
NSLog(@"Completed");
}];
When the code above is executed then its completion block is invoked in a couple of seconds or so.
However if I try to execute the same code in an operation queue then the completion block is never invoked:
NSOperationQueue *queue = [[NSOperationQueue alloc] init];
[queue addOperationWithBlock:
^{
NSString* graphRequest = @"https://graph.facebook.com/redacted/picture?type=square";
FBRequest *fbRequest = [FBRequest requestForGraphPath: graphRequest];
[fbRequest startWithCompletionHandler:
^(FBRequestConnection *connection, id result, NSError *theError)
{
NSLog(@"Completed");
}];
}];
With this code nothing happens.
A:
The problem is that FBRequest will only work on the main thread for some reason. You really should not worry too much about it, since startWithCompletionHandler runs async anyway. If you really want to use NSOperationQueue you will have to do a dispatch_async to the main thread.
Hope that helps.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
How to spy on a sub-function of a named export in ES6
I want to spy on a sub-function that is exported as a named export, but it seems like we cannot spy on it.
Let's say I have two functions called add and multiply in operations.js and export them as named exports:
const add = (a, b) => {
return a + b
}
const multiply = (a, b) => {
let result = 0
for (let i = 0; i < b; i++) {
result = add(a, result)
}
return result
}
export { add, multiply }
And the test file uses sinon-chai to try to spy on the add function:
import chai, { expect } from 'chai'
import sinon from 'sinon'
import sinonChai from 'sinon-chai'
import * as operations from './operations'
chai.use(sinonChai)
describe('multiply', () => {
it('should call add correctly', () => {
sinon.spy(operations, 'add')
operations.multiply(10, 5)
expect(operations.add).to.have.been.callCount(5)
operations.add.restore()
})
})
The result is
AssertionError: expected add to have been called exactly 5 times, but it was called 0 times
But, if I call operations.add directly like the following, it passes the test:
describe('multiply', () => {
it('should call add correctly', () => {
sinon.spy(operations, 'add')
operations.add(0, 5)
operations.add(0, 5)
operations.add(0, 5)
operations.add(0, 5)
operations.add(0, 5)
expect(operations.add).to.have.been.callCount(5)
operations.add.restore()
})
})
It seems like sinon-spy creates a new reference for operations.add but the multiply function still uses the old reference that was already bound.
What is the correct way to spy on the add function of this multiply function if these functions are named exports?
Generally, how do I spy on a sub-function of a tested parent function when both are named exports?
[UPDATE]
multiply function is just an example. My main point is to test whether a parent function calls a sub-function or not. But, I don't want that test to rely on the implementation of the sub-function. So, I just want to spy on whether the sub-function is called or not. You can imagine that the multiply function is a registerMember function and the add function is a sendEmail function. (Both functions are named exports.)
A:
I have a workaround for my own question.
Currently, the multiply() function is tightly coupled with the add() function. This makes it hard to test, especially when the exported functions get new references.
So, to spy on the sub-function call, we could pass the sub-function into the parent function instead. Yes, it's dependency injection.
So, in operations.js, we will inject addFn into multiply() and use it as follows:
const add = (a, b) => {
return a + b
}
const multiply = (a, b, addFn) => {
let result = 0
for (let i = 0; i < b; i++) {
result = addFn(a, result)
}
return result
}
export { add, multiply }
Then, in the test, we can spy on add() function like this:
describe('multiply', () => {
it('should call add correctly', () => {
sinon.spy(operations, 'add')
operations.multiply(10, 5, operations.add)
expect(operations.add).to.have.been.callCount(5)
operations.add.restore()
})
})
Now, it works for the purpose of testing whether the sub-function is called correctly or not.
(Note: the drawback is we need to change the multiply() function)
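One way to soften that drawback (a sketch on top of the workaround, not part of the original answer) is to give addFn a default value, so production callers can keep calling multiply(a, b) while the test still injects the spied reference explicitly:
const multiply = (a, b, addFn = add) => {
  let result = 0
  for (let i = 0; i < b; i++) {
    result = addFn(a, result)
  }
  return result
}
The test shown above keeps passing operations.add as the third argument, so the spy wrapper is still the function that actually runs.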
|
{
"pile_set_name": "StackExchange"
}
|
Q:
WebLogic Server always taking a quick dip in service threads
We have an HP BAC probe attached to one of our WebLogic servers, and we have noticed that the server is often taking a sudden and deep "dip" in the number of service threads available.
Does anyone have any experience or anything to share on how I may track this? Currently my thread-dump capturing process is unable to capture this because it is often too late. Or is there any continuous thread-dump capturing process I may consider?
A:
You can define a JMX Notification via the Admin Console, which will be picked up by the WebLogic Diagnostics Framework (WLDF). This creates a diagnostic snapshot which you can analyze later.
See how to create notification for watches
http://download.oracle.com/docs/cd/E21764_01/apirefs.1111/e13952/taskhelp/diagnostics/CreateNotificationsForWatches.html
and specifically in admin console
http://download.oracle.com/docs/cd/E21764_01/apirefs.1111/e13952/taskhelp/diagnostics/CreateWatchesForADiagnosticModule.html
Select a watch type from the Watch Type list:
• Select Collected Metrics to set a watch based on metrics collected from MBean attributes.
• Select Server Log to set a watch based on data written to server logs.
• Select Event Data to set a watch based on data generated from a specified instrumentation event.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
H1 tag CSS Issue - One Side Hanging low on WordPress website
I am scratching my head as to what I've done here in my H1 Logo
The first image shows what is messed up and the second image shows how it should be, on the same plane. I have to give a link to the image as I don't have enough rep yet: Link to image
Here is the website, as I am not sure what code would fix this: afirewithin.me
A:
Your CSS:
#logo h1 {
font-size: 48px;
float: left;
margin: -7px 0px 0px;
color: #262728;
background: url('images/h1_border.png') no-repeat scroll right 25px transparent;
padding-right: 10px;
}
Change it to:
#logo h1 {
font-size: 48px;
float: left;
margin: 16px 0px 0px;
color: #262728;
background: url('images/h1_border.png') no-repeat scroll right 4px transparent;
padding-right: 10px;
}
Adjust the margins and the background 4px for exact spacing etc.
Screenshot:
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Extracting properties from array
I have this function which takes a json parameter which contains an array of search objects.
function receiveSearch(search, json) {
return {
type: RECEIVE_SCHOOL_SEARCH,
items: json.packages,
receivedAt: Date.now(),
search: Object.assign({}, search, { next: json.next, start: search.next }),
};
}
My json property looks like:
>0:Object
>1:Object
>2:Object
>3:Object
...{ more }
I would like to return two properties from each object in json, i.e. name and suburb, to the search object. How do I do this? I would prefer to use something neat like lodash/ramda/underscore, but plain JS is fine.
And each object contains the following properties:
id:"10360"
centreId:776
name:"ABBOTSFORD"
suburb:"TARNEIT"
A:
The easiest solution to your problem using plain JavaScript is to make a new object which contains only the required properties and return that.
Just assume your json looks like this:
var x = {search:{id:"10360",centreId:776,name:"ABBOTSFORD",suburb:"TARNEIT"},otherProp:val}
To get the required properties, you can make another function and return an object with only the required fields:
function ReturnRequiredPropertyObject(anyObj)
{
var newObj = {};
newObj.search = {};
newObj.search.name = anyObj.search.name;
newObj.search.suburb = anyObj.search.suburb;
return newObj;
}
You can put the above code in a loop if you are getting an array of search objects.
To make the above code generic, we can do the following:
function ReturnRequiredPropertyObject(anyObj,requiredPropertyArray)
{
var newObj = {};
for(var counter = 0;counter<requiredPropertyArray.length;counter++)
{
newObj[requiredPropertyArray[counter]] = anyObj[requiredPropertyArray[counter]];
}
return newObj;
}
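Since the question mentions lodash, here is a shorter sketch of the same idea (it assumes json is an array of the objects shown above; apply Object.values(json) first if it is a plain object keyed by index):
var _ = require('lodash');

// keep only name and suburb from every object
var slim = _.map(json, function (obj) {
  return _.pick(obj, ['name', 'suburb']);
});

// plain JavaScript with destructuring works just as well
var slimPlain = json.map(({ name, suburb }) => ({ name, suburb }));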
Hope this will help.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
PDE method of characteristics problem
Question:
Find the solution to $$u_t+u_x=-u+g(x,t),~-\infty<x<\infty, 0<t, $$$$u(x,0)=f(x)$$
My attempt using method of characteristics:
$$ x_0=s,~t_0=0,~u_0=f(s)$$
$$\frac{dx}{d\tau}=1,~\frac{dt}{d\tau}=1,~\frac{du}{d\tau}=-u+g(x,t)$$
Solving for $\tau$ and $s$:
$$\tau=t,~s=x-t$$
Now when I solve for u:
$$\int_{f(s)}^{u}\frac{du}{-u+g(x,t)}=\int_0^\tau d\tau=t$$
Treating g as a constant with respect to u, I get
$$-\ln(-u+g(x,t))+\ln(-f(x-t)+g(x,t))=t$$
$$u=g(x,t)+\frac{f(x-t)-g(x,t)}{e^t}$$
However, the answer gives
$$u=\frac{f(x-t)+\int_0^tg(\xi+x-t,\xi)d\xi}{e^t}$$
May I know where I went wrong in my attempt?
A:
The solution for $u$ is not right. To solve $\frac{du}{d\tau}=-u+g(x,t)$, consider it as a first order linear ODE. It is not separable, since $g(x,t)$ also involves $\tau$, so you cannot simply divide and integrate.
$$\frac{du}{d\tau}=-u+g(x,t)\\
\frac{du}{d\tau}+u=g(x,t)\\
e^{\tau}(\frac{du}{d\tau}+u)=e^{\tau}g(x(\tau),t(\tau)) \quad \text{ multiplying by an integration factor }e^{\tau}\\
\frac{d(e^{\tau}u)}{d\tau}=e^{\tau}g(x(\tau),t(\tau))\\
e^{\tau}u-f(x_0)=\int^{\tau}_0 e^{\xi}g(x(\xi),t(\xi))\,d\xi.$$
Now you can proceed to get the answer (I believe your answer missed a $e^{\xi}$ in the integral):
$$u=\frac{f(x-t)+\int_0^t e^{\xi}g(\xi+x-t,\xi)d\xi}{e^t}$$
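As a quick check (not part of the original answer), write $u(x,t)=e^{-t}F(x-t,t)$ with $F(s,t)=f(s)+\int_0^t e^{\xi}g(s+\xi,\xi)\,d\xi$. Then
$$u_t+u_x=-e^{-t}F+e^{-t}\left(-F_s+F_t\right)+e^{-t}F_s=-u+e^{-t}F_t=-u+e^{-t}\,e^{t}g(x,t)=-u+g(x,t),$$
since $F_t(s,t)=e^{t}g(s+t,t)$ and here $s+t=(x-t)+t=x$. Also $u(x,0)=f(x)$, so the formula indeed solves the problem.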
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Stateful udfs in spark sql, or how to obtain mapPartitions performance benefit in spark sql?
Using mapPartitions instead of map can give a significant performance boost in cases where the transformation incurs creating or loading an expensive resource (e.g. authenticating to an external service or creating a db connection).
mapPartitions allows us to initialise the expensive resource once per partition versus once per row as happens with the standard map.
But if I am using DataFrames, the way I apply custom transformations is by specifying user-defined functions that operate on a row-by-row basis - so I lose the ability I had with mapPartitions to perform the heavy lifting once per chunk.
Is there a workaround for this in spark-sql/dataframe?
To be more specific:
I need to perform feature extraction on a bunch of documents. I have a function that inputs a document and outputs a vector.
The computation itself involves initialising a connection to an external service. I don't want or need to initialise it per document. This has non trivial overhead at scale.
A:
In general you have three options:
Convert DataFrame to RDD and apply mapPartitions directly. Since you use Python udf you already break certain optimizations and pay serde cost and using RDD won't make it worse on average.
Lazily initialize required resources (see also How to run a function on all Spark workers before processing data in PySpark?).
If data can be serialized with Arrow use vectorized pandas_udf (Spark 2.3 and later). Unfortunately you cannot use it directly with VectorUDT, so you'd have to expand vectors and collapse later, so the limiting factor here is the size of the vector. Also you have to be careful to keep size of partitions under control.
Note that using UserDefinedFunctions might require promoting objects to non-deterministic variants.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Is regex the best way to extract data from log
I've got a file full of logs and I'm trying to extract some data from those logs; a log line looks like:
IP_adress - - [Date_time] "method" response_nb time "page" "UA" "IP_adress"
I want to extract the IP_adress and UA.
Is using a regex a good idea to extract data from these logs, or is there some other way to do it properly?
A:
Just split the string and get the last two elements.
>>>
>>> str = 'IP_adress - - [Date_time] "method" response_nb time "page" "UA" "IP_adress"'
>>> tmp_list = str.split()
>>>
>>> tmp_list
['IP_adress', '-', '-', '[Date_time]', '"method"', 'response_nb', 'time', '"page"', '"UA"', '"IP_adress"']
>>> tmp_list[-1]
'"IP_adress"'
>>> tmp_list[-2]
'"UA"'
>>>
If the first IP address is required...
>>> tmp_list[0]
'IP_adress'
>>>
Strip the double quotes from the last IP address as below.
>>>
>>> tmp_list[-1].replace('"','')
'IP_adress'
>>>
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Exporting a global variable from child to parent in shell scripts
Is it possible to export a variable from a child shell script back to its parent?
Trying to execute the two following scripts, it always returns 0 but I want it to return 3. I've also tried to export, set and add the variable error to the .bash_profile, without success...
test.sh
$ cat test.sh
#!/bin/bash
error=0
./envtest.sh
echo $error
envtest.sh
$ cat envtest.sh
#!/bin/bash
source ./test.sh
test=3
error=$test
echo $error
A:
Like @chepner commented, no.
When you invoke ./envtest.sh from within test.sh, a new process is created to run /bin/bash ./envtest.sh, and that process's environment is initialized with a copy of every environment variable from the parent process. No matter what you do inside envtest.sh, it can only impact the variables within its own environment; it cannot touch the variables in the parent's environment.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Angular HttpClient HttpErrorResponse Unexpected token in JSON at position 0
I have an Angular 7 app POSTing an Excel file and expecting an Excel file back from my Express server, like so:
myAngular.service.ts
const url = 'myEndpoint';
const formData: FormData = new FormData();
formData.append('xlsx', postedExcelFile, 'myFilename');
const httpOptions = {
headers: new HttpHeaders({
responseType: 'blob'
})
};
return this.http.post(url, formData, httpOptions);
Here is the code from the Express server that is sending the file:
server.js
res.download(pathToMyFile);
On the front end, the response is an HttpErrorResponse and the error states:
SyntaxError: Unexpected token P in JSON at position 0
I can see that the content of the error text is the contents of the Excel file, which means the file is indeed being sent back to the browser. But, for some reason, Angular seems to be expecting JSON and attempts to parse it.
As you can see, I have added the responseType: 'blob' header to the POST request so that it can expect a file back, but I still get this error. Is there something I am forgetting to add to the post request?
A:
The response type doesn't go in the headers.
Send it like this:
this.http.post(url, formData, {headers: yourHeaders, responseType: 'blob'});
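With responseType: 'blob', HttpClient gives you an Observable<Blob>, so the component can consume the returned Excel file directly. A minimal sketch of the consuming side (the service and method names are illustrative, not from the question):
this.myService.uploadSpreadsheet(file).subscribe((result: Blob) => {
  // turn the returned Excel file into a client-side download
  const url = URL.createObjectURL(result);
  const link = document.createElement('a');
  link.href = url;
  link.download = 'result.xlsx';
  link.click();
  URL.revokeObjectURL(url);
});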
|
{
"pile_set_name": "StackExchange"
}
|
Q:
javascript function not redirecting to the given link
In the JavaScript function below, when I debug line by line using Firebug, 'window.location' works perfectly and it redirects to the given link. But when I run it without debugging in Firebug, even though the password and username are correct, the page redirection does not happen. Can anyone explain why?
<script type="text/javascript">
$(document).ready(function () {
$('#btnSubmit').click(function () {
var uname = $('#txtUserName').val();
var Password = $('#txtPassword').val();
if (uname != null && Password != null) {
$.ajax({
type: "get",
data: { type: "login", Uname: uname, Pword: Password,ran:Math.random() },
contentType: "application/json",
url:"/CreatePdf.ashx",
success: function (result) {
if(result=='1')
{
window.location = "/home.aspx";
}
}
});
}
else {
document.getElementById('spPassword').innerHTML = "Please Enter Username/Password";
}
});
});
</script>
A:
Either the success function is never called (because of problems with ajax), or the result argument is not the string "1". We have no crystal balls to see what is happening, so you will have to debug it yourself:
$.ajax({
…,
success: function (result) {
console.log("result: ", result);
if(result=='1')
window.location = "/home.aspx";
},
error: function(jqXHR, status, error) {
console.log("Ajax failed with "+status+" due to "+error);
}
});
Also use the network panel of your developer tools to check whether the server sent the expected response.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
function or sub that clears the text box when it gets focus
I want to create a function or sub that clears the text box when it gets focus. For example:
textbox1 contains: "Your name here."
and when the user clicks it, the "Your name here." would disappear. I've done it by placing textbox1.Clear in the GotFocus event of a textbox.
Now I'm planning to put some more code in it, but the thing is the coding would be repetitive and long because I'm planning to do this in many text boxes. I want to minimize the coding, so I want to create a function that clears textboxes when they get focused, so that I'll just call the function in the GotFocus event and reduce the coding somehow.
I don't have an idea how to do that right now, so if anyone does, I'm really thankful for any advice you can give me.
I'm using Visual Studio 2010, and creating a Windows Forms application project.
A:
Solution 1
First, know that in VB.Net you can connect multiple events to a single event handler in Windows Forms.
Private Sub TextBox1_GotFocus(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles TextBox1.GotFocus, TextBox2.GotFocus
' Add event-handler code here.
End Sub
Solution 2
Then, a generic event handler that clears the text of a textbox (thanks @Miky Dinescu for the original solution and C# code) is another possible solution to start reducing code and sharing some methods.
Just put this code in your Form.vb, or in a new class HelperClass if you want to share methods between more than one form.
In Form.vb:
Private Sub ClearTextBox(ByVal sender As Object, ByVal e As EventArgs)
If TypeOf sender Is TextBox Then
(DirectCast(sender, TextBox)).Text = ""
End If
End Sub
Or in HelperClass.vb:
Public Shared Sub ClearTextBox(ByVal sender As Object, ByVal e As EventArgs)
If TypeOf sender Is TextBox Then
(DirectCast(sender, TextBox)).Text = ""
End If
End Sub
Then, you just attach this handler to all your text boxes, basically in the constructor of the form, in the Form Load event, or in an "Initialize" method that you call before showing the form:
AddHandler textbox1.GotFocus, AddressOf Me.ClearTextBox
AddHandler textbox2.GotFocus, AddressOf Me.ClearTextBox
or
AddHandler textbox1.GotFocus, AddressOf HelperClass.ClearTextBox
AddHandler textbox2.GotFocus, AddressOf HelperClass.ClearTextBox
But it means you need to attach this handler to all your TextBoxes. If you have more than one handler, you need to apply each method to each TextBox...
And you should also remove all these event handlers (where you explicitly called AddHandler) when you close the form, if you want to prevent memory leaks...
RemoveHandler textbox1.GotFocus, AddressOf HelperClass.ClearTextBox
RemoveHandler textbox2.GotFocus, AddressOf HelperClass.ClearTextBox
So I would recommend this only in order to share methods between a handful of controls.
Edit
Of course the code can be reduced further here by using a loop, like @DonA suggests.
But do not forget RemoveHandler.
Solution 3
Another solution is to create your own custom TextBox class that inherits from TextBox, and then use this custom class in place of TextBox, or replace the existing TextBoxes if you have already created the project.
Public Class MyTextbox
Inherits TextBox
Protected Overrides Sub OnGotFocus(ByVal e As System.EventArgs)
MyBase.OnGotFocus(e)
Me.Text = String.Empty
End Sub
End Class
Then build the project.
You should now see MyTextbox in the Toolbox.
You can use it, or replace your existing TextBoxes with MyTextbox.
With this approach, you need to maintain only the methods of one class, and there is no need to worry about handlers...
A great feature of object-oriented programming is inheritance: do not deprive yourself!
Finally, I am not sure that clearing the Text on GotFocus is a good approach for what you are trying to do. It sounds like a "Watermark TextBox" or "Cue Banner" in WinForms, so you may be interested in this: Watermark TextBox in WinForms. Or maybe consider using the Enter event and preventing deletion of user input.
Hope this helps.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
How do I create a Temporary Table for Returning?
I'm building a stored procedure that will serve as a Configuration String code reader that will take a @config varchar(255) variable.
This Configuration String determines 26 settings in our model.
Take one of the Configuration String entries:
declare @casingType varchar(50);
set @casingType=case SubString(@config, 34, 1)
when '1' then 'U-Flange - All Around w/ Stacking Flanges'
when '2' then 'U-Flange - All Around'
when '3' then 'U-Flange - No Top & Bottom'
when '4' then 'U-Flange - Flat Top & Bottom'
when '5' then 'Box Bracket - End Plates Only'
when '6' then 'Box Bracket - All Around'
when '7' then 'Slip & Drive Bracket'
when '8' then 'L Flange'
when '0' then 'No Casing'
when 'A' then '3 Sided Box - No Top & Bottom'
when 'B' then '3 Sided Box - Top & Bottom'
when 'C' then '3 Sided Box - Top or Bottom'
when 'D' then 'U-Flange w/ Stacking Plates'
when 'E' then 'U-Flange Temp Top & Bottom'
when 'F' then 'Flat Bracket'
when 'G' then 'A Coil Slab Bracket'
when 'H' then '2 Sided Box'
when 'I' then '3 Sided Box w/ Temp Top & Bottom'
when 'O' then 'One Plus One Casing'
when 'X' then 'Special'
when 'Y' then 'Auto Braze'
else 'Error' end;
What I want to return is a table of one (1) row containing the text fields of each item.
Do I create a temporary table to return or create some other type of table to return?
I hope this makes sense. It is possible that I worded it incorrectly or used inappropriate SQL jargon.
Solution: So, I got this to work, and here is the code I used to do it.
Often our Sales guys or Engineers working from the Configuration String do not know what a certain letter means, so they have to look it up. Over 70% of the time, this results in more letter code searches. So, we want to simply return all of the details associated with a particular configuration.
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
-- =============================================
-- Author: Joe Pool
-- Create date: 21-22 January 2013
-- Description: This returns a DataTable representation of the Model Configuration
-- =============================================
CREATE PROCEDURE sp1_Configurator(@config varchar(255)) as
BEGIN
SET NOCOUNT ON;
declare @len int;
declare @coilType varchar(50), @coilPattern varchar(50), @rowsDeep varchar(50);
declare @finHeight varchar(50), @finLength varchar(50), @finThickMat varchar(50);
declare @finPerInch varchar(50), @finTreatment varchar(50), @finCoating varchar(50);
declare @tubeWallThk varchar(50), @tubeType varchar(50), @qty varchar(50);
declare @tubeCoat varchar(50), @gauge varchar(50), @material varchar(50);
declare @casingType varchar(50), @customerCode varchar(50), @caseCoat varchar(50);
declare @arrangement varchar(50), @connType varchar(50), @connSize varchar(50);
declare @distributor varchar(50), @circuitry varchar(50), @coilApp varchar(50);
declare @agency varchar(50), @outsideCoat varchar(50);
declare @table table (
CoilType varchar(50) null, CoilPattern varchar(50) null, RowsDeep varchar(50) null,
FinHeight varchar(50) null, FinLength varchar(50) null, FinThickMat varchar(50) null,
FinPerInch varchar(50) null, FinTreatment varchar(50) null, FinCoating varchar(50) null,
TubeWallThk varchar(50) null, TubeType varchar(50) null, Qty varchar(50) null,
TubeCoat varchar(50) null, Gauge varchar(50) null, Material varchar(50) null,
CasingType varchar(50) null, CustomerCode varchar(50) null, CaseCoat varchar(50) null,
Arrangement varchar(50) null, ConnType varchar(50) null, ConnSize varchar(50) null,
Distributor varchar(50) null, Circuitry varchar(50) null, CoilApp varchar(50) null,
Agency varchar(50) null, OutsideCoat varchar(50) null
);
set @len=Len(@config)
if (@len=53) begin
set @coilType=case SubString(@config, 1, 1)
when 'C' then 'Slab'
when 'B' then '1 + 1'
when 'A' then 'A Coil (OBS)'
when 'X' then 'Special'
else 'Error' end;
set @coilPattern=case SubString(@config, 2, 1)
when '7' then '7mm Tube (.827 x .472 Staggered)'
when '6' then '5/16" Tube (1 x 5/8 Staggered)(OBS)'
when '3' then '3/8" Tube (1 x .866 Staggered)'
when 'P' then '1/2" Tube (1 1/4 x 1.08 Staggered)'
when '5' then '5/8" Tube (1 1/2 x 1.299 Staggered)'
else 'Error' end;
set @rowsDeep=SubString(@config, 3, 2);
set @finHeight=SubString(@config, 6, 5);
set @finLength=SubString(@config, 12, 6);
set @finThickMat=case SubString(@config, 19, 1)
when 'A' then '.0045 AL (OBS)'
when 'J' then '.0055 AL (OBS)'
when 'B' then '.0060 AL'
when 'C' then '.0075 AL'
when 'D' then '.0100 AL'
when 'E' then '.0045 CU (OBS)'
when 'K' then '.0050 CU (OBS)'
when 'F' then '.0060 CU'
when 'G' then '.0075 CU (OBS)'
when 'H' then '.0100 CU (OBS)'
when 'P' then '.0065 Pre-Coated'
when 'X' then 'Special'
else 'Error' end;
set @finPerInch=SubString(@config, 20, 2);
set @finTreatment=case SubString(@config, 22, 2)
when 'FS' then 'Flat / Straight (OBS)'
when 'FR' then 'Flat / Rippled (OBS)'
when 'CS' then 'Corrugated / Straight (OBS)'
when 'CR' then 'Corrugated / Rippled'
when 'SS' then 'Sine / Straight (OBS)'
when 'SR' then 'Sine / Rippled'
when 'LS' then 'Louvered / Straight (OBS)'
when 'LR' then 'Louvered / Rippled (OBS)'
when 'RL' then 'Embossed Arch / Rippled*'
when 'XX' then 'Special'
else 'Error' end;
set @finCoating=case SubString(@config, 24, 1)
when 'N' then 'See Coil Coating'
when 'A' then 'Alodine (OBS)'
when 'K' then 'Technicoat'
when 'P' then 'Paint Bond'
when 'X' then 'Special'
else 'Error' end;
set @tubeWallThk=case SubString(@config, 26, 1)
when 'A' then '.012 Smooth (5/16)(OBS)'
when 'B' then '.014 Smooth (3/8)(OBS)'
when 'C' then '.017 Smooth (1/2)'
when 'D' then '.018 Smooth (5/8)'
when 'E' then '.025 Smooth (1/2, 5/8)'
when 'F' then '.035 Smooth (1/2, 5/8)'
when 'G' then '.049 Smooth (5/8)'
when 'H' then '.012 Rifled (3/8)'
when 'K' then '.012 Rifled (7mm)'
when 'L' then '.016 Rifled (7mm) (OBS)'
when 'P' then '.06 Rifled (1/2)'
when 'X' then 'Special (.016 Rifled 3/8)(OBS)'
else 'Error' end;
set @tubeType=case SubString(@config, 27, 1)
when '1' then 'ST Flexpand'
when '2' then 'ST ALL DL Flexpand'
when '3' then 'ST w/DLST Flexpand'
when '5' then 'HP Flexpand'
when '6' then 'HP w/ST Flexpand'
when 'A' then 'ST w/DL*'
when 'B' then 'HP w/DL*'
when 'C' then 'Hairpin w/Straight*'
when 'D' then 'Hairpin 1/Hairpin .8* (OB)'
when 'E' then 'HP / DL / ST*'
when 'F' then 'HP / DL / ST / ST DL*'
when 'G' then 'HP / DL ST*'
when 'H' then 'Hairpint (HP)'
when 'I' then 'HP / ST / DL HP*'
when 'J' then '.8 HP* (OBS)'
when 'K' then 'One Short HP / One Long HP*'
when 'L' then 'HP w/Special DL*'
when 'M' then 'HP w/Special DL / ST*'
when 'N' then 'HP 1/HP .8/ST* (OBS)'
when 'O' then 'HP 1/HP .8/DL 1 HP/DL .8 HP* (OBS)'
when 'P' then 'All DL ST'
when 'Q' then 'ST w/Special DL ST*'
when 'R' then 'HP .8 / ST* (OBS)'
when 'S' then 'Straight (ST)'
when 'T' then 'Hydro Ball (HB)'
when 'U' then 'HB w/ DL*'
when 'V' then 'ST w/Spin Down*'
when 'W' then 'HP 7mm .827/.627 Angle*'
when 'Y' then 'HP 7mm All .627*'
when 'X' then 'Special*'
else 'Error' end;
set @qty=SubString(@config, 28, 2);
set @tubeCoat=case SubString(@config, 30, 1)
when 'N' then 'No Coating'
when 'T' then 'Tinned Plating (OBS)'
when 'X' then 'Special'
else 'Error' end;
set @gauge=case SubString(@config, 32, 1)
when '2' then '20 Gauge'
when '8' then '18 Gauge'
when '6' then '16 Gauge'
when '4' then '14 Gauge'
when '5' then '0.050" THK'
when '3' then '0.063" THK'
when '9' then '0.090" THK'
when 'X' then 'Special'
else 'Error' end;
set @material=case SubString(@config, 33, 1)
when 'G' then 'Galvanized'
when 'S' then 'Stainless'
when 'C' then 'Copper'
when 'A' then 'Aluminum'
when 'P' then 'Paint Bond'
when 'X' then 'Special'
else 'Error' end;
set @casingType=case SubString(@config, 34, 1)
when '1' then 'U-Flange - All Around w/ Stacking Flanges'
when '2' then 'U-Flange - All Around'
when '3' then 'U-Flange - No Top & Bottom'
when '4' then 'U-Flange - Flat Top & Bottom'
when '5' then 'Box Bracket - End Plates Only'
when '6' then 'Box Bracket - All Around'
when '7' then 'Slip & Drive Bracket'
when '8' then 'L Flange'
when '0' then 'No Casing'
when 'A' then '3 Sided Box - No Top & Bottom'
when 'B' then '3 Sided Box - Top & Bottom'
when 'C' then '3 Sided Box - Top or Bottom'
when 'D' then 'U-Flange w/ Stacking Plates'
when 'E' then 'U-Flange Temp Top & Bottom'
when 'F' then 'Flat Bracket'
when 'G' then 'A Coil Slab Bracket'
when 'H' then '2 Sided Box'
when 'I' then '3 Sided Box w/ Temp Top & Bottom'
when 'O' then 'One Plus One Casing'
when 'X' then 'Special'
when 'Y' then 'Auto Braze'
else 'Error' end;
set @customerCode=case SubString(@config, 35, 5)
when '00' then 'Standard'
when '14' then 'AAON Damper'
when '15' then 'AAON Cond.'
when '16' then 'AAON Evap.'
when 'XX' then 'Special'
else 'Error' end;
set @caseCoat=case SubString(@config, 37, 1)
when 'N' then 'See Coil Coating'
when 'A' then 'Alodine (OBS)'
when 'C' then 'Ceramic'
when 'X' then 'Special'
else 'Error' end;
set @arrangement=SubString(@config, 39, 2);
set @connType=case SubString(@config, 41, 1)
when '0' then 'No Connection'
when 'M' then 'MPT'
when 'F' then 'FPT'
when 'S' then 'Sweat'
when 'W' then 'Water Bead (OBS)'
when 'B' then 'Barbed FTG (OBS)'
when 'N' then 'Male Flare'
when 'G' then 'Female Flare'
when 'O' then 'Male O-Ring (OBS)'
when 'P' then 'Female O-Ring (OBS)'
when 'X' then 'Special'
else 'Error' end;
set @connSize=case SubString(@config, 42, 1)
when '0' then 'No Connection'
when '1' then '3/8 OD'
when '2' then '1/2 OD'
when '3' then '5/8 OD'
when '4' then '7/8 OD'
when '5' then '1-1/8 OD'
when '6' then '1-3/8 OD'
when '7' then '1-5/8 OD'
when '8' then '2-1/8 OD'
when '9' then '2-5/8 OD'
when 'A' then '3-1/8 OD'
when 'B' then '5/16 OD'
when 'C' then '3/4 OD'
when 'D' then '4-1/8 OD'
when 'E' then '3/16 OD'
when 'X' then 'Special'
else 'Error' end;
set @distributor=case SubString(@config, 44, 4)
when 'N000' then 'None Required'
when 'X001' then 'Special'
else 'Factory Assigned' end;
set @circuitry=case SubString(@config, 49, 2)
when 'SS' then 'Single Circuit'
when 'FF' then 'Full'
when 'HH' then 'Half'
when 'QQ' then 'Quarter'
when 'DD' then 'Double'
when 'DH' then '1-1/2'
when 'II' then 'Intertwined <2'
when 'RS' then 'Row Split <2'
when 'FS' then 'Face Split <2'
when '00' then 'No Circuitry'
when '01' then 'One Circuit'
when '02' then 'Two Circuits'
when '03' then 'Three Circuits'
when '04' then 'Four Circuits'
when '0S' then 'No Circuitry + SubCooler'
when '1S' then 'One Circuit + SubCooler'
when '2S' then 'Two Circuits + SubCooler'
when '3S' then 'Three Circuits + SubCooler'
when 'XX' then 'Special'
else 'Error' end;
set @coilApp=case SubString(@config, 51, 1)
when 'D' then 'Drainable Water'
when 'W' then 'Water'
when 'G' then 'Cond / SubCooler'
when 'C' then 'Condenser'
when 'E' then 'Evaporator'
when 'S' then 'Steam'
when 'N' then 'Steam Distributor'
when 'B' then 'Booster'
when 'H' then 'Heat Reclaim'
when 'P' then 'Heat Pipe (OBS)'
when 'L' then 'Glycol'
when 'O' then 'Oil (OBS)'
when 'X' then 'Special'
else 'Error' end;
set @agency=case SubString(@config, 52, 1)
when '0' then 'None'
when 'A' then 'ARI'
when 'B' then 'ARI + UL / CSA'
when 'C' then 'UL / CSA'
when 'E' then 'ETL / DOE'
else 'Error' end;
set @outsideCoat=case SubString(@config, 53, 1)
when 'N' then 'No Coating'
when 'A' then 'Ceramic'
when 'C' then 'Chromocoat'
when 'E' then 'Epoxy'
when 'G' then 'Americoat Grey (OBS)'
when 'H' then 'Heresite'
when 'K' then 'Phenolic (Technicoat)'
when 'L' then 'ElectroFin'
when 'P' then 'Phenolic (OBS)'
when 'X' then 'Special'
else 'Error' end;
insert into @table
(CoilType, CoilPattern, RowsDeep, FinHeight, FinLength, FinThickMat, FinPerInch, FinTreatment, FinCoating,
TubeWallThk, TubeType, Qty, TubeCoat, Gauge, Material, CasingType, CustomerCode, CaseCoat, Arrangement,
ConnType, ConnSize, Distributor, Circuitry, CoilApp, Agency, OutsideCoat)
values
(@coilType, @coilPattern, @rowsDeep, @finHeight, @finLength, @finThickMat, @finPerInch, @finTreatment, @finCoating,
@tubeWallThk, @tubeType, @qty, @tubeCoat, @gauge, @material, @casingType, @customerCode, @caseCoat, @arrangement,
@connType, @connSize, @distributor, @circuitry, @coilApp, @agency, @outsideCoat);
end
select * from @table;
END
GO
Here is what an actual Configuration String looks like:
CP06-51.25-051.50-B12SRN-CT00N-8G2XXN-A2S8-N000-HHWAN
A:
Have you thought about creating a user-defined function:
create function CastingTypeFunction(@config varchar(255))
returns varchar(50)
as
begin
declare @casingType varchar(50)
set @casingType=case SubString(@config, 34, 1)
when '1' then 'U-Flange - All Around w/ Stacking Flanges'
when '2' then 'U-Flange - All Around'
when '3' then 'U-Flange - No Top & Bottom'
when '4' then 'U-Flange - Flat Top & Bottom'
when '5' then 'Box Bracket - End Plates Only'
when '6' then 'Box Bracket - All Around'
when '7' then 'Slip & Drive Bracket'
when '8' then 'L Flange'
when '0' then 'No Casing'
when 'A' then '3 Sided Box - No Top & Bottom'
when 'B' then '3 Sided Box - Top & Bottom'
when 'C' then '3 Sided Box - Top or Bottom'
when 'D' then 'U-Flange w/ Stacking Plates'
when 'E' then 'U-Flange Temp Top & Bottom'
when 'F' then 'Flat Bracket'
when 'G' then 'A Coil Slab Bracket'
when 'H' then '2 Sided Box'
when 'I' then '3 Sided Box w/ Temp Top & Bottom'
when 'O' then 'One Plus One Casing'
when 'X' then 'Special'
when 'Y' then 'Auto Braze'
else 'Error' end
return @casingType
end;
You will then pass your @config value into the function similar to this:
select dbo.CastingTypeFunction(@config)
Your value will then be returned.
See SQL Fiddle with Demo.
The other option that I can see is to create a table that contains each of these values, and you would join on that table to return the casingType.
The tables would be similar to this:
CREATE TABLE CasingType
([CasingTypeId] varchar(1), [CasingValue] varchar(41))
;
And then you would join it similar to this:
select
case
when c.casingtypeid is null
then 'Error'
else c.casingvalue end
from
(
select '12323212334234231211231212121qwe1212312334234234' config -- replace with your config values
) src
left join CasingType c
on SubString(src.config, 34, 1) = c.casingtypeid
See SQL Fiddle with Demo
|
{
"pile_set_name": "StackExchange"
}
|
Q:
What can I do to make the following `.then()` to only trigger once?
In the following code, I'm running a loop over the properties of formFields. As you can see, I'm using a counter to only run this.updatePanoCollection() when all the files have been uploaded with api.uploadFieldFile:
// formFields = { logo: { file: ... }, thumbnail: { file: ... } }
let toUploadCount = 0
let uploadedCount = 0
Object.entries(formFields).forEach(([key, upload]) => {
if (!upload || !upload.file) return
toUploadCount++
api.uploadFieldFile('logo', upload.file).then(downloadUrl => {
formFields[key] = downloadUrl
const updatePayload = {
id: this.currentPanoCollection.id,
data: formFields
}
uploadedCount++
if (toUploadCount === uploadedCount) {
// This only runs once right now
return this.updatePanoCollection(updatePayload)
}
}).then(() => {
// But this runs twice. It should only run once.
}).catch(err => this.handleError(err))
})
Now the problem is that the code inside .then() runs twice.
How to change this code so it only runs once (after all the files have been uploaded)?
A:
Use Promise.all rather than having to maintain a count of completed uploads, like this:
Promise.all(
Object.entries(formFields).map(([key, upload]) => {
if (!upload || !upload.file) return;
return api.uploadFieldFile('logo', upload.file)
.then(downloadUrl => {
formFields[key] = downloadUrl
})
})
)
.then(() => {
// all have been uploaded
const updatePayload = {
id: this.currentPanoCollection.id,
data: formFields
}
return this.updatePanoCollection(updatePayload);
})
.then(() => {
// update is completed as well
})
.catch(err => this.handleError(err))
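For readers who prefer async/await, an equivalent sketch (it assumes the same api, formFields, currentPanoCollection and updatePanoCollection from the question; not part of the original answer):
const uploadAllAndUpdate = async () => {
  // run every upload in parallel and wait until they are all done
  await Promise.all(
    Object.entries(formFields).map(async ([key, upload]) => {
      if (!upload || !upload.file) return
      formFields[key] = await api.uploadFieldFile('logo', upload.file)
    })
  )
  // every file is uploaded, so the update runs exactly once
  const updatePayload = {
    id: this.currentPanoCollection.id,
    data: formFields
  }
  return this.updatePanoCollection(updatePayload)
}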
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Why is this resistor connected in such way?
Why is the R2 resistor connected to bias instead of ground? As far as I know, only one of the op-amp's inputs has to be biased, so why such a configuration? Also, how does this affect the gain?
Edit #1: Vcc here is 5V.
Edit #2:
R2 connected as shown above:
R2 in series with a 10uF cap connected to ground:
A:
The gain of the amplifier (about 670) and the DC offset that would result from having R2 return to the negative supply, would have the unfortunate
effect of clipping the output signal. Output (maximum) with a 5V supply
is maybe 4V, and that maximum output voltage would at most only drive
the negative input to $$V_{input} = 4V \times {150\over {100k + 150}} \approx 0.006\ V$$
That potential at the (-) input is some 2.49V lower than the bias voltage applied at the (+) input. The op amp will be saturated (with the output driven high)
until that (+) input pin is driven lower than about 6 millivolts...
The op amp, while saturated, offers no voltage gain, of course.
It is confusing to refer to the negative power pin of the amplifier
as 'ground' unless small signals hover about that potential. For the purposes of this circuit, small signal input (and output) of the amplifier are most easily visualized as being near the R1-R2-R3-R4
common junction, the 'bias' point. Signal generator and load are
capacitor decoupled, their 'ground' point is useless as a DC bias for
the LM324 signal.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
How to customize a TFS build task
I created a custom task for TFS Build 2017 that works on Windows; the task logic was written in PowerShell and works fine.
When I try to implement the same logic for Linux using Node I have some problems:
pickList input type: I can't get the value from this input
var tl = require('vso-task-lib');
let project = tl.getInput('project', true);
echo.arg(project);
Is there another way to read the value from a pickList?
multiLine input type: when I print the value I don't see the first line.
var tl = require('vso-task-lib');
var json = tl.getInput('json', true);
echo.arg(json);
If you know of good documentation on how to create a custom task for TFS 2017/2018,
and how to debug a custom task (setting up the environment), it would be very helpful.
Thanks
A:
You could first check the Visual Studio Team Services Marketplace to see whether a 3rd-party extension already meets your requirement. Most of the extensions are open source, so you can study and learn from their source code.
Microsoft has also created a GitHub repo with a number of samples and reading material to get you started; some tutorials for your reference:
VSTS Extensions Samples
Develop Extensions for VSTS
About how to debug and test in Linux environment, suggest you take a look at colin's blog: Developing a Custom Build vNext Task
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Convert cmp to python 3
I am trying to convert a Python 2 function to Python 3. The problem is that it makes use of the cmp keyword argument when sorting. I understand I can resolve this by using functools.cmp_to_key, which would look like key=functools.cmp_to_key(agency_label_cmp). However, my function uses both the cmp and key keywords:
results = sorted(results.items(), cmp=agency_label_cmp, key=operator.itemgetter(0))
So I don't understand how I can convert this to make it compatible with python 3. Here is the full code:
def build_salary_results(agency_type):
def agency_label_cmp(a, b):
"""
Key that uses `agency_type_lookup` order to determine how
everything is presented on the page.
"""
L = map(operator.itemgetter(1), agency_type_lookup)
return (L.index(a) > L.index(b)) -(L.index(a) < L.index(b))
results = defaultdict(lambda: dict(agencies=[], navletters=set()))
navlinks = set()
if agency_type in special:
it = Jurisdiction.objects.filter(kind=agency_type).order_by('name').iterator()
else:
it = Jurisdiction.objects.exclude(kind__in=special).order_by('name').iterator()
for obj in it:
if agency_type in special:
label = obj.category
else:
label = dict(agency_type_lookup).get(obj.kind)
if agency_type == 'SP' and not include_special_district(label):
continue
available_years = obj.available_years()
if agency_type in special and not available_years:
continue
results[label]['agencies'].append((obj, available_years))
results[label]['navletters'].add(obj.name[0].upper())
navlinks.add(label)
if agency_type in special:
results = sorted(results.items(), key=operator.itemgetter(0))
navlinks = sorted(navlinks)
else:
results = sorted(results.items(), cmp=agency_label_cmp, key=operator.itemgetter(0))
navlinks = sorted(navlinks, cmp=agency_label_cmp)
return navlinks, results
A:
You can start by removing your key parameter.
This
results = sorted(results.items(), cmp=agency_label_cmp, key=operator.itemgetter(0))
Is the same as
results = sorted(results.items(), cmp=lambda x, y: agency_label_cmp(x[0], y[0]))
since the operator.itemgetter(0) is simply a fancy way of mapping a list or equivalent to the element at 0.
Then you can put that into the conversion:
results = sorted(results.items(),
key=functools.cmp_to_key(lambda x, y: agency_label_cmp(x[0], y[0])))
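To see the conversion behave end-to-end, here is a small standalone check (toy data and names, not the original Django code):
import functools

label_order = ['State', 'County', 'City']   # hypothetical presentation order

def label_cmp(a, b):
    # Same (a > b) - (a < b) idiom as agency_label_cmp, applied to list positions
    return (label_order.index(a) > label_order.index(b)) - (label_order.index(a) < label_order.index(b))

results = {'City': 3, 'State': 1, 'County': 2}
ordered = sorted(results.items(),
                 key=functools.cmp_to_key(lambda x, y: label_cmp(x[0], y[0])))
print(ordered)  # [('State', 1), ('County', 2), ('City', 3)]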
|
{
"pile_set_name": "StackExchange"
}
|
Q:
How to detect .NET WPF memory leak or GC long run?
I have the next very strange situation and problem:
.NET 4.0 application for diagram editing (WPF).
Runs ok in my PC: 8GB RAM, 3.0GHz, i7 quad-core.
While creating objects (mostly diagram nodes and connectors, plus all the undo/redo information) the Task Manager shows, as expected, some memory usage "jumps" (up and down).
These mem-usage "jumps" also keep occurring AFTER user interaction has ended. Maybe this is the GC cleaning/reorganizing memory?
To see what is going on, I've used the ANTS memory profiler, but somehow it prevents those "jumps" from happening after user interaction.
PROBLEM: It freezes/hangs after seconds or minutes of usage on some slow/weak laptops/netbooks of my beta testers (under 2GHz and under 2GB of RAM). I was thinking of a memory leak, but...
EDIT 1: Also, there is the case that the memory usage grows and grows until collapse (only in slow machines).
In a Windows XP Mode machine (VM in Win 7) with only 512MB of RAM Assigned it works fine without mem-usage "jumps" after user interaction (no GC cleaning?!).
EDIT 2: The problem is worse when the system has some other heavy programs running (like Visual Studio, Office and web pages open). Not even the first symbol of the diagram can be created while the memory usage takes off like a rocket (hundreds of MBs consumed in seconds). Anyone with similar experiences? What were your strategies?
So, I am really in big trouble because I cannot reproduce the error, I only see this strange behaviour (mem jumps), and the tool that is supposed to show me what is happening hides the problem (like the "observer's paradox").
Any ideas on what's happening and how to solve it?
EDIT 3: This screenshot of the Ants memory profiler shows that the huge consumption of ram (in crescendo) if from unmanaged resources.
But, what can be consuming so much memory, so fast??!!!
A:
What you describe is all entirely normal behavior for a .NET program, there is no indication that there's anything wrong with your code.
By far the biggest issue is that TaskMgr.exe is just not a very good program to tell you what's happening in your process. It displays the "working set" for a process, a number that has very little to do with the amount of memory the process uses.
Working set is the amount of RAM that your process uses. Every process gets 2 gigabytes of virtual memory to use for code and data. Even on your virtual XP box with only 512 MB of RAM. All of those processes however have only a set amount of RAM to work with. On a lowly machine that can be as little as a gigabyte.
Clearly having multiple processes running, each with gigabytes of virtual memory with only a gigabyte of real memory takes some magic. That's provided by the operating system, Windows virtualizes the RAM. In other words, it creates the illusion for each process that it is running by its own on a machine with 2 gigabytes of RAM. This is done by a feature called paging, whenever a process needs to read or write memory, the operating system grabs a chunk of RAM to provide the physical memory.
Inevitably, it has to take away some RAM from another process so that it can be made available to yours. Whatever was previously in that chunk of RAM needs to be preserved. That's what the paging file does, it stores the content of RAM that was paged out.
Clearly this does not come for free, disks are pretty slow and paging is an expensive operation. That's why lowly machines perform poorly when you ask them to run several large programs. The real measure for this is also visible in TaskMgr.exe but you have to add it. View + Select Columns and tick "Page fault delta". Observe this number while your process runs. When you see it spike, you can expect your program to slow down a great deal and the displayed memory usage to change rapidly.
Addressing your observations:
creating objects ... the TaskManager show, as expected, some memory usage "jumps"
Yes, you are using RAM so the working set goes up.
These mem-usage "jumps" also remains executing AFTER user interaction ended
No slam dunk, but other processes get more time to execute, using RAM in turn and bumping some of yours out. Check the Page fault delta column.
I've used the Ants mem profiler, but somewhat it prevents those "jumps" to happen after user interaction.
Yes, memory profilers focus on real memory usage of your program, the virtual memory kind. They largely ignore working set, there isn't anything you can do about it and the number is meaningless because it really depends on what other processes are running.
there is the case that the memory usage grows and grows until collapse
That can be a side-effect of the garbage collector but that isn't typical. You are probably just seeing Windows trimming your working set, chucking out pages so you don't consume too many.
In a Windows XP Mode machine (VM in Win 7) with only 512MB of RAM Assigned it works fine
That's likely because you haven't installed any large programs on that VM that would compete for RAM. XP was also designed to work well on machines with very little memory, it is smooth on a machine with 256 MB. That's most definitely not the case for Vista/Win7, they were designed to take advantage of modern machine hardware. A feature like Aero is nice eye candy but very expensive.
The problem is worse when the system has some other heavy programs running
Yes, you are competing with those other processes needing lots of RAM.
Not even the first symbol of diagram can be created while the memory usage takes off like a rocket
Yes, you are seeing pages getting mapped back to RAM, getting reloaded from the paging file and the ngen-ed .ni.dll files. Rapidly increasing your working set again. You'll also see the Page fault delta number peaking.
So concluding, your WPF program just consumes a lot of memory and needs the horse power to operate well. That's not easy to fix, it takes a pretty drastic redesign to lower the resource requirements. So just put the system requirements on the box, it is entirely normal to do so.
A:
This would suggest that you're likely creating a lot of "garbage" - basically, creating and letting many objects go out of scope quickly, but which take long enough to get into Gen1 or Gen2. This puts a large burden on the GC, which in turn can cause freezes and hangs on many systems.
To see what is going on, I've used the ANTS memory profiler, but somehow it prevents those "jumps" from happening after user interaction.
The reason this profiler (ANTS), specifically, could mask this behavior is that it forces a full GC every time you take a memory snapshot. This would make it look like there is no memory "leak" (as there isn't), but does not show the total memory pressure on the system.
A tool like PerfView could be used to investigate the GC's behavior during the runtime of the process. This would help you track the number of GCs that occur, and your application state at that point in time.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
How to assign awk result to an variable
I need to assign the following awk result to a variable. Please guide me on how to do this in a shell script.
awk -F: '$1=="{root}" {print $3}' /etc/passwd
Like
variable = `awk -F: '$1=="{root}" {print $3}' /etc/passwd`
But it won't work.
A:
You need to avoid spaces around the equals sign. Since backticks are deprecated, you could put your code inside a $(...) block.
variable=$(awk -F: '$1=="{root}" {print $3}' /etc/passwd)
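A quick usage sketch (this matches the literal user name root rather than the {root} placeholder shown above):
variable=$(awk -F: '$1=="root" {print $3}' /etc/passwd)
echo "root UID: $variable"    # typically prints: root UID: 0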
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Is it RESTFul for an endpoint without a path parameter to always have a single resource as a result?
While creating tools for an api, I've noticed the following inside the api specification:
GET /cookie-orders
returns the following structure as a 200 result:
{
  "first": true,
  "last": false,
  "content": [
    {
      "id": 0,
      "customerName": "test",
      ...
    },
    ...
  ]
}
I find this a little contradictory, as I think that a REST endpoint without path params should always have the potential of returning multiple resources. As I see it now, this always returns a single resource with multiple resources inside of it.
I'm new to REST so I don't know if my view on this is correct or not.
The first and last property have been added for pagination.
A:
I think that a rest endpoint without path params should always have the potential of returning multiple resources
I'm new to REST so I don't know if my view on this is correct or not.
It isn't correct.
REST doesn't care about spelling, in particular it has no concern for how (or if) the origin server decomposes an identifier to find the mapping to the correct resource.
Technically speaking, each identifier in REST maps to a single resource. The representation of that single resource may describe a collection, but the resource is the conceptual mapping to the collection.
There is no REST rule that says particular URI must be collections. Some routing conventions will make those assertions. For example, Rails has strong opinions about encoding meaning into URI. However, you may notice in the example that /photos/new is an identifier without a path-param, and it returns a representation of a form, rather than a representation of a collection.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Changing the font style in a TextBox
There are 3 buttons: bold, italic and underline. Pressing each of them changes the text style. I want it to work like in Word: press once and the new style is applied, press the same button a second time and it disappears. That is where the problem arises: how do I remove, say, the bold style while keeping everything else the same?
This is what I tried:
if (textBox.Font.Bold)
{
textBox.Font = new Font(textBox.Font, textBox.Font.Style | FontStyle.Regular);
}
else
{
textBox.Font = new Font(textBox.Font, textBox.Font.Style | FontStyle.Bold);
}
I understand that FontStyle.Regular is not the right choice, but I don't know how to write it differently, so I posted it like this for now. I.e. in this particular situation the bold style is added fine, but then it doesn't get removed.
A:
Use the bitwise ^ (XOR) operation to set and clear the required flags in the Font.Style property.
textBox.Font = new Font(textBox.Font, textBox.Font.Style ^ FontStyle.Bold);
Why this works:
Suppose we have a set of flags A: 10101010 and we need to flip the value of bit 3 to the opposite. Then:
A: 10101010 C: 10100010
xor xor
B: 00001000 B: 00001000
= =
C: 10100010 A: 10101010
Thus this operation lets you invert the value of a single flag without changing the values of the other flags.
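A minimal sketch of the three button handlers using this approach (the button names btnBold, btnItalic and btnUnderline are illustrative, not from the original question):
// Each XOR toggles exactly one style flag and leaves the others untouched.
private void btnBold_Click(object sender, EventArgs e)
{
    textBox.Font = new Font(textBox.Font, textBox.Font.Style ^ FontStyle.Bold);
}
private void btnItalic_Click(object sender, EventArgs e)
{
    textBox.Font = new Font(textBox.Font, textBox.Font.Style ^ FontStyle.Italic);
}
private void btnUnderline_Click(object sender, EventArgs e)
{
    textBox.Font = new Font(textBox.Font, textBox.Font.Style ^ FontStyle.Underline);
}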
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Android OnClickListener collided?
Sorry for the weird question, I am still a newbie at Android programming.
I have the following code:
import android.content.DialogInterface.OnClickListener;
import android.view.View.OnClickListener;
And Eclipse is telling me that View.OnClickListener collides with the other import statement.
I am using DialogInterface.OnClickListener for my AlertDialog and View.OnClickListener for my Button. I use this Button to show the AlertDialog. What is the right way to do this?
Here's my code:
// This OnClickListener is still showing error in eclipse
btnListLocation.setOnClickListener(new OnClickListener() {
public void onClick(View arg0) {
// TODO Auto-generated method stub
showAllListLocation(location);
}
});
private void showAllListLocation(String[] location) {
AlertDialog.Builder b = new Builder(this);
b.setTitle("Choose your location");
b.setSingleChoiceItems(location, 0, new OnClickListener() {
@Override
public void onClick(DialogInterface dialog, int which) {
// TODO Auto-generated method stub
dialog.dismiss();
}
});
b.show();
}
Thanks in advance.
A:
Button extends View. So for Button you can use View.OnClickListener
For Dialogs - DialogInterface.OnClickListener
Simple use:
Button b;
//b init
b.setOnClickListener(new View.OnClickListener()
{
@Override
public void onClick(View arg0)
{
new AlertDialog.Builder(_context_)
.setCancelable(false)
.setTitle("MyDialog")
.setMessage("MyMessage")
.setPositiveButton("Ok", new DialogInterface.OnClickListener()
{
@Override
public void onClick(DialogInterface dlg, int which)
{
//Logic
}
}).show();
}
});
A:
Instead of using import, you can explicitly declare the type of OnClickListener() to avoid the collision:
Button a = new Button(MainActivity.this);
a.setOnClickListener(new android.view.View.OnClickListener() {
@Override
public void onClick(View arg0) {
// TODO Auto-generated method stub
}
});
AlertDialog.Builder alertDialogBuilder = new AlertDialog.Builder(
MainActivity.this);
alertDialogBuilder.setPositiveButton("abc", new android.content.DialogInterface.OnClickListener() {
@Override
public void onClick(DialogInterface arg0, int arg1) {
// TODO Auto-generated method stub
}
});
Hope this helps.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Any way to award a second +100 bounty to a second answer to the same question?
I've awarded a +100 bounty to this answer, but there is a second answer that uses a different approach to the derivation that I find equally helpful, and I would like to award a second +100. When I click the "start a bounty" button, the lowest available amount is +200.
A work-around would be to "borrow" some, but I'm asking here if there is a way within the stackexchange system to do this.
Further, is there an existing rationale for blocking any value below 200, or is it just a bufeature? (a bug retroactively defined as a feature)?
A:
Subsequent bounties have to go up in value. It seems people were taking advantage of the system. See https://meta.stackexchange.com/a/64826/155668 for the details.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
How to access nested JSON array data in Java
I was trying to extract data from JSON in Java.
Does anyone have an idea how to access the JSON data underlined in blue (in my screenshot)? It is nested JSON, I guess. I know about JSONObject and getString(key), but that works only for the data underlined in red. I want the blue-underlined data. Any solution would be great; I have been working on this issue for the last 5 days.
Java Code
JSONObject object = (JSONObject) new JSONTokener(response).nextValue();
response is a string which contain json data.
{
"calories":115,
"totalWeight":223.0,
"dietLabels":["LOW_FAT"],
"healthLabels":["VEGAN","VEGETARIAN","PEANUT_FREE","TREE_NUT_FREE","ALCOHOL_FREE","SULPHITE_FREE"],
"cautions":["SULFITES"],
"totalNutrients":{
"ENERC_KCAL": {"label":"Energy","quantity":115.96,"unit":"kcal"},
"FAT":{"label":"Fat","quantity":0.37910000000000005,"unit":"g"},
"FASAT":{"label":"Saturated","quantity":0.06244,"unit":"g"},
"FAMS":{"label":"Monounsaturated","quantity":0.01561,"unit":"g"},
"FAPU":{"label":"Polyunsaturated","quantity":0.11373,"unit":"g"},
"CHOCDF":{"label":"Carbs","quantity":30.796300000000002,"unit":"g"},
"FIBTG":{"label":"Fiber","quantity":5.351999999999999,"unit":"g"},
"SUGAR":{"label":"Sugars","quantity":23.169700000000002,"unit":"g"},
"PROCNT":{"label":"Protein","quantity":0.5798,"unit":"g"},
"CHOLE":{"label":"Cholesterol","quantity":0.0,"unit":"mg"}
}
}
A:
You can call
JSONArray dietLabels = jo.getJSONArray("dietLabels");
To access the ENERC_KCAL data you can do something like this:
JSONObject totalNutrients = jo.getJSONObject("totalNutrients");
JSONObject ENERC_KCAL = totalNutrients.getJSONObject("ENERC_KCAL");
where jo is the main JSONObject (parsed from the given JSON string) and "dietLabels" is the name of the array.
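Putting the two together, a small sketch using the org.json classes (assuming response holds the JSON string shown above):
// imports: org.json.JSONObject, org.json.JSONArray, org.json.JSONTokener
JSONObject jo = (JSONObject) new JSONTokener(response).nextValue();
// Nested object: totalNutrients -> ENERC_KCAL -> label/quantity/unit
JSONObject energy = jo.getJSONObject("totalNutrients").getJSONObject("ENERC_KCAL");
String label  = energy.getString("label");     // "Energy"
double amount = energy.getDouble("quantity");  // 115.96
String unit   = energy.getString("unit");      // "kcal"
// Arrays work element by element:
JSONArray dietLabels = jo.getJSONArray("dietLabels");
String firstDietLabel = dietLabels.getString(0); // "LOW_FAT"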
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Virtual destructor - Memory leak
Now we have to create class X (as the base class) and class Y (as the derived one). They both use integer pointers to hold allocations of different sizes.
class X
{
int *p;
public:
X() {p=new int[2];}
~X() {delete[] p;}
};
class Y : public X {
int *q;
public:
Y() {q=new int[4];}
~Y() {delete[] q;}
};
And let's try it in a simple loop:
for(int i=0;i<8;i++){X *ptr =new Y; delete ptr;}
What causes the memory leak there and what can we do to fix it?
A:
On every iteration of this loop
for(int i=0;i<8;i++){X *ptr =new Y; delete ptr;}
your program first allocates an int array of size 2 in the X constructor (2*4 = 8 bytes, assuming a 4-byte int) and then an int array of size 4 in the Y constructor (4*4 = 16 bytes), but delete ptr only runs ~X(), so only the first allocation is freed. Every iteration therefore leaks 16 bytes, simply because the destructor of the derived class Y is never reached.
This is the classic memory leak caused by the lack of a virtual destructor in the base class. When you make the destructor in class X virtual, deleting a Y through an X* will correctly run ~Y() first. So you have to change this statement
~X() {delete[] p;}
into this:
virtual ~X() {delete[] p;}
|
{
"pile_set_name": "StackExchange"
}
|
Q:
The Series of functions in Weierstrass M test
I have some questions regarding the sequences of functions in the Weierstrass M-test:
Weierstrass M-test:. Suppose that $\{f_n\}$ is a sequence of real- or complex-valued functions defined on a set $A$ .....
My question is specifically regarding the $f_n(x)$ in the definition. In all the examples that I have seen so far, the sequence of functions is related through the index. For example, one of the examples I have seen is:
$g_n(x) = \frac{x^n}{n}$
So it means $g_1(x) = \frac{x^1}{1}, g_2(x) = \frac{x^2}{2}, g_3(x) = \frac{x^3}{3}$
I am just wondering: does the sequence of functions, i.e. $f_1(x), f_2(x), f_3(x), \ldots$,
have to be related like in the above example? Or could it be like this:
$f_1(x)= \sin(x), f_2(x)=\cos(x), f_3(x)= x^2+x + 11, f_4(x)=\frac{3^x}{5}$, ...
Because when I was looking at the proof of the theorem, it seems it is using the partial sum of each of the above functions i.e. looking at $S_n(x)=f_1(x)+f_2(x)+f_3(x)+...+f_n(x)$
Do the $f_1(x), f_2(x), \ldots, f_n(x)$ have to be related, or could they be totally arbitrary, different functions defined on the same domain?
Thank you
A:
There are two aspects to your question:
The functions $f_{n}$ in the $M$-test are assumed to satisfy
$$
\sup_{x \in A} |f_{n}(x)| = M_{n},\qquad
\sum_{n} M_{n} < \infty.
\tag{*}
$$
That is a relationship between $f_{n}$ and the index $n$, albeit one that's particularly weak. In this sense, it's inaccurate to say the $f_{n}$ are "totally arbitrary".
You can apply the Weierstrass $M$-test to a sequence $(f_{n})$ that starts off with the functions you mention. In order for a function sequence to be defined, however, $f_{n}$ has to be specified for all $n$. In practice, that's usually achieved by having $f_{n}(x)$ be given by a closed formula involving $n$. In this sense, you're unlikely to encounter
$$
\sin x + \cos x + (x^{2} + x + 11) + \tfrac{3^{x}}{5} + \cdots
$$
in practice, and very likely to see, e.g.,
$$
\sum_{n=0}^{\infty} \frac{x^{n}}{n!},\qquad
\sum_{n=1}^{\infty} e^{-nx},\qquad
\sum_{n=1}^{\infty} \frac{\sin(nx)}{n^{2}}.
$$
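For instance (a routine check added here for illustration, not part of the original answer), the last series satisfies the hypotheses of the $M$-test on all of $\mathbb{R}$:
$$
\sup_{x \in \mathbb{R}} \left| \frac{\sin(nx)}{n^{2}} \right| \leq \frac{1}{n^{2}} =: M_{n},\qquad
\sum_{n=1}^{\infty} M_{n} = \sum_{n=1}^{\infty} \frac{1}{n^{2}} < \infty,
$$
so that series converges uniformly (and absolutely) on $\mathbb{R}$.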
In case a bit of theory helps: Convergence of an infinite series is entirely determined by asymptotic behavior of the tail, not by any finite number of terms. To give a precise formulation, if $(a_{n})_{n=1}^{\infty}$ is a sequence (of complex numbers, say), then following are equivalent:
For some $N \geq 1$, $\displaystyle\sum_{n=N}^{\infty} a_{n}$ converges.
For every $N \geq 1$, $\displaystyle\sum_{n=N}^{\infty} a_{n}$ converges.
Consequently, condition (*) in the $M$-test is not a condition on any finite collection of the $f_{n}$, but a condition on the asymptotic behavior of the suprema. You can "fold in" or remove an arbitrary finite set of terms without changing whether or not the series converges.
Or, if you like, in the Weierstrass $M$-test, an arbitrarily long initial finite sequence of terms is completely arbitrary, even though the asymptotic behavior of the tail is not.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Random generator & CUDA
I have a question regarding the random generators in CUDA. I am using cuRAND to generate random numbers with the following code:
__device__ float priceValue(int threadid){
unsigned int seed = threadid ;
curandState s;
curand_init (seed , 0, 0, &s);
float randomWalk = 2;
while (abs(randomWalk)> 1) {
randomWalk = curand_normal(&s);
}
return randomWalk;
}
I have tried relaunching this code many times and I always get the same output. I could not find what's wrong with this code. The threads get the same IDs, but the curand_normal output should still change on each launch, right?
A:
You're running init each time you ask for a random value. Instead you should run curand_init() once, in a separate kernel at the start of your code. Then when you want a new random value, just call curand_normal(). Then the values will change each time you call your device function.
For an example see my answer here.
If you want to use time as a seed instead of thread ID, then pass the value returned by clock() or whatever is your favorite time function:
unsigned int seed = (unsigned int) clock64();
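A minimal sketch of that pattern (the kernel and variable names here are illustrative, not from the original code): initialise the states once in a setup kernel, keep them in global memory, and reuse them afterwards.
#include <curand_kernel.h>

// One-time setup: each thread initialises its own generator state.
__global__ void setupStates(curandState *states, unsigned long long seed)
{
    int id = blockIdx.x * blockDim.x + threadIdx.x;
    // Same seed, different subsequence per thread -> independent streams.
    curand_init(seed, id, 0, &states[id]);
}

// Later device code just draws from the already-initialised state.
__device__ float priceValue(curandState *states, int threadid)
{
    curandState localState = states[threadid];   // copy to registers
    float randomWalk = 2.0f;
    while (fabsf(randomWalk) > 1.0f) {
        randomWalk = curand_normal(&localState);
    }
    states[threadid] = localState;                // write state back
    return randomWalk;
}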
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Can I install a Steam game without having to use the internet?
So I finished building my computer and I got Steam on it by trying to install Skyrim. It gives me Steam but it says I have to have internet to continue. I'm pretty sure that I don't need online mode to install the game. I want to log in offline or just be in Steam but how do I do it? I don't have internet on my computer yet.
A:
You can't log into Steam in Offline Mode without first being online. Step 1 of Valve's official docs for offline mode state that you should "Start Steam online."
Additionally, it's not possible to install games without being online in Steam.
Further, with most Steam games, installing from the discs is a painful process, as Steam tends to download the full game from the Steam servers even if the disc is present.
Thus, even though you may have the discs for a game, if it requires Steam, you will need an internet connection in order to install it.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
error: variable length array of non-POD element type 'string' (aka 'basic_string')
I know that eventually I need to change trigrams, each element of which holds 3 characters from the earlier string, into a dynamic array to solve this problem, but at first I tried to set my array's capacity large enough. However, when I compile my code, the error appears.
#error: variable length array of non-POD element type 'string' (aka 'basic_string<char>'#
Code:
//global variable
int CAPACITY = 1000;
int main()
{
//a string that reads in the language of the text
string language = "";
//a string that reads in the file name of the text
string filename = "text.txt";
//a string that reads in the original text characters
string original = "";
//a string that reads in the modified original array
string rid_of_spaces = "";
//an array with capacity that stores the trigrams
string trigrams[CAPACITY];
ifstream finput;
char c;
//the length of an array
int sLength = 0;
//the tracker for trigrams
int counter = 0;
cin >> language >> filename;
finput.open(filename.c_str());
while (finput.get(c)){
//to test if the character is alpha
if (isalpha(c)){
//change the alphabet to lowercase
c = tolower(c);
//store the modified letter in the array
original += c;
}
//change any other characters into a space
else original += ' ';
}
sLength = original.length();
//loop through the original array and change mutiple spaces into one
for (int i = 0; i < sLength; i++){
if (isalpha(original[i]))
rid_of_spaces += original[i];
else {
while (original[i] == ' ')
i++;
rid_of_spaces += ' ';
rid_of_spaces += original[i];
}
}
sLength = rid_of_spaces.length();
for (int i = 0; i < CAPACITY; i++)
trigrams[i] = 0;//initialize each element to 0
for (int i = 0; i < sLength - 2; i++){
trigrams[counter] += rid_of_spaces[i]
+ rid_of_spaces[i + 1]
+ rid_of_spaces[i + 2];
counter++;
}
cout << filename << endl;
cout << original << endl;
cout << rid_of_spaces << endl;
for (int i = 0; i < counter; i++)
cout << trigrams[i] << endl;
finput.close();
return 0;
}
A:
The variable
int CAPACITY = 1000;
should be a constant
const int CAPACITY = 1000; // or with c++11 constexpr int CAPACITY = 1000;
for
string trigrams[CAPACITY];
because "ISO C++ forbids variable length array 'trigrams'" (g++ message)
And this
for (int i = 0; i < CAPACITY; i++)
trigrams[i] = 0;//initialize each element to 0
should be
for (int i = 0; i < CAPACITY; ++i)
trigrams[i] = "";//initialize each element to 0
You don't "initialize [strings] to 0" but with a zero length C-string. A zero length C-string is not an invalid 0-pointer, but a (valid) pointer to a char with value 0;
Generally, it's better not to use C-arrays if there are STL-means to avoid them; with c++11, std::array<std::string, CAPACITY> would be preferable here if you want to stay with the "capacity large enough" approach.
live at Coliru's
I took the liberty to change all i++ to ++i in the for-loops' heads while at it; see eg. What is the difference between ++i and i++ for the rationale behind that.
For a dynamic (without pre-defined bounds) array use std::vector<std::string> trigrams;,
push_back or emplace_back your strings into that vector,
and for i- iterate
for (std::size_t i = 0; i < trigrams.size(); ++i) {/* ... */}
Or use the iterator-interface of std::vector, e.g.
std::for_each(trigrams.begin(), trigrams.end(),
some_function_or_functor_that_does_the_job);
(see std::foreach here ),
or with c++11 just
for (auto& s : trigrams) {/* ... */}
unless you need to customize the iteration like you do it inside your second loop.
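If you do go the dynamic route later, a minimal sketch of the trigram part with std::vector (with a stand-in input string, not the full program):
#include <iostream>
#include <string>
#include <vector>

int main()
{
    std::string rid_of_spaces = "the quick brown fox";   // stand-in input
    std::vector<std::string> trigrams;                   // grows as needed

    // One trigram per position; substr copies three characters at once.
    for (std::size_t i = 0; i + 2 < rid_of_spaces.length(); ++i)
        trigrams.push_back(rid_of_spaces.substr(i, 3));

    for (const auto& t : trigrams)
        std::cout << t << '\n';
}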
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Finding enclosed area
Hints only please !!
I took a test today, and it asked to find the area enclosed by the curve $x^4 + y^4 = 2xy$. This is an implicit function. A quick observation shows me that the curve is entirely bounded and $(1,1)$ and $(-1,-1)$ are the interesting points besides the origin.
To find the area, I attempted
$I = \int y \, dx$, integrating by parts, but to no avail, as I am neither getting the required limits nor a well-known function by manipulation.
I searched for similar problems; they used polar substitution. I am finding it to be clumsy and maybe not appropriate for this integral. Please give a hint as to how proceed.
A:
Setting $x=r \cos(\theta), y = r \sin(\theta)$ (the classical change of variables : polar $\to$ cartesian) in your equation gives
$$r^4(\underbrace{(\cos(\theta)^2+\sin(\theta)^2)^2}_{=1}-\underbrace{2\cos(\theta)^2.\sin(\theta)^2}_{\tfrac12 \sin^2(2 \theta)})=r^2 \underbrace{2 \cos(\theta).\sin(\theta)}_{\sin(2 \theta)}.$$
Simplifying by $r^2$ and taking the square root, one gets :
$$r(\theta)=\sqrt{\dfrac{\sin(2\theta)}{1-\tfrac12\sin^2(2\theta)}} \ \ \
\text{if} \ \ \ \sin(2 \theta) \geq 0$$
The last condition means that the curve will exist if $0 \leq \theta \leq \pi/2$ (first quadrant) and/or $\pi \leq \theta \leq 3\pi/2$ (third quadrant). No part of the curve in the second and fourth quadrants.
It remains to integrate to obtain the area enclosed by the curve:
$$A=\frac12\int_0^{2 \pi}r(\theta)^2 d \theta$$
BUT, this integral is equal to $0$ because we turn once in the positive sense, once in the negative one. We have to take a serious look at the curve :
We are obliged, if we want to have the unsigned area of a "petal" to integrate from $0$ to $\pi/2$ (and afterwards double the result as a final answer). Up to you for the calculations (as you desire mainly hints).
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Excel VBA Overflow Error with Dynamic Array
I'm getting an Overflow error in the following sub, and I can't figure out why. Stepping through the code, lRows and lCols get set to the correct values, and the ReDims set the correct ranges on the arrays, but it fails when I try to assign the range values to the array (on the line: arrData = rng.value). My rows often go up to around 90,000+, but I have everything as Long, so I would think that wouldn't be a problem...
Sub test()
Dim arrData() As Variant
Dim arrReturnData() As Variant
Dim rng As Excel.Range
Dim lRows As Long
Dim lCols As Long
Dim i As Long, j As Long
lRows = ActiveSheet.Range("A" & Rows.Count).End(xlUp).Row
lCols = ActiveSheet.Range("A1").End(xlToRight).Column
ReDim arrData(1 To lRows, 1 To lCols)
ReDim arrReturnData(1 To lRows, 1 To lCols)
Set rng = ActiveSheet.Range(Cells(1, 1), Cells(lRows, lCols))
arrData = rng.value ' Overflow error on this line
For j = 1 To lCols
For i = 1 To lRows
arrReturnData(i, j) = Trim(arrData(i, j))
Next i
Next j
rng.value = arrReturnData
End Sub
A:
try
Dim arrData as Variant
arrData = Range(Cells(1, 1), Cells(lRows, lCols))
and for more info see this answer
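In context, a sketch of the relevant part of the sub with that change (the loop that trims the values stays the same):
Dim arrData As Variant   ' plain Variant, not Variant() - no ReDim needed
Dim rng As Excel.Range

Set rng = ActiveSheet.Range(Cells(1, 1), Cells(lRows, lCols))
arrData = rng.Value      ' Range.Value returns a 1-based 2D Variant array
' arrData is now dimensioned (1 To lRows, 1 To lCols) automatically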
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Help regarding BizTalk
I have my BizTalk solution; up till now I have been able to do the following:
1) I took the SQL adapter as my source schema. I wanted node-wise XML, so I used FOR XML AUTO, ELEMENTS in my stored procedure so that it generates the schema node-wise.
2) I am able to loop through all the nodes and check a condition inside the loop with a Decide shape; the Decide shape is executing perfectly. Now the issue: I want to insert the current XML into a table. From all the XML nodes I am getting a single node's XML like the following:
<userDetails xmlns="http://SqlRowLooping"><userID>1</userID><fName>niladri</fName><lName>Roy</lName><department>it</department></userDetails>
Now I have an updategram as well, but I think it accepts data attribute-wise; right now it is firing an error saying it can't find procedure userID.
How do I insert this into the table, and how does the updategram work?
Thanks
A:
Change the XML node to conform to updategram syntax, see MSDN
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Text Replace Service with Xcode - Not replacing selected text
I am trying to build a standalone system service (app with .service extension, saved to ~/Library/Services/) to replace user-selected text in Mac OS X.
I want to build it with Xcode and not Automator, because I am more accustomed to Objective-C than Applescript.
I found several examples on the internet, e.g. this and also Apple's documentation. I got the Xcode project appropriately configured and building without problems. However, when I install my service and try to use it, nothing happens.
The service method itself is executed: I placed code to show an NSAlert inside its method body and it shows. However, the selected text does not get replaced.
Any idea what might be missing? This is the method that implements the service:
- (void) fixPath:(NSPasteboard*) pboard
userData:(NSString*) userData
error:(NSString**) error
{
// Make sure the pasteboard contains a string.
if (![pboard canReadObjectForClasses:@[[NSString class]] options:@{}])
{
*error = NSLocalizedString(@"Error: the pasteboard doesn't contain a string.", nil);
return;
}
NSString* pasteboardString = [pboard stringForType:NSPasteboardTypeString];
//NSAlert* alert = [[NSAlert alloc] init];
//[alert setMessageText:@"WORKING!"];
//[alert runModal];
// ^ This alert is displayed when selecting the service in the context menu
pasteboardString = @"NEW TEXT";
NSArray* types = [NSArray arrayWithObject:NSStringPboardType];
[pboard clearContents];
[pboard declareTypes:types owner:nil];
// Set new text:
[pboard writeObjects:[NSArray arrayWithObject:pasteboardString]];
// Alternatively:
[pboard setString:pasteboardString forType:NSStringPboardType];
// (neither works)
return;
}
A:
After careful reading of Apple's documentation, I found the answer: My service app's plist file was missing a key under the Services section:
<key>NSReturnTypes</key>
<array>
<string>NSStringPboardType</string>
</array>
I only had the opposite NSSendTypes key, which lets you send data from the client app to the service. This one is needed to send the modified text back (in the other direction).
It is weird because Apple's documentation seems to imply that specifying these two has not been necessary since 10.6 (Snow Leopard).
|
{
"pile_set_name": "StackExchange"
}
|
Q:
First cohomology group on a Riemann surface with all wedge products equal to zero
Sorry for the strong edit, but I realized my question had a easier formulation:
Can there be a Riemann surface $X$ with the property $\sigma\wedge \tau=0$ for every $\sigma,\tau\in H^1(X,\mathbb{C})$?
Here I mean $\sigma\wedge\tau =0\in H^2(X,\mathbb{C})$, i.e. $\sigma\wedge\tau$ is an exact 1-form.
My guess is no but I can't find a contradiction...
A:
Following Mike Miller's suggestion, consider the cylinder $X =S^1 \times \mathbb{R}$ (as a Riemann surface, you may view it as either $\mathbb{C} \setminus \{0\}$ or $\mathbb{C}/\mathbb{Z}$). As this deformation retracts onto the base circle and homology is a homotopy invariant, we know that $H_2(X;\mathbb{C}) \cong H_2(S^1;\mathbb{C}) =0$. As $\mathbb{C}$ is a field, cohomology is dual to homology, and so $H^2(X;\mathbb{C}) \cong (H_2(X;\mathbb{C}))^* = 0$.
Alternatively, if you want to work exclusively with (complex) de Rham cohomology, you may simply observe that $H_{dR}^2(X) \cong H_{dR}^2(S^1) = 0$ because there are no nontrivial $2$-forms on $S^1$.
Or, using the Künneth formula, $$\begin{align}H_{dR}^2(X) &= \left(H^2_{dR}(S^1) \otimes H^0_{dR}(\mathbb{R})\right) \oplus \left(H^1_{dR}(S^1) \otimes H^1_{dR}(\mathbb{R})\right) \oplus \left(H^0_{dR}(S^1) \otimes H^2_{dR}(\mathbb{R})\right)\\
&= (0 \otimes \mathbb{C}) \oplus (\mathbb{C}\otimes 0) \oplus (\mathbb{C} \otimes 0) = 0.\end{align}$$
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Magento 1.6 add tab to customer edit on back end
I've been trying to add a tab to the Customer Information page in Magento CE 1.6.
I've tried the following examples:
how to add custom tabs to magento customer edit page on mydons.com
how to add tab in customer information in magento admin on nextbits.eu
how to add tab in customer information on ankursphp.blogspot.com
The last one is the only one that even seems to come close. However, it has two problems. One is that it edits core files, and two is that when I click on the tab it spins and dies. Chrome DevTools shows the server reporting a 404.
Can anyone provide me with some decent documentation or code that is compatible with Magento 1.6?
Edit Adding the text from the last link as that seems to be the only one that remotely worked.
Override the file /app/code/core/Mage/Adminhtml/Block/Customer/Edit/Tabs.php. Inside the _beforeToHtml() method, add the following code:
$this->addTab('Custom',array(
'label' =>Mage::helper('customer')->__('Custom'),
'class' => 'ajax',
'url' => $this->getUrl('*/*/custom',array('_current'=>true)),
));
Override the file /app/code/core/Mage/Adminhtml/controllers/CustomerController.php, Add the following code:
public function customAction()
{ $this->_initCustomer();
$this->getResponse()->setBody(
Mage::app()->getLayout()->createBlock('core/template')->setTemplate('custom/customer/tab/custom.phtml')->setCustomerId(Mage::registry('current_customer')->getId())
->setUseAjax(true)->toHtml()
);
}
Create the folder /app/code/core/Namespace/ModuleName/Block/Adminhtml/Customer/Edit/Tab/, create Custom.php in it, and add the following source code to the file:
<?php
class Custom_Custom_Block_Adminhtml_Customer_Edit_Tab_Custom extends Mage_Adminhtml_Block_Widget_Form
{
public function __construct()
{
parent::__construct();
$this->setTemplate('custom/customer/tab/custom.phtml');
}
}
?>
Now, you need to create a template file. Go to /app/design/adminhtml/default/default/template/modulename/customer/tab/ and create custom.phtml,
A:
Managed to track down some out of the box code that works without any modification as a starting point:
http://www.engine23.com/magento-how-to-create-custom-customer-tab-and-submit-a-form.html
app/etc/modules/Russellalbin_Customertab.xml
<?xml version="1.0"?>
<config>
<modules>
<Russellalbin_Customertab>
<active>true</active>
<codePool>local</codePool>
</Russellalbin_Customertab>
</modules>
</config>
app/code/local/Russellalbin/Customertab/etc/config.xml
<config>
<modules>
<Russellalbin_Customertab>
<version>0.0.1</version>
</Russellalbin_Customertab>
</modules>
<adminhtml>
<layout>
<updates>
<customertab>
<file>customertab.xml</file>
</customertab>
</updates>
</layout>
</adminhtml>
<admin>
<routers>
<adminhtml>
<args>
<modules>
<russellalbin_customertab before="Mage_Adminhtml">Russellalbin_Customertab_Adminhtml</russellalbin_customertab>
</modules>
</args>
</adminhtml>
</routers>
</admin>
<global>
<blocks>
<customertab>
<class>Russellalbin_Customertab_Block</class>
</customertab>
</blocks>
</global>
</config>
app/code/local/Russellalbin/Customertab/Block/Adminhtml/Customer/Edit/Tab/Action.php
/**
* Adminhtml customer action tab
*
*/
class Russellalbin_Customertab_Block_Adminhtml_Customer_Edit_Tab_Action
extends Mage_Adminhtml_Block_Template
implements Mage_Adminhtml_Block_Widget_Tab_Interface
{
public function __construct()
{
$this->setTemplate('customertab/action.phtml');
}
public function getCustomtabInfo()
{
$customer = Mage::registry('current_customer');
$customtab = 'Mail Order Comics Pull List';
return $customtab;
}
/**
* Return Tab label
*
* @return string
*/
public function getTabLabel()
{
return $this->__('Customer Pull List');
}
/**
* Return Tab title
*
* @return string
*/
public function getTabTitle()
{
return $this->__('Pull List Tab');
}
/**
* Can show tab in tabs
*
* @return boolean
*/
public function canShowTab()
{
$customer = Mage::registry('current_customer');
return (bool)$customer->getId();
}
/**
* Tab is hidden
*
* @return boolean
*/
public function isHidden()
{
return false;
}
/**
* Defines after which tab, this tab should be rendered
*
* @return string
*/
public function getAfter()
{
return 'tags';
}
}
app/code/local/Russellalbin/Customertab/controllers/Adminhtml/CustomertabController.php
class Russellalbin_Customertab_Adminhtml_CustomertabController extends Mage_Adminhtml_Controller_Action
{
function resetAction()
{
$params = array();
$path = 'customer';
$customer_id = (int)$this->getRequest()->getParam('customer_id');
if($customer_id)
{
// Do your stuff here
$params['id'] = $customer_id;
$params['back'] = 'edit';
$params['tab'] = 'customer_edit_tab_action';
$path = 'adminhtml/customer/edit/';
}
$params['_store'] = Mage::getModel('core/store')->load(0);
$url = Mage::getModel('adminhtml/url')->getUrl($path, $params);
Mage::app()
->getResponse()
->setRedirect($url);
Mage::app()
->getResponse()
->sendResponse();
exit;
}
}
app/design/adminhtml/default/default/layout/customertab.xml
<?xml version="1.0"?>
<layout version="0.1.0">
<adminhtml_customer_edit>
<reference name="customer_edit_tabs">
<action method="addTab">
<name>customer_edit_tab_action</name>
<block>customertab/adminhtml_customer_edit_tab_action</block>
</action>
</reference>
</adminhtml_customer_edit>
</layout>
app/design/adminhtml/default/default/template/customertab/action.phtml
<div id="customer_info_tabs_customer_edit_tab_action_content">
<div class="entry-edit">
<div class="entry-edit-head">
<h4 class="icon-head head-edit-form fieldset-legend">Pull List</h4>
</div>
<div id="group_fields4" class="fieldset fieldset-wide">
<div class="hor-scroll">
<h2>This is my form and content that I can submit and return back to this page and tab</h2>
<div><form action="<?php echo $this->getUrl('adminhtml/customertab/reset/', array('customer_id'=> $customer_id)); ?>" method="get"><input type="submit" value="Post This Form" /></form></div>
</div>
</div>
</div>
</div>
|
{
"pile_set_name": "StackExchange"
}
|
Q:
How to set up Google+ authorship in Drupal 7
How do you set up Google+ authorship in Drupal 7?
So I can link my drupal website to our Google+ account so that our Google+ profile pic can appear in the search engines next to our site.
A:
The easiest way is with the Metatag module:
The Metatag module allows you to automatically provide structured
metadata, aka "meta tags", about your website. In the context of
search engine optimization, when people refer to meta tags they are
usually referring to the meta description tag and the meta keywords
tag that may help improve the rankings and display of your site in
search engine results.
After you install/enable the module, you can edit the metatags at admin/config/search/metatags The Google+ Author is under Advanced.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
How To Place Multiple Gantt Chart Bars On Same Row?
I currently have a gantt chart that displays dates of work requests. I've been asked to modify the gantt chart to only display the days work will be performed. For example, WR #1 begins 2/8/2016 and ends 2/25/2016. However, work is only performed Mon-Thur. My current gantt chart shows a bar from 2/8/2016 to 2/25/2016. I would like to break up the bar so that it only displays 2/8-2/11, 2/15-2/18, and 2/22-2/25. I would also like to display this single work request on one line. Is it possible to use SeriesChartType.RangeBar for this, or do I have to use a different chart type? I tried to create a series using multiple DataPoint objects with the same xValue, however when I added the series to the chart each DataPoint was displayed on a separate row. Am I doing something incorrectly?
Here is some code I am using for my chart setup:
using System.Windows.Forms.DataVisualization.Charting;
...
Series series1 = new Series();
series1.ChartArea = "Default";
series1.ChartType = SeriesChartType.RangeBar;
series1.YValuesPerPoint = 2;
series1.YValueType = ChartValueType.Date;
First attempt:
// WR #1
DataPoint dp0 = new DataPoint(0, new double[] { startDate0, stopDate0 });
DataPoint dp1 = new DataPoint(0, new double[] { startDate1, stopDate1 });
DataPoint dp2 = new DataPoint(0, new double[] { startDate2, stopDate2 });
// WR #2
DataPoint dp3 = new DataPoint(1, new double[] { startDate3, stopDate3 });
DataPoint dp4 = new DataPoint(1, new double[] { startDate4, stopDate4 });
DataPoint dp5 = new DataPoint(1, new double[] { startDate5, stopDate5 });
series1.Points.Add(dp0);
series1.Points.Add(dp1);
series1.Points.Add(dp2);
series1.Points.Add(dp3);
series1.Points.Add(dp4);
series1.Points.Add(dp5);
chart1.Series.Clear();
chart1.Series.Add(series1);
I have a few hundred work requests, so naturally I will use a loop to create each DataPoint for each work request and add it to the series.
A:
Try this:
private void Form1_Load(object sender, EventArgs e)
{
chart1.Series[0].Points.AddXY(1, new object[] { new DateTime(2016, 2, 8), new DateTime(2016, 2, 25) });
chart1.Series[0].Points.AddXY(2, new object[] { new DateTime(2016, 2, 8), new DateTime(2016, 2, 11) });
chart1.Series[0].Points.AddXY(2, new object[] { new DateTime(2016, 2, 15), new DateTime(2016, 2, 18) });
chart1.Series[0].Points.AddXY(2, new object[] { new DateTime(2016, 2, 22), new DateTime(2016, 2, 25) });
}
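For a few hundred work requests the same idea works in a loop; a rough sketch (the segments list and its property names are illustrative, not from the original code):
// One entry per work segment; several segments share a WorkRequestIndex,
// so they land on the same row because their points share the same X value.
foreach (var seg in segments)
{
    chart1.Series[0].Points.AddXY(
        seg.WorkRequestIndex,
        new object[] { seg.Start, seg.Stop });
}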
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Timeout with SimpleTimeLimiter - shutting down application take much time
I am using Guava's SimpleTimeLimiter to get timeout functionality. The problem is that shutting down the app (once it's finished) takes 30s as soon as I use the SimpleTimeLimiter (this time does not change if I change the timeout). If I call new MyCallable().call() directly, all works fine - the app shuts down as soon as the last task is finished.
The app itself has an own shutdown hook to be able to handle ctrl-c (to finish last task). The app uses a H2- embedded db and Network.
I tried to profile with VisualVM - the time at the end is not recorded?! This long waiting period takes place before my shutdown hook is called (probably another shutdown hook?).
Any ideas how to fix this?
A:
When you create SimpleTimeLimiter with the default constructor, it creates its own Executors.newCachedThreadPool() that you can't control, so your application waits until all of those threads have completed. From the Javadoc:
... Threads that have not been used for sixty seconds are
terminated and removed from the cache....
If you create your own ExecutorService and construct SimpleTimeLimiter with that executorService, then you can shut the executorService down in your shutdown hook.
private final ExecutorService executor;
private final TimeLimiter timeLimiter;
...
executor = Executors.newCachedThreadPool();
timeLimiter = new SimpleTimeLimiter(executor);
...
public void shutdown() {
if (executor == null || executor.isShutdown()) {
return;
}
executor.shutdown();
try {
executor.awaitTermination(5, TimeUnit.SECONDS);
} catch (InterruptedException e) {
log.log(Level.WARNING, "Interrupted during executor termination.", e);
Thread.currentThread().interrupt();
}
executor.shutdownNow();
}
|
{
"pile_set_name": "StackExchange"
}
|
Q:
How to add background images for jumbotron in bootstrap?
I started with Bootstrap just 2 days ago.
The problem I'm facing now is that I'm unable to set a background image for my whole page or even my jumbotron. I tried giving the path directly (both relative and absolute) on the div tag, but it didn't work. I have also tried through CSS, but I still don't see any background image in my output. Please help!
I mainly want to have a background image for the jumbotron.
This is the code i wrote:
Html:
<div class="row">
<div class="jumbotron">
<div class="container">
<center>
<h1>Avudo computers</h1>
<h5>Redesigning HOPES.</h5>
</center>
</div>
</div>
</div>
Css:
background-image: url(../img/jumbotronbackground.jpg);
background-position: 0% 25%;
background-size: cover;
background-repeat: no-repeat;
color: white;
text-shadow: black 0.3em 0.3em 0.3em;
A:
It seems like you simply haven't declared a class/ID in your CSS.
You have to attach .jumbotron to your CSS rules.
See example.
body,
html {
background: url(http://i.telegraph.co.uk/multimedia/archive/02423/london_2423609k.jpg) center center no-repeat;
background-size: cover;
height: 100%;
}
div.jumbotron {
background-image: url(http://placehold.it/1350x550/f00/f00);
background-position: 0% 25%;
background-size: cover;
background-repeat: no-repeat;
color: white;
text-shadow: black 0.3em 0.3em 0.3em;
}
<link href="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.5/css/bootstrap.min.css" rel="stylesheet" />
<div class="jumbotron">
<div class="container">
<center>
<h1>Avudo computers</h1>
<h5>Redesigning HOPES.</h5>
</center>
</div>
</div>
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Default tab by date or day
I'm building a TV listing and I'm using quicktabs module with tabs:
Monday
Tuesday
Wednesday
Thursday
Friday
Saturday
Sunday
Every tab shows a node that have a TV listing.
Is it possible these tabs to be shown by date/day?
Example: today is Monday, so when a user enters the page for TV schedule the active tab will be Monday.
I've inspected the tab links and I find out that every tab has additional link to the main URL.
Example: The first tab (in mine case - Monday) has /program?qt-quicktabs_tv_schedule=0#qt-quicktabs_tv_schedule
The second tab (in my case - Tuesday) has /program?qt-quicktabs_tv_schedule=1#qt-quicktabs_tv_schedule
etc.
EDIT:
I've tried this code:
(function($){
$(document).ready(function(){
var d = new Date();
var n = d.getDay();
$(".quicktabs-tabs li").removeClass('active');
$('#quicktabs-tab-quicktabs_tv_schedule-0' + n).parent().addClass('active');
});
})(jQuery);
Removing the active class works, but changing the day based on
var d = new Date();
var n = d.getDay();
it's not working. Maybe I'm missing something...
EDIT 2:
Here's what I've tried:
(function($){
$(document).ready(function(){
var d = new Date();
var n = d.getDay()-1;
if ( n == -1) {
n = 6;
};
$(".quicktabs-tabs li").removeClass('active');
$('#quicktabs-tab-quicktabs_tv_schedule' + n).parent().addClass('active');
$('.quicktabs-tabpage').addClass('quicktabs-hide');
$('#quicktabs-tab-page-id-' + n).removeClass('quicktabs-hide');
});
})(jQuery);
With this code it simply hides the default tab.
A:
You can use javascript for this.
Each quicktab "A" tag has an ID something like quicktabs-tab-name-0 (replace name with the quicktab name you have given)
You can use something like the code below to get the current day of the week:
var d = new Date();
var n = d.getDay();
this would return a number which would be something from 0 to 6 where 0 Represents Sunday and so on.
The active class is set to li tag with in which the a tag is placed.
So first we need to remove the active class from all the li with in quick tab this can be done with
$(".quicktabs-tabs li").removeClass('active');
Next we need to add the active class to the current day's tab for which you could do something like.
$('#quicktabs-tab-name-' + n).parent().addClass('active');
// n is the day number we got from the code above.
Next you need to enable the tab page for the corresponding tab for that you would do the following. The first one would add the hide class to all tab pages and the second code removes the hide class from the current div.
$('.quicktabs-tabpage').addClass('quicktabs-hide');
$('#quicktabs-tab-page-id-' + n).removeClass('quicktabs-hide');
That should do it.
Just make sure that the tabs are arranged from Sunday to Saturday, so that the quicktab IDs get the same numbering. If you want a different order, you can offset the number accordingly by adding the offset value and then taking the remainder of a division by 7.
So in effect the complete code would be
var d = new Date();
var n = d.getDay();
$(".quicktabs-tabs li").removeClass('active');
$('#quicktabs-tab-name-' + n).parent().addClass('active');
$('.quicktabs-tabpage').addClass('quicktabs-hide');
$('#quicktabs-tab-page-id-' + n).removeClass('quicktabs-hide');
This would go inside the document ready function.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Runtime error 1004 unable to get the match property when using two index/match
I'm trying to make the code below work, but unfortunately the second Index/Match throws an error. The code works if I remove the "count" variable from the second part (it works fine in the first Index/Match) and refer to a single cell instead; no idea why.
I tried initializing a new counter variable, but part 2 still throws an error. Also, is there a better way to refer to cells in a range in a for loop, instead of the clumsy solution I used?
Thanks!
Dim sht As Worksheet
Dim LastRow As Long
Dim count As Integer
Set sht = ActiveSheet
LastRow = sht.Cells(sht.Rows.count, "A").End(xlUp).Row
count = 2
For Each i In Range("f2:f" & LastRow)
With Application.WorksheetFunction
i.Value = .Index(Worksheets("Area").Range("c:c"), .Match(Range("E" & count), Worksheets("Area").Range("a:a")))
End With
count = count + 1
Next
count = 2
For Each i In Range("h2:h" & LastRow)
i.Value = count
With Application.WorksheetFunction
i.Value = .Index(Worksheets("Park reason").Range("C:C"), .Match(Range("G" & count), Worksheets("Park reason").Range("A:A")))
End With
count = count + 1
Next
End Sub
A:
Most likely a match was not found. Test for a match not found first. If you use Application.Match you can use the error returned in a test to see if a match was found before attempting to get your i.value. Do the same for both Match attempts.
With Application.WorksheetFunction
Dim test As Variant
test = Application.Match(Range("E" & count), Worksheets("Area").Range("a:a"), 0)
If Not IsError(test) Then
i.Value = .Index(Worksheets("Area").Range("c:c"), test)
End If
End With
I would probably re-write as:
With sht
Dim test As Variant
test = Application.Match(.Range("E" & count), Worksheets("Area").Range("A:A"), 0)
If Not IsError(test) Then
i.Value = Application.WorksheetFunction.Index(Worksheets("Area").Range("C:C"), test)
End If
End With
I would also look to work with smaller ranges than entire columns i.e. not "C:C" for example. Find the used range/last row and work up to that.
Fuller version:
Option Explicit
Sub test()
Dim sht As Worksheet
Dim LastRow As Long
Dim count As Long
Set sht = ActiveSheet
LastRow = sht.Cells(sht.Rows.count, "A").End(xlUp).Row
With sht
count = 2
Dim i As Range, test As Variant
For Each i In .Range("F2:F" & LastRow)
test = Application.Match(.Range("E" & count), Worksheets("Area").Range("A:A"), 0)
If Not IsError(test) Then
i.Value = Application.WorksheetFunction.Index(Worksheets("Area").Range("C:C"), test)
End If
count = count + 1
Next
count = 2
Dim test2 As Variant
For Each i In .Range("H2:H" & LastRow)
test2 = Application.Match(.Range("G" & count), Worksheets("Park reason").Range("A:A"))
If Not IsError(test2) Then
i.Value = Application.WorksheetFunction.Index(Worksheets("Park reason").Range("C:C"), test2)
End If
count = count + 1
Next
End With
End Sub
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Building executable JAR in NetBeans not working
I have seen a lot of references to this type of problem on StackOverflow and other places on the internet but the solution everybody else is happy with isn't working for me.
The issue:
I've created Java projects and would like to run them via executable .jar files. When I try to run a .jar file for my project I get a "Could not find the main class: classname. Project will exit." error.
Solutions I've read about:
- editing the 'main class' in the project properties "Run" tab and choosing the location of the main class.
- editing the manifest file to include:
Main-Class: classname
None of this has worked. Even with the right class entered in the project properties and an updated manifest file, I still get the main-class-not-found error, and I have run out of ideas on how to fix it.
Any help would be more than slightly appreciated.
EDIT:
Here is a copy of my actual manifest file in its entirity:
Manifest-Version: 1.0
Main-Class: TestCode
<invisible blank line here>
I've heard that a blank line is required in the .mf file so I've put one in there just in case.
The project name is TestCode; it is in the "default package" under TestCode.java.
EDIT 2:
I unpacked the .jar file and looked at its contents: the manifest.mf file inside the .jar has the correct class path listed for the .class file which contains the main method (most of these projects have only one .class file), and yet I still get the "Could not find the main class" error.
The main class is clearly inside the .jar file, the manifest properly points to it and it still won't run the program.
A:
Write
Main-Class: packagename.ClassName
The attribute name must be exactly Main-Class, with no space before the hyphen and a single space after the colon.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
What to do in a case of no siddur available and additions to the Shemoneh Esrei?
(Feel free to write a better title.)
Today I ended up getting to shul too late for mincha, and having to daven outside (still within the proper time). Now, I know shemoneh esrei by heart, and even yaaleh v'yavo, but not al hanisim. Luckily, I had a siddur in my pocket (no, not a smartphone), and was able to read from it. But that made me wonder: what if I didn't have the text?
If I had not been able to say al hanisim due to not knowing it by heart, would it be better to say shemoneh esrei in the proper time without al hanisim, or to miss mincha and daven tashlumin at maariv, with the proper additions?
What about yaaleh v'yavo? The difference is that if yaaleh v'yavo is forgotten, the shemoneh esrei must be repeated, while that is not the case with al hanisim.
To avoid working around the question: There's no mincha minyan available, no siddur or any text available, and no time to go look it up.
A:
Rav Shlomo Zalman was asked what someone should do who does not have a siddur available on shabbos or yom tov and knows only the weekday shmoneh esrei by heart. Rav Shlomo Zalman answered that he should say the weekday shmoneh esrei with yaaleh v'yavo, and if it is just shabbos he should say the weekday shmoneh esrei, say yaaleh v'yavo, and insert "ba'yom ha'shabbos ha'zeh".
Text of V'aleiu lo Yibul :
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Creative Programming
Make a story out of your programming.
Example in JavaScript:
self.book = {
"story": function() {
var once = "upon",
aTime = setTimeout(function() {
// Code would continue...
}, 1000)
}
};
self.book.story();
Stipulations:
Must run error-free before, during, and after it's compiled.
You can only use up to two words for the story per String/name.
JavaScript Example:
var story = "Once upon a"; // Wrong (Using more than two words)
var story = "Onceupona"; // Wrong (Using more than two "words")
var onceUponA = "time"; // Wrong (more than two words as a variable name)
var onceUpon = "a time"; // This is fine
var story = "Once upon"; // This is fine
The story must be a complete sentence (at least).
Having some sort of output (such as "printing" the story itself) is not necessary, but it's a plus.
Bring some creativity into it.
Since there are no length rules, the answer with the most votes / best creativity will win. :)
A:
JavaScript
Not sure how historically accurate this is, but it's a mini-history of ECMAScript. Please feel free to suggest improvements.
function story() {
var IE = {
from: "Microsoft"
},
Netscape = {
from: "Mozilla"
};
var setUp = {
oncethere: "were two",
browsers: IE + Netscape
};
var parts = {
And: function() {
var theyfought = "to be",
theBest = "browser";
},
oneday: function() {
var they = {
added: function() {
var add = "scripting languages";
Netscape.language = add;
IE.language = add;
return add;
},
thought: function() {
if (what(they.added) === good) {
they.wouldBeat = "the other";
}
}
};
},
andso: function() {
function callLanguage(name) { return name };
Netscape.language = callLanguage("Javascript");
IE.language = callLanguage("JScript");
},
butThen: function() {
var ECMA = "Standards Committee";
(function standardized(languages) {
(function into() {
return "ECMAScript";
})();
})([IE.language, Netscape.language]);
},
theEnd: function() {
return {
andWe: "all lived",
happilyEver: "after..."
};
},
what: function(thing) {
return thing;
},
good: true || false
};
}
story();
A:
JavaScript
'How';do{'computers'^Function}while(0);'they have'|'no power?'
The output is: 0 on console :D
A:
Reminds me of LOLCode, everything is sort of a story (or at least a "conversation"):
HAI
CAN HAS STDIO?
I HAS A VAR
IM IN YR LOOP
UPZ VAR!!1
VISIBLE VAR
IZ VAR BIGR THAN 10? GTFO. KTHX
KTHX
KTHXBYE
|
{
"pile_set_name": "StackExchange"
}
|
Q:
How to migrate from Google Apps Mail
I want to migrate some Google Apps accounts to a local mail server. Are there any scripts available, free or paid, which solve that problem? I will have to migrate about 30 mail accounts this way; a simple IMAP copy via my mail client won't do here.
A:
Is this a Google Apps account? You can request entire mbox archives of users' email accounts.
The API: https://developers.google.com/admin-sdk/email-audit/?csw=1
Command line tool: https://code.google.com/p/google-apps-manager/
I wish this were available via Google Takeout, but it isn't.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Partial answer in comments
I have recently asked a question here. One answer solved my issue partially, but the answerer helped in solving the rest of the issue in the question comments (not the answer comments). In fact, most of his answer is in comments.
I have asked him to include that information in the answer, and I am sure he will do it too. But I was just wondering: what if he doesn't? Should we flag the comments or the answer so they can be merged into the original answer, or can we simply ignore it?
I am concerned because I have read in one of the questions here that people don't usually read comments, and since the answer was in the question comments and not the answer comments, I am sure that it won't get enough eyeballs.
I couldn't find any similar question, so posted it here.
A:
Wait for some time (a few days, as the answerer might have had urgent things to take care of and so couldn't edit the answer), and if they still don't add the important parts to their answer, you can edit the answer. But please do not make drastic changes to the answer.
If the edit requires a lot of changes, then it's better to post your own answer giving the full details. You can take your time, organize everything, and post a good-quality answer.
By posting another answer, if you feel bad about gaining reputation due to someone else's effort, you can mention that yours is an extension of the other answer and mark your answer as community wiki so you don't feel bad.
Do not flag the comments or answers, as there is nothing wrong with them. Most of the time, users post comments because they cannot guarantee that that is the actual reason, and when the problem is confirmed, they can post the answers without fear.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Plot Using User-defined Coordinates
Suppose instead of using the Cartesian coordinates $\vec{x}=(1, 0)$ and $\vec{y}=(0, 1)$, I want to define new coordinates $\vec{a} = (1,0)$ and $\vec{b} = (\frac{1}{2},\frac{\sqrt{3}}{2} )$, and then plot a function, for example $f(\vec{r}) = f(c, d)$:
f[c_, d_] := d
Plot3D[f[c, d], {c, 0, 1}, {d, 0, 1}]
Is there a way it will interpret the plotting region and function as $d*\vec{b}$ instead of $d*\vec{y}$?
I know I can probably redefine the new unit vectors $\vec{a}$ and $\vec{b}$ in Cartesian coordinates, but it would be convenient if I could redefine the coordinate system.
I hope I'm clear enough about what I'm trying to do, and thanks in advance!
A:
ClearAll[f]
f[c_, d_] := d
p1 = Plot3D[f[c, d], {c, 0, 1}, {d, 0, 1}, ImageSize -> 300];
f2 = Rescale[f[##], {0, 1}, {1, Sqrt[3]}/2] &;
p2 = Plot3D[ f2[c, d], {c, 0, 1}, {d, 0, 1}, ImageSize -> 300];
Row[{p1, p2}, Spacer[5]]
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Controller in other package ( CakePHP 2.1 )
First of all, sorry for my bad English.
I want to define routing for the URLs below.
For example, my controller is in "/app/Controller/Services/TypesController.php"
My controller has 2 actions,
index() and property();
How do I define routing for these URLs?
/services/types/property
/services/types/index
A:
You cannot create subfolders in /Controller.
It is just
/app/Controller/TypesController.php
url result:
/types/action/
OR - you can make it a plugin:
/app/Plugin/Services/Controller/TypesController.php
url result:
/services/types/action/
But that's pretty well documented: http://book.cakephp.org/2.0/en/plugins.html
|
{
"pile_set_name": "StackExchange"
}
|
Q:
What is the best way to compress a sequence of about 2 million numbers (value range 1 to 28)?
I am trying to compress a list of integer numbers, where:
There are no negative numbers.
The values are in the range [1..28].
There are 2,482,113 items in the list in total.
Currently I'm using 5 bits to store each number.
The frequency statistics are as below:
1 : 1242149
2 : 620038
3 : 309399
4 : 154983
5 : 77816
6 : 38601
7 : 19651
8 : 9790
9 : 4830
10 : 2447
11 : 1253
12 : 597
13 : 303
14 : 130
15 : 73
16 : 23
17 : 17
18 : 4
19 : 4
20 : 2
21 : 1
23 : 1
28 : 1
So please show me the best way to compress this kind of data (an estimated compression ratio, if possible, would be highly appreciated).
A:
With that sort of distribution (a), you'd probably want to look into a variable-length encoding scheme such as Huffman. That will give you far better compression than the fixed 5-bit size. They work by using less bits to indicate more common values (and more bits to represent uncommon values) to drive down the average bit width.
Just taking a simple case, let's say a 0 bit represents the number one and all the other numbers are represented by a 1 bit followed by your current 5-bit scheme.
That means you save four bits for each value of one (1,242,149 x 4 = 4,968,596 bits) and "waste" one bit for all the other values (1,239,964 bits), a net saving of 3.7 million bits.
That's a "hardcoded" Huffman scheme for your particular data set meant as an illustration of how it works, you'd probably want to be a little more adaptive for arbitrary data sets.
And expanding it to include more of the larger quantities makes an added improvement. We already know the savings for the top value:
Bit pattern Value Quantity Saved bits
0 1 1,242,149 4,968,596 (4 per)
1xxxxx >1 1,239,964 1,239,964- (1 per)
---------
Net saving 3,728,632 (extra return 3,728,632)
For the top two values:
Bit pattern Value Quantity Saved bits
0 1 1,242,149 4,968,596 (4 per)
10 2 620,038 1,860,114 (3 per)
11xxxxx >2 619,926 1,239,852- (2 per)
---------
Net saving 5,588,858 (extra return 1,860,226)
And for the top three:
Bit pattern Value Quantity Saved bits
0 1 1,242,149 4,968,596 (4 per)
10 2 620,038 1,860,114 (3 per)
110 3 309,399 618,798 (2 per)
111xxxxx >3 310,527 931,581- (3 per)
---------
Net saving 6,515,927 (extra return 927,069)
And for the top four:
Bit pattern Value Quantity Saved bits
0 1 1,242,149 4,968,596 (4 per)
10 2 620,038 1,860,114 (3 per)
110 3 309,399 618,798 (2 per)
1110 4 154,983 154,983 (1 per)
1111xxxxx >4 155,544 622,176- (4 per)
---------
Net saving 6,980,315 (extra return 464,388)
At this level, your scheme of having a fixed five bits per number results in 12,410,565 bits. With a net saving of 6,980,315 bits, the total compressed size is now 5,430,250 bits, a saving of about 56% over the fixed-bit-size method.
You can see the extra return on investment diminishing pretty quickly as more values are added. Beyond the top four values, you don't save anything with this hardcoded scheme since the bit savings per item go to zero (and negative after that). A truly adaptive encoding would give you more savings (since it's also optimising the xxxxx bit) but probably not much.
(a) A very much contrived distribution by the looks of it. Each quantity is about half the previous quantity, making the variable length encoding an ideal solution.
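If it helps to sanity-check the arithmetic above, here is a minimal sketch (in Python, chosen only for brevity since the question doesn't name a language) that builds Huffman code lengths from the question's frequency table and compares the result against the fixed 5-bit encoding:
import heapq

freqs = {
    1: 1242149, 2: 620038, 3: 309399, 4: 154983, 5: 77816, 6: 38601,
    7: 19651, 8: 9790, 9: 4830, 10: 2447, 11: 1253, 12: 597, 13: 303,
    14: 130, 15: 73, 16: 23, 17: 17, 18: 4, 19: 4, 20: 2, 21: 1, 23: 1, 28: 1,
}

def huffman_code_lengths(freqs):
    # Each heap entry is (subtree weight, tiebreaker, {symbol: code length so far}).
    heap = [(w, i, {sym: 0}) for i, (sym, w) in enumerate(freqs.items())]
    heapq.heapify(heap)
    counter = len(heap)
    while len(heap) > 1:
        w1, _, a = heapq.heappop(heap)
        w2, _, b = heapq.heappop(heap)
        # Merging two subtrees adds one bit to every code inside them.
        merged = {sym: length + 1 for sym, length in {**a, **b}.items()}
        heapq.heappush(heap, (w1 + w2, counter, merged))
        counter += 1
    return heap[0][2]

lengths = huffman_code_lengths(freqs)
huffman_bits = sum(freqs[sym] * lengths[sym] for sym in freqs)
fixed_bits = 5 * sum(freqs.values())
print("fixed 5-bit size :", fixed_bits, "bits")
print("huffman estimate :", huffman_bits, "bits",
      "({:.0f}% of the fixed size)".format(100.0 * huffman_bits / fixed_bits))
On a distribution like this, where each count is roughly half the previous one, the optimal code comes out close to 2 bits per value on average, which is consistent with the hand-worked savings in the tables above.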
A:
Look at Huffman coding. I don't know the exact details off the top of my head, but the basic principle is to assign fewer bits to more common numbers and more bits to less common numbers, as needed, so that overall the average number of bits per value is less than what a fixed-width encoding would need (~5 bits per value here).
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Unable to Run Query, dbo.tablename invalid object name?
I am trying to build a dynamic query that will insert or update a record based on whether the row already exists in the db; if it does, I will update a bunch of records and sub-records depending on @ObjectID.
Here is my query:
DECLARE @ObjectID BIGINT = 0;
SET @ObjectID = 0;
IF NOT EXISTS (SELECT ID
FROM dbo.ResortInfo
WHERE dbo.ResortInfo.resortCode = N'PYI')
BEGIN
INSERT INTO dbo.ResortInfo (columns)
VALUES (colvalues)
SET @ObjectID = SCOPE_IDENTITY()
PRINT @ObjectID
END
ELSE
BEGIN
PRINT 'Already exists' -- update query will replace here
END
The query runs OK without the DECLARE part, but when I add
DECLARE @ObjectID BIGINT = 0;
SET @ObjectID = 0;
I get the following error :
Msg 208, Level 16, State 1, Line 4
Invalid object name 'dbo.ResortInfo'
A:
I would double-check that you have the correct database selected. You may currently have master selected, but your table may live in another database.
In your stored procedure you could add this at the top to ensure you are using the correct one.
USE [<your db name here>]
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Address of an overloaded C++ template function involving SFINAE
I need to get the address of an overloaded template function that involves SFINAE. A good example of this scenario would be boost::asio::spawn found here...
https://www.boost.org/doc/libs/1_70_0/doc/html/boost_asio/reference/spawn.html
How would I find the address of this particular instance...
template<
typename Function,
typename Executor>
void spawn(
const Executor & ex,
Function && function,
const boost::coroutines::attributes & attributes = boost::coroutines::attributes(),
typename enable_if< is_executor< Executor >::value >::type* = 0);
I've unsuccessfully tried this...
using Exec = boost::asio::io_context;
using Func = std::function<void(boost::asio::yield_context)>;
void (*addr)(Exec, Func) = boost::asio::spawn;
A:
boost::asio::spawn is not a function. It is a function template. It's a blueprint from which functions can be created. There's no way to get a pointer to a function template because it's a purely compile-time construct.
boost::asio::spawn<Func, Exec> is a function overload set, but it has no overload that matches the signature void(Exec,Func). Remember, default function arguments are just syntactic sugar. Those arguments are still part of the function's signature.
Those two issues make getting a pointer to boost::asio::spawn hard and ugly. It would be much easier to use a lambda. A lambda will let you preserve type deduction and take advantage of the default arguments:
auto func = [](auto&& exec, auto&& func) {
boost::asio::spawn(std::forward<decltype(exec)>(exec),
std::forward<decltype(func)>(func));
};
Even if you absolutely need a function pointer, a lambda is still probably the way to go. You lose parameter type deduction, but can still take advantage of the function's default arguments:
void(*addr)(const Exec&, Func) = [](const Exec& exec, Func func) {
boost::asio::spawn(exec, std::move(func));
};
This works because captureless lambdas can be converted to raw function pointers.
If you really, absolutely need a pointer directly to one of the spawn instantiations for some reason, you can get it, but it's not pretty:
using Exec = boost::asio::io_context::executor_type;
using Func = std::function<void(boost::asio::yield_context)>;
void(*addr)(const Exec&, Func&, const boost::coroutines::attributes&, void*) = boost::asio::spawn<Func&, Exec>;
You lose a lot in doing so, though. Not only do you lose argument type deduction and the default arguments, you also lose the ability to pass both lvalues and rvalues to the function, since you no longer have a deduced context for forwarding references to work in. I've chosen to get a pointer to the instantiation accepting an lvalue reference to the function. If you want it to accept rvalue references instead, use
void(*addr)(const Exec&, Func&&, const boost::coroutines::attributes&, void*) = boost::asio::spawn<Func, Exec>;
Also note that this function takes four parameters. Call it with, e.g.,
addr(my_io_context.get_executor(), my_function, boost::coroutines::attributes{}, nullptr);
Example
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Silverlight 4 - Loading a new webpage on button click?
In older versions of Silverlight, to open a new webpage I'd use HtmlPage.Navigate, but that doesn't appear to work in Silverlight 4. (Yes, I've loaded System.Windows.Browser.)
Thanks in advance!
-Sootah
A:
From MSDN, might using HtmlPage.Window.Navigate work?
System.Windows.Browser.HtmlPage.Window.Navigate(
new Uri("http://silverlight.net"),
"_blank", "height=300,width=600,top=100,left=100");
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Microsoft Office 2011 Mac - Removing individual components
I've installed Office 2011 fully, but now need to remove Office, Messenger, and maybe some other components. How can I do it the correct way, so that it cleans up all related data?
A:
Follow these instructions from Microsoft's knowledge base. Check on the linked page for up to date instructions, but to summarize: Delete the following (* being a wildcard):
/Applications/Microsoft Office 2011/
~/Library/Preferences/com.microsoft.*
~/Library/Application Support/Microsoft/Office 2011 or ~/Library/Preferences/Microsoft/Office 2011
/Library/LaunchDaemons/com.microsoft.office.licensing.helper.plist
/Library/PrivilegedHelperTools/com.microsoft.office.licensing.helper
/Library/Preferences/com.microsoft.office.licensing.plist
/Library/Application Support/Microsoft/ (This will also remove Silverlight)
/Library/Receipts/Office2011_*
/private/var/db/receipts/com.microsoft.office*
~/Library/Application Support/Microsoft/Office/
/Library/Fonts/Microsoft/
~/Documents/Microsoft User Data/
Restart afterwards.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Django: How to filter for keyword in multiple fields on a model?
I have an Article model with title and body fields. I'm building a search functionality and need to filter for articles that have keywords in either the title or the body field.
I have two Articles. One has "candy" in the title and the other has "candy" in the body, so my filtered result should include both articles. I'm trying the query below, but it's bringing me only the first article:
Article.objects.filter(title__icontains='candy').filter(body__icontains='candy')
Thx
A:
You need to use Q objects.
Article.objects.filter(Q(title__icontains='candy')|Q(body__icontains='candy'))
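Q lives in django.db.models, so with the import included (and, as a sketch, generalized to any number of keywords — search_articles is just a made-up helper name for illustration):
from functools import reduce
import operator

from django.db.models import Q

def search_articles(*keywords):
    # OR together a title/body match for every keyword.
    clauses = [Q(title__icontains=kw) | Q(body__icontains=kw) for kw in keywords]
    return Article.objects.filter(reduce(operator.or_, clauses))

results = search_articles("candy")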
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Is there a special significance attached to a Latin Mass?
Every so often, one of my Catholic acquaintances will mention that they will be, or have just attended a Latin Mass. I know pretty much nothing about a Latin Mass, other than (obviously) that it's performed in Latin.
Is there a spiritual, ritual, historical, or special significance about a Latin Mass? Or is it more something that people attend simply because they enjoy the uniqueness of it, like it's something that's just "neat" or "interesting" to attend rather than something with a deeper meaning?
If there is a deeper meaning, of course, I'd like to know what it is.
A:
"Mass in Latin" vs. "Latin Mass"
It's important to understand that the term Latin Mass is almost always a colloquial and somewhat imprecise reference to the Extraordinary Form of the Roman Rite.
This is the form of the liturgy that, in its essentials, was used by almost all Western Christians from at least the early Middle Ages until the Reformation, was codified for the great majority of Roman-Rite Catholics in 1570, and has remained fairly fixed up to and including the latest edition, which was published in 1962.
The older form of the mass is relatively rare and hard to find these days (though it is enjoying a gradual resurgence) -- most Catholics instead use the Ordinary Form of the Roman Rite, which was first published in 1970 as the initial answer to the Second Vatican Council's call for a reform and revision of the Church's worship.
The official texts of both forms are promulgated in Latin by the Vatican, but the older form must, by law, be celebrated in Latin (with a tiny bit of Greek), whereas the newer form may be celebrated in Latin, but rarely is.
Hence the informal term "Latin Mass", which really signifies the ritual more than the language, but does so by way of the most immediately-obvious distinguishing feature.
Why anyone cares
The two forms are strikingly different. They share the same overall structure and have many elements in common, but also many significant areas of divergence in structural detail, ceremonial rules, vesture, and the particular texts and music prescribed for various parts of the mass and various days of the year. Typically, though not in every single case, these divergences culminate in a strikingly different atmosphere of worship at church on Sunday morning.
This is not perhaps the place to discuss whether one form is better than the other, but it's plain that some people energetically prefer one form to the other.
A:
The Roman Catholic Mass was celebrated in Latin from about the year 400 until the Second Vatican Council in 1962. With over 1500 years of Catholics celebrating the Mass in Latin, it holds a special place in many people's hearts. There is a uniqueness to it, but for me it is more about knowing that I am celebrating the Mass in the same language as Catholics have for hundreds of years before me.
Recently, the English-language Roman Catholic Mass was rewritten to better reflect the language of the Latin Mass. Many of the changes are more direct translations of the Latin Mass.
Many people felt that the transition away from the Latin Mass in 1962 "corrupted" the Mass and took away from the universality of the liturgies. In Vatican II, the Church concluded that saying the Mass in the native tongue of the area would be more spiritually fulfilling, reverting to the earlier practice, before the start of the Latin Mass, when the Mass was said in the local language.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Cookie expiration is browser specific or server specific?
I am setting a cookie with an expiration time of
mktime(24,0,0).
My question is simple: if the browser's timezone is different, will the cookie expire according to the server's timezone or the browser's timezone?
A:
The Set-Cookie header has timezone information as part of the expires datetime so the user agent knows when it should expire.
Set-Cookie: sessionToken=abc123; Expires=Wed, 09 Jun 2021 10:18:14 GMT
From the php docs for setcookie
expire
...
Note: You may notice the expire parameter takes on a Unix timestamp,
as opposed to the date format Wdy, DD-Mon-YYYY HH:MM:SS GMT, this is
because PHP does this conversion internally.
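To see why the browser's timezone doesn't matter, note that the Expires value is an absolute instant expressed in GMT. As a small illustration of the conversion the note above describes (written in Python rather than PHP, purely to show the header format), a Unix timestamp turns into that GMT string like this:
from datetime import datetime, timezone

expire = 1623233894  # a Unix timestamp, like the one you would pass to setcookie()
gmt_string = datetime.fromtimestamp(expire, tz=timezone.utc).strftime(
    "%a, %d %b %Y %H:%M:%S GMT")
print("Set-Cookie: sessionToken=abc123; Expires=" + gmt_string)
# should print: Set-Cookie: sessionToken=abc123; Expires=Wed, 09 Jun 2021 10:18:14 GMT
Whatever local timezone the server or the browser is in, both ends compare against the same GMT instant.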
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Parsing JSON data in Perl
I am parsing JSON data which is in a .json file. I have 2 formats of JSON data files.
I could parse the first JSON file, which is shown below:
file1.json
{
"sequence" : [ {
"type" : "type_value",
"attribute" : {
"att1" : "att1_val",
"att2" : "att2_val",
"att3" : "att3_val",
"att_id" : "1"
}
} ],
"current" : 0,
"next" : 1
}
Here is my script:
#!/usr/bin/perl
use strict;
use warnings;
use Data::Dumper;
use JSON;
my $filename = $ARGV[0]; #Pass json file as an argument
print "FILE:$filename\n";
my $json_text = do {
open(my $json_fh, "<:encoding(UTF-8)", $filename)
or die("Can't open \$filename\": $!\n");
local $/;
<$json_fh>
};
my $json = JSON->new;
my $data = $json->decode($json_text);
my $aref = $data->{sequence};
my %Hash;
for my $element (@$aref) {
my $a = $element->{attribute};
next if(!$a);
my $aNo = $a->{att_id};
$Hash{$aNo}{'att1'} = $a->{att1};
$Hash{$aNo}{'att2'} = $a->{att2};
$Hash{$aNo}{'att3'} = $a->{att3};
}
print Dumper \%Hash;
Everything is getting stored in %Hash, and when I print Dumper of the %Hash I get the following result:
$VAR1 = {
'1' => {
'att1' => 'att1_val',
'att2' => 'att2_val',
'att3' => 'att3_val'
}
};
But when I parse the second JSON file, I get an empty hash using the above script.
Output:
$VAR1 = {};
Here is the JSON file -
file2.json
{
"sequence" : [ {
"type" : "loop",
"quantity" : 8,
"currentIteration" : 0,
"sequence" : [ {
"type" : "type_value",
"attribute" : {
"att1" : "att1_val",
"att2" : "att2_val",
"att3" : "att3_val",
"att_id" : "1"
}
} ]
} ]
}
We can see two "sequence" keys in the above JSON data file, which is what is causing the problem.
Can somebody tell me what I am missing in the script in order to parse file2.json?
A:
One possibility might be to check the type field to differentiate between the two file formats:
# [...]
for my $element (@$aref) {
if ( $element->{type} eq "loop" ) {
my $aref2 = $element->{sequence};
for my $element2 ( @$aref2 ) {
get_attrs( $element2, \%Hash );
}
}
else {
get_attrs( $element, \%Hash );
}
}
sub get_attrs {
my ( $element, $hash ) = @_;
my $a = $element->{attribute};
return if(!$a);
my $aNo = $a->{att_id};
$hash->{$aNo}{'att1'} = $a->{att1};
$hash->{$aNo}{'att2'} = $a->{att2};
$hash->{$aNo}{'att3'} = $a->{att3};
}
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Selenium IDE: How to check if an element has a focus?
Is there a built-in method for checking that an input text element has focus?
Well, I didn't find one, so I tried this extension.
But it doesn't work for me either (i.e. the test fails).
Any ideas?
A:
I have had numerous problems detecting if an element has focus because the browser Selenium is controlling typically does not have the focus within the Operating System, and as such the browser will NOT consider any elements to have focus until the browser regains the focus.
I have been pulling my hair out over this, so I worked up a solution to the problem. See http://blog.mattheworiordan.com/post/9308775285/testing-focus-with-jquery-and-selenium-or
for a full explanation of the problem and a solution to this.
If you don't want to read the lengthy explanation, simply include https://gist.github.com/1166821 BEFORE you include jQuery, and use $(':focus') to find the element that has focus, or .is(':focus') to check whether an element has focus.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
API throws java.io.UnsupportedEncodingException
I am developing a Java program in Eclipse using a proprietary API, and it throws the following exception at runtime:
java.io.UnsupportedEncodingException:
at java.lang.StringCoding.encode(StringCoding.java:287)
at java.lang.String.getBytes(String.java:954)...
my code:
private static String SERVER = "localhost";
private static int PORT = 80;
private static String DFT="";
private static String USER = "xx";
private static String pwd = "xx";
public static void main(String[] args) {
LLValue entInfo = new LLValue();
LLSession session = new LLSession(SERVER, PORT, DFT, USER, pwd);
try {
LAPI_DOCUMENTS doc = new LAPI_DOCUMENTS(session);
doc.AccessPersonalWS(entInfo);
} catch (Exception e) {
e.printStackTrace();
}
}
The session appears to open with no errors, but the encoding exception is thrown at doc.AccessEnterpriseWS(entInfo)
Through researching this error I have tried using the -encoding option of the compiler, changing the encoding of my editor, etc.
My questions are:
how can I find out the encoding of the .class files I am trying to use?
should I be matching the encoding of my new program to the encoding of the API?
If Java is machine independent, why isn't there a standard encoding?
I have read this stack trace and this guide already --
Any suggestions will be appreciated!
Cheers
A:
Run it in your debugger with a breakpoint on String.getBytes() or StringCoding.encode(). Both classes are in the JDK, so you have access to them and should be able to see what the third party is passing in.
The character encoding specifies how to interpret the raw binary data. The default encoding on English Windows systems is CP1252. Other languages and systems may use a different default encoding. As a quick test, you might try specifying UTF-8 to see if the problem magically disappears.
As noted in this question, the JVM uses the default encoding of the OS, although you can override this default.
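For example (as a general JDK option, not something specific to this API), the JVM-wide default can be forced at startup with the file.encoding system property, i.e. java -Dfile.encoding=UTF-8 ..., although passing an explicit charset to the encoding call in code you control is the more reliable fix.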
Without knowing more about the third party API you are trying to use, it's hard to say what encoding they might be using. Unfortunately from looking at the implementation of StringCoding.encode() it appears there are a couple different ways you could get an UnsupportedEncodingException. Stepping through with a debugger should help narrow things down.
|
{
"pile_set_name": "StackExchange"
}
|