Q:
Linq-select group from specific date
Problem: Looking to select 1 row of each type
Table: EntryPrices
AttendingTypesId (int)
Name (nvarchar)
Price (int)
GroupId (int)
FromDate (DateTime) //yyyy-MM-dd
Sample:
Basic entry - 100 - 1 - 2012-08-01
Sponsor entry - 350 - 2 - 2012-08-01
Staff entry - 70 - 3 - 2012-08-01
Basic entry - 150 - 1 - 2012-10-01
Basic entry - 200 - 1 - 2012-12-01
As you can see, there are 3 basic entries. Each is valid depending on the date when the person registered.
So if a person registers in mid-September, he gets it for 100,
in mid-November for 150, and at the end of December for 200.
However, I want to get the "valid" fee depending on today's date, and I want to list that in a dropdown list.
So if I query this today (2012-Aug-02) I should get the following rows:
Basic entry - 100 - 1 - 2012-08-01
Sponsor entry - 350 - 2 - 2012-08-01
Staff entry - 70 - 3 - 2012-08-01
If I query this on 2012-Dec-20 I should get the following rows:
Sponsor entry - 350 - 2 - 2012-08-01
Staff entry - 70 - 3 - 2012-08-01
Basic entry - 200 - 1 - 2012-12-01
How can I construct this in LINQ to SQL?
My idea was something like:
var f = from data in mc.AttendingTypes
where DateTime.Now.CompareTo(data.FromDate) > 0
group data by new { data.GroupId, data.Name, data.Price, data.AttendingTypesId }
into newData
select new { newData.Key.Name, newData.Key.Price, newData.Key.AttendingTypesId };
A:
I'd rather do something like this:
var today = DateTime.Now;
var result = mc.AttendingTypes
.Where(at => at.FromDate <= today)
.GroupBy(at => at.GroupId)
.Select(g => g.OrderByDescending(m => m.FromDate).FirstOrDefault())
.ToList();
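For clarity, the same filter-then-latest-per-group idea sketched in Python (illustrative only; the rows mirror the sample table): keep rows effective on or before today, then take the row with the latest FromDate per group.

```python
from datetime import date

# (name, price, group_id, from_date) mirroring the sample table
rows = [
    ("Basic entry",   100, 1, date(2012, 8, 1)),
    ("Sponsor entry", 350, 2, date(2012, 8, 1)),
    ("Staff entry",    70, 3, date(2012, 8, 1)),
    ("Basic entry",   150, 1, date(2012, 10, 1)),
    ("Basic entry",   200, 1, date(2012, 12, 1)),
]

def valid_fees(rows, today):
    latest = {}
    for name, price, gid, frm in rows:
        if frm <= today:                       # Where(at => at.FromDate <= today)
            best = latest.get(gid)
            if best is None or frm > best[3]:  # OrderByDescending(FromDate).First()
                latest[gid] = (name, price, gid, frm)
    return sorted(latest.values(), key=lambda r: r[2])

aug = valid_fees(rows, date(2012, 8, 2))
assert [(r[0], r[1]) for r in aug] == [
    ("Basic entry", 100), ("Sponsor entry", 350), ("Staff entry", 70)]

dec = valid_fees(rows, date(2012, 12, 20))
assert [(r[0], r[1]) for r in dec] == [
    ("Basic entry", 200), ("Sponsor entry", 350), ("Staff entry", 70)]
```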
Q:
JavaScript: why does the "missing name after . operator" error appear?
Why does my script report "missing name after . operator" when I've included a script like this:
this.switch = function(){
if (this.status == "enabled")
{
this.disable();
this.stop();
}
else
{
this.enable();
}
}
The script is meant to switch the status from enabled to disabled.
A:
switch is a reserved keyword (used for... switch statements!). If you absolutely must use this name, write this['switch'] instead, but it will be annoying to use.
A common name for a function that turns something on/off is toggle().
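A loose analogy in Python (illustrative only): reserved words cannot follow the dot there either, and string-keyed access is the escape hatch, just like this['switch']:

```python
# 'class' is a Python keyword, so attrs.class would be a SyntaxError;
# bracket access by string key sidesteps the parser, like this['switch'].
attrs = {"class": "btn", "id": "save"}
assert attrs["class"] == "btn"

# For object attributes, getattr/setattr play the same role.
class Widget:
    pass

w = Widget()
setattr(w, "class", "btn")        # w.class = ... would not parse
assert getattr(w, "class") == "btn"
```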
A:
switch is a JavaScript keyword. Try using a different name for your function.
Q:
Better way of comparing two lists with LINQ?
I have these 2 collections:
IEnumerable<Element> allElements
List<ElementId> someElements,
What is a concise way of doing the following together:
[1] Verifying that all elements in someElements exist in allElements, returning quickly when the condition fails,
and
[2] Obtaining a list of the Element objects that List<ElementId> someElements maps to.
Every Element object has an ElementId
Thank you.
A:
I would do this:
var map = allElements.ToDictionary(x => x.Id);
if (!someElements.All(id => map.ContainsKey(id)))
{
// Return early
}
var list = someElements.Select(x => map[x])
.ToList();
Note that the first line will throw an exception if there are any duplicates in allElements.
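For illustration, the same two steps sketched in Python (the dictionaries below are stand-ins for Element objects with an Id): build a lookup once, check membership, then map.

```python
all_elements = [{"id": 1, "name": "a"}, {"id": 2, "name": "b"}, {"id": 3, "name": "c"}]
some_ids = [3, 1]

def pick(all_elements, some_ids):
    index = {}
    for e in all_elements:
        if e["id"] in index:
            raise ValueError("duplicate id")    # ToDictionary throws here too
        index[e["id"]] = e
    if not all(i in index for i in some_ids):   # [1] fail fast
        return None
    return [index[i] for i in some_ids]         # [2] map ids to elements

result = pick(all_elements, some_ids)
assert [e["name"] for e in result] == ["c", "a"]
assert pick(all_elements, [1, 99]) is None
```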
Q:
Python conda traceback: No module named ruamel.yaml.comments
I ran conda update conda in a bash terminal and below is the traceback.
Any idea what is wrong with my installation?
yusuf@yusuf-pc2:~$ conda update conda
Traceback (most recent call last):
File "/usr/local/bin/conda", line 11, in <module>
load_entry_point('conda==4.2.7', 'console_scripts', 'conda')()
File "/usr/local/lib/python2.7/dist-packages/pkg_resources/__init__.py", line 567, in load_entry_point
return get_distribution(dist).load_entry_point(group, name)
File "/usr/local/lib/python2.7/dist-packages/pkg_resources/__init__.py", line 2612, in load_entry_point
return ep.load()
File "/usr/local/lib/python2.7/dist-packages/pkg_resources/__init__.py", line 2272, in load
return self.resolve()
File "/usr/local/lib/python2.7/dist-packages/pkg_resources/__init__.py", line 2278, in resolve
module = __import__(self.module_name, fromlist=['__name__'], level=0)
File "/usr/local/lib/python2.7/dist-packages/conda/cli/__init__.py", line 8, in <module>
from .main import main # NOQA
File "/usr/local/lib/python2.7/dist-packages/conda/cli/main.py", line 46, in <module>
from ..base.context import context
File "/usr/local/lib/python2.7/dist-packages/conda/base/context.py", line 18, in <module>
from ..common.configuration import (Configuration, MapParameter, PrimitiveParameter,
File "/usr/local/lib/python2.7/dist-packages/conda/common/configuration.py", line 40, in <module>
from ruamel.yaml.comments import CommentedSeq, CommentedMap # pragma: no cover
ImportError: No module named ruamel.yaml.comments
yusuf@yusuf-pc2:~$
Location of ruamel package:
/home/yusuf/anaconda2/lib/python2.7/site-packages/ruamel_yaml/comments.py
/home/yusuf/anaconda2/lib/python2.7/site-packages/ruamel_yaml/comments.pyc
The path needs to be corrected. Any idea on how to fix it?
A:
Got a solution from Argonauts (superuser.com):
It looks like you have two instances of anaconda installed on your
system. One installed to /usr/local/ and the other to your home
directory.
From the bash shell, modify your path (temporarily) with this command:
export PATH="/home/yusuf/anaconda2/bin:$PATH"
Then try updating again:
conda update conda
That should resolve the issue. You can then add the export PATH...
line to the bottom of your ~/.bashrc file and the fix will be
persistent.
Assuming that I guessed correctly about the cause of this issue,
you'll want to remove the duplicate installation - as long no other
system users are using it.
To clean up the duplicate installation issue, under normal
circumstances you can remove an unwanted anaconda install simply by
deleting it - however since you have it installed to /usr/local/ you
will need to be very careful - other applications may be installed to
that directory. Simply renaming /usr/local/bin/conda to
/usr/local/bin/old.conda and the same for each /usr/local/bin/python*
entry should be enough to prevent that install being used, but it's far
from a perfect solution.
Source: https://superuser.com/questions/1159792/conda-traceback-no-module-named-ruamel-yaml-comments
Q:
Are my variables global?
Hello, I've been working on a homework assignment, and due to the homework rules I'm not allowed to use global variables. I've done some research on global variables but couldn't really understand whether my variables are global or local. The variables are declared inside my class. This is what my header looks like:
#include <string>
using namespace std;
class Team{
public:
string tColor;
string tName;
};
class Player{
public:
string pPos;
string pName;
};
class SocReg {
private:
Team *teams;// These are the variables Im not sure of
Player *players;// These are the variables Im not sure of
int playernum, teamnum; // These are the variables Im not sure of
public:
SocReg();
~SocReg();
void addTeam( string teamName, string color );
void removeTeam( string teamName );
void addPlayer( string teamName, string playerName, string playerPosition );
void removePlayer( string teamName, string playerName );
void displayAllTeams();
void displayPlayer( string playerName );
void displayTeam( string teamName );
// ...
// you may define additional member functions and data members,
// if necessary.
};
This question might sound too noobish, but I'm so confused. Thanks in advance!
A:
Team *teams;// These are the variables Im not sure of
Player *players;// These are the variables Im not sure of
int playernum, teamnum; // These are the variables Im not sure of
Pragmatic answer: These variables are neither global nor local. They are member variables. But whoever gave you this assignment surely just wanted to make sure that you don't use global variables, so you'll be fine. It would be completely meaningless to give students an assignment with a class and forbid member variables.
Language-lawyer answer: Neither "global variable" nor "member variable" is official terminology. Believe it or not, the entire ~1300-1400 page PDF of the ISO C++ standard contains only 3 instances of "member variable" and 1 instance of "global variable" (I searched in a draft PDF, but that doesn't make much of a difference).
A local variable is described in the C++ standard at §3.3.3/1 as follows:
A variable declared at block scope is a local variable.
A global variable is officially called "non-local variable" (§3.6.2). It's by definition the opposite of a local variable.
A member variable is officially called a "data member", as evidenced in §9.2/1:
Members of a class are data members, member functions (...), nested
types, and enumerators.
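To make the three kinds concrete, here is an illustrative sketch in Python (not C++; the names are invented for the example) of a global variable, a member variable, and a local variable:

```python
counter = 0                 # global (non-local) variable: module scope

class SocReg:
    def __init__(self):
        self.teamnum = 0    # member variable ("data member"): lives on the instance

    def add_team(self):
        delta = 1           # local variable: block scope, gone after the call
        self.teamnum += delta

r = SocReg()
r.add_team()
assert r.teamnum == 1       # per-instance state survives between calls
assert counter == 0         # the global was never touched
```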
Q:
react lifecycle when using redux
In React, when the app starts, it runs componentWill/DidMount; then for every this.setState(), the component runs componentWill/DidUpdate.
In Redux, the store will subscribe a render function like this:
const store = createStore({Reducers})
function run(){
ReactDOM.render(<App />, document.getElementById("element"))
}
store.subscribe(run);
// then for every dispatch() action, store/reducer will run the render all over again
It seems to me that for every change to the store, Redux will have to initialize the component all over again.
Does this mean that for every change, the component will always run componentWill/DidMount() and never componentWill/DidUpdate()?
If no. 1 is correct, does it make Redux slower because it has to initialize the component for every update?
A:
First, the answers to your questions:
Does this mean that for every change, the component will always run componentWill/DidMount() and never componentWill/DidUpdate()?
In my experience componentWill/DidMount is called just the first time; then you will have componentWill/DidUpdate, if the DOM structure doesn't change and only your component properties do.
If no. 1 is correct, does it make Redux slower because it has to initialize the component for every update?
Redux doesn't have performance issues, because it uses React best practices and updates only the components whose properties have changed.
I recommend two great resources to help you in your Redux / React journey:
Redux tutorial has a Usage with React section
Getting Started with Redux (30 free videos)
Q:
Array does not get data for a particular key
The response array looks like this:
NewDataSet = {
Table = (
{
City = {
text = "\nThiruvananthapuram";
};
Country = {
text = "\n\nIndia";
};
text = "\n";
},
{
City = {
text = "\nVellore";
};
Country = {
text = "\n\nIndia";
};
text = "\n";
}
);
text = "\n";
I have written this code:
xmlDictionary = [XMLReader dictionaryForXMLString:xmlResultString error:nil];
NSLog(@"%@",xmlDictionary);
NSLog(@"%@",xmlDictionary);
NSArray * responseArr = xmlDictionary[@"NewDataSet"];
NSLog(@"%@",responseArr);
for(NSDictionary * dic in responseArr)
{
NSLog(@"%@",dic);
//[array1 addObject:[dic valueForKey:@"City"]];
[array1 addObject:[[dic valueForKey:@"City"] valueForKey:@"text"]];
}
but I do not get the data in array1. Please help me out with this; thanks in advance.
The problem is that I do not get the value in the NSDictionary.
The error log is:
this class is not key value coding-compliant for the key City.'
A:
I got my solution. The problem is that I could not get the City key directly, because the City key is nested under NewDataSet and Table.
So first you go to NewDataSet, then the Table key, and finally you get the City key.
Now get the array data from the dictionary like this:
NSArray * City=[[NSArray alloc]init];
City=[[[xmlDictionary valueForKey:@"NewDataSet"] valueForKey:@"Table"] valueForKey:@"City"];
NSLog(@"%@",City);
Passing multiple nested keys is the solution.
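For illustration, the same nested lookup sketched in Python (the dictionary below mirrors the response shape shown in the question):

```python
# City is not a top-level key; you must descend through NewDataSet and Table first.
response = {
    "NewDataSet": {
        "Table": [
            {"City": {"text": "\nThiruvananthapuram"}, "Country": {"text": "\n\nIndia"}},
            {"City": {"text": "\nVellore"}, "Country": {"text": "\n\nIndia"}},
        ]
    }
}

cities = [row["City"]["text"].strip() for row in response["NewDataSet"]["Table"]]
assert cities == ["Thiruvananthapuram", "Vellore"]
```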
Q:
How to pass an argument in the function CreateProcess in Win32
I have an application hello.exe which takes input number by command-line argument and generates Fibonacci number. I want to execute this process by function CreateProcess in Win32.
Here is my file hello.c:
#include<stdio.h>
#include<stdlib.h>
int fib(int n)
{
if (n <= 1)
return n;
return fib(n-1) + fib(n-2);
}
int main(int argc, char *argv[])
{
int number = atoi(argv[1]); // Command Line Argument For Input
int res = fib(number);
printf("\nFibonacci no of %d is: %d\n",number ,res);
return 0;
}
Compiled the above program by : gcc hello.c -o hello
Here is my program for Create Process:
#include<stdio.h>
#include<Windows.h>
#include<string.h>
int main()
{
STARTUPINFO si[1];
PROCESS_INFORMATION pi[1];
ZeroMemory(&si, sizeof(si));
ZeroMemory(&pi, sizeof(pi));
char getApplicationName[][200] = { "C:\\Users\\xyz\\Documents\\Visual Studio 2015\\Projects\\process_CLI\\hello.exe" };
const int NoOfApplication = sizeof(getApplicationName) / sizeof(getApplicationName[0]);
//char *fibNumber = "10";
char *fibNumber = "C:\\Users\\xyz\\Documents\\Visual Studio 2015\\Projects\\process_CLI\\hello.exe 10";
for (int i = 0; i < NoOfApplication; i++)
{
BOOL bCreateProcess = FALSE;
bCreateProcess = CreateProcess(
getApplicationName[i],
fibNumber,
NULL,
NULL,
FALSE,
0,
NULL,
NULL,
&si[i],
&pi[i]
);
if (bCreateProcess == FALSE)
{
printf("\nProcess %d Creation Failed . Its Error Number: %d\n", i, GetLastError());
}
else {
printf("\nProcess Creation Successful\n");
//printf("\nProcessId: %d\n", GetProcessId(pi[i].hProcess));
//printf("\nThreadId: %d\n", GetThreadId(pi[i].hThread));
}
WaitForSingleObject(pi[i].hProcess, INFINITE);
}
for (int i = 0; i < NoOfApplication; i++)
{
CloseHandle(pi[i].hProcess);
CloseHandle(pi[i].hThread);
}
system("PAUSE");
return 0;
}
I tried different methods but was unable to pass the argument through the CreateProcess function.
Expected output:
Process Creation Successful
Fibonacci no of 10 is: 55
Actual Output:
Process Creation Successful
Fibonacci no of 0 is: 0
Please suggest the correct approach.
A:
Because you specified a module name as the first parameter, the second parameter will be the command line, excluding the module name: so just 10.
lpApplicationName The name of the module to be executed. The lpApplicationName parameter can be NULL. In that case, the module name must be the first white space-delimited token in the lpCommandLine string.
lpCommandLine The command line to be executed. The lpCommandLine parameter can be NULL. In that case, the function uses the string pointed to by lpApplicationName as the command line.
If both lpApplicationName and lpCommandLine are non-NULL, the null-terminated string pointed to by lpApplicationName specifies the module to execute, and the null-terminated string pointed to by lpCommandLine specifies the command line.
So either provide a module name plus command line as lpApplicationName or as lpCommandLine, but not both, or provide the module name as first parameter and the command line as the second.
EDIT See the comment of Eryk Sun; however, the documentation does not require in this case that a path name with spaces be contained in quotes:
The lpApplicationName parameter can be NULL. In that case, the module name must be the first white space-delimited token in the lpCommandLine string.
If you are using a long file name that contains a space, use quoted strings to indicate where the file name ends and the arguments begin; otherwise, the file name is ambiguous. For example, consider the string "c:\program files\sub dir\program name". This string can be interpreted in a number of ways. The system tries to interpret the possibilities in the following order:
c:\program.exe files\sub dir\program name
c:\program files\sub.exe dir\program name
c:\program files\sub dir\program.exe name
c:\program files\sub dir\program name.exe
However, since lpApplicationName in OP's code is not NULL and since lpApplicationName cannot contain command line arguments, lpApplicationName does not need to be a quoted string if this name contains spaces.
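Given the actual output shown (fib of 0), one plausible reading, consistent with the quoting discussion above, is that the child process built its argv by splitting the unquoted lpCommandLine at the spaces inside the path. A rough Python sketch of that tokenization (illustrative only; split_cmdline is a hypothetical stand-in for the real Windows rules, which also honor quotes):

```python
def split_cmdline(cmdline):
    # Naive whitespace split; real Windows parsing also handles quoted tokens.
    return cmdline.split()

# Unquoted path with spaces: the second token is "Studio", and atoi("Studio") == 0,
# which would explain "Fibonacci no of 0 is: 0".
argv = split_cmdline(r"C:\Users\xyz\Documents\Visual Studio 2015\Projects\process_CLI\hello.exe 10")
assert argv[1] == "Studio"

# With no spaces in the module token, argv[1] is the intended argument.
argv = split_cmdline(r"hello.exe 10")
assert argv[1] == "10"
```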
Q:
invertible block matrix equivalence
Let $$A=\begin{pmatrix} R&v \\u^T & 0 \end{pmatrix}$$ where $R$ is an invertible upper triangular matrix and $u,v \in \mathbb R^n$
Prove that $A$ is invertible if and only if $u^TR^{-1}v\neq0$
Would appreciate any help
A:
According to the determinant formula for block matrices
$$
\det A=\det\begin{bmatrix}R & v\\u^T & 0\end{bmatrix}=\det R\cdot\det(0-u^TR^{-1}v)=-\det R\cdot u^TR^{-1}v.
$$
Since $\det R\ne 0$ we have $\det A\ne 0$ iff $u^TR^{-1}v\ne 0$.
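A quick numerical sanity check of the identity $\det A=-\det R\cdot u^TR^{-1}v$, sketched in Python with arbitrary values for $n=2$ (illustrative only, no external libraries):

```python
def det3(m):
    # Cofactor expansion of a 3x3 determinant along the first row.
    a, b, c = m[0]; d, e, f = m[1]; g, h, i = m[2]
    return a*(e*i - f*h) - b*(d*i - f*g) + c*(d*h - e*g)

R = [[2.0, 3.0],
     [0.0, 5.0]]            # invertible upper triangular, det R = 10
u = [1.0, 4.0]
v = [7.0, 6.0]

A = [[2.0, 3.0, 7.0],
     [0.0, 5.0, 6.0],
     [1.0, 4.0, 0.0]]       # block form [[R, v], [u^T, 0]]

# Solve R y = v by back substitution (R is upper triangular), so y = R^{-1} v.
y2 = v[1] / R[1][1]
y1 = (v[0] - R[0][1] * y2) / R[0][0]
schur = u[0] * y1 + u[1] * y2     # u^T R^{-1} v

detR = R[0][0] * R[1][1]
assert abs(det3(A) - (-detR * schur)) < 1e-9
```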
Q:
auto-increment doesn't return id in mongoose
I wanted to increment the id whenever a new document is added, but when I use the following code, neither fieldnum nor _id is auto-incrementing, and I couldn't find the fieldnum. I know this is a lame question, but can anyone please help me with this?
var mongoose = require('mongoose'),
Schema = mongoose.Schema,
pureautoinc = require('mongoose-pureautoinc');
var connection = mongoose.createConnection("mongodb://localhost/myDatabase");
pureautoinc.init(connection);
var bookSchema = new Schema({
title: String,
genre: String,
publishDate: Date
});
bookSchema.plugin(pureautoinc.plugin,{model : 'Book',field :'fieldnum'});
var Book = connection.model('Book',bookSchema);
var book1 = new Book({title : "goutham", genre : "comedy", publishDate : new Date()});
book1.save();
var book2 = new Book({title : "goutham1", genre : "comedy", publishDate : new Date()});
book2.save();
console.log(book1,book2);
A:
It's creating the fieldnum attribute, and it's also auto-incrementing it. If you check your saved objects through your MongoDB console, calling db.books.find(), you'll see that the attribute was created.
The problem in your code is that you're not passing any callback to the save calls, so you're calling console.log(book1, book2) before they return. What you should've done to see the persisted objects:
var logIfSaved = function (error, doc) {
if (error) throw Error(error);
console.log(doc);
};
var book1 = new Book({title : "goutham", genre : "comedy", publishDate : new Date()});
book1.save(logIfSaved);
var book2 = new Book({title : "goutham1", genre : "comedy", publishDate : new Date()});
book2.save(logIfSaved);
Now, passing a callback to your save calls, you'll log the objects only after they're saved.
Q:
Should visitors need to load images to be able to read text?
See this meta discussion on another site: "IS the site supposed to look like this?"
When you turn off image loading in your browser and visit the site you are presented with this:
When you load images, a white background is loaded which provides much better contrast for the blue text.
Obviously the site cannot be fully used without loading images (though I am curious about how blind users manage), but most people visiting sites on the SE network will not have accounts. They'll have done a web search, and they'll click a result. They do not need the vote buttons or the flags or any of that interface stuff. But they do need to be able to read the text. For those people "reading an answer" is "using the site".
Thus: should visitors need to load images in order to be able to read the text on sites in the SE network?
A:
Jin just pushed a fix for this. Should be live in the next production build. Thanks for the report!
Q:
How to get/obtain Variables from URL in Flash AS3
So I have a URL that I need my Flash movie to extract variables from:
example link:
http://www.example.com/example_xml.php?aID=1234&bID=5678
I need to get the aID and the bID numbers.
I'm able to get the full URL into a String via ExternalInterface
var url:String = ExternalInterface.call("window.location.href.toString");
if (url) testField.text = url;
I'm just unsure how to manipulate the String to get just the 1234 and 5678 values.
Appreciate any tips, links or help with this!
A:
Create a new instance of URLVariables.
// given location.search: "?aID=1234&bID=5678"
var search:String = ExternalInterface.call("window.location.search.toString");
if (search && search.charAt(0) == "?")
    search = search.substr(1);  // URLVariables expects no leading "?"
var vars:URLVariables = new URLVariables(search);
trace(vars.aID); // 1234
trace(vars.bID); // 5678
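For comparison, the same extraction sketched in Python with the standard library, where urllib.parse plays the role of URLVariables (illustrative only):

```python
from urllib.parse import urlparse, parse_qs

url = "http://www.example.com/example_xml.php?aID=1234&bID=5678"
params = parse_qs(urlparse(url).query)  # query string -> {name: [values]}
assert params["aID"] == ["1234"]
assert params["bID"] == ["5678"]
```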
Q:
adding a second for statement to function
Updating my code to reflect my attempt at incorporating both for statements.
function unavailableDays(date) {
//date array to be disabled
var disabledDays = ["1963-3-31", "1965-9-18", "1965-9-19",
"1965-10-2", "1965-10-3", "1965-10-9", "1965-10-10"];
var yy = date.getFullYear(), mm = date.getMonth(), dd = date.getDate();
for (i = 0; i < disabledDays.length; i++) {
if($.inArray(yy + '-' + (mm+1) + '-' + dd,disabledDays) != -1 || new Date() < date) {
return [false];
}
}
return [true];
//date range to be disabled
var first = new Date("1978-8-10");
var last = new Date("1978-11-05");
var unavailableRange = [];
var yy = date.getFullYear(), mm = date.getMonth(), dd = date.getDate();
for(j = first; j < last; j.setDate(j.getDate() + 7)){
if($.inArray(yy + '-' + (mm+1) + '-' + dd,unavailableRange) != -1 || new Date() < date) {
return [false];
}
}
return [true];
}
I'm trying to incorporate this initial variable with a range of dates into the unavailableDays function below so that I can use it in beforeShowDay in the datepicker to disable dates. I've tried creating a separate function, but that didn't work for beforeShowDay, so I think it needs to be inside it.
var first = new Date("1978-08-10");
var last = new Date("1978-11-05");
var dates = [];
for (var i = first; i < last; i.setDate(i.getDate() + 7))
dates.push(new Date);
var disabledDays = ["1963-2-17", "1963-2-24", "1963-3-3", "1963-3-10", "1963-3-17", "1963-3-24", "1963-3-31", "1965-9-18", "1965-9-19", "1965-10-2", "1965-10-3", "1965-10-9", "1965-10-10"];
function unavailableDays(date) {
var yy = date.getFullYear(),
mm = date.getMonth(),
dd = date.getDate();
for (i = 0; i < disabledDays.length; i++) {
if ($.inArray(yy + '-' + (mm + 1) + '-' + dd, disabledDays) != -1 || new Date() < date) {
return [false];
}
}
return [true];
}
$(document).ready(function () {
$('.selector').datepicker({
inline: true,
dateFormat: 'yy-mm-dd',
constrainInput: true,
changeYear: true,
changeMonth: true,
gotoCurrent: true,
minDate: new Date(1962, 1 - 1, 1),
//months are index-based(1-1)
maxDate: new Date(2011, 10 - 1, 24),
yearRange: '-60y',
beforeShowDay: unavailableDays,
onSelect: function (dateText, inst) {
$("#img").attr("src", "http://www.example.com" + dateText + ".jpg");
var chosenDates = $.datepicker.parseDate('yy-mm-dd', dateText);
var backToString = $.datepicker.formatDate('MM dd' + ',' + ' yy', chosenDates);
$('.info').html('You are viewing:' + '<br />' + backToString);
}
});
A:
You should put the unavailableDays function's return into a variable and test for the value before letting the action continue, so programmatically it must wait for the function to finish before moving on.
onSelect: function (dateText, inst) {
// unavailableDays expects a Date and returns an array like [true] / [false]
var blackouts = unavailableDays($.datepicker.parseDate('yy-mm-dd', dateText));
if (blackouts[0] === true) {
$("#img").attr("src", "http://www.example.com" + dateText + ".jpg");
var chosenDates = $.datepicker.parseDate('yy-mm-dd', dateText);
var backToString = $.datepicker.formatDate('MM dd' + ',' + ' yy', chosenDates);
$('.info').html('You are viewing:' + '<br />' + backToString);
}
}
Q:
Is it widely followed that only some parts of academic books are taught to undergraduates?
I'm an undergraduate student studying psychology using English language in a non-English speaking country. I'm graduating soon, and I'm applying for master programs. We mostly use McGraw Hill publications for our undergraduate curriculum.
My concern is this: Our instructors teach us through slides, and books are secondary learning sources—that is if they are used at all. These abbreviated slides use the same chapters of the book, yet not all chapters are covered, in fact some courses cover as little as half of the relevant book's chapters.
Is this normal across the world?
A friend of mine told me that academic books are made to accompany both bachelor-level topics, and master-level topics, and this is why some parts were excluded for undergraduate programs. Is this correct?
I'm asking this because I don't want to get shocked when doing my masters and discover that I'm under-prepared or under-trained.
A:
Not covering the entirety of a textbook is true in absolutely ever course I've ever taken, and is true of every institution I'm familiar with. However, I would not say that it is true that any given subset of sections of any given book are the ones that every professor covers; in fact, at least in the US, each professor has a wide latitude in choosing what parts of the book they want to cover, and to what extent their lectures and/or tests cover the same material as the book at all. Some professors, in some classes, prefer only to cover whats more-or-less written in the book, and some intentionally lecture on topics that are not in the book and rely on reading the book to cover other topics. Some professors even have a book only as optional reading and you don't need to read any of it.
All of the above is true at the undergraduate level, and even more true - in my experience - at the graduate level, as professors deviate even further from any available textbook.
As to some books being designed for both undergraduate and graduate levels, this varies by book. Some books are almost never used at the graduate level, and some are almost never used at the undergraduate level, and some are used in both but to different extents.
But to answer your core question: yes, it is very common not to cover the whole book, and no you shouldn't worry about it. Its always good to lightly skim through the material that isn't required, so you get an idea of what you are skipping, but generally most professors make it a point to select out the material they believe is most important and relevant and skip what they don't deem necessary. Few professors ever follow the order of the book, either, and prefer to select their own ordering - and rarely do two professors agree on what that order would best be.
And don't worry - if you end up doing anything challenging, you will always feel tremendously under-prepared and under-trained, no matter how many textbooks you've read. That just comes with the territory :)
A:
Yes, it is common to teach a course using only parts of an associated textbook.
In particular, read the "Preface" (or "Foreword", or "To the Instructor") part of your textbook(s); it usually specifies one or more suggested course sequences, which sections are optional, which sections more strongly depend on other sections, etc.
If you're really concerned about being under-prepared, then there's nothing stopping you from just reading the other sections on your own. This itself will be a skill broadly expected in graduate school, so it's not bad to practice itself at this time.
Q:
Multiple data sources as array in google charts dashboard api
What I have done
I am building a dashboard with multiple data sources. The data are in the form of arrays.
What I need to implement
I have implemented the dashboard with the help of the tutorial, but I am not able to add a second data source.
Here is my code
<html xmlns="http://www.w3.org/1999/xhtml">
<head>
<meta http-equiv="content-type" content="text/html; charset=utf-8"/>
<title>
Google Visualization API Sample
</title>
<script type="text/javascript" src="http://www.google.com/jsapi"></script>
<script type="text/javascript">
google.load('visualization', '1.1', {packages: ['controls']});
</script>
<script type="text/javascript">
function drawVisualization() {
// Prepare the data
var data1 = google.visualization.arrayToDataTable([
['Name', 'Type', 'Precheck Alarms', 'Postcheck Alarms'],
['Michael' , 'Type1', 12, 5],
['Elisa', 'Type2', 20, 7],
['Robert', 'Type1', 7, 3],
['John', 'Type1', 54, 2],
['Jessica', 'Type2', 22, 6],
['Aaron', 'Type1', 3, 1],
['Margareth', 'Type2', 42, 8],
['Miranda', 'Type2', 33, 6]
]);
var data2 = google.visualization.arrayToDataTable([
['Name', 'Type', 'Precheck Alarms', 'Postcheck Alarms'],
['Michael' , 'Type1', 12, 5],
['Elisa', 'Type2', 20, 7],
['Robert', 'Type1', 7, 3],
['John', 'Type1', 54, 2],
['Jessica', 'Type2', 22, 6],
['Aaron', 'Type1', 3, 1],
['Margareth', 'Type2', 42, 8],
['Miranda', 'Type2', 33, 6]
]);
// Define a category picker control for the Type column
var categoryPicker = new google.visualization.ControlWrapper({
'controlType': 'CategoryFilter',
'containerId': 'control2',
'options': {
'filterColumnLabel': 'Type',
'ui': {
'labelStacking': 'vertical',
'allowTyping': false,
'allowMultiple': false
}
}
});
// Define a Pie chart
var columns_alarms = new google.visualization.ChartWrapper({
'chartType': 'ColumnChart',
'containerId': 'chart1',
'options': {
'width': 600,
'height': 600,
'legend': 'none',
'title': 'Alarms',
'chartArea': {'left': 15, 'top': 15, 'right': 0, 'bottom': 0},
//'pieSliceText': 'label'
},
// Instruct the piechart to use colums 0 (Name) and 3 (Donuts Eaten)
// from the 'data' DataTable.
'view': {'columns': [0, 2,3]}
});
// Define a table
var table_alarms = new google.visualization.ChartWrapper({
'chartType': 'Table',
'containerId': 'chart2',
'options': {
'width': '300px'
}
});
var columns_kpi = new google.visualization.ChartWrapper({
'chartType': 'ColumnChart',
'containerId': 'chart4',
'options': {
'width': 600,
'height': 600,
'legend': 'none',
'title': 'Alarms',
'chartArea': {'left': 15, 'top': 15, 'right': 0, 'bottom': 0},
//'pieSliceText': 'label'
},
// Instruct the piechart to use colums 0 (Name) and 3 (Donuts Eaten)
// from the 'data' DataTable.
'view': {'columns': [0, 2,3]}
});
// Define a table
var table_kpi = new google.visualization.ChartWrapper({
'chartType': 'Table',
'containerId': 'chart5',
'options': {
'width': '300px'
}
});
// Create a dashboard
new google.visualization.Dashboard(document.getElementById('dashboard_alarms')).
new google.visualization.Dashboard(document.getElementById('dashboard_kpi')).
// Establish bindings, declaring the both the slider and the category
// picker will drive both charts.
bind([categoryPicker], [columns_kpi, table_kpi,columns_alarms, table_alarms]).
// Draw the entire dashboard.
draw(data1);
draw(data2);
}
google.setOnLoadCallback(drawVisualization);
</script>
</head>
<body style="font-family: Arial;border: 0 none;">
<div id="dashboard">
<table>
<tr style='vertical-align: top'>
<td style='width: 300px; font-size: 0.9em;'>
<div id="control1"></div>
<div id="control2"></div>
<div id="control3"></div>
</td>
<td style='width: 600px'>
<div style="float: left;" id="chart1"></div>
<div style="float: left;" id="chart2"></div>
<div style="float: left;" id="chart3"></div>
<div style="float: left;" id="chart4"></div>
<div style="float: left;" id="chart5"></div>
</td>
</tr>
</table>
</div>
</body>
</html>
The above code renders a WSD (white screen of death).
A:
There are a few mistakes in your code.
new google.visualization.Dashboard(document.getElementById('dashboard_alarms')).
new google.visualization.Dashboard(document.getElementById('dashboard_kpi')).
should be
new google.visualization.Dashboard(document.getElementById('dashboard_alarms'));
new google.visualization.Dashboard(document.getElementById('dashboard_kpi')).
(the "." should be a ";" at the end of the first line)
Also in the same two lines you refer to elements with id dashboard_alarms and dashboard_kpi but you don't have those elements in your html. You should add the tags
<div id="dashboard_alarms"></div>
<div id="dashboard_kpi"></div>
to your html.
You can use Firebug to debug JavaScript code if you're using Firefox. Google Chrome might have a JavaScript debugger as well. With a JavaScript debugger you can diagnose the reason for such problems.
A working example of the code is available at jsfiddle.
Q:
How to install source rpm(src.rpm) in fedora?
How to install source rpm(src.rpm) in fedora?
When i try to rebuild spec file after install package(for example openssh) with sample command :
rpmbuild -ba openssh.spec
I get the following message and the build does not complete:
Executing(%prep): /bin/sh -e /var/tmp/rpm-tmp.QkUOot
+ umask 022
+ cd /root/rpmbuild/BUILD
+ LANG=C
+ export LANG
+ unset DISPLAY
+ cd /root/rpmbuild/BUILD
+ rm -rf openssh-3.9p1
+ /usr/bin/gzip -dc /root/rpmbuild/SOURCES/openssh-3.9p1-noacss.tar.gz
+ /bin/tar -xf -
+ STATUS=0
+ '[' 0 -ne 0 ']'
+ cd openssh-3.9p1
+ /bin/chmod -Rf a+rX,u+w,g-w,o-w .
+ echo 'Patch #0 (openssh-3.9p1-redhat.patch):'
Patch #0 (openssh-3.9p1-redhat.patch):
+ /bin/cat /root/rpmbuild/SOURCES/openssh-3.9p1-redhat.patch
+ /usr/bin/patch -s -p1 -b --suffix .redhat --fuzz=0
+ echo 'Patch #1 (openssh-3.6.1p2-groups.patch):'
Patch #1 (openssh-3.6.1p2-groups.patch):
+ /bin/cat /root/rpmbuild/SOURCES/openssh-3.6.1p2-groups.patch
+ /usr/bin/patch -s -p1 -b --suffix .groups --fuzz=0
1 out of 1 hunk FAILED -- saving rejects to file sshd.c.rej
error: Bad exit status from /var/tmp/rpm-tmp.QkUOot (%prep)
RPM build errors:
Bad exit status from /var/tmp/rpm-tmp.QkUOot (%prep)
In this file (rpm-tmp.QkUOot) the following content exists:
#!/bin/sh
RPM_SOURCE_DIR="/root/rpmbuild/SOURCES"
RPM_BUILD_DIR="/root/rpmbuild/BUILD"
RPM_OPT_FLAGS="-O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -m32 -march=i386 -mtune=generic -fasynchronous-unwind-tables"
RPM_ARCH="i386"
RPM_OS="linux"
export RPM_SOURCE_DIR RPM_BUILD_DIR RPM_OPT_FLAGS RPM_ARCH RPM_OS
RPM_DOC_DIR="/usr/share/doc"
export RPM_DOC_DIR
RPM_PACKAGE_NAME="openssh"
RPM_PACKAGE_VERSION="3.9p1"
RPM_PACKAGE_RELEASE="8.RHEL4.17.endian2"
export RPM_PACKAGE_NAME RPM_PACKAGE_VERSION RPM_PACKAGE_RELEASE
RPM_BUILD_ROOT="/root/rpmbuild/BUILDROOT/openssh-3.9p1-8.RHEL4.17.endian2.i386"
export RPM_BUILD_ROOT
PKG_CONFIG_PATH="/usr/lib/pkgconfig:/usr/share/pkgconfig"
export PKG_CONFIG_PATH
set -x
umask 022
cd "/root/rpmbuild/BUILD"
LANG=C
export LANG
unset DISPLAY
cd '/root/rpmbuild/BUILD'
rm -rf 'openssh-3.9p1'
/usr/bin/gzip -dc '/root/rpmbuild/SOURCES/openssh-3.9p1-noacss.tar.gz' | /bin/tar -xf -
STATUS=$?
if [ $STATUS -ne 0 ]; then
exit $STATUS
fi
cd 'openssh-3.9p1'
/bin/chmod -Rf a+rX,u+w,g-w,o-w .
echo "Patch #0 (openssh-3.9p1-redhat.patch):"
/bin/cat /root/rpmbuild/SOURCES/openssh-3.9p1-redhat.patch | /usr/bin/patch -s -p1 -b --suffix .redhat --fuzz=0
echo "Patch #1 (openssh-3.6.1p2-groups.patch):"
/bin/cat /root/rpmbuild/SOURCES/openssh-3.6.1p2-groups.patch | /usr/bin/patch -s -p1 -b --suffix .groups --fuzz=0
echo "Patch #2 (openssh-3.8.1p1-skip-initial.patch):"
/bin/cat /root/rpmbuild/SOURCES/openssh-3.8.1p1-skip-initial.patch | /usr/bin/patch -s -p1 -b --suffix .skip-initial --fuzz=0
echo "Patch #3 (openssh-3.8.1p1-krb5-config.patch):"
/bin/cat /root/rpmbuild/SOURCES/openssh-3.8.1p1-krb5-config.patch | /usr/bin/patch -s -p1 -b --suffix .krb5-config --fuzz=0
echo "Patch #4 (openssh-3.9p1-vendor.patch):"
/bin/cat /root/rpmbuild/SOURCES/openssh-3.9p1-vendor.patch | /usr/bin/patch -s -p1 -b --suffix .vendor --fuzz=0
echo "Patch #5 (openssh-3.9p1-no-log-signal.patch):"
/bin/cat /root/rpmbuild/SOURCES/openssh-3.9p1-no-log-signal.patch | /usr/bin/patch -s -p1 -b --suffix .signal --fuzz=0
echo "Patch #6 (openssh-3.9p1-exit-deadlock.patch):"
/bin/cat /root/rpmbuild/SOURCES/openssh-3.9p1-exit-deadlock.patch | /usr/bin/patch -s -p1 -b --suffix .exit-deadlock --fuzz=0
echo "Patch #7 (openssh-3.9p1-gid.patch):"
/bin/cat /root/rpmbuild/SOURCES/openssh-3.9p1-gid.patch | /usr/bin/patch -s -p1 -b --suffix .gid --fuzz=0
echo "Patch #8 (openssh-3.9p1-loginuid.patch):"
/bin/cat /root/rpmbuild/SOURCES/openssh-3.9p1-loginuid.patch | /usr/bin/patch -s -p1 -b --suffix .loginuid --fuzz=0
#SELinux
echo "Patch #12 (openssh-selinux.patch):"
/bin/cat /root/rpmbuild/SOURCES/openssh-selinux.patch | /usr/bin/patch -s -p1 -b --suffix .selinux --fuzz=0
echo "Patch #16 (openssh-3.9p1-audit.patch):"
/bin/cat /root/rpmbuild/SOURCES/openssh-3.9p1-audit.patch | /usr/bin/patch -s -p1 -b --suffix .audit --fuzz=0
#%patch20 -p0 -b .gssapimitm
echo "Patch #21 (openssh-3.9p1-skip-used.patch):"
/bin/cat /root/rpmbuild/SOURCES/openssh-3.9p1-skip-used.patch | /usr/bin/patch -s -p1 -b --suffix .skip-used --fuzz=0
echo "Patch #22 (openssh-3.9p1-can-2005-2798.patch):"
/bin/cat /root/rpmbuild/SOURCES/openssh-3.9p1-can-2005-2798.patch | /usr/bin/patch -s -p3 -b --suffix .destroy-creds --fuzz=0
echo "Patch #23 (openssh-3.9p1-scp-no-system.patch):"
/bin/cat /root/rpmbuild/SOURCES/openssh-3.9p1-scp-no-system.patch | /usr/bin/patch -s -p1 -b --suffix .no-system --fuzz=0
echo "Patch #24 (openssh-3.9p1-safe-stop.patch):"
/bin/cat /root/rpmbuild/SOURCES/openssh-3.9p1-safe-stop.patch | /usr/bin/patch -s -p1 -b --suffix .safe-stop --fuzz=0
echo "Patch #25 (openssh-3.9p1-scp-no-overwrite.patch):"
/bin/cat /root/rpmbuild/SOURCES/openssh-3.9p1-scp-no-overwrite.patch | /usr/bin/patch -s -p1 -b --suffix .no-overwrite --fuzz=0
echo "Patch #26 (openssh-3.9p1-pam-message.patch):"
/bin/cat /root/rpmbuild/SOURCES/openssh-3.9p1-pam-message.patch | /usr/bin/patch -s -p0 -b --suffix .pam-message --fuzz=0
echo "Patch #27 (openssh-3.9p1-log-in-chroot.patch):"
/bin/cat /root/rpmbuild/SOURCES/openssh-3.9p1-log-in-chroot.patch | /usr/bin/patch -s -p1 -b --suffix .log-chroot --fuzz=0
echo "Patch #28 (openssh-3.9p1-cve-2006-4924.patch):"
/bin/cat /root/rpmbuild/SOURCES/openssh-3.9p1-cve-2006-4924.patch | /usr/bin/patch -s -p1 -b --suffix .deattack-dos --fuzz=0
echo "Patch #29 (openssh-3.9p1-cve-2006-5051.patch):"
/bin/cat /root/rpmbuild/SOURCES/openssh-3.9p1-cve-2006-5051.patch | /usr/bin/patch -s -p1 -b --suffix .sig-no-cleanup --fuzz=0
echo "Patch #100 (openssh-3.9p1-rc-condstop.patch):"
/bin/cat /root/rpmbuild/SOURCES/openssh-3.9p1-rc-condstop.patch | /usr/bin/patch -s -p1 -b --suffix .condstop --fuzz=0
echo "Patch #30 (openssh-3.9p1-cve-2006-5794.patch):"
/bin/cat /root/rpmbuild/SOURCES/openssh-3.9p1-cve-2006-5794.patch | /usr/bin/patch -s -p1 -b --suffix .verify --fuzz=0
echo "Patch #31 (openssh-3.9p1-buffer-len.patch):"
/bin/cat /root/rpmbuild/SOURCES/openssh-3.9p1-buffer-len.patch | /usr/bin/patch -s -p1 -b --suffix .buffer-len --fuzz=0
echo "Patch #32 (openssh-3.9p1-no-dup-logs.patch):"
/bin/cat /root/rpmbuild/SOURCES/openssh-3.9p1-no-dup-logs.patch | /usr/bin/patch -s -p1 -b --suffix .no-dups --fuzz=0
echo "Patch #33 (openssh-4.3p2-no-v6only.patch):"
/bin/cat /root/rpmbuild/SOURCES/openssh-4.3p2-no-v6only.patch | /usr/bin/patch -s -p1 -b --suffix .no-v6only --fuzz=0
echo "Patch #34 (openssh-3.9p1-hash-known.patch):"
/bin/cat /root/rpmbuild/SOURCES/openssh-3.9p1-hash-known.patch | /usr/bin/patch -s -p1 -b --suffix .hash-known --fuzz=0
echo "Patch #35 (openssh-3.9p1-pam-session.patch):"
/bin/cat /root/rpmbuild/SOURCES/openssh-3.9p1-pam-session.patch | /usr/bin/patch -s -p1 -b --suffix .pam-session --fuzz=0
echo "Patch #36 (openssh-3.9p1-gssapi-canohost.patch):"
/bin/cat /root/rpmbuild/SOURCES/openssh-3.9p1-gssapi-canohost.patch | /usr/bin/patch -s -p1 -b --suffix .canohost --fuzz=0
echo "Patch #37 (openssh-3.9p1-cve-2006-5052.patch):"
/bin/cat /root/rpmbuild/SOURCES/openssh-3.9p1-cve-2006-5052.patch | /usr/bin/patch -s -p1 -b --suffix .krb5-leak --fuzz=0
echo "Patch #38 (openssh-3.9p1-sftp-memleak.patch):"
/bin/cat /root/rpmbuild/SOURCES/openssh-3.9p1-sftp-memleak.patch | /usr/bin/patch -s -p1 -b --suffix .sftp-memleak --fuzz=0
echo "Patch #39 (openssh-3.9p1-restart-reliable.patch):"
/bin/cat /root/rpmbuild/SOURCES/openssh-3.9p1-restart-reliable.patch | /usr/bin/patch -s -p1 -b --suffix .restart-reliable --fuzz=0
echo "Patch #40 (openssh-3.9p1-close-sock.patch):"
/bin/cat /root/rpmbuild/SOURCES/openssh-3.9p1-close-sock.patch | /usr/bin/patch -s -p1 -b --suffix .close-sock --fuzz=0
echo "Patch #41 (openssh-4.3p2-cve-2007-3102.patch):"
/bin/cat /root/rpmbuild/SOURCES/openssh-4.3p2-cve-2007-3102.patch | /usr/bin/patch -s -p1 -b --suffix .inject-fix --fuzz=0
echo "Patch #42 (openssh-3.9p1-sftp-drain-acks.patch):"
/bin/cat /root/rpmbuild/SOURCES/openssh-3.9p1-sftp-drain-acks.patch | /usr/bin/patch -s -p1 -b --suffix .drain-acks --fuzz=0
echo "Patch #43 (openssh-3.9p1-buffer-nonfatal.patch):"
/bin/cat /root/rpmbuild/SOURCES/openssh-3.9p1-buffer-nonfatal.patch | /usr/bin/patch -s -p1 -b --suffix .nonfatal --fuzz=0
echo "Patch #44 (openssh-3.9p1-scp-manpage.patch):"
/bin/cat /root/rpmbuild/SOURCES/openssh-3.9p1-scp-manpage.patch | /usr/bin/patch -s -p0 -b --suffix .scp-manpage --fuzz=0
echo "Patch #45 (openssh-4.7-cve-2007-4752.patch):"
/bin/cat /root/rpmbuild/SOURCES/openssh-4.7-cve-2007-4752.patch | /usr/bin/patch -s -p0 -b --suffix .scp-manpage --fuzz=0
autoreconf
And in the spec file the following exists:
%if %{?WITH_SELINUX:0}%{!?WITH_SELINUX:1}
%define WITH_SELINUX 1
%endif
%if %{WITH_SELINUX}
# Audit patch applicable only over SELinux patch
%define WITH_AUDIT 1
%endif
# OpenSSH privilege separation requires a user & group ID
%define sshd_uid 74
%define sshd_gid 74
# Version of ssh-askpass
%define aversion 1.2.4.1
# Do we want to disable building of x11-askpass? (1=yes 0=no)
%define no_x11_askpass 1
# Do we want to disable building of gnome-askpass? (1=yes 0=no)
%define no_gnome_askpass 1
# Do we want to link against a static libcrypto? (1=yes 0=no)
%define static_libcrypto 0
# Do we want smartcard support (1=yes 0=no)
%define scard 0
# Use GTK2 instead of GNOME in gnome-ssh-askpass
%define gtk2 1
# Is this build for RHL 6.x?
%define build6x 0
# Build position-independent executables (requires toolchain support)?
%define pie 1
# Do we want kerberos5 support (1=yes 0=no)
%define kerberos5 0
# Whether or not /sbin/nologin exists.
%define nologin 1
# Reserve options to override askpass settings with:
# rpm -ba|--rebuild --define 'skip_xxx 1'
%{?with_x11_askpass:%define no_x11_askpass 0}
%{?with_gnome_askpass:%define no_gnome_askpass 0}
# Add option to build without GTK2 for older platforms with only GTK+.
# RedHat <= 7.2 and Red Hat Advanced Server 2.1 are examples.
# rpm -ba|--rebuild --define 'no_gtk2 1'
%{?no_gtk2:%define gtk2 0}
# Is this a build for RHL 6.x or earlier?
%{?build_6x:%define build6x 1}
# If this is RHL 6.x, the default configuration has sysconfdir in /usr/etc.
%if %{build6x}
%define _sysconfdir /etc
%endif
# Options for static OpenSSL link:
# rpm -ba|--rebuild --define "static_openssl 1"
%{?static_openssl:%define static_libcrypto 1}
# Options for Smartcard support: (needs libsectok and openssl-engine)
# rpm -ba|--rebuild --define "smartcard 1"
%{?smartcard:%define scard 1}
# Is this a build for the rescue CD (without PAM, with MD5)? (1=yes 0=no)
%define rescue 0
%{?build_rescue:%define rescue 1}
# Turn off some stuff for resuce builds
%if %{rescue}
%define kerberos5 0
%endif
Summary: The OpenSSH implementation of SSH protocol versions 1 and 2.
Name: openssh
Version: 3.9p1
Epoch: 1
%define rel 8.RHEL4.17.endian2
%if %{rescue}
Release: %{rel}rescue
%else
Release: %{rel}
%endif
URL: http://www.openssh.com/portable.html
#Source0: ftp://ftp.openbsd.org/pub/OpenBSD/OpenSSH/portable/openssh-%{version}.tar.gz
#Source1: ftp://ftp.openbsd.org/pub/OpenBSD/OpenSSH/portable/openssh-%{version}.tar.gz.sig
Source0: openssh-%{version}-noacss.tar.gz
Source1: openssh-nukeacss.sh
Source2: http://www.pobox.com/~jmknoble/software/x11-ssh-askpass/x11-ssh-askpass-%{aversion}.tar.gz
Patch0: openssh-3.9p1-redhat.patch
Patch1: openssh-3.6.1p2-groups.patch
Patch2: openssh-3.8.1p1-skip-initial.patch
Patch3: openssh-3.8.1p1-krb5-config.patch
Patch4: openssh-3.9p1-vendor.patch
Patch5: openssh-3.9p1-no-log-signal.patch
Patch6: openssh-3.9p1-exit-deadlock.patch
Patch7: openssh-3.9p1-gid.patch
Patch8: openssh-3.9p1-loginuid.patch
Patch12: openssh-selinux.patch
Patch16: openssh-3.9p1-audit.patch
Patch20: openssh-3.8p1-gssapimitm.patch
Patch21: openssh-3.9p1-skip-used.patch
Patch22: openssh-3.9p1-can-2005-2798.patch
Patch23: openssh-3.9p1-scp-no-system.patch
Patch24: openssh-3.9p1-safe-stop.patch
Patch25: openssh-3.9p1-scp-no-overwrite.patch
Patch26: openssh-3.9p1-pam-message.patch
Patch27: openssh-3.9p1-log-in-chroot.patch
Patch28: openssh-3.9p1-cve-2006-4924.patch
Patch29: openssh-3.9p1-cve-2006-5051.patch
Patch30: openssh-3.9p1-cve-2006-5794.patch
Patch31: openssh-3.9p1-buffer-len.patch
Patch32: openssh-3.9p1-no-dup-logs.patch
Patch33: openssh-4.3p2-no-v6only.patch
Patch34: openssh-3.9p1-hash-known.patch
Patch35: openssh-3.9p1-pam-session.patch
Patch36: openssh-3.9p1-gssapi-canohost.patch
Patch37: openssh-3.9p1-cve-2006-5052.patch
Patch38: openssh-3.9p1-sftp-memleak.patch
Patch39: openssh-3.9p1-restart-reliable.patch
Patch40: openssh-3.9p1-close-sock.patch
Patch41: openssh-4.3p2-cve-2007-3102.patch
Patch42: openssh-3.9p1-sftp-drain-acks.patch
Patch43: openssh-3.9p1-buffer-nonfatal.patch
Patch44: openssh-3.9p1-scp-manpage.patch
Patch45: openssh-4.7-cve-2007-4752.patch
Patch100: openssh-3.9p1-rc-condstop.patch
License: BSD
Group: Applications/Internet
BuildRoot: %{_tmppath}/%{name}-%{version}-buildroot
BuildRequires: openssl-devel
Obsoletes: ssh
%if %{nologin}
Requires: /sbin/nologin
%endif
Requires: initscripts
%if ! %{no_gnome_askpass}
%if %{gtk2}
BuildPreReq: gtk2-devel, xauth
%else
BuildPreReq: gnome-libs-devel
%endif
%endif
%if %{scard}
BuildPreReq: sharutils
%endif
BuildPreReq: autoconf, openssl-devel, perl, zlib-devel
BuildPreReq: util-linux, groff, man
BuildPreReq: glibc-devel, pam-devel
%if ! %{no_x11_askpass}
BuildPreReq: XFree86-devel
%endif
%if %{kerberos5}
BuildPreReq: krb5-devel
%endif
%if %{WITH_SELINUX}
Requires: libselinux >= 1.17.9
BuildRequires: libselinux-devel >= 1.17.9
%endif
%if %{WITH_AUDIT}
BuildRequires: audit-libs-devel >= 1.0.12
%endif
%package extras
Summary: The OpenSSH implementation of SSH protocol version 2.
Requires: openssh = %{epoch}:%{version}-%{release}
Group: Applications/Internet
%package clients
Summary: OpenSSH clients.
Requires: openssh = %{epoch}:%{version}-%{release}
Group: Applications/Internet
Obsoletes: ssh-clients
%package clients-extras
Summary: OpenSSH clients.
Requires: openssh-clients = %{epoch}:%{version}-%{release}
Group: Applications/Internet
%package server
Summary: The OpenSSH server daemon.
Group: System Environment/Daemons
Obsoletes: ssh-server
PreReq: openssh = %{epoch}:%{version}-%{release}, /usr/sbin/useradd, /usr/bin/id
%if ! %{build6x}
Requires: /etc/pam.d/system-auth, /%{_lib}/security/pam_loginuid.so
%endif
%if %{WITH_AUDIT}
Requires: audit-libs >= 1.0.12
%endif
%package server-extras
Summary: The OpenSSH server daemon.
Group: System Environment/Daemons
PreReq: openssh-server = %{epoch}:%{version}-%{release}
%package askpass
Summary: A passphrase dialog for OpenSSH and X.
Group: Applications/Internet
Requires: openssh = %{epoch}:%{version}-%{release}
Obsoletes: ssh-extras
%package askpass-gnome
Summary: A passphrase dialog for OpenSSH, X, and GNOME.
Group: Applications/Internet
Requires: openssh = %{epoch}:%{version}-%{release}
Obsoletes: ssh-extras
%description
SSH (Secure SHell) is a program for logging into and executing
commands on a remote machine. SSH is intended to replace rlogin and
rsh, and to provide secure encrypted communications between two
untrusted hosts over an insecure network. X11 connections and
arbitrary TCP/IP ports can also be forwarded over the secure channel.
OpenSSH is OpenBSD's version of the last free version of SSH, bringing
it up to date in terms of security and features, as well as removing
all patented algorithms to separate libraries.
This package includes the core files necessary for both the OpenSSH
client and server. To make this package useful, you should also
install openssh-clients, openssh-server, or both.
%description extras
SSH (Secure SHell) is a program for logging into and executing
commands on a remote machine. SSH is intended to replace rlogin and
rsh, and to provide secure encrypted communications between two
untrusted hosts over an insecure network. X11 connections and
arbitrary TCP/IP ports can also be forwarded over the secure channel.
OpenSSH is OpenBSD's version of the last free version of SSH, bringing
it up to date in terms of security and features, as well as removing
all patented algorithms to separate libraries.
This package includes the core files necessary for both the OpenSSH
client and server. To make this package useful, you should also
install openssh-clients, openssh-server, or both.
This package contains ripped down files
%description clients
OpenSSH is a free version of SSH (Secure SHell), a program for logging
into and executing commands on a remote machine. This package includes
the clients necessary to make encrypted connections to SSH servers.
You'll also need to install the openssh package on OpenSSH clients.
%description server
OpenSSH is a free version of SSH (Secure SHell), a program for logging
into and executing commands on a remote machine. This package contains
the secure shell daemon (sshd). The sshd daemon allows SSH clients to
securely connect to your SSH server. You also need to have the openssh
package installed.
%description clients-extras
OpenSSH is a free version of SSH (Secure SHell), a program for logging
into and executing commands on a remote machine. This package includes
the clients necessary to make encrypted connections to SSH servers.
You'll also need to install the openssh package on OpenSSH clients.
This package contains ripped down files
%description server-extras
OpenSSH is a free version of SSH (Secure SHell), a program for logging
into and executing commands on a remote machine. This package contains
the secure shell daemon (sshd). The sshd daemon allows SSH clients to
securely connect to your SSH server. You also need to have the openssh
package installed.
This package contains ripped down files
%description askpass
OpenSSH is a free version of SSH (Secure SHell), a program for logging
into and executing commands on a remote machine. This package contains
an X11 passphrase dialog for OpenSSH.
%description askpass-gnome
OpenSSH is a free version of SSH (Secure SHell), a program for logging
into and executing commands on a remote machine. This package contains
an X11 passphrase dialog for OpenSSH and the GNOME GUI desktop
environment.
%prep
%if ! %{no_x11_askpass}
%setup -q -a 2
%else
%setup -q
%endif
%patch0 -p1 -b .redhat
%patch1 -p1 -b .groups
%patch2 -p1 -b .skip-initial
%patch3 -p1 -b .krb5-config
%patch4 -p1 -b .vendor
%patch5 -p1 -b .signal
%patch6 -p1 -b .exit-deadlock
%patch7 -p1 -b .gid
%patch8 -p1 -b .loginuid
%if %{WITH_SELINUX}
#SELinux
%patch12 -p1 -b .selinux
%endif
%if %{WITH_AUDIT}
%patch16 -p1 -b .audit
%endif
#%patch20 -p0 -b .gssapimitm
%patch21 -p1 -b .skip-used
%patch22 -p3 -b .destroy-creds
%patch23 -p1 -b .no-system
%patch24 -p1 -b .safe-stop
%patch25 -p1 -b .no-overwrite
%patch26 -p0 -b .pam-message
%patch27 -p1 -b .log-chroot
%patch28 -p1 -b .deattack-dos
%patch29 -p1 -b .sig-no-cleanup
%patch100 -p1 -b .condstop
%patch30 -p1 -b .verify
%patch31 -p1 -b .buffer-len
%patch32 -p1 -b .no-dups
%patch33 -p1 -b .no-v6only
%patch34 -p1 -b .hash-known
%patch35 -p1 -b .pam-session
%patch36 -p1 -b .canohost
%patch37 -p1 -b .krb5-leak
%patch38 -p1 -b .sftp-memleak
%patch39 -p1 -b .restart-reliable
%patch40 -p1 -b .close-sock
%patch41 -p1 -b .inject-fix
%patch42 -p1 -b .drain-acks
%patch43 -p1 -b .nonfatal
%patch44 -p0 -b .scp-manpage
%patch45 -p0 -b .scp-manpage
autoreconf
%build
CFLAGS="$RPM_OPT_FLAGS"; export CFLAGS
%if %{rescue}
CFLAGS="$CFLAGS -Os"
%endif
%if %{pie}
%ifarch s390 s390x
CFLAGS="$CFLAGS -fPIE"
%else
CFLAGS="$CFLAGS -fpie"
%endif
export CFLAGS
LDFLAGS="$LDFLAGS -pie"; export LDFLAGS
%endif
%if %{build6x}
export CFLAGS="$CFLAGS -D__func__=__FUNCTION__"
%endif
%if %{kerberos5}
krb5_prefix=`krb5-config --prefix`
if test "$krb5_prefix" != "%{_prefix}" ; then
CPPFLAGS="$CPPFLAGS -I${krb5_prefix}/include -I${krb5_prefix}/include/gssapi"; export CPPFLAGS
CFLAGS="$CFLAGS -I${krb5_prefix}/include -I${krb5_prefix}/include/gssapi"
LDFLAGS="$LDFLAGS -L${krb5_prefix}/%{_lib}"; export LDFLAGS
else
krb5_prefix=
CPPFLAGS="-I%{_includedir}/gssapi"; export CPPFLAGS
CFLAGS="$CFLAGS -I%{_includedir}/gssapi"
fi
%endif
%configure \
--sysconfdir=%{_sysconfdir}/ssh \
--libexecdir=%{_libexecdir}/openssh \
--datadir=%{_datadir}/openssh \
--with-default-path=/bin:/usr/bin \
--with-superuser-path=/sbin:/bin:/usr/sbin:/usr/bin \
--with-privsep-path=%{_var}/empty/sshd \
--enable-vendor-patchlevel="endian-%{version}-%{release}" \
%if %{scard}
--with-smartcard \
%endif
%if %{build6x}
--with-ipv4-default \
%endif
%if %{rescue}
--without-pam \
%else
--with-pam \
%endif
%if %{WITH_SELINUX}
--with-selinux \
%else
--without-selinux \
%endif
%if %{WITH_AUDIT}
--with-linux-audit \
%endif
%if %{kerberos5}
--with-kerberos5${krb5_prefix:+=${krb5_prefix}}
%else
--without-kerberos5
%endif
%if %{static_libcrypto}
perl -pi -e "s|-lcrypto|%{_libdir}/libcrypto.a|g" Makefile
%endif
make
%if ! %{no_x11_askpass}
pushd x11-ssh-askpass-%{aversion}
# This configure can't handle platform strings.
./configure --prefix=%{_prefix} --libdir=%{_libdir} --libexecdir=%{_libexecdir}/openssh
xmkmf -a
make
popd
%endif
# Define a variable to toggle gnome1/gtk2 building. This is necessary
# because RPM doesn't handle nested %if statements.
%if %{gtk2}
gtk2=yes
%else
gtk2=no
%endif
%if ! %{no_gnome_askpass}
pushd contrib
if [ $gtk2 = yes ] ; then
make gnome-ssh-askpass2
mv gnome-ssh-askpass2 gnome-ssh-askpass
else
make gnome-ssh-askpass1
mv gnome-ssh-askpass1 gnome-ssh-askpass
fi
popd
%endif
%install
rm -rf $RPM_BUILD_ROOT
mkdir -p -m755 $RPM_BUILD_ROOT%{_sysconfdir}/ssh
mkdir -p -m755 $RPM_BUILD_ROOT%{_libexecdir}/openssh
mkdir -p -m755 $RPM_BUILD_ROOT%{_var}/empty/sshd
mkdir -p -m755 $RPM_BUILD_ROOT%{_var}/run/sshd
make install DESTDIR=$RPM_BUILD_ROOT
install -d $RPM_BUILD_ROOT/etc/pam.d/
install -d $RPM_BUILD_ROOT/etc/rc.d/init.d
install -d $RPM_BUILD_ROOT%{_libexecdir}/openssh
%if %{build6x}
install -m644 contrib/redhat/sshd.pam.old $RPM_BUILD_ROOT/etc/pam.d/sshd
install -m755 contrib/redhat/sshd.init.old $RPM_BUILD_ROOT/etc/rc.d/init.d/sshd
%else
install -m644 contrib/redhat/sshd.pam $RPM_BUILD_ROOT/etc/pam.d/sshd
install -m755 contrib/redhat/sshd.init $RPM_BUILD_ROOT/etc/rc.d/init.d/sshd
%endif
%if ! %{no_x11_askpass}
install -s x11-ssh-askpass-%{aversion}/x11-ssh-askpass $RPM_BUILD_ROOT%{_libexecdir}/openssh/x11-ssh-askpass
ln -s x11-ssh-askpass $RPM_BUILD_ROOT%{_libexecdir}/openssh/ssh-askpass
%endif
%if ! %{no_gnome_askpass}
install -s contrib/gnome-ssh-askpass $RPM_BUILD_ROOT%{_libexecdir}/openssh/gnome-ssh-askpass
%endif
%if ! %{scard}
rm -f $RPM_BUILD_ROOT%{_datadir}/openssh/Ssh.bin
%endif
%if ! %{no_gnome_askpass}
install -m 755 -d $RPM_BUILD_ROOT%{_sysconfdir}/profile.d/
install -m 755 contrib/redhat/gnome-ssh-askpass.csh $RPM_BUILD_ROOT%{_sysconfdir}/profile.d/
install -m 755 contrib/redhat/gnome-ssh-askpass.sh $RPM_BUILD_ROOT%{_sysconfdir}/profile.d/
%endif
%if %{no_gnome_askpass}
rm -f $RPM_BUILD_ROOT/etc/profile.d/gnome-ssh-askpass.*
%endif
perl -pi -e "s|$RPM_BUILD_ROOT||g" $RPM_BUILD_ROOT%{_mandir}/man*/*
%clean
rm -rf $RPM_BUILD_ROOT
%triggerun server -- ssh-server
if [ "$1" != 0 -a -r /var/run/sshd.pid ] ; then
touch /var/run/sshd.restart
fi
%triggerun server -- openssh-server < 2.5.0p1
# Count the number of HostKey and HostDsaKey statements we have.
gawk 'BEGIN {IGNORECASE=1}
/^hostkey/ || /^hostdsakey/ {sawhostkey = sawhostkey + 1}
END {exit sawhostkey}' /etc/ssh/sshd_config
# And if we only found one, we know the client was relying on the old default
# behavior, which loaded the the SSH2 DSA host key when HostDsaKey wasn't
# specified. Now that HostKey is used for both SSH1 and SSH2 keys, specifying
# one nullifies the default, which would have loaded both.
if [ $? -eq 1 ] ; then
echo HostKey /etc/ssh/ssh_host_rsa_key >> /etc/ssh/sshd_config
echo HostKey /etc/ssh/ssh_host_dsa_key >> /etc/ssh/sshd_config
fi
%triggerpostun server -- ssh-server
if [ "$1" != 0 ] ; then
if test -f /var/run/sshd.restart ; then
rm -f /var/run/sshd.restart
/etc/init.d/sshd start > /dev/null 2>&1 || :
fi
fi
%pre server
%if %{nologin}
/usr/sbin/useradd -c "Privilege-separated SSH" -u 74 \
-s /sbin/nologin -r -d /var/empty/sshd sshd 2> /dev/null || :
%else
/usr/sbin/useradd -c "Privilege-separated SSH" -u 74 \
-s /dev/null -r -d /var/empty/sshd sshd 2> /dev/null || :
%endif
%postun server
/etc/init.d/sshd condrestart > /dev/null 2>&1 || :
%preun server
if [ "$1" = 0 ]
then
/etc/init.d/sshd stop > /dev/null 2>&1 || :
fi
%files
%defattr(-,root,root)
%attr(0755,root,root) %dir %{_sysconfdir}/ssh
%attr(0600,root,root) %config(noreplace) %{_sysconfdir}/ssh/moduli
%if ! %{rescue}
%attr(0755,root,root) %{_bindir}/ssh-keygen
%attr(0755,root,root) %dir %{_libexecdir}/openssh
%attr(4711,root,root) %{_libexecdir}/openssh/ssh-keysign
%endif
%if %{scard}
%attr(0755,root,root) %dir %{_datadir}/openssh
%attr(0644,root,root) %{_datadir}/openssh/Ssh.bin
%endif
%files extras
%defattr(-,root,root)
%doc CREDITS ChangeLog INSTALL LICENCE OVERVIEW README* RFC* TODO WARNING*
%if ! %{rescue}
%attr(0644,root,root) %{_mandir}/man1/ssh-keygen.1*
%attr(0644,root,root) %{_mandir}/man8/ssh-keysign.8*
%endif
%files clients
%defattr(-,root,root)
%attr(0755,root,root) %{_bindir}/ssh
%attr(0755,root,root) %{_bindir}/scp
%attr(0644,root,root) %config(noreplace) %{_sysconfdir}/ssh/ssh_config
%files clients-extras
%defattr(-,root,root)
%attr(-,root,root) %{_bindir}/slogin
%attr(0644,root,root) %{_mandir}/man1/ssh.1*
%attr(0644,root,root) %{_mandir}/man1/scp.1*
%attr(0644,root,root) %{_mandir}/man1/slogin.1*
%attr(0644,root,root) %{_mandir}/man5/ssh_config.5*
%if ! %{rescue}
%attr(2755,root,nobody) %{_bindir}/ssh-agent
%attr(0755,root,root) %{_bindir}/ssh-add
%attr(0755,root,root) %{_bindir}/ssh-keyscan
%attr(0755,root,root) %{_bindir}/sftp
%attr(0644,root,root) %{_mandir}/man1/ssh-agent.1*
%attr(0644,root,root) %{_mandir}/man1/ssh-add.1*
%attr(0644,root,root)
A:
You may simply have an SRPM that won't build. Try removing the --fuzz=0 from the patch commands. Or, better, play it safe with openssh, and look for a different source package!
| {
"pile_set_name": "StackExchange"
} |
Q:
Remove a css class for textbox on focus event using javascript without using jquery
I am trying to remove the "error-border" class from the input fields in my form. I need to do it with JavaScript without using jQuery, in the 'onfocus' event of each input field. Please help me to solve this.
I have tried following code
document.getElementsByClassName("error-class").focus({
document.getElementsByClassName("error-class").remove();
});
Please help me to solve this
A:
Do it like this.
// getElementsByClassName returns a collection, so bind to each field in it:
var errorFields = document.getElementsByClassName("error-class");
Array.prototype.forEach.call(errorFields, function (field) {
    field.onfocus = function () {
        this.classList.remove('error-class');
    };
});
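Since `getElementsByClassName` returns a live collection rather than a single element, another option is one delegated listener, which also covers fields added to the page later. This is a sketch; the factoring into a function (and the `bindClearOnFocus` name) is only for illustration:

```javascript
// Attaches one 'focusin' listener (focusin bubbles, unlike 'focus') that
// strips the given class from whichever field just received focus.
function bindClearOnFocus(root, className) {
    root.addEventListener('focusin', function (event) {
        var el = event.target;
        if (el.classList && el.classList.contains(className)) {
            el.classList.remove(className);
        }
    });
}

// In the browser:
if (typeof document !== 'undefined') {
    bindClearOnFocus(document, 'error-class');
}
```

Because the listener sits on the document, no re-binding is needed when new inputs with the class appear.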
| {
"pile_set_name": "StackExchange"
} |
Q:
Session Cookie does not set in IE11 only
Curious problem.
Newly developed website, uses 3rd party login system which uses sessions (surprise!). Website works perfectly on all instances, on all browsers except Internet Explorer 11 (and possibly previous versions, unchecked).
Qualifiers:
I have read various related topics on SO, nothing fits the bill.
PHP Header does not to do a redirect on every affected page
no _ in domain name or URL.
No iframes.
Session and domain are secured.
Code Details:
a) Each page has a controller file with header information included on it:
header("Cache-Control: no-cache, must-revalidate"); //HTTP 1.1
header("Expires: Thu, 19 Nov 2011 08:52:00 GMT"); // Date in the past
header('Content-Type: text/html; charset=utf-8');
header("X-Clacks-Overhead: GNU Terry Pratchett");
header_remove("X-Powered-By");
header("X-XSS-Protection: 1; mode=block");
header("X-Frame-Options: SAMEORIGIN");
header("X-Content-Type-Options: nosniff");
header("Content-Language: en");
header("Content-Security-Policy: upgrade-insecure-requests;");
header("Referrer-Policy: origin-when-cross-origin"); //referrer for Chrome
header("Referrer-Policy: strict-origin-when-cross-origin");
if (isset($_SERVER['HTTP_USER_AGENT']) &&
(strpos($_SERVER['HTTP_USER_AGENT'], 'MSIE') !== false)){
header('X-UA-Compatible: IE=edge,chrome=1');
}
b) As part of this process; a cookie check is carried out to know if the cookies are enabled on the client browser. This is done across both login/access controlled and public site areas.
if($_COOKIE['cookieEnabled'] !== "yes") {
\setcookie('cookieEnabled', "yes", time() + 42000, "/", $_SERVER['HTTP_HOST'], true, true);
}
All it does is set a cookie saying "yes" (cookies are enabled) if that cookie is not already present. Simple.
c) Below this; there is controller code to load the session variables and do other stuff for the 3rd party admin side of things.
// Create / Include the Session Object - Session.php
$session = new Session($db);
d) I have setup a testing statment within the Session.php __construct to do this:
session_start();
if($_COOKIE['cookieEnabled'] !== "yes" && empty($_SESSION)) {
error_log("INFO: An access attempt without a session or cookie was attempted...");
if($_COOKIE['cookieEnabled'] !== "yes"){
error_log("Cookie does not appear to be enabled");
}
die("unimportant debug error");
}
Note that the session array will never be empty, as it's prepopulated on previous pages.
e) The [local] PHP.ini is thus:
session.cookie_secure=1
default.charset=utf-8
error_log=/home/domainaccount/error/PHP_error.log
session.save_path=/home/domainaccount/sessionz
session.cookie_domain=domain.org.uk
NOTE: The web path is: /home/domainaccount/public_html/
The PHP.ini values have been checked with phpinfo() and are set correctly.
Curious problem
I load the website in various browsers and it logs in just fine, all works, session data is carried.
However on IE11 it does not. It simply comes back with a blank screen, no errors, no feedback (aka session data passed back to login page), and no code-based error logs.
Error log shows:
INFO: An access attempt without a session or cookie was attempted...
This appears a whole bunch of times, but there is no indication that the cookie is denied, only that the session is missing.
Unsurprisingly, the login page features a header location redirect for both successful and failed login attempts.
About IE11
IE version number: 11.248.16299.0.
IE cookie settings: first party cookies accepted, third party cookies accepted, always allow session cookies.
Questions
1) Why does this occur ONLY for IE?
2) How can I solve this (change my headers, cookie setup, etc.?)
A:
Some versions of IE silently drop cookies if the server time is in the past compared to the client time. Properly setting server/client time may help.
That's horrific -- servers will be far more accurate timekeepers than client browsers. Can you reference this at all?
I came across it once in a description from someone else on GitHub and it fixed my problem.
As a side note, since you explicitly called out no underscores in the domain, are you aware that leading numerals are also invalid URLs according to the RFC and IE also has problems with them?
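To check for the clock skew described above, one quick diagnostic is to compare the server's `Date` response header with the client clock. This is a sketch for diagnosis only (the `clockSkewSeconds` helper is illustrative, not part of the fix):

```javascript
// Returns the skew in seconds between a server Date header and a client
// timestamp in ms; positive means the client clock is ahead of the server.
function clockSkewSeconds(serverDateHeader, clientNowMs) {
    var serverMs = Date.parse(serverDateHeader);
    if (isNaN(serverMs)) {
        return null;  // header missing or malformed
    }
    return (clientNowMs - serverMs) / 1000;
}

// In the browser, fed from a HEAD request to the affected site:
// fetch('/', { method: 'HEAD' }).then(function (r) {
//     console.log(clockSkewSeconds(r.headers.get('Date'), Date.now()));
// });
```

A large positive value would mean the server time is in the past relative to the client, the condition the answer says some IE versions react badly to.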
| {
"pile_set_name": "StackExchange"
} |
Q:
add/remove fields - file upload javascript
Folks,
I'm trying to make a file upload form with add buttons and remove more fields for file upload.
What happens is that when I fire the button, both add and remove, they do not find the divs that were cloned, so the add / remove function does not work in the cloned fields.
I put the code here as an example
const $btnAdd = document.querySelectorAll('[data-js="btn-add"]');
const listBtnAdd = Array.from($btnAdd);
const $btnRemove = document.querySelectorAll('[data-js="btn-remove"]');
const listBtnRemove = Array.from($btnRemove);
const $rowAttach = document.querySelectorAll('[data-js="rowAttach"]');
const listRow = Array.from($rowAttach);
let rows = listRow.map((row, i, arr) => {
return row;
});
listBtnAdd.map((btnAdd, i) => {
btnAdd.addEventListener('click', function(e){
e.preventDefault();
console.log("button", btnAdd, i);
const formItems = document.querySelector('[data-js="formAttach"]');
const copy = formItems.firstElementChild.cloneNode(true);
formItems.appendChild(copy);
}, false);
});
listBtnRemove.map( (btnRemove, i)=> {
btnRemove.addEventListener('click', function(e){
e.preventDefault(e);
console.log('remove btn', btnRemove, i);
}, false);
});
.row-attach { display: inline-block;margin-bottom: 20px }
.row-attach fieldset {float: left; margin-right: 10px;border: 0;}
.row-attach label {display: block;font-weight: bold;margin-bottom: 5px; }
<h3>Form</h3>
<form enctype="multipart/form-data" action="/upload/image" method="post" data-js="formAttach">
<div class="row-attach" data-js="rowAttach">
<fieldset>
<Label>Arquivo Anexado</Label>
<input type="file" id="file-name" name="file-upload" value="escolha" multiple>
</fieldset>
<fieldset>
<Label>Descrição</Label>
<input type="text" name="descricao" value="">
<button data-js="btn-add">+</button>
<button data-js="btn-remove">X</button>
</fieldset>
</div>
</form>
<div class="btn-carregar">
<input type="submit" value="Carregar">
</div>
How do I trigger the new fields created?
A:
I managed to solve it like this, with a single delegated click listener on the form, so clicks coming from cloned rows are handled too:
const form = document.querySelector('[data-js="formAttach"]');
const rows = document.querySelectorAll('[data-js="rowAttach"]');
form.addEventListener('click', function (evt) {
const elem = evt.target;
const dataJS = elem.dataset.js;
if (dataJS != null) {
evt.preventDefault();
}
if ('btn-add' === dataJS) {
const row = createRow();
row.querySelector('[name="file-upload"]').value = '';
row.querySelector('[name="descricao"]').value = '';
} else if ('btn-remove' === dataJS) {
evt.target.parentNode.parentNode.remove();
}
});
function createRow() {
const row = rows[0].cloneNode(true);
rows[0].parentNode.appendChild(row);
return row;
}
Q:
Validity of a Probability Density Function
Possible Duplicate:
Probability Density Function Validity
If $X$ is a continuous random variable with range $[x_l,\infty)$ and p.d.f.
$f_X(x)\propto x^{-a}$, for $x\in[x_l,\infty)$
for some values $x_l > 0$ and $a \in \mathbb{R}$.
After integrating $f(x)$, how can I find the range of values for $a$ that would make $f(x)$ a valid pdf?
A:
Let $a$ be a real number, and let $f_X(x)=kx^{-a}$, where $k>0$. Since we are dealing with positive quantities, the only thing that we require in order for $f_X(x)$ to be a density function is
$$\int_{x_l}^\infty kx^{-a}\,dx=1.\qquad\qquad(\ast)$$
The above integration, like many others that arise in probability, is over an infinite interval, so may fail to exist.
We will show that the above integral converges if $a>1$ and diverges otherwise.
Let
$$I(M)=\int_{x_l}^M kx^{-a}\,dx.$$
By definition, if $\lim_{M\to\infty}I(M)$ exists, the integral in $(\ast)$ converges and has value equal to that limit.
Suppose first that $a>1$. Integrating, we find that
$$I(M)=\int_{x_l}^M kx^{-a}\,dx=\left.\frac{-k}{(a-1)x^{a-1}}\right|_{x_l}^M$$
Thus
$$I(M)=\frac{k}{(a-1){x_l}^{a-1}}-\frac{k}{(a-1)M^{a-1}}. \qquad\qquad(\ast\ast)$$
Since $a-1>0$, we can see that $\frac{k}{(a-1)M^{a-1}}\to 0$ as $M\to\infty$. It follows that if $a>1$, then
$$\int_{x_l}^\infty kx^{-a}\,dx=\frac{k}{(a-1){x_l}^{a-1}}.$$
For any $a>1$, and any positive $x_l$, we can find a unique constant of proportionality $k$ such that
$$\frac{k}{(a-1){x_l}^{a-1}}=1.$$
Just take $k=(a-1)x_l^{a-1}$. So everything is fine if $a>1$.
We complete the analysis by showing that if $a \le 1$, then our integral does not converge. There are two somewhat different cases, $a=1$ and $a<1$.
Suppose first that $a=1$. Then
$$I(M)=\int_{x_l}^M kx^{-1}\,dx=\left.k\ln x\right|_{x_l}^M=k\ln M-k\ln x_l.$$
As $M\to\infty$, $\ln M\to\infty$, so $I(M)$ does not have a finite limit, and therefore $\int_{x_l}^\infty kx^{-1}\,dx$ does not exist.
Finally, we deal with $a<1$. In this case,
$$I(M)=\frac{kM^{1-a}}{1-a}-\frac{kx_l^{1-a}}{1-a}.$$
As $M\to\infty$, $I(M)\to\infty$, so the integral from $x_l$ to $\infty$ diverges (does not have a finite value).
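A quick numeric check of the three cases, using the closed forms for $I(M)$ derived above with $k = x_l = 1$ (a sketch; the specific values of $M$ are arbitrary):

```python
import math

def I(M, a, k=1.0, x_l=1.0):
    """Closed form of the truncated integral of k*x^(-a) from x_l to M."""
    if a == 1:
        return k * (math.log(M) - math.log(x_l))
    return (k / (1 - a)) * (M ** (1 - a) - x_l ** (1 - a))

# a > 1: I(M) tends to k / ((a-1) * x_l^(a-1)) = 1 here, so a normalizing k exists.
print(I(1e9, a=2.0))                  # ~1.0
# a = 1 and a < 1: I(M) keeps growing with M, so no finite normalization exists.
print(I(1e9, a=1.0) > I(1e6, a=1.0))  # True
print(I(1e9, a=0.5) > I(1e6, a=0.5))  # True
```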
Q:
Would planet explode without gravity?
A planet (as well as a dwarf planet) must, according to the IAU definition, have sufficient mass to assume hydrostatic equilibrium (a nearly round shape). Does it mean they would break apart or explode if gravity vanishes, unlike small bodies like comets or minor planets, whose integrity doesn't depend on gravity?
A:
Even rocky planets would explode. I think there are two ways to see this.
From the perspective of forces, the earth is in equilibrium between the compressive force of gravity and the elastic resistance to compression of the materials that make it up. By Newton's third law, the mantle is pressing upwards on the crust with a force equal to the weight of the crust. If gravity disappears, you will still have that upward force with no downward force to balance it, so you will get upward acceleration.
You may think that the tensile strength of the crust will hold the planet together. But the upward force is the same as the weight of the crust. We know that very large stone structures cannot support their weight under tension.
You may also think that iron and rock aren't very compressible, so the mantle wouldn't expand very far. But the pressures are very great. The core has a density at least 25% greater than iron does under normal pressures. Plus, the temperature is around 6000K, equal to the surface of the sun and much higher than the boiling point of iron at atmospheric pressure. So the expansion won't just be a matter of some cracks forming.
From an energetic point of view, the potential energy of all the parts of the earth spread out across the protoplanetary disk was converted to heat and elastic potential energy when the earth formed. At least one study suggests that about half that heat remains, and in any case heat from radioactive decay has been added over time. If all the energy remained, it would be enough to disperse the earth with the same velocity that parts came together on average when it formed. Given that the energy is at least on the same scale, I think it's reasonable to expect an explosive breakup.
Q:
Render a component on a route only if it is redirected from another route after form submission
I visited a URL "https://domainname.com/passwords/new" which asks me to enter my email to send a reset-password link to my Gmail account. When I enter my email and submit, the URL changes to "https://domainname.com/passwords" and it renders
"An email is sent to your account successfully". But if I manually navigate to the same URL ("https://domainname.com/passwords") it shows me a 404. How do I do this with ReactJS?
A:
Set a variable (think local or session storage) after the user successfully resets their password. Then, when rendering the /passwords route, read that variable. If it exists, render the component, if not, redirect to a 404. Works pretty much the same way as the authenticated routes example.
Q:
How do I do a MSBuild Condition testing if an ItemGroup contains an item?
This should be simple, but I can't find how to do this (or maybe it's not possible).
In MSBuild I have an ItemGroup that is a list of files.
I want to execute a task only if a particular file is in that ItemGroup
Something like:
<Copy Condition="@(Files) <contains> C:\MyFile.txt" .... />
Any way to do this? Preferably without writing a custom task.
Edit: The list of files is only to do with the condition. Otherwise it has no relation to the task.
A:
Try
<Copy Condition="'%(Files.Identity)' == 'C:\MyFile.txt'" .. />
Q:
What is the best method to compute project volatility in Real Option Valuation?
There are few methods like Copeland-Antikarov, Herath-Park, Cobb-Charnes etc. to compute project volatility, however these methods compute upward biased volatility.
What is the best method I could use to compute project volatility for real option valuation?
A:
There are two main approaches:
Comparables (depends on having an existing sample of similar project outcomes, which can be difficult to obtain [similar to historical volatility for stocks])
Simulation (when comparables, or rather enough of them, aren't available)
Here's a link to a paper that provides a technique for estimating volatility for real options:
http://new.vmi.edu/media/ecbu/cobb/EE4902.pdf
Generally comparables is the preferable approach in real options, provided that you have a large enough historical sample. In most cases, however simulation is used as 'true' comparables are notoriously difficult to find en masse.
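To make the simulation route concrete, here is a toy Monte Carlo sketch in Python of the general idea behind simulation-based project-volatility estimation: shock the cash flows, revalue the project, and take the standard deviation of z = ln(PV1/PV0). All parameters here are invented for illustration, and this is not the exact procedure of any of the papers mentioned above:

```python
import math
import random

def project_volatility(cashflows, rate, sigma_cf, n_sims=20000, seed=42):
    """Std. dev. of z = ln(PV1/PV0) across simulated cash-flow paths."""
    random.seed(seed)
    pv0 = sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows, start=1))
    zs = []
    for _ in range(n_sims):
        # Independent mean-one lognormal shock on each period's cash flow.
        pv1 = sum(
            cf * math.exp(random.gauss(-0.5 * sigma_cf ** 2, sigma_cf))
            / (1 + rate) ** (t - 1)
            for t, cf in enumerate(cashflows, start=1)
        )
        zs.append(math.log(pv1 / pv0))
    mean = sum(zs) / len(zs)
    return math.sqrt(sum((z - mean) ** 2 for z in zs) / (len(zs) - 1))

vol = project_volatility([100, 100, 100, 100], rate=0.10, sigma_cf=0.30)
print(round(vol, 3))  # roughly 0.15: pooling independent periods dampens the 0.30 input
```

Note how the pooled project volatility comes out well below the per-period cash-flow volatility; modeling choices like this (one shock for all periods vs. independent shocks) are exactly what drives the upward/downward biases discussed in the literature.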
Q:
How is it possible that lines in the end (which were not executed yet) affect the beginning of the code? And why does it throw an incorrect error?
So I had an x is not defined error in my code which confused me a little bit because x was already defined couple lines before. I had to spend some time tweaking my code, deleting and adding lines until I managed to understand why it was happening. After I removed every unnecessary information, now the code looks like this:
let foo = 2;
console.log(foo);
if (foo === 2){
console.log(foo);
let foo = 1;
}
It throws foo is not defined at line 5. An error pops out when I'm trying to console.log(foo) ! If I remove line 6 let foo = 1; the code works fine. I mean an error happens before I declare foo for a second time. So the first question is:
How is it possible that line 6 (which hasn't been executed yet) makes line 5 end up with an error?
The second thing I can't understand is why does it say foo is not defined instead of foo has been already declared.
If I replace the second let with var an error will appear at line 6 and it will say foo has been already declared so it looks fine. But having let being set as the second identifier always throws an incorrect error.
Why does it throw an incorrect error?
After testing different scenarios I noticed that the outcome depends on which identifiers I use:
identifiers | result
----------------------------------------------
var var | the code works well
var let | not defined error
let var | has been already declared error
let let | not defined error
So the 3rd question is:
Why is everyone against of using var when in this scenario double using var is the only way the code works flawless? Is it an exception?
A:
How is it possible that line 6 (which hasn't been executed yet) makes line 5 end up with an error?
Because the scope of bindings (loosely, "variables") declared with let, const, and class is the entire block, not just from where they're declared to the end of the block. The time between code entering the block and the execution of the let statement is called the Temporal Dead Zone (TDZ), during which the binding exists but is uninitialized and cannot be used in any way. Just having let foo in the block shadows the outer foo, even before the let foo is encountered in the code flow.
Aside from scope, this TDZ is the big difference between var and let is that var creates a binding and initializes it to undefined, regardless of where the var statement is in the scope. In contrast, let (and const and class) create the binding, but don't initialize it until later, when the let (const, class) is encountered in the step-by-step execution of the code. You can't use an uninitialized binding.
Why does it throw an incorrect error?
It's not incorrect. You could argue it's poorly-worded. :-) Basically it's saying "you can't use foo here, it's not initialized." The current error message from V8 (the JavaScript engine in Chrome, Chromium, Brave, the new Chromium-based Edge, and Node.js) is, to my mind, clearer:
Uncaught ReferenceError: Cannot access 'foo' before initialization
A:
When you declare a variable using let it is valid within the scope of the current code block. Your second let foo declaration defines a separate foo than the first variable and it's only valid within the if-block. However you are using it before it is defined so you get the error correctly that it's not defined yet.
If you truly intend there to be two different foo variables, I'd recommend calling them something else (foo1 and foo2 for example) to avoid the conflict. Then it becomes clear that you are using the variable before it's defined.
let foo1 = 2;
console.log(foo1);
if (foo1 === 2){
console.log(foo1);
let foo2 = 1;
}
If you mean for line 5 to be using the first instance of foo set to 2, then you've hidden it by the new definition happening within the if-block of code.
If you mean for the foo that's set to 1 to be used on line 5 then you should move its definition to before its use.
Note that using var has a different result because the scope of var variables his broader than the scope of let variables. See here which has this definition:
let allows you to declare variables that are limited to a scope of a
block statement, or expression on which it is used, unlike the var
keyword, which defines a variable globally, or locally to an entire
function regardless of block scope.
To try to make it more clear I've marked your code up with the state of the variables at each stage of the code:
let foo = 2; // foo defined and set to 2
console.log(foo); // foo defined and set to 2
if (foo === 2) // foo defined and set to 2
{ // <-- start of the if-block!
console.log(foo); // foo not defined yet
let foo = 1; // foo defined and set to 1
} // <-- end of if-block!
console.log(foo); // foo defined and set to 2
Q:
Count distinct records from child table for each user in MYSQL
I have a competition which counts how many species each user has collected.
this is managed by 3 tables:
a parent table called "sub" with collections; each collection is unique, has an id, and is associated with a user id.
+----+---------+
| id | user_id |
+----+---------+
| 1 | 1 |
| 2 | 10 |
| 3 | 1 |
| 4 | 3 |
| 5 | 1 |
| 6 | 10 |
+----+---------+
the child table, called "sub_items", contains multiple unique records of the specs and is related to the parent table by sub_id to id (each sub can have multiple spec records).
+----+--------+---------+
| id | sub_id | spec_id |
+----+--------+---------+
| 1  | 1      | 1000    |
| 2  | 1      | 1003    |
| 3  | 1      | 2520    |
| 4  | 2      | 7600    |
| 5  | 2      | 1000    |
| 6  | 3      | 15      |
+----+--------+---------+
a user table with associated user_id
+---------+-------+
| usename | name  |
+---------+-------+
| 1       | David |
| 10      | Ruth  |
| 3       | Rick  |
+---------+-------+
I need to list the users by the number of unique specs collected, in descending order.
output expected:
David has a total of 2 unique specs. Ruth has a total of 2 unique specs.
+-------+-------+
| name  | total |
+-------+-------+
| David | 2     |
| Ruth  | 2     |
| Rick  | 2     |
+-------+-------+
So far I have this; it produces a result, but it's not accurate: it counts the total records.
I'm probably missing a DISTINCT somewhere in the sub-query.
SELECT s.id, s.user_id,u.name, sum(t.count) as total
FROM sub s
LEFT JOIN (
SELECT id, sub_id, count(id) as count FROM sub_items GROUP BY sub_id
) t ON t.sub_id = s.id
LEFT JOIN user u ON u.username = s.user_id
GROUP BY user_id
ORDER BY total DESC
i have looked at this solution, but it doesn't consider the unique aspect
A:
You'll first have to get the max "score" for all the users like:
SELECT count(DISTINCT si.spec_id) as total
FROM sub INNER JOIN sub_items si ON sub.id = si.sub_id
GROUP BY sub.user_id
ORDER BY total DESC
LIMIT 1
Then you can use that to restrict your query to users that share that max score:
SELECT u.name, count(DISTINCT si.spec_id) as total
FROM
user u
INNER JOIN sub ON u.usename = sub.user_id
INNER JOIN sub_items si ON sub.id = si.sub_id
GROUP BY u.name
HAVING total =
(
SELECT count(DISTINCT si.spec_id) as total
FROM sub INNER JOIN sub_items si ON sub.id = si.sub_id
GROUP BY sub.user_id
ORDER BY total DESC
LIMIT 1
)
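As a sanity check, the grouped distinct count can be tried against the sample data with Python's built-in SQLite engine (a sketch; SQLite syntax differs slightly from MySQL, inner joins drop users with no collected specs, and note that with the sample rows as given David actually has 4 distinct specs, not 2):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE sub (id INTEGER, user_id INTEGER);
CREATE TABLE sub_items (id INTEGER, sub_id INTEGER, spec_id INTEGER);
CREATE TABLE user (usename INTEGER, name TEXT);
INSERT INTO sub VALUES (1,1),(2,10),(3,1),(4,3),(5,1),(6,10);
INSERT INTO sub_items VALUES
  (1,1,1000),(2,1,1003),(3,1,2520),(4,2,7600),(5,2,1000),(6,3,15);
INSERT INTO user VALUES (1,'David'),(10,'Ruth'),(3,'Rick');
""")

rows = con.execute("""
SELECT u.name, COUNT(DISTINCT si.spec_id) AS total
FROM user u
INNER JOIN sub ON u.usename = sub.user_id
INNER JOIN sub_items si ON sub.id = si.sub_id
GROUP BY u.name
ORDER BY total DESC
""").fetchall()
print(rows)  # [('David', 4), ('Ruth', 2)]
```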
Q:
Change nested setTimeout functions to efficient loop
I have a list of students - each student being a DIV with a specific class and an ID.
I also have an array of student IDs, which I have randomised.
What I'd like to do is the following:
Pick a random student
Highlight the relevant DIV in purple (the pulse class)
Brief pause (like 0.2s)
Pick another random student
Rinse and repeat 1-3 10 times in total
Highlight the selected student in a different colour (selected class)
The code below works correctly...
setTimeout(function() {
$("#" + arr[1]).addClass('pulse');
setTimeout(function() {
$("#" + arr[1]).removeClass('pulse');
setTimeout(function() {
$("#" + arr[2]).addClass('pulse');
setTimeout(function() {
$("#" + arr[2]).removeClass('pulse');
setTimeout(function() {
$("#" + arr[3]).addClass('pulse');
setTimeout(function() {
$("#" + arr[3]).removeClass('pulse');
setTimeout(function() {
$("#" + arr[4]).addClass('pulse');
setTimeout(function() {
$("#" + arr[4]).removeClass('pulse');
setTimeout(function() {
$("#" + arr[5]).addClass('pulse');
setTimeout(function() {
$("#" + arr[5]).removeClass('pulse');
setTimeout(function() {
$("#" + arr[6]).addClass('pulse');
setTimeout(function() {
$("#" + arr[6]).removeClass('pulse');
setTimeout(function() {
$("#" + arr[7]).addClass('pulse');
setTimeout(function() {
$("#" + arr[7]).removeClass('pulse');
setTimeout(function() {
$("#" + arr[8]).addClass('pulse');
setTimeout(function() {
$("#" + arr[8]).removeClass('pulse');
setTimeout(function() {
$("#" + arr[9]).addClass('pulse');
setTimeout(function() {
$("#" + arr[9]).removeClass('pulse');
setTimeout(function() {
$("#" + arr[10]).addClass('pulse');
setTimeout(function() {
$("#" + arr[10]).removeClass('pulse');
$("#" + arr[0]).addClass('activeClass');
Dojo.disableButtons(false);
}, 250);
}, 250);
}, 250);
}, 250);
}, 250);
}, 250);
}, 250);
}, 250);
}, 250);
}, 250);
}, 250);
}, 250);
}, 250);
}, 250);
}, 250);
}, 250);
}, 250);
}, 250);
}, 250);
}, 250);
But is incredibly ugly.
Is there a more efficient way of doing this using a for loop?
Thanks in advance,
A:
You could use a function (and obviously rename it something more meaningful):
function lessMessy(index) {
  $("#" + arr[index]).addClass('pulse');
  setTimeout(function() {
    $("#" + arr[index]).removeClass('pulse');
    if (index === 10) {
      $("#" + arr[0]).addClass('activeClass');
      Dojo.disableButtons(false);
    } else {
      lessMessy(index + 1);
    }
  }, 250);
}
lessMessy(1)
EDIT: Note that this is better than setInterval because it will always wait a quarter of a second. If the code inside takes longer than 1/4 of a second, then setInterval will just skip that iteration. This will lead to a broken page, since the pulse class will not be removed from the previous element.
A:
I think that the setInterval function is what you need.
This executes an other function until you call clearInterval.
[EDIT]
Here is an idea:
var arr = YOUR ARRAY HERE;
var index = 0;
var t = setInterval(function(){
if (index > 0){
$('#' + arr[index - 1]).removeClass('pulse'); //remove class from previous
}
if (index < 10){
$('#' + arr[index]).addClass('pulse'); //add class to current element
}
else {
clearInterval(t);//stop everything
}
index ++;
}, 250)
This might not work. It's of the top of my head, but it should give you and idea.
Q:
Determining speed of turbocharger via sound?
There are inductive sensors which can be retrofitted to turbochargers to measure their speed, like this:
Examples can be seen here.
Installation requires some careful and precise drilling and threading in addition to removing and re-installing the turbo itself, which is something I don't want to do for my personal experimenting.
Now I am wondering if the speed of the turbine/compressor can be determined acoustically. I mean, you can easily hear the distinctive sound of the turbo when it revs up/down.
The question to me is, what kind of sensor could be used to pick up the turbo's sound?
I'm not sure what frequency you hear from the turbo, but it must be either its rotational frequency (up to about 200000rpm/60s~3.3kHz), or the same multiplied by the number of blades (~13 -> 40kHz).
Installation of the pickup inside the engine bay will require, among other things, mechanical ruggedness and temperature resistance to at least 100°C.
Can a cheap piezo ceramic plate do the trick?
Are there special microphones for this kind of environment?
Knock sensors come to mind, but I'd prefer something contactless, especially not needing to be bolted to the turbo, see above :)
A:
For 40kHz you're in ultrasonic territory. You'll be hard-pressed to find an ultrasonic mic that's also rated for high-temperature.
This one is rated for a maximum of 100C, and shows a response curve to 80kHz:
SPH0641LU4H-1
https://www.digikey.ca/product-detail/en/knowles/SPH0641LU4H-1/423-1402-2-ND/5332438
This one only covers up to 40kHz and 85C:
https://www.digikey.ca/product-detail/en/tdk-invensense/ICS-41350/1428-1064-1-ND/6025660
You might be able to create a temperature-shielded or cooled housing that's acoustically conductive, but YMMV.
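On the processing side, once you have samples from a suitable microphone, the rotational speed can be estimated by finding the dominant tone and dividing by the blade count. A minimal Python sketch on synthetic data (the blade count, rpm, and sample rate are made-up numbers for illustration; a real implementation would use an FFT library and a windowed spectrum):

```python
import math

def dominant_freq(samples, sample_rate, f_min, f_max, step):
    """Brute-force spectral peak search over a frequency grid (a slow DFT)."""
    best_f, best_mag = f_min, 0.0
    f = f_min
    while f <= f_max:
        re = sum(s * math.cos(2 * math.pi * f * i / sample_rate)
                 for i, s in enumerate(samples))
        im = sum(s * math.sin(2 * math.pi * f * i / sample_rate)
                 for i, s in enumerate(samples))
        mag = math.hypot(re, im)
        if mag > best_mag:
            best_f, best_mag = f, mag
        f += step
    return best_f

# Synthetic blade-passing whine: 13 blades at 150,000 rpm -> 32.5 kHz tone.
rate = 96000          # mic sample rate must exceed twice the tone (Nyquist)
blades, rpm = 13, 150000
tone = blades * rpm / 60.0
sig = [math.sin(2 * math.pi * tone * i / rate) for i in range(1024)]

f_hat = dominant_freq(sig, rate, 25000.0, 40000.0, step=100.0)
rpm_hat = f_hat / blades * 60.0
print(round(rpm_hat))  # 150000
```

With real audio you would also need to know (or guess) whether the tone you picked up is the rotational frequency or the blade-passing frequency before converting to rpm.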
Q:
Automatic calculation field based on line modification within QGIS
I have a polygon layer with all the cities in my region. One of the fields of this layer is the population of the city.
I also have a line layer which represents some bus routes.
I would like to automatically calculate the population served by each bus route (that is, the sum of the populations of the polygons intersected by each bus line).
First of all, how do I calculate this in a new field on the line layer?
Second, how do I get this result to update if a route changes?
A:
First of all, how do I calculate this in a new field on the line layer?
Install the RefFunctions plugin.
Use the Field Calculator to add a field to the bus route layer with this expression (substitute your actual field and layer names):
intersecting_geom_sum('city_polygons','population')
Second, how do I get this result to update if a route changes?
Make it a virtual field if you only need the values in the current project.
If you need the values saved as a permanent part of the layer's attributes, make it a regular field with a default field value. Check the box for "Apply default value on update." Note that the field won't update if the layer isn't in edit mode; you may also need to make some sort of change to trigger the update.
Q:
Relation between an indefinite and definite integral
How is the following relation established?
$$
\int e^{-p r^2} d\mathbf{r}=4 \pi\int_0^\infty r^2 e^{-pr^2} dr
$$
where $p$ is a real and positive number.
A:
I think that $d\mathbf{r}$ here denotes the volume element, and that the integral on the left is over all of $\mathbb{R}^3$.
If we pass to spherical polars $(r,\theta,\phi)$, and remember the formula for the volume element, then the integral will become
$$
\int_{r=0}^{\infty}\int_{\theta=0}^{2\pi}\int_{\phi=0}^{\pi} e^{-pr^2}\ r^2\ \sin\phi\ dr \ d\theta\ d\phi
$$
which evaluates to the RHS.
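As a numerical sanity check, the right-hand side can be integrated with a simple trapezoid rule and compared against the known closed form $\int_{\mathbb{R}^3} e^{-pr^2}\,d\mathbf{r} = (\pi/p)^{3/2}$ (a Python sketch; $p = 2$ is an arbitrary choice):

```python
import math

def radial_integral(p, r_max=20.0, n=200000):
    """Trapezoid rule for 4*pi * integral of r^2 * exp(-p r^2) from 0 to r_max."""
    h = r_max / n
    total = 0.0
    for i in range(n + 1):
        r = i * h
        w = 0.5 if i in (0, n) else 1.0
        total += w * r * r * math.exp(-p * r * r)
    return 4 * math.pi * h * total

p = 2.0
numeric = radial_integral(p)
exact = (math.pi / p) ** 1.5  # Gaussian volume integral in closed form
print(abs(numeric - exact) < 1e-5)  # True
```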
Q:
inconsistent indentation with Python after split
edit in progress will re-submit sometimes later
A:
That should work:
import re #Regex may be the easiest way to split that line
with open(infile) as in_f, open(outfile, 'w') as out_f:  # infile/outfile: input and output paths assumed to be defined earlier
f = (i for i in in_f if i.rstrip()) #iterate over non empty lines
for line in f:
_, k = line.split('\t', 1)
x = re.findall(r'^1..100\t([+-])chr(\d+):(\d+)\.\.(\d+).+$',k)
if not x:
continue
out_f.write(' '.join(x[0]) + '\n')
Q:
Why does my session remain?
I must be really stupid because it seems a fairly obvious thing is completely confusing me right now.
I have a session...
ie $_SESSION['handbag_id'];
and at a certain point, I need to completely kill this session.
ie
// at the start of the page
session_start();
// elsewhere on the same page
unset($_SESSION);
session_destroy();
And yet, I can then go to another page, and do a
echo $_SESSION['handbag_id'];
And I've still got the same handbag_id as before.
What am I missing? Do I not understand how this works or do I have some server setting that reigns supreme over my desire to destroy its values?
A:
Don't do this
unset($_SESSION);
Do this
$_SESSION = array();
And finally
session_destroy();
A:
Session functions can be very tricky. To completely kill a session you need to assign a new value to the $_SESSION superglobal. Otherwise, all you do is unloading session data from current script. This should work:
session_start();
$_SESSION = array();
session_write_close(); // Not required
If you also need to open an entirely new session, you can do this:
session_regenerate_id(FALSE);
$tmp = session_id();
session_destroy();
session_id($tmp);
unset($tmp);
session_start();
Update:
A related question you may find useful: Close session and start a new one
Q:
Transmitting Zipped file from server in .Net
I am writing a small web app and part of the functionality is to be able to download the logs from the server to the user's file system. I am able to zip the existing logs, but am failing to get the zipped folder to transmit. I have browsed many other similar questions on here, but have yet to get it to work based on any of them.
Here is the code that I am currently trying:
.Net Controller
[HttpPost]
public ActionResult DownloadLogs()
{
string path = System.Configuration.ConfigurationManager.AppSettings["LogPath"];
try
{
Log.Information("Closing current logger. AudtiLog: {auditLog}", true);
Log.CloseAndFlush();
string startPath = path;
string zipPath = "C:\\my_folder\\result.zip";
string extractPath = path + "extract";
//deleting existing zips
if (System.IO.File.Exists(zipPath))
System.IO.File.Delete(zipPath);
if (System.IO.Directory.Exists(extractPath))
System.IO.Directory.Delete(extractPath, true);
ZipFile.CreateFromDirectory(startPath, zipPath);
ZipFile.ExtractToDirectory(zipPath, extractPath);
FileInfo file = new FileInfo(zipPath);
using (FileStream filestream = new FileStream(zipPath, FileMode.Open))
{
return File(filestream, "application/zip", "ServerLogZip.zip");
}
}
catch (Exception ex)
{
Log.Error("Error occurred when downloading server logs. Error: {Error}; AuditLog: {auditLog}", ex.Message, true);
return Json(new { result = "error" });
}
}
Javascript
function DownloadLogs() {
$.ajax({
type: "POST",
url: "/ManageServer/DownloadLogs",
contentType: "application/zip",
success: function (response) {
alert("Success")
},
error: function (response) {
alert("Error");
}
});
}
Whenever I run it, it zips the logs into one folder, steps through the Response portion of the code successfully, but nothing happens. I've tried debugging and stepping through the code, but haven't found the answer yet. I've also tried the Response.WriteFile method as well. No luck.
Edit
I've updated the code to return ActionResult and returned a File. It is currently returning a 500 error from the server.
A:
You have a problem on your code as @mason noticed. You are trying to return two things, the file and the status.
The operation status should be checked through HTTP return codes.
You can use the the IActionResult interface as return type so you can handle things right. Give the method a File as return type and if everthing is fine it will return your file with all headers needed. In case of something goes wrong, you can return a BadRequest("Error Message");
The File return type accepts as parameters a FileStream that will contain your raw data, the Mime Type of the file and the filename
To achieve that, do the following steps
Change the method return type to FileResult
Create a variable that will receive your file content as a Stream, or use a using statement
Create a byte array to hold the file content (if you try to return the FileStream itself you will get a "file closed" error, because the using statement closes it before the return occurs)
Return data like this return File(byteArray, "application/zip", "ServerLogZip.zip");
Sample
try{
// Do things to prepare your file
using(FileStream filestream = new FileStream(zipPath,FileMode.Open))
{
byte[] zipBytes = new byte[filestream.Length];
filestream.Read(zipBytes, 0, zipBytes.Length);
return File(zipBytes, "application/zip", "ServerLogZip.zip");
}
}
catch(Exception ex){
return BadRequest("Something gone wrong: " + ex.Message);
}
I've been racking my brain to figure out how to download this file through an async request, but at the end of the day I realized that maybe you don't need such a complex solution. You can just call the route and the file will be downloaded.
function DownloadLogs() {
document.location = your_route;
}
For this work properly, you must also change the method decorator of your C# method from [HttpPost] to [HttpGet]
Q:
How can we underline n-th letter in angular ng-repeat?
I have a requirement in my project.
I am having titles loaded in a scope variable from http call :
$scope.titleObject = [{
"title": "Title1",
"underlinekey": "t" }, {
"title": "Sub-Heading",
"underlinekey": "u" }, {
"title": "Heading text",
"underlinekey": "a" }, {
"title": "Some Title",
"underlinekey": "o" }, {
"title": "More Title",
"underlinekey": "r" }];
Now I want to print titles in html with underlined n-th letter , where n is "underlinedkey"
My html :
<div class="titleCont">
<div ng-repeat="title in titleObject">{{title.title}}</div>
</div>
Output I am getting :
Title1
Sub-Heading
Heading text
Some Title
More Title
What my requirement is :
T̲itle1
Su̲b-Heading
Hea̲ding text
So̲me Title
Mor̲e Title
A:
With a directive can be:
.directive('underline', function(){
return {
scope: {
underline: "="
},
link: function(scope, element){
var html = scope.underline.title.replace(new RegExp("(" + scope.underline.underlinekey + ")", "i"), "<u>$1</u>"); // "i" flag so key "t" also matches the capital "T" in "Title1"
element.html(html)
}
}
})
<div ng-repeat="title in titleObject">
<div underline="title"></div>
</div>
Demo 1
Demo 2 with attributes
Q:
Error while converting query from mysql to oracle
I have four queries in mysql in my webapp that I am trying to convert into oracle queries. However, the datetime string breaks when I try to run the new query. Can someone help me figure out what I am doing wrong?
PostgreSQL Queries:-
insert into o_stat_daily
(businesspath,resid,day,value)
(select businesspath,
int8(substring(businesspath from position(':' in businesspath) + 1 for position(']' in businesspath) - position(':' in
businesspath) - 1)),
date_trunc('day',creationdate) as d,
count(*) as c
from o_loggingtable where actionverb='launch' and actionobject='node' and businesspath != '' group by businesspath, d);
insert into o_stat_weekly
(businesspath,resid,week,value)
(select businesspath,
int8(substring(businesspath from position(':' in businesspath) + 1 for position(']' in businesspath) - position(':' in
businesspath) - 1)),
to_char(creationdate, 'IYYY') || '-' || to_char(creationdate, 'IW') as d,
count(*) as c
from o_loggingtable where actionverb='launch' and actionobject='node' and businesspath != '' group by businesspath, d);
insert into o_stat_dayofweek
(businesspath,resid,day,value)
(select businesspath,
int8(substring(businesspath from position(':' in businesspath) + 1 for position(']' in businesspath) - position(':' in
businesspath) - 1)),
int8(to_char(creationdate, 'D')) as d,
count(*) as c
from o_loggingtable where actionverb='launch' and actionobject='node' and businesspath != '' group by businesspath, d);
insert into o_stat_hourofday
(businesspath,resid,hour,value)
(select businesspath,
int8(substring(businesspath from position(':' in businesspath) + 1 for position(']' in businesspath) - position(':' in
businesspath) - 1)),
int8(to_char(creationdate, 'HH24')) as d,
count(*) as c
from o_loggingtable where actionverb='launch' and actionobject='node' and businesspath != '' group by businesspath, d);
Oracle Queries:-
insert into o_stat_daily
(businesspath,resid,day,value)
(select businesspath,
convert(substr(businesspath, locate(':', businesspath) + 1, locate(']', businesspath) - locate(':', businesspath) - 1), int),
convert(creationdate,date) d,
count(*) c
from o_loggingtable where actionverb='launch' and actionobject='node' and businesspath != '' group by businesspath, d);
insert into o_stat_weekly
(businesspath,resid,week,value)
(select businesspath,
convert(substr(businesspath, locate(':', businesspath) + 1, locate(']', businesspath) - locate(':', businesspath) - 1), int),
year(creationdate)+ '-'+repeat('0',2-length(convert((dayofyear(creationdate)-dayofweek(creationdate))/7,varchar(7))))+convert((dayofyear(creationdate)-dayofweek(creationdate))/7,varchar(7))
d,
count(*) c
from o_loggingtable where actionverb='launch' and actionobject='node' and businesspath != '' group by businesspath, d);
insert into o_stat_dayofweek
(businesspath,resid,day,value)
(select businesspath,
convert(substr(businesspath, locate(':', businesspath) + 1, locate(']', businesspath) - locate(':', businesspath) - 1), int),
dayofweek(creationdate) d,
count(*) c
from o_loggingtable where actionverb='launch' and actionobject='node' and businesspath != '' group by businesspath, d);
insert into o_stat_hourofday
(businesspath,resid,hour,value)
(select businesspath,
convert(substr(businesspath, locate(':', businesspath) + 1, locate(']', businesspath) - locate(':', businesspath) - 1), int),
hour(creationdate) d,
count(*) c
from o_loggingtable where actionverb='launch' and actionobject='node' and businesspath != '' group by businesspath, d);
A:
Rather than rewrite all your queries, I have taken just the last one.
The oracle query should be something like:
INSERT INTO o_stat_hourofday (
businesspath,
resid,
hour,
VALUE
)
SELECT businesspath,
TO_NUMBER (
SUBSTR (
businesspath,
INSTR(businesspath, ':') + 1,
INSTR(businesspath, ']') - INSTR(businesspath, ':') - 1
)
),
TO_NUMBER(TO_CHAR(creationdate, 'HH24')) d,
COUNT (*) c
FROM o_loggingtable
WHERE actionverb = 'launch'
AND actionobject = 'node'
AND businesspath IS NOT NULL
GROUP BY businesspath,
TO_NUMBER (
SUBSTR (
businesspath,
INSTR(businesspath, ':') + 1,
INSTR(businesspath, ']') - INSTR(businesspath, ':') - 1
)
),
TO_NUMBER(TO_CHAR(creationdate, 'HH24'));
FYI, Oracle does not recognise the HOUR function, and CONVERT in Oracle converts one character set to another, not a string to a numeric. LOCATE is not an Oracle function either; you need to use INSTR instead to find a character in a string.
Read up on TO_CHAR (including date formats etc), TO_NUMBER and INSTR.
Hope this helps you!
| {
"pile_set_name": "StackExchange"
} |
Q:
Show that there is $F$ contained in $E$ and $(g_n)$ of simple functions such that $(g_n)\to f$ uniformly on $F$ and $m(E\setminus F)<\epsilon$.
The following is a question from Royden's Real Analysis.
Let $f$ be a measurable function on $E$ that is finite a.e. on $E$ and $m(E)<\infty$. For each $\epsilon>0$, show that there is a measurable set $F$ contained in $E$ and a sequence $(g_n)$ of simple functions on $E$ such that $(g_n)\to f$ uniformly on $F$ and $m(E\setminus F)<\epsilon$.
What follows is my attempt.
Let $\epsilon>0$. By a previous problem there exists a measurable subset $F$ of $E$ such that $f$ is bounded on $F$ and $m(E\setminus F)<\epsilon$. Since $f$ is measurable there exists a sequence of simple functions $(g_n)$ such that for each $n\in\mathbb{N},$ $|g_n(x)|\leq|g_{n+1}(x)|$ and $\lim_n g_n(x)=f(x)$ for each $x\in E$. Then $(g_n)$ is bounded on $F$ i.e. there exists $M>0$ such that $|g_n|\leq M$ on $F$ for each $n\in\mathbb{N}$.
This is all I can see and I don't understand how to proceed. I wanted to check if "$|g_n|\leq M$ on $F$ for each $n\in\mathbb{N}$" implies $g_n\to f$ uniformly on $F$- wishful thinking- but I'm still stuck. Please help. Thanks
A:
If $f$ is a bounded non-negative function (say by $N\in\mathbb N$), then defining
$$g_n:=\sum_{i=0}^{N2^n-1}i2^{-n}\mathbf 1\left\{i2^{-n}\lt f\leqslant \left(i+1\right)2^{-n}\right\},$$
we get a sequence of simple functions such that $\left|g_n\left(x\right)-f\left(x\right)\right|\leqslant 2^{-n}$ for any $x$.
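This bound is uniform in $x$, which is exactly what the problem needs:

$$\sup_{x\in F}\left|g_n(x)-f(x)\right|\leqslant 2^{-n}\longrightarrow 0 \quad\text{as } n\to\infty.$$

For the original problem, take the set $F\subseteq E$ with $m(E\setminus F)<\epsilon$ on which $f$ is bounded (as in the attempt above), apply this construction to the positive and negative parts of $f$ on $F$, and extend each $g_n$ by $0$ off $F$ so that the $g_n$ are simple functions on all of $E$.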
| {
"pile_set_name": "StackExchange"
} |
Q:
How to use RedirectToRoute("routeName") in MVC?
I am just puzzled on why is my RedirectToRoute() method not working. I have a RouteConfig.cs file like this
routes.MapRoute(
"pattern1",
"{action}",
new { controller = "Home", action = "About" }
);
routes.MapRoute(
"pattern2",
"{controller}/{action}/{id}",
new { controller = "Admin", action = "Index", id = UrlParameter.Optional }
);
on this configuration my default controller Home and action About is getting called, now in the action method I am calling RedirectToRoute() with the following value like this
public ActionResult Index()
{
return View();
}
public ActionResult About()
{
return RedirectToRoute("pattern2");
}
Why is RedirectToRoute() not calling the Admin/Index action?
A:
Try this:
return RedirectToRoute(new
{
controller = "Admin",
action = "Index",
id = null
});
You could also use RedirectToAction() method. It seems more intuitive.
A:
This is because of the route you have defined. RedirectToRoute() redirects to the route you have defined. The url you have defined for "pattern2" is "{controller}/{action}/{id}". If you want to use the overload which only accepts the routeName, then you need explicitly define the url in your RouteConfig. Example:
routes.MapRoute(
"pattern2",
"Admin/Index",
new { controller = "Admin", action = "Index", id = UrlParameter.Optional }
);
If you do not want to define the url explicitly, then you need to use a different overload of RedirectToRoute() which accepts the routeValues object.
| {
"pile_set_name": "StackExchange"
} |
Q:
Defining a macro to add the prefix 0x to a hex string literal
I am trying to get a macro to work in my c program to add 0x to a HEX literal as follows:
#define BUILD ABCD0000
#define CONCAT(m, n) m ## n
#define HEX(x) CONCAT(0x, x)
const uint32_t Id = HEX (BUILD);
I get this compiler error: invalid suffix "x" on integer constant.
Can anyone help?
A:
This is not the answer that you expect, but I am sorry, I have to:
DON'T DO THIS!!
Why not ?
It is misleading: the name and syntax HEX(x) lead one to think it would convert x to hex, whereas it requires the argument to already be in hex.
It behaves badly: HEX(ABC00+10) would take the first part as hex but the second part still in decimal. To let macros behave well with expressions, the trick is to enclose each use of a parameter in parentheses, but this is not possible with concatenation.
It goes against POLA (the principle of least astonishment) for your peer developers.
Better get accustomed to 0x : it appears in a lot of code around there, in compiler messages, in debuggers, etc... So train your eyes instead of trying to escape.
This being said, after having tested on a couple of compiler versions on godbolt, I could not reproduce your error. So if you want to go on:
Maybe your old compiler is disturbed by spacing (remove all spaces in the macro definitions and macro uses). Or, though it shouldn't matter, perhaps the two occurrences of x in the macros confuse its expansion.
Or maybe your compiler expects each token used in a macro to be valid (e.g. strings must be closed, literals valid, etc...). I remember such limitations, but only on very old C compilers in the '80s, perhaps the '90s.
| {
"pile_set_name": "StackExchange"
} |
Q:
How do I use SHA-512 with Rfc2898DeriveBytes in my salt & hash code?
I'm completely new to cryptography, but learning. I've pieced together many different suggestions from my research online, and have made my own class for handling the hash, salt, key stretching, and comparison/conversion of associated data.
After researching the built-in .NET library for cryptography, I discovered that what I have is still only SHA-1. But I'm coming to the conclusion that it's not bad since I'm using multiple iterations of the hash process. Is that correct?
But if I wanted to start with the more robust SHA-512, how could I implement it in my code below? Thanks in advance.
using System;
using System.Runtime.InteropServices;
using System.Security;
using System.Security.Cryptography;
public class CryptoSaltAndHash
{
private string strHash;
private string strSalt;
public const int SaltSizeInBytes = 128;
public const int HashSizeInBytes = 1024;
public const int Iterations = 3000;
public string Hash { get { return strHash; } }
public string Salt { get { return strSalt; } }
public CryptoSaltAndHash(SecureString ThisPassword)
{
byte[] bytesSalt = new byte[SaltSizeInBytes];
using (RNGCryptoServiceProvider crypto = new RNGCryptoServiceProvider())
{
crypto.GetBytes(bytesSalt);
}
strSalt = Convert.ToBase64String(bytesSalt);
strHash = ComputeHash(strSalt, ThisPassword);
}
public static string ComputeHash(string ThisSalt, SecureString ThisPassword)
{
byte[] bytesSalt = Convert.FromBase64String(ThisSalt);
Rfc2898DeriveBytes pbkdf2 = new Rfc2898DeriveBytes(
convertSecureStringToString(ThisPassword), bytesSalt, Iterations);
using (pbkdf2)
{
return Convert.ToBase64String(pbkdf2.GetBytes(HashSizeInBytes));
}
}
public static bool Verify(string ThisSalt, string ThisHash, SecureString ThisPassword)
{
if (slowEquals(getBytes(ThisHash), getBytes(ComputeHash(ThisSalt, ThisPassword))))
{
return true;
}
return false;
}
private static string convertSecureStringToString(SecureString MySecureString)
{
IntPtr ptr = IntPtr.Zero;
try
{
ptr = Marshal.SecureStringToGlobalAllocUnicode(MySecureString);
return Marshal.PtrToStringUni(ptr);
}
finally
{
Marshal.ZeroFreeGlobalAllocUnicode(ptr);
}
}
private static bool slowEquals(byte[] A, byte[] B)
{
int intDiff = A.Length ^ B.Length;
for (int i = 0; i < A.Length && i < B.Length; i++)
{
intDiff |= A[i] ^ B[i];
}
return intDiff == 0;
}
private static byte[] getBytes(string MyString)
{
byte[] b = new byte[MyString.Length * sizeof(char)];
System.Buffer.BlockCopy(MyString.ToCharArray(), 0, b, 0, b.Length);
return b;
}
}
Notes: I've referenced a lot of practices from https://crackstation.net/hashing-security.htm. The slowEquals comparison method is to normalize execution time by preventing branching. The usage of SecureString is to have an encrypted form of the password pass between this class and other classes and pages within my web application. While this site will be over HTTPS, it's always nice to go the extra mile to ensure things are as secure as possible while still being within reason.
In my code, I've set the key string to 128 bytes (though it grows bigger sometimes, which is fine), the hash size to 1KB, and the number of iterations at 3,000. It's a little larger than the typical 64 byte salt, 512 byte hash, and 1,000 or 2,000 iterations, but then again login speed and app performance is an extremely low priority.
Thoughts?
A:
3000 iterations is quite low. Even 10000 is low. But you need to weigh the security gain of additional iterations against the risk that an attacker DoSes your server by trying to log in often, which triggers an expensive hash for each attempt.
There is no point in a salt larger than 128 bits/16 bytes. A salt should be unique, nothing more.
A hash size larger than the native size (20 bytes for SHA-1) reduces performance for the defender but not for the attacker. Since this means you can afford fewer iterations, it actually weakens security.
For example at the same cost as your 1024 byte hash with 3000 iterations, you could afford a 20 byte hash with 156000 iterations, which is 52 times more expensive to crack.
To use SHA-2 you'll need a completely different PBKDF2 implementation, the one included with .net is hardcoded to use SHA-1.
If you bother to use a third party library, I'd rather use a bcrypt library since that's much stronger against GPU based attackers.
Your API is awkward to use, since you push salt management onto the caller instead of handling it within the Create/Verify functions.
It's silly to use SecureString and then to convert it to String. This counteracts the whole point of using a SecureString in the first place.
Personally I wouldn't bother with SecureString in a typical application. It's only worthwhile if you combine it with an extensive whole-stack security review that checks that the password is never stored in a String and always erased from mutable storage once it's no longer required.
I wouldn't store passwords/salts in instance variables. Just keep them local to the relevant functions. I'd only store configuration in the instance variables (such as iteration count).
While SHA-1 is weakened cryptographically, the attacks produce collisions. For password hashing collisions are irrelevant, what you care about are first pre-image attacks. SHA-1 is still pretty strong in that regard.
The main advantage of SHA-512 is not that it's cryptographically stronger (though it is), it's that 64 bit arithmetic costs the attacker more than the defender, since the defender will probably use a 64 bit Intel CPU which offers fast 64 bit arithmetic.
A:
If anyone encounters this question by search, now Microsoft provides Microsoft.AspNetCore.Cryptography.KeyDerivation NuGet package, which allows to use PBKDF2 with SHA-256 and SHA-512 hash functions. Documentation is available at docs.microsoft.com.
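For reference, a minimal sketch of that package's entry point (the method and enum names are as documented; the iteration count and output length here are illustrative choices, not recommendations):

```csharp
using Microsoft.AspNetCore.Cryptography.KeyDerivation;

// PBKDF2 with HMAC-SHA512, deriving a 32-byte (256-bit) key.
byte[] hash = KeyDerivation.Pbkdf2(
    password: password,               // the user's password string
    salt: salt,                       // a random per-user byte[] salt
    prf: KeyDerivationPrf.HMACSHA512,
    iterationCount: 100_000,          // tune to your hardware budget
    numBytesRequested: 32);
```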
A:
Answering the question: Download the free code samples from "SecurityDriven.NET" book. Find the PBKDF2 class which takes an HMAC factory. HMACSHA512 factory is available, among others.
Since you're new to cryptography, I also strongly suggest you read the book (ex. to fully understand the points that CodesInChaos made).
| {
"pile_set_name": "StackExchange"
} |
Q:
Nonlinear interpolation using Newton's method
Given a set of datapoints I'm trying to approximate the coefficients a,b in the function U(x)=8-ax^b using Newtons method in MATLAB.
x = [150 200 300 500 1000 2000]';
y = [2 3 4 5 6 7]';
a=170; b=-0.7; iter = 0;
for iter=1:5
f=8-a*x.^(b) -y;
J = [-x.^b -a*b*x.^(b-1)]; %Jacobis matrix
h=J\f;
a=a-h(1); b=b-h(2);
disp(norm(f))
iter = iter+1;
end
The results are incorrect and I've not been sucessful of finding the misstep. All help is appreciated.
A:
The Jacobian matrix is wrong. Using Newton's method, you're trying to find the values of a and b that would solve the equations 8-ax^b - y = 0. So, your Jacobian should be the derivatives of f with respect to a and b. That is J = [df/da df/db], resulting in:
J = [-x.^b -a.*x.^b.*log(x)]
and you will get the following curve for the 5 iterations:
| {
"pile_set_name": "StackExchange"
} |
Q:
Service with LocationManager check for permission
I need to send the location to the server.
This is on my service class:
lm.requestLocationUpdates(PROVIDER, 600000, 0, myLocationListener);
But i dont know how to call this because this need to check for permissions.
Suggest please?
A:
Checking for permissions does not change just because you are using a service. Call checkSelfPermission() on the service itself.
What is different is that you cannot ask the user for permission from a service.
The best solution to cover most use cases is to also check the permission in your activity/fragment before you start (or bind to) the service, and ask the user for permission if you do not have it.
Alternatively, have your service display a Notification that leads to an activity where you can ask the user for permission. The activity would need to re-request the service to do its work at that point, so the service knows that it is safe to call requestLocationUpdates().
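A minimal sketch of that flow inside the service (standard support-library calls; the provider constant and interval are carried over from the question):

```java
if (ContextCompat.checkSelfPermission(this,
        Manifest.permission.ACCESS_FINE_LOCATION)
        == PackageManager.PERMISSION_GRANTED) {
    lm.requestLocationUpdates(
        LocationManager.GPS_PROVIDER, 600000, 0, myLocationListener);
} else {
    // A service cannot prompt the user directly: post a Notification
    // leading to an Activity that calls ActivityCompat.requestPermissions(),
    // then re-request the service's work once permission is granted.
}
```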
| {
"pile_set_name": "StackExchange"
} |
Q:
ios - Use Cell design in different Tables
I'm creating an iOS app with different tables, but some of these tables use the same cells. Is there a way to design the cell, with all the constraints and such, at a central point? Because I don't want to copy every cell into all the tables and update them when I change something. Or should I do this all programmatically? (I don't really want to; it's a lot of work.)
Thank you!
A:
When you have many tables and some table are using the same cell, in that case, you can use Xib for that
Go to New -> Files and add Empty interface -> Add it.
Go to interface builder and Add tableViewCell from UI elements list
Add a class of Type UITableViewCell do needed connections
func tableView(_ tableView: UITableView, cellForRowAt indexPath: IndexPath) -> UITableViewCell {
let identifier = "NameOfTableViewCellIdentifier"
var cell: classOfTableViewCell! = tableView.dequeueReusableCell(withIdentifier: identifier) as? classOfTableViewCell
if cell == nil {
tableView.register(UINib(nibName: "XibNameOfTableViewCell", bundle: nil), forCellReuseIdentifier: identifier)
cell = tableView.dequeueReusableCell(withIdentifier: identifier) as? classOfTableViewCell
}
cell.outletOfUIElements..........
return cell!
}
Let me know if it works for you
| {
"pile_set_name": "StackExchange"
} |
Q:
Kinematics with acceleration as a function of velocity
Say there is a particle moving at $50~\text{m/s}$ and modeled with a function of acceleration such that:
$$a = - 0.5v$$
(this is derived from a force as a function of velocity)
$$F = 50v = 100\cdot a$$
Then say I wanted to know what v was equal to at the time t = 4 seconds.
I integrated both sides and get:
$$30 - 0.5Δv t = 0.5\Delta x$$
is this correct? I now seem to not have enough variables to solve the equation for v at time $t = 5$. What other equation am I missing? Did I get this equation incorrectly? I can not use UAM equations because this particle is not accelerated at a constant rate, so I must derive kinematic equations from this circumstance?
Any help is appreciated. Thanks.
A:
It isn't clear from your question exactly what you are integrating and how, but this is the way to tackle problems like this. You know that:
$$ \frac{dv}{dt} = -kv $$
The way to solve equations like this one is to rearrange it by dividing both sides by $v$ and multiplying both sides by $dt$ to get:
$$ \frac{1}{v}dv = -k\,dt $$
Now we can integrate both sides to get:
$$ \ln v = -kt + C $$
where $C$ is some constant of integration. It's probably clearer if we take the exponential of both sides to get:
$$ v = e^{-kt+C} $$
Mathematicians tend to recoil in horror when we physicists casually treat $df(t)/dt$ as if it were a simple fraction, but it works in physics!
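Fixing the constant with the initial condition from the question ($v(0)=50~\text{m/s}$, $k=0.5$) then gives the value asked for at $t=4$:

$$ v(t) = 50\,e^{-0.5t}, \qquad v(4) = 50\,e^{-2} \approx 6.8~\text{m/s}. $$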
| {
"pile_set_name": "StackExchange"
} |
Q:
CUDA: Addition of two numbers giving wrong answer
Here is the program
#include <stdio.h>
#include <cuda.h>
#include <cuda_runtime.h>
#include <device_launch_parameters.h>
__global__ void Addition(int *a,int *b,int *c)
{
*c = *a + *b;
}
int main()
{
int a,b,c;
int *dev_a,*dev_b,*dev_c;
int size = sizeof(int);
cudaMalloc((void**)&dev_a, size);
cudaMalloc((void**)&dev_b, size);
cudaMalloc((void**)&dev_c, size);
a=5,b=6;
cudaMemcpy(dev_a, &a,sizeof(int), cudaMemcpyHostToDevice);
cudaMemcpy(dev_b, &b,sizeof(int), cudaMemcpyHostToDevice);
Addition<<< 1,1 >>>(dev_a,dev_b,dev_c);
cudaMemcpy(&c, dev_c,size, cudaMemcpyDeviceToHost);
cudaFree(&dev_a);
cudaFree(&dev_b);
cudaFree(&dev_c);
printf("%d\n", c);
return 0;
}
Here is how i compiled it
$ nvcc -o test test.cu
Here is my output
1
Here is the output of deviceQuery
./deviceQuery Starting...
CUDA Device Query (Runtime API) version (CUDART static linking)
Detected 1 CUDA Capable device(s)
Device 0: "GeForce 8400 GS"
CUDA Driver Version / Runtime Version 6.5 / 6.5
CUDA Capability Major/Minor version number: 1.1
Total amount of global memory: 511 MBytes (536150016 bytes)
( 1) Multiprocessors, ( 8) CUDA Cores/MP: 8 CUDA Cores
GPU Clock rate: 1350 MHz (1.35 GHz)
Memory Clock rate: 400 Mhz
Memory Bus Width: 64-bit
Maximum Texture Dimension Size (x,y,z) 1D=(8192), 2D=(65536, 32768), 3D=(2048, 2048, 2048)
Maximum Layered 1D Texture Size, (num) layers 1D=(8192), 512 layers
Maximum Layered 2D Texture Size, (num) layers 2D=(8192, 8192), 512 layers
Total amount of constant memory: 65536 bytes
Total amount of shared memory per block: 16384 bytes
Total number of registers available per block: 8192
Warp size: 32
Maximum number of threads per multiprocessor: 768
Maximum number of threads per block: 512
Max dimension size of a thread block (x,y,z): (512, 512, 64)
Max dimension size of a grid size (x,y,z): (65535, 65535, 1)
Maximum memory pitch: 2147483647 bytes
Texture alignment: 256 bytes
Concurrent copy and kernel execution: No with 0 copy engine(s)
Run time limit on kernels: Yes
Integrated GPU sharing Host Memory: No
Support host page-locked memory mapping: Yes
Alignment requirement for Surfaces: Yes
Device has ECC support: Disabled
Device supports Unified Addressing (UVA): No
Device PCI Bus ID / PCI location ID: 1 / 0
Compute Mode:
< Default (multiple host threads can use ::cudaSetDevice() with device simultaneously) >
deviceQuery, CUDA Driver = CUDART, CUDA Driver Version = 6.5, CUDA Runtime Version = 6.5, NumDevs = 1, Device0 = GeForce 8400 GS
Result = PASS
A:
CUDA 6.5 compiles for a cc2.0 target by default. Your GeForce 8400GS is a cc1.1 device. So your kernels compiled that way will not launch, and you don't have proper CUDA error checking in your code (which would have given you an indication of the problem).
If you specify a proper arch switch when compiling, your code should run properly:
nvcc -arch=sm_11 -o test test.cu
A warning message will be displayed that sm_11 is deprecated, but it should still compile your code properly.
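For the error checking mentioned above, a common sketch (a widespread pattern, not part of the original code) looks like this:

```cuda
#define CUDA_CHECK(call)                                              \
    do {                                                              \
        cudaError_t err = (call);                                     \
        if (err != cudaSuccess) {                                     \
            fprintf(stderr, "CUDA error %s at %s:%d\n",               \
                    cudaGetErrorString(err), __FILE__, __LINE__);     \
            exit(EXIT_FAILURE);                                       \
        }                                                             \
    } while (0)

// Wrap every runtime call, and check kernel launches explicitly:
CUDA_CHECK(cudaMemcpy(dev_a, &a, size, cudaMemcpyHostToDevice));
Addition<<<1, 1>>>(dev_a, dev_b, dev_c);
CUDA_CHECK(cudaGetLastError());       // catches invalid launch configurations
CUDA_CHECK(cudaDeviceSynchronize());  // catches errors raised inside the kernel
```

With this in place, the original code would have reported the launch failure instead of silently printing garbage.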
| {
"pile_set_name": "StackExchange"
} |
Q:
Iterating Images in Directory By Name
Hopefully this'll be relatively easy to answer as I'm still new to Python.
I need to for-iterate through images of name [x]img[y] where x and y are identifiers that determine how the image is processed. x determines process and is valued 1 or 2 and y determines one of the values to be used in processing.
So, in pseudo-code:
directory = 'my/path/to/images'
for image in directory:
if x = 1:
do process a with y
elif x = 2:
do process b with y
else: print "ERROR"
I'm not familiar enough yet with the code and extracting values from strings to piece together the various different examples I've seen around. I get the feeling I should be able to turn the string into an array and read the 1st and 5th values as integers, right?
Your help is very much appreciated and if you'd like to get a better understanding of what I'm trying to accomplish with this project I've made a GitHub repository here.
Many thanks :)
EDIT
Thanks to Ely for getting me this far; however, that introduced another error. Here's the test code I'm using to open the directory, which contains the files 1img2.png, 1img4.png and 2img1.png.
import os
import re
DIRECTORY = '/home/pi/Desktop/ScannerDev/TestPhotos/'
for img_filename in os.listdir(DIRECTORY):
x, y = os.path.splitext(img_filename)[0].split('img')
print 'Camera is ', x
print 'Image is ', y
"""
for img_filename in os.listdir(DIRECTORY):
a = re.search('^(.*)img(.*)\.png$', img_filename)
if a is None:
break
else:
x = a.group(1)
y = a.group(2)
print 'Camera is ', x
print 'Image is ', y
"""
Which returns:
Camera is 2
Image is 1
Traceback (most recent call last):
File "iteratorTEST.py", line 10, in <module>
x, y = os.path.splitext(img_filename)[0].split('img')
ValueError: need more than 1 value to unpack
A:
Use os.listdir for iterating over the image files in the directory:
for img_filename in os.listdir('my/path/to/images'):
# in case you don't need to worry about anything but 'img'.
x, y = img_filename.split('img')
# in case you want to remove .png as in the comments.
x, y = os.path.splitext(img_filename)[0].split('img')
# do stuff here
If you want to enforce your assumption, say of a .png file name, when iterating the directory, you could do this:
raw_files = os.listdir(DIRECTORY)
for img_filename in filter(lambda x: x.endswith('.png'), raw_files):
...
or
for img_filename in [x for x in os.listdir(DIRECTORY) if x.endswith('.png')]:
...
or various other variations of the idea.
Note that you may also want to further post-process the result of x and y above, like converting them to int, or removing any file extension (if present in the file name), and there are various other helper functions in the os module that will help with it.
This solution makes a hard assumption that the files follow your given format exactly, and that the special string img only appears once in the file name, and acts as a perfect separator between the X portion and the Y portion you need. You'll have to do some additional processing if this assumption is not true.
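A sketch of the matching step that tolerates non-conforming names, assuming the [x]img[y].png scheme from the question (the sample filenames and the classify helper are illustrative):

```python
import re

PATTERN = re.compile(r'^(\d+)img(\d+)\.png$')

def classify(filenames):
    """Yield (x, y) integer pairs for names matching '<x>img<y>.png'."""
    for name in filenames:
        m = PATTERN.match(name)
        if m is None:
            continue  # skip files that don't follow the naming scheme
        yield int(m.group(1)), int(m.group(2))

files = ['1img2.png', '1img4.png', '2img1.png', 'notes.txt']
print(list(classify(files)))  # [(1, 2), (1, 4), (2, 1)]
```

In the real loop you would feed it os.listdir(DIRECTORY) and dispatch on x (process a or b) with y as the parameter.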
| {
"pile_set_name": "StackExchange"
} |
Q:
Python KeyError: pandas: match row value to column name/key where some keys are missing
I have DataFrame which looks something like below:
Q5 | Q10 | Q41 | item
a | b | c | Q5
d | e | f | Q10
g | h | i | Q571
j | k | l | Q23340
m | n | o | Q41
h | p | s | Q10
Where Q5, Q10, Q41, item are column names of the DataFrame. I want to add one more column "name" which will have value of the column where value of column "item" matched with the column name. So I want it to look like as below:
Q5 | Q10 | Q41 | item | name
a | b | c | Q5 | a
d | e | f | Q10 | e
g | h | i | Q571 | NA
j | k | l | Q23340 | NA
m | n | o | Q41 | o
h | p | s | Q10 | p
The problem here is that there are more items than columns. So not all the values in the item column exist as columns, which causes a KeyError. I tried doing it like below:
df['col_exist'] = [(col in df.columns) for col in df.item]
df['name'] = np.where(df['col_exist']==True, df[df.item], np.nan)
And I get error as:
KeyError: "['Q571', 'Q23340'] not in index"
I also tried using df.apply as below:
df['name'] = np.where(df['col_exist']==True, df.apply(lambda x: x[x.item], axis=1), np.nan)
But I am getting error as below:
KeyError: ('Q571', 'occurred at index 2')
I am not sure why it is trying to access a column which does not exist despite the col_exist check being in place.
Can someone please help me to resolve this issue?
A:
You can filter item column based on columns then use lookup i.e
df['new'] = df['item'].apply(lambda x : x if x in df.columns else np.nan)
or
df['new'] = np.where(df['item'].isin(df.columns), df['item'], np.nan)
df['name'] = np.nan
df['name'] = df.lookup(df.index,df['new'].fillna('name'))
Output:
Q5 Q10 Q41 item new name
0 a b c Q5 Q5 a
1 d e f Q10 Q10 e
2 g h i Q571 NaN NaN
3 j k l Q23340 NaN NaN
4 m n o Q41 Q41 o
5 h p s Q10 Q10 p
To remove new column df = df.drop('new',1)
To make your approach work instead of df[df.item] use df['item']
df['name'] = np.where(df['col_exist']==True, df['item'], np.nan)
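A self-contained sketch of the same row-wise pick (using a plain comprehension instead of lookup, which is deprecated in recent pandas; the sample frame reproduces the question's data):

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({
    'Q5':   list('adgjmh'),
    'Q10':  list('behknp'),
    'Q41':  list('cfilos'),
    'item': ['Q5', 'Q10', 'Q571', 'Q23340', 'Q41', 'Q10'],
})

# For each row, pick the value of the column named in 'item',
# or NaN when that column does not exist.
df['name'] = [row[item] if item in df.columns else np.nan
              for item, (_, row) in zip(df['item'], df.iterrows())]
print(df['name'].tolist())  # ['a', 'e', nan, nan, 'o', 'p']
```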
| {
"pile_set_name": "StackExchange"
} |
Q:
Increase number of visitors to a question (without bounty)
I need a tricky idea to increase the number of visitors to my questions on Super User. Sometimes I ask a question on SU where the answer is really important to me, but after a week or more it has had just 10 to 30 visitors and no answer; how can I increase the number of visitors to such questions?
P.S.: One idea would be to have the question suggested by Google to other people searching the topic, but that would need a tricky way to force Google's crawler and spider to show the question in the top 3 results.
A:
In addition to bounty, see this section of the /faq --
What if I don't get a good answer?
In order to get good answers, you have to put some effort into your question. Edit your question to provide status and progress updates. Document your own continued efforts to answer your question. This will naturally bump your question and get more people interested in it.
| {
"pile_set_name": "StackExchange"
} |
Q:
When is it appropriate to have minor version numbers for tags?
There are some very specific tags like ruby-1.9.2, for which there is already ruby-1.9.
When is it appropriate to have minor version numbers for tags?
Also, would it make sense to automatically have these revisions pointing to the original tag? (I'm aware it's been asked before: Would specialized version tags be useful?)
A:
I believe it's appropriate to use tags when your question is specific to that revision. For example.
Your code works with ruby 1.9. But with 1.9.2 it behaves differently.
Or when you are a lazy copy/paste activist.
| {
"pile_set_name": "StackExchange"
} |
Q:
Prevent copying Site Templates
How can I prevent copying of my site templates, such as CSS & JavaScript code, so that no one can copy my website?
please help me.
I tried many ways but it was inconclusive. I'm sorry that I asked this question. There is no way around this problem. I realized my mistake.
A:
Sorry to say but all client-side code can be copied.
Your best bet is to stop publishing your site online.
Obfuscate
However, there are tools that help you obfuscate code. Here's one.
http://www.javascriptobfuscator.com/
http://htmlobfuscator.com/
Minify
On the other hand, most techniques employ that minify your code.
Here's two links:
http://cssminifier.com/
http://www.willpeavy.com/minifier/
| {
"pile_set_name": "StackExchange"
} |
Q:
Getting screen coordinate of action bar menu item for creating introduction screen
I was wondering if there is any way of getting the screen coordinates of an action bar menu item?
I would like to create an introduction screen that draws an arrow pointing to the desired action bar menu item, so that the user knows where to start.
A:
Here's how I did it:
@Override
public boolean onOptionsItemSelected(MenuItem item) {
if (item.getItemId() == R.id.my_menu_item) {
View menuView = findViewById(R.id.menu_item_search);
int[] location = new int[2];
menuView.getLocationOnScreen(location);
int menuViewX = location[0];
int menuViewY = location[1];
}
}
| {
"pile_set_name": "StackExchange"
} |
Q:
Generate beta-binomial distribution from existing vector
Is it possible to/how can I generate a beta-binomial distribution from an existing vector?
My ultimate goal is to generate a beta-binomial distribution from the below data and then obtain the 95% confidence interval for this distribution.
My data are body condition scores recorded by a veterinarian. The values of body condition range from 0-5 in increments of 0.5. It has been suggested to me here that my data follow a beta-binomial distribution, discrete values with a restricted range.
set1 <- as.data.frame(c(3,3,2.5,2.5,4.5,3,2,4,3,3.5,3.5,2.5,3,3,3.5,3,3,4,3.5,3.5,4,3.5,3.5,4,3.5))
colnames(set1) <- "numbers"
I see that there are multiple functions which appear to be able to do this, betabinomial() in VGAM and rbetabinom() in emdbook, but my stats and coding knowledge is not yet sufficient to be able to understand and implement the instructions provided on the function help pages, at least not in a way that has been helpful for my intended purpose yet.
A:
We can look at the distribution of your variables, y-axis is the probability:
x1 = set1$numbers*2
h = hist(x1,breaks=seq(0,10))
bp = barplot(h$counts/length(x1),names.arg=(h$mids+0.5)/2,ylim=c(0,0.35))
You can try to fit it, but you have too few data points to estimate the 3 parameters needed for a beta-binomial. Hence I fix the probability so that the mean is the mean of your scores, and looking at the distribution above it seems OK:
library(bbmle)
library(emdbook)
library(MASS)
mtmp <- function(prob,size,theta) {
-sum(dbetabinom(x1,prob,size,theta,log=TRUE))
}
m0 <- mle2(mtmp,start=list(theta=100),
data=list(size=10,prob=mean(x1)/10),control=list(maxit=1000))
THETA=coef(m0)[1]
We can also use a normal distribution:
normal_fit = fitdistr(x1,"normal")
MEAN=normal_fit$estimate[1]
SD=normal_fit$estimate[2]
Plot both of them:
lines(bp[,1],dbetabinom(1:10,size=10,prob=mean(x1)/10,theta=THETA),
col="blue",lwd=2)
lines(bp[,1],dnorm(1:10,MEAN,SD),col="orange",lwd=2)
legend("topleft",c("normal","betabinomial"),fill=c("orange","blue"))
I think you are actually ok with using a normal estimation and in this case it will be:
normal_fit$estimate
mean sd
6.560000 1.134196
| {
"pile_set_name": "StackExchange"
} |
Q:
C# USB eToken signature and validation issue
I have an x509 certificate with a public and private key that is stored on a safenet usb token.
I have some data I want to sign. I need to use the public key of the certificate to verify the signature.
Ultimate code doing the signing with my own self signed certificate:
RSACryptoServiceProvider rsa1 = (RSACryptoServiceProvider)useCertificate.PrivateKey;
byte[] digitalSignature = rsa1.SignHash(hash, CryptoConfig.MapNameToOID("SHA1"));
And the code to verify using the public key of the certificate:
RSACryptoServiceProvider rsa = (RSACryptoServiceProvider)useCertificate.PublicKey.Key;
Verified = rsa.VerifyHash(hash, CryptoConfig.MapNameToOID("SHA1"), digitalSignature);
With the self-signed certificate this works fine. The signature I get back is 256 bytes.
With the token using this code to obtain the signature and then verify it, I get only 128 Byte signature and the verify fails:
CspParameters csp = new CspParameters(1, "SafeNet RSA CSP");
csp.Flags = CspProviderFlags.UseDefaultKeyContainer;
csp.KeyNumber = (int)KeyNumber.Signature;
RSACryptoServiceProvider rsa1 = new RSACryptoServiceProvider(csp);
Verify code same as above.
I note that the certificate I want to use is the default in the token. Why am I only getting a 128-byte signature back instead of 256 bytes? I suspect that is why it won't verify.
Do I need some other parameters and settings in my csp?
Thanks
* Update based on comments *
It's clear that I am using 1024 bits when I specify the csp.keyNumber = (int)KeyNumber.Signature - but this is the only way the token actually returns anything. Even though the token key size is 2048 bits and the key specification is AT_KEYEXCHANGE. When I use the exchange keynumber which I think is actually correct, then when I try to compute a signature I am prompted to login, but then I get an exception "The parameter is invalid". So I need one of 2 things as far as I can see:
1 - how to use the public key to verify the signature using 1024 bits (without the token - we need to verify on a machine without the token).
or
2 - how to set whatever is incorrect so that we can get past the exception -- which I think is the better idea.
Does anyone have any advice on what I can do about this exception or what might be causing it?
Full exception details below:
HResult = -2147024809
Message = The parameter is incorrect.
Stack Trace
at System.Security.Cryptography.CryptographicException.ThrowCryptographicException(Int32 hr)
at System.Security.Cryptography.Utils.SignValue(SafeKeyHandle hKey, Int32 keyNumber, Int32 calgKey, Int32 calgHash, Byte[] hash, Int32 cbHash, ObjectHandleOnStack retSignature)
at System.Security.Cryptography.Utils.SignValue(SafeKeyHandle hKey, Int32 keyNumber, Int32 calgKey, Int32 calgHash, Byte[] hash)
at System.Security.Cryptography.RSACryptoServiceProvider.SignHash(Byte[] rgbHash, Int32 calgHash)
at System.Security.Cryptography.RSACryptoServiceProvider.SignHash(Byte[] rgbHash, String str)
at TE.Program.Main(String[] args) in z:\Work\compusolve\enctest\TE\TE\Program.cs:line 77
A:
The answer to this is twofold. If you are using one of these devices, I found that in the registry under HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Cryptography\Defaults\Provider
there are 3 different providers, each with identical settings for type and even image (the dll used). But selecting a different one, in my case Datakey RSA CSP, provided the 256-byte signature based on the 2048-bit key. You also have to ensure that the certificate you are using is the default certificate in the token. In my case there were two different certificates: I was verifying using one, but signing using another.
Complete source code for a test client is below:
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
using System.Security.Cryptography.X509Certificates;
using System.Security.Cryptography;
namespace TE
{
class Program
{
static void Main(string[] args)
{
try
{
// these variables should be changed to match your installation
// find CSP's in this windows registry key: HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Cryptography\Defaults\Provider
string TokenCSPName = "Datakey RSA CSP";
string TokenCertificateName = "ACME Inc";
string NonTokenCertificateName = "SelfSigned";
string certLocation = "Token"; // change to something else to use self signed "Token" for token
// the certificate on the token should be installed into the local users certificate store
// tokens will not store or export the private key, only the public key
// find the certificate we want to use - there's no recovery if the certificate is not found
X509Store store = new X509Store(StoreName.My, StoreLocation.CurrentUser);
store.Open(OpenFlags.OpenExistingOnly);
X509Certificate2Collection certificates = store.Certificates;
X509Certificate2 certificate = new X509Certificate2();
X509Certificate2 useCertificate = new X509Certificate2();
if (certLocation == "Token")
{
for (int i = 0; i < certificates.Count; i++)
{
certificate = certificates[i];
string subj = certificate.Subject;
List<X509KeyUsageExtension> extensions = certificate.Extensions.OfType<X509KeyUsageExtension>().ToList();
if (certificate.GetNameInfo(X509NameType.SimpleName, false).ToString() == TokenCertificateName)
{
for (int j = 0; j < extensions.Count; j++)
{
if ((extensions[j].KeyUsages & X509KeyUsageFlags.DigitalSignature) == X509KeyUsageFlags.DigitalSignature)
{
useCertificate = certificate;
j = extensions.Count + 1;
}
}
}
}
} else
{
for (int i = 0; i < certificates.Count; i++)
{
certificate = certificates[i];
string subj = certificate.Subject;
List<X509KeyUsageExtension> extensions = certificate.Extensions.OfType<X509KeyUsageExtension>().ToList();
if (certificate.GetNameInfo(X509NameType.SimpleName, false).ToString() == NonTokenCertificateName)
useCertificate = certificate;
}
}
CspParameters csp = new CspParameters(1, TokenCSPName);
csp.Flags = CspProviderFlags.UseDefaultKeyContainer;
csp.KeyNumber = (int)KeyNumber.Exchange;
RSACryptoServiceProvider rsa1 = new RSACryptoServiceProvider(csp);
string SignatureString = "Data that is to be signed";
byte[] plainTextBytes = Encoding.ASCII.GetBytes(SignatureString);
bool Verified = false;
using (SHA1CryptoServiceProvider shaM = new SHA1CryptoServiceProvider())
{
// hash the data to be signed - you can use signData and avoid the hashing if you like
byte[] hash = shaM.ComputeHash(plainTextBytes);
// sign the hash
byte[] digitalSignature = rsa1.SignHash(hash, CryptoConfig.MapNameToOID("SHA1"));
// check your signature size here - if not 256 bytes then you may not be using the proper
// crypto provider
// Verify the signature with the hash
RSACryptoServiceProvider rsa = (RSACryptoServiceProvider)useCertificate.PublicKey.Key;
Verified = rsa.VerifyHash(hash, CryptoConfig.MapNameToOID("SHA1"), digitalSignature);
if (Verified)
{
Console.WriteLine("Signature Verified");
}
else
{
Console.WriteLine("Signature Failed Verification");
}
}
}
catch (Exception e)
{
Console.WriteLine(e.Message);
}
}
}
}
| {
"pile_set_name": "StackExchange"
} |
Q:
Magento2 get Shipping Rates Programmatically
Let me know how to get shipping rates programmatically. I tried with the default collectRates(), but it's still not working.
A:
This will help you
$quote = $this->checkoutSession->getQuote();
$address = $quote->getShippingAddress();
$address->collectShippingRates();
| {
"pile_set_name": "StackExchange"
} |
Q:
Selecting by attribute using Python and a list
I have been using this script to select a feature using Python:
layer = iface.activeLayer()
layer.selectByExpression('\"Declividad\"= value', QgsVectorLayer.SetSelection)
selection = layer.selectedFeatures()
But now, I need to select the values for Declividad from a list.
For example: list = [10,11,12], so I want to select the values 10, 11 and 12 for Declividad.
How could I do that?
A:
You could just use an expression like:
layer = iface.activeLayer()
my_list = [10, 11, 12]
values = ','.join(str(x) for x in my_list)
layer.selectByExpression('\"Declividad\" IN (' + values + ')', QgsVectorLayer.SetSelection)
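The selectByExpression call itself needs a running QGIS session, but the expression-building half of the snippet is plain Python and can be checked standalone:

```python
# Build the same IN-clause expression as above, without QGIS.
my_list = [10, 11, 12]
values = ','.join(str(x) for x in my_list)
expression = '"Declividad" IN (' + values + ')'
print(expression)  # "Declividad" IN (10,11,12)
```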
| {
"pile_set_name": "StackExchange"
} |
Q:
Solving $\iint \frac{1}{(x^2+y^2+1)^{3/2}} dx dy $
This is a question in a book of statistics and probability. To prove that this function is a Probability density function, we should solve it to get the answer equals to 1.
I haven't had to deal with these kinds of integrals for a while.
I need a level by level solution with complete description.
Any help would be greatly appreciated.
problem
A:
$$\iint_{\mathbb{R}^2}\frac{dx\,dy}{(1+x^2+y^2)^{3/2}}=\int_{0}^{+\infty}\pi\frac{2 \rho}{(1+\rho^2)^{3/2}}\,d\rho=\pi\left[-\frac{2}{\sqrt{1+\rho^2}}\right]_{0}^{+\infty}=\color{red}{2\pi}. $$
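The step above switches to polar coordinates (x = r cos t, y = r sin t, with Jacobian r), which turns the angular part into a factor of 2π and leaves the radial integral shown. As a quick sanity check, that radial integral can be approximated numerically in pure Python (a sketch only: trapezoidal rule with a truncated upper limit):

```python
import math

# Radial integrand after the polar substitution: 2*pi * r / (1 + r^2)^(3/2).
def integrand(r):
    return 2.0 * math.pi * r / (1.0 + r * r) ** 1.5

# Composite trapezoidal rule on [0, 2000]; the discarded tail contributes
# only 2*pi / sqrt(1 + 2000^2), i.e. about 0.003.
a, b, n = 0.0, 2000.0, 100_000
h = (b - a) / n
total = 0.5 * (integrand(a) + integrand(b))
total += sum(integrand(a + i * h) for i in range(1, n))
approx = total * h
print(approx)  # roughly 6.280, close to 2*pi = 6.2832...
```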
| {
"pile_set_name": "StackExchange"
} |
Q:
TO DO LIST in javascript
I want to add a delete button beside each of the items that are to be added.
How do I do this properly so that all the functions work?
I have tried the method below as you will see in the code. This seems correct to me but it's not working. This needs to be purely JavaScript.
var button = document.createElement("BUTTON");
var ul = document.getElementById("list");
var li = document.createElement("li");
function handleAddNewItem() //adds new items and more
{
var item = document.getElementById("input").value;
var ul = document.getElementById("list");
var li = document.createElement("li");
if (item === '') {
alert("Input field can not be empty");
}
else {
button.innerText = "Delete";
li.appendChild(document.createTextNode("- " + item));
ul.appendChild(li);
ul.appendChild(button);
}
document.getElementById("input").value = ""; //clears input
//li.onclick = clearDom;
}//code deletes items by clearDom function
document.body.onkeyup = function (e) //allows items to be added with enter button
{
if (e.keyCode == 13) {
handleAddNewItem();
}
}
function clearDom() {
//e.target.parentElement.removeChild(e.target);//removeChild used
ul.removeChild(li);
ul.removeChild(button);
}
button.addEventListener("click", clearDom);
<body>
<input id="input" placeholder="What needs to be done?">
<button id="add_button" onclick="handleAddNewItem()">ADD</button>
<ul id="list">
</ul>
</body>
<script src="new.js"></script>
</html>
var button = document.createElement("BUTTON");
var ul = document.getElementById("list");
var li = document.createElement("li");
function handleAddNewItem() //adds new items and more
{
var item = document.getElementById("input").value;
var ul = document.getElementById("list");
var li = document.createElement("li");
if (item === '') {
alert("Input field can not be empty");
} else {
button.innerText = "Delete";
li.appendChild(document.createTextNode("- " + item));
ul.appendChild(li);
ul.appendChild(button);
}
document.getElementById("input").value = ""; //clears input
//li.onclick = clearDom;
} //code deletes items by clearDom function
document.body.onkeyup = function(e) //allows items to be added with enter button
{
if (e.keyCode == 13) {
handleAddNewItem();
}
}
function clearDom() {
//e.target.parentElement.removeChild(e.target);//removeChild used
ul.removeChild(li);
ul.removeChild(button);
}
button.addEventListener("click", clearDom);
<input id="input" placeholder="What needs to be done?">
<button id="add_button" onclick="handleAddNewItem()">ADD</button>
<ul id="list">
</ul>
<!-- commented out to reduce errors in the console
<script src="new.js"></script> -->
I am facing this error for now-
"The node to be removed is not a child of this node. at
HTMLButtonElement.clearDom new.js:33:7"
I want to implement the delete button in line with the items listed, so that it deletes the added items one by one.
A:
I'd suggest:
function handleAddNewItem() {
/* Move the creation of all variables within the function
in which they're being used: */
const button = document.createElement('button'),
ul = document.getElementById('list'),
li = document.createElement('li'),
item = document.getElementById('input').value;
// here we use String.prototype.trim() to remove leading
// and trailing whitespace from the entered value, to
// prevent a string of white-space (' ') being considered
// valid:
if (item.trim() === '') {
alert("Input field can not be empty");
} else {
button.textContent = "Delete";
// here we again use String.prototype.trim(), this time to
// avoid the creation of a ' task '
// with extraneous white-space:
li.appendChild(document.createTextNode("- " + item.trim()));
// appending the <button> to the <li> instead
// of the <ul> (of which it would be an invalid
// child element anyway):
li.appendChild(button);
ul.appendChild(li);
}
document.getElementById("input").value = ''; //clears input
}
document.body.onkeyup = function(e) //allows items to be added with enter button
{
if (e.keyCode == 13) {
handleAddNewItem();
}
}
// the e - the EventObject - is passed automagically from
// the later use of EventTarget.addEventListener():
function clearDom(e) {
// e.target is the element on which the event that we're
// reacting to was originally fired (the <button>):
const clickedButton = e.target;
// here we use DOM traversal methods to find the closest
// ancestor <li> element, and then use ChildNode.remove()
// to remove it from the DOM:
clickedButton.closest('li').remove();
}
// using event-delegation to catch the
// delete-button clicks:
// first we retrieve the element already on the page which
// will be an ancestor of the appended elements:
document.getElementById('list')
// we then bind the clearDom() function - note the deliberate
// lack of parentheses - as the 'click' event-handler:
.addEventListener('click', clearDom);
function handleAddNewItem() {
/* Creating all variables within the function: */
const button = document.createElement('button'),
ul = document.getElementById('list'),
li = document.createElement('li'),
item = document.getElementById('input').value;
if (item.trim() === '') {
alert("Input field can not be empty");
} else {
button.textContent = "Delete";
li.appendChild(document.createTextNode("- " + item));
li.appendChild(button);
ul.appendChild(li);
}
document.getElementById("input").value = '';
}
document.body.onkeyup = function(e) {
if (e.keyCode == 13) {
handleAddNewItem();
}
}
function clearDom(e) {
const clickedButton = e.target;
clickedButton.closest('li').remove();
}
document.getElementById('list')
.addEventListener('click', clearDom);
<input id="input" placeholder="What needs to be done?">
<button id="add_button" onclick="handleAddNewItem()">ADD</button>
<ul id="list">
</ul>
While this question is, arguably, already answered, I had a few moments to spare and took advantage of it to begin learning how to use custom elements. The code, as above, is explained so far as possible using comments in the code itself:
// using an Immediately-Invoked Function
// Expression ('IIFE') to handle the creation of the
// custom element:
(function() {
// creating an HTML <template> element, this could
// instead be placed in, and retrieved from, the DOM:
const template = document.createElement('template');
// using a template literal to create, and format
// the HTML of the created <template> (using a template
// literal allows for new-lines and indentation):
template.innerHTML = `
<style>
*, ::before, ::after {
padding: 0;
margin: 0;
box-sizing: border-box;
}
div.layout {
display: grid;
grid-template-columns: 1fr min-content;
}
div.buttonWrap {
display: flex;
flex-direction: column;
align-items: flex-start;
}
</style>
<div class="layout">
<p></p>
<div class="buttonWrap">
<button>delete</button>
</div>
</div>
`;
// using class syntax:
class TaskItem extends HTMLElement {
// the constructor for the class and, by extension,
// the element that we're defining/creating:
constructor() {
// it seems that super() must be placed as the
// first thing in the constructor function:
super();
// we're holding the contents of the custom
// element in the Shadow DOM, to avoid its
// descendants being affected by CSS in the
// parent page and to prevent JavaScript in
// the document from interacting with the
// contents:
this.attachShadow({
// we want to interact and use elements in
// the Shadow Root, so it must be 'open'
// (although 'closed' is the other valid
// mode-type:
mode: 'open'
});
// here we append the content - not the node
// itself - of the created <template> element
// using Node.cloneNode(), the Boolean true
// means that the descendant elements are also
// cloned and therefore appended:
this.shadowRoot.appendChild(
template.content.cloneNode(true)
);
// for easier reading we cache the shadowRoot
// here (otherwise line-lengths can be a bit
// silly):
const root = this.shadowRoot,
// retrieving the <button> element, which will
// handle the task deletion:
del = root.querySelector('button');
// binding the anonymous function - defined
// using an Arrow function as we don't
// want to change the 'this' in the function -
// as the event-handler for the 'click' event:
del.addEventListener('click', () =>
// here we traverse to the parentNode of
// the 'this', and then use
// parentNode.removeChild() to remove the
// 'this' node:
this.parentNode.removeChild(this));
}
// this callback is executed when the element is
// connected/attached to the DOM:
connectedCallback() {
// we find the Shadow Root:
this.shadowRoot
// find the descendent <p> element:
.querySelector('p')
// and set its text-content to be equal
// to that of the data-task attribute:
.textContent = this.dataset.task;
}
}
// here we define the custom element and its
// class:
window.customElements.define('task-item', TaskItem);
})();
// here we cache a reference to the <button> which will
// cause the addition of new tasks:
const addTask = document.getElementById('add_button'),
// define the function that will handle the
// addition of new tasks:
createTask = () => {
// caching the <input> element:
const taskSource = document.getElementById('input'),
// retrieving and trimming the entered
// <input> value:
task = taskSource.value.trim(),
// creating a new element (custom
// elements are created the same way
// as 'normal' elements):
createdTask = document.createElement('task-item');
// updating the data-task attribute, for
// retrieval/use later when the element
// is added to the DOM:
createdTask.dataset.task = task;
// if we have a task (a zero-length/empty
// string is considered falsey, a string
// with a length greater than zero is
// considered truthy and string with negative
// length is considered impossible (I think),
// and therefore falsey:
if (task) {
// we retrieve the element holding the
// <task-item> elements:
document.getElementById('list')
// and append the created element:
.appendChild(createdTask);
}
// removing the <input> element's value:
taskSource.value = '';
};
// adding createTask() as the event-handler for
// the 'click' event on the <button>:
addTask.addEventListener('click', createTask);
// binding an anonymous function as the handler for
// keyup events on the <body> (binding to a closer
// ancestor would be more sensible in production):
document.body.addEventListener('keyup', (e) => {
// if the e.which is 13 we trust that to be the
// enter key, and then we call createTask()
if (e.which === 13) {
createTask();
}
})
#list {
margin-top: 0.5em;
min-height: 1.5em;
background: transparent radial-gradient(at 0 0, skyblue, lime);
display: grid;
grid-template-columns: repeat(auto-fit, minmax(200px, 1fr));
grid-gap: 5px;
}
#list:empty::before {
content: 'Add a new task!';
background: transparent linear-gradient(to right, #fffa, #fff0);
padding: 0 0 0 1em;
}
task-item {
border: 2px solid lime;
padding: 0.25em;
background-color: #fff9;
}
<input id="input" class="add_task" placeholder="What needs to be done?">
<button id="add_button" class="add_task">ADD</button>
<div id="list"></div>
JS Fiddle demo.
ChildNode.remove().
Classes.
Constructor.
document.createElement().
document.getElementById().
document.querySelector().
Element.attachShadow().
Event object.
event.target.
EventTarget.addEventListener().
Node.appendChild().
Node.parentNode.
Node.removeChild().
Node.textContent.
super().
Window.customElements.
| {
"pile_set_name": "StackExchange"
} |
Q:
Finding Windows API Vulnerabilities
Currently I'm reading about Windows APIs like CreateProcess and others.
I also took a look at the source of the PowerShell script which exploited the MS16-032 vulnerability.
My question is: what approaches can you take to find such vulnerabilities in the APIs, or in general to find things like this?
I have watched videos about Windows kernel vulnerabilities, but can't find any good documentation where this is described.
A:
Userland:
understand how the Windows native API works, e.g. something like (and probably not accurate): ZwCreateProcess -> NTCreateProcess -> CreateProcess
https://undocumented.ntinternals.net/
https://www.google.com/shopping/product/10794697035904501181?q=native+api+book&biw=1486&bih=811&sa=X&ved=0ahUKEwjFzNPlkLLWAhVkJcAKHdwGDRYQ8wIIgwMwAA
Windows Internals:
https://www.amazon.com/Windows-2000-Native-API-Reference/dp/1578701996
You'll also want to know x86/x64 calling conventions (STDcall), e.g. how arguments are passed and return values.
Last, what the OS expects and how the return values are used.
Ring0:
Uninformed.org has a good writeup:
http://uninformed.org/index.cgi?v=10&a=2
| {
"pile_set_name": "StackExchange"
} |
Q:
How to create a loop through LinkedHashMap>?
Please help me to create a loop through LinkedHashMap<String,ArrayList<String>> h:
if (h.get("key1").size() == 0)
System.out.println("There is no errors in key1.");
else
System.out.println("ERROR: there are unexpected errors in key1.");
if (h.get("key2").size() == 0)
System.out.println("There is no errors in key2.");
else
System.out.println("ERROR: there are unexpected errors in key2.");
if (h.get("key3").size() == 0)
System.out.println("There is no errors in key3.");
else
System.out.println("ERROR: there are unexpected errors in key3.");
if (h.get("key4").size() == 0)
System.out.println("There is no errors in key4.\n");
else
System.out.println("ERROR: there are unexpected errors in key4.\n");
A:
Like this?
for (String key : h.keySet())
{
System.out.println("Key: " + key);
for(String str : h.get(key))
{
System.out.println("\t" +str);
}
}
EDIT:
for (String key : h.keySet())
{
if(h.get(key).size() == 0)
{
System.out.println("There is no errors in " + key) ;
}
else
{
System.out.println("ERROR: there are unexpected errors in " + key);
}
}
A:
Try this code:
Map<String, ArrayList<String>> a = new LinkedHashMap<String, ArrayList<String>>();
Iterator<Entry<String,ArrayList<String>>> itr = a.entrySet().iterator();
while (itr.hasNext()) {
Entry<String,ArrayList<String>> entry = itr.next();
String key = entry.getKey();
System.out.println("key: " + key);
List<String> list = entry.getValue();
System.out.println("value: " + list);
}
A:
Another way in Java8 is with the foreach() method
Map<String, List<String>> test1 = new LinkedHashMap<String, List<String>>();
test1.forEach((key,value) -> {
System.out.println(key + " -> " + value);
});
| {
"pile_set_name": "StackExchange"
} |
Q:
subquery is not working in mysql
This is my mysql query
SELECT tm.MAGAZINE_ID, tm.MAGAZINE_NAME,tm.MAGAZINE_DESCRIPTION,pub.publisher_name,
tmi.COVER_PAGE_THUMB AS COVER_PAGE_VERTICAL,tmi.FROM_DATE AS ISSUE_DATE,
tm.html_flag AS HTML_EXIST,tm.CATEGORY_ID,tm.language_id,tm.is_free,tma.AppUrl,
(SELECT issue_id from tbl_magazine_issue WHERE magazine_id = 141
ORDER BY FROM_DATE DESC LIMIT 1) as temp_issue_id
FROM tbl_magazine_apps as tma
LEFT OUTER JOIN tbl_magazine_code as tmc ON tmc.Code = tma.AppsCode
LEFT OUTER JOIN `tbl_magazine` AS tm ON tmc.magazine_Id = tm.MAGAZINE_ID
JOIN `tbl_magazine_issue` AS tmi ON temp_issue_id = tmi.issue_id
LEFT OUTER JOIN mst_publisher AS pub ON tm.publisher_id=pub.publisher_id
WHERE
tmi.PUBLISH_STATUS IN(1,3)
AND tmi.`OS_SELECT` = '".$osType."'
AND tma.id IN (".$appIds.")
GROUP BY tm.MAGAZINE_ID
ORDER BY tmi.ISSUE_DATE DESC
But I got this error:
#1054 - Unknown column 'temp_issue_id' in 'on clause'
If anyone knows about this please help me. I am new to this.
A:
AFAIK the subquery belongs in the FROM part:
http://dev.mysql.com/doc/refman/5.7/en/from-clause-subqueries.html
So I would join the subquery.
Like:
SELECT a.a, b.b
FROM table1 as a
JOIN (SELECT b from table2) as b ON a.key = b.key;
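Applied to the query in the question, the derived-table version would look roughly like this sketch (untested; table and column names as in the original, including the hard-coded magazine_id = 141):

```sql
SELECT tm.MAGAZINE_ID, tm.MAGAZINE_NAME, /* ...other columns as before... */ tmi.issue_id
FROM tbl_magazine_apps AS tma
LEFT OUTER JOIN tbl_magazine_code AS tmc ON tmc.Code = tma.AppsCode
LEFT OUTER JOIN tbl_magazine AS tm ON tmc.magazine_Id = tm.MAGAZINE_ID
JOIN (SELECT issue_id
      FROM tbl_magazine_issue
      WHERE magazine_id = 141
      ORDER BY FROM_DATE DESC
      LIMIT 1) AS latest
JOIN tbl_magazine_issue AS tmi ON tmi.issue_id = latest.issue_id
LEFT OUTER JOIN mst_publisher AS pub ON tm.publisher_id = pub.publisher_id
-- ...rest of the WHERE / GROUP BY / ORDER BY as before...
```

The derived table gets an alias (latest), so its issue_id is visible in later ON clauses, which a column alias defined in the SELECT list is not.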
| {
"pile_set_name": "StackExchange"
} |
Q:
Is it possible to match a url with xpath in a nightwatch assertion?
In the nightwatch documentation I can see that there are both the urlContains
and urlEquals assertions that can be specified in a nightwatch.js but these do not allow xpath selectors. I have applied the global parameter to use Xpath everywhere (I will explain below) but I am looking for a way to assert (in a fuzzy way) that the URL of the current page matches a pattern.
The reason is that I test on numerous instances of the same application which are distinguished by their subdomain(s). I am attempting to make my test automation run on any of these environments (without having to change or duplicate the tests).
A:
One way that you can do this is by using:
browser
.url(function(result) {
//match result.value to expression here
})
This is the best way I've been able to get the URL and doing any kind of matching with it.
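Building on that, the fuzzy matching itself can be plain JavaScript. Here is a sketch, runnable outside Nightwatch ("example.com" and the subdomain pattern are placeholders, not from the question):

```javascript
// Match any subdomain of the base domain, so one test works on every instance.
const pattern = /^https:\/\/[a-z0-9-]+\.example\.com\//;

console.log(pattern.test('https://staging.example.com/login'));  // true
console.log(pattern.test('https://prod-eu.example.com/login'));  // true
console.log(pattern.test('https://evil.com/example.com/login')); // false
```

Inside the url() callback you would then run pattern.test(result.value) and assert on the boolean, so the same test passes regardless of which subdomain it runs against.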
| {
"pile_set_name": "StackExchange"
} |
Q:
How to use mathpix (a LaTeX OCR tool) to identify LaTeX from images?
I'd just heard about mathpix, a tool that can identify formulas from images and generate the LaTeX code. I have some handouts (already printed) from my teacher 10 years ago, written in Chinese with many math formulas. I don't have the original digital files, only those documents on my shelves. I want to turn them into digital files, namely get the Chinese text and the math formulas in LaTeX code so that I can reproduce and reprint them. However, doing it by hand is heavy work, so I want to find some clever way, and I think mathpix can help me a lot. But I have two main questions with respect to it:
If I have a picture like this, with many inline math: (just a demo, not the actual document I have)
Can I get the result both with the English words and LaTeX inline math? (I mean,
get the resulting string "Suppose $A$ is bounded subset of $\Bbb R^n$. If ...") It seems that I need a pure-text OCR tool and mathpix to work together nicely. How can I achieve such a task?
If I have bunches of images to identify, I guess I need to write a Python program with the mathpix API provided in the mathpix API docs. But the given sample code does not work in my Python 3. I'm not good at Python; how should I modify it? Or is there another clever way to do this? (Maybe I should ask this question on another board, but I think fewer people would know LaTeX there.)
A:
If you only have the hard copy version, you could also try using the Mathpix Android or iOS apps to take a picture of the documents and it will render the LaTeX. You can then export the LaTeX. Try it out and see if that works any better for you!
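For the second part of the question (getting the sample API code to run under Python 3), the request-building half can be sketched like this. To flag the assumptions: build_mathpix_body is a made-up helper name, and the "src" data-URL field follows Mathpix's public v3 examples, so verify it against the current docs:

```python
import base64
import json

def build_mathpix_body(image_path):
    """Build the JSON body for a Mathpix v3 request (Python 3 version).

    The key Python 3 difference from older samples: b64encode returns
    bytes, which must be decoded to str before JSON-encoding.
    """
    with open(image_path, "rb") as f:
        encoded = base64.b64encode(f.read()).decode("ascii")
    return json.dumps({"src": "data:image/jpeg;base64," + encoded})
```

The resulting body would then be POSTed (e.g. with the requests library) together with your app_id/app_key headers, looping over your image files.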
| {
"pile_set_name": "StackExchange"
} |
Q:
Handling default parameters in cython
I am wrapping some c++ code using cython, and I am not sure what is the best best way to deal with parameters with default values.
In my C++ code I have functions for which the parameters have default values. I would like to wrap these in such a way that the default values get used if the parameters are not given. Is there a way to do this?
At this point the only way that I can see to provide optional parameters is to define them as part of the Python code (in the def func statement in pycode.pyx below), but then I have defaults defined more than once, which I don't want.
cppcode.h:
int init(const char *address=0, int port=0, int en_msg=false, int error=0);
pycode_c.pxd:
cdef extern from "cppcode.h":
int func(char *address, int port, int en_msg, int error)
pycode.pyx:
cimport pycode_c
def func(address, port, en_msg, error):
return pycode_c.func(address, port, en_msg, error)
A:
You could declare the function with different parameters ("cppcode.pxd"):
cdef extern from "cppcode.hpp":
int init(char *address, int port, bint en_msg, int error)
int init(char *address, int port, bint en_msg)
int init(char *address, int port)
int init(char *address)
int init()
Where "cppcode.hpp":
int init(const char *address=0, int port=0, bool en_msg=false, int error=0);
It could be used in Cython code ("pycode.pyx"):
cimport cppcode
def init(address=None,port=None,en_msg=None,error=None):
if error is not None:
return cppcode.init(address, port, en_msg, error)
elif en_msg is not None:
return cppcode.init(address, port, en_msg)
elif port is not None:
return cppcode.init(address, port)
elif address is not None:
return cppcode.init(address)
return cppcode.init()
And to try it in Python ("test_pycode.py"):
import pycode
pycode.init("address")
Output
address 0 false 0
Cython also has arg=* syntax (in *.pxd files) for optional parameters:
cdef foo(x=*)
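Back to the overload-dispatch wrapper above: the cascading if/elif simply strips trailing unset arguments before picking an overload. A pure-Python model of that dispatch logic (the return value is a stand-in for which C++ overload would be called, namely the number of forwarded arguments):

```python
# Model of the dispatch in pycode.pyx, with the cppcode.init calls
# replaced by a stand-in return of the forwarded-argument count.
def init(address=None, port=None, en_msg=None, error=None):
    args = [address, port, en_msg, error]
    # Drop trailing arguments that were not supplied.
    while args and args[-1] is None:
        args.pop()
    return len(args)

print(init())                       # 0 -> init()
print(init("addr"))                 # 1 -> init(address)
print(init("addr", 8080))           # 2 -> init(address, port)
print(init("addr", 8080, True, 0))  # 4 -> init(address, port, en_msg, error)
```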
| {
"pile_set_name": "StackExchange"
} |
Q:
Styling issues in Safari
I've been working on a website for a little while now, doing most of my testing in Chrome, Firefox, and IE. As I'm wrapping things up, I've tried viewing it in Safari (on Mac, iPad, and iPhone). I've noticed that certain elements are misplaced in Safari. I've tried playing with the CSS, but I've had no luck.
The page can be viewed here - http://staging.princewebdesigns.com/gallais/
See specifically the logo (being pushed down into the banner), the font of the tagline in the banner (wrapping beyond the banner and extending too far to the left), the 'Featured Work' title wrapping, the project names wrapping, and the footer wrapping.
Here is how the page should look - http://staging.princewebdesigns.com/gallais/images/chrome.png
To see how it looks on my iPhone, change the link above to .../iphone.png
Any help is appreciated.
A:
The issue is (I think) that you have your browser's text zoomed in.
I loaded the page in Safari 5.1 on Mac OS 10.7.3, and it loaded fine initially. When I zoomed normally, the layout stayed intact. As soon as I tried zooming just the text, the layout broke per your description.
That being said, you may want to think hard about how to make the layout more 'flexible' in the event a user does have their text size increased. In IE, for example, the default zoom is full page zoom, but a user can still increase their text size apart from zooming. It's worth testing your layout in those situations to make sure it doesn't completely derail. I'm not saying it has to be perfect, but still legible.
One idea is to try out different units. I've found that when declaring horizontal lengths (e.g. margin-left) using relative measurements works, but when declaring vertical lengths (e.g. margin-top) using pixel measurements works better. For super critical items, like the site logo, position:absolute may be a good route to try.
| {
"pile_set_name": "StackExchange"
} |
Q:
Disable Link Using CSS but Enable Title
I want to disable a link using CSS but still enable its title. Here is my code:
CSS and HTML Code:
.disabledLink {
pointer-events: none;
opacity: 0.6;
cursor: default;
}
<a href="www.google.com" class="disableLink" title="My Title" />
<span datahover="test">My Link</span>
</a>
Unfortunately, the title is not appearing when hovering my mouse over the link. Is there a better way to enable the title?
A:
.disabled-link {
pointer-events: none;
}
<span datahover="test" title="My Title"><a href="www.google.com" class="disabled-link">My Link</a></span>
Change your markup to this. It will work. Instead of targeting the <a> tag, I target the span for the title. Hope it helps.
| {
"pile_set_name": "StackExchange"
} |
Q:
Problems fetching a md file using javascript fetch()
I am currently developing a web app, where most of the content is written in markdown. So for handling this, I thought I could create a github repo to host all of the markdown files and then use the fetch() api to grab the files from github.
My code looks like so:
fetch('https://github.com/erasabi/trekthroughs/blob/master/pen_testing/RickdiculouslyEasy.md')
.then(response => response.blob())
.then(result => console.log(result));
I am getting this error though when I do that:
Failed to load https://github.com/erasabi/trekthroughs/blob/master/pen_testing/RickdiculouslyEasy.md: No 'Access-Control-Allow-Origin' header is present on the requested resource. Origin 'null' is therefore not allowed access. If an opaque response serves your needs, set the request's mode to 'no-cors' to fetch the resource with CORS disabled.
Is there any way to go about doing this? The end result is, once I fetch the markdown file's content, I would like to use showdown or markedjs to convert it into HTML for the site.
A:
Figured it out; basically you have to do something like this:
fetch('https://raw.githubusercontent.com/erasabi/trekthroughs/master/pen_testing/RickdiculouslyEasy.md')
.then(response => response.text())
.then(result => document.getElementById('content').innerHTML = marked(result));
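The mapping from a github.com "blob" page URL to its raw counterpart is mechanical, so you can derive it instead of hard-coding the second URL. Here is a small sketch (the helper name is my own, and it assumes the standard /user/repo/blob/branch/path layout):

```javascript
// Hypothetical helper: rewrite a github.com "blob" page URL into the
// raw.githubusercontent.com URL that serves the file with CORS enabled.
function toRawGitHubUrl(blobUrl) {
  const url = new URL(blobUrl);
  if (url.hostname !== "github.com") return blobUrl; // not a GitHub page URL
  // Path looks like /<user>/<repo>/blob/<branch>/<path...>
  const parts = url.pathname.split("/").filter(Boolean);
  if (parts[2] !== "blob") return blobUrl; // not a blob URL, leave unchanged
  const [user, repo, , ...rest] = parts;
  return `https://raw.githubusercontent.com/${user}/${repo}/${rest.join("/")}`;
}

console.log(toRawGitHubUrl(
  "https://github.com/erasabi/trekthroughs/blob/master/pen_testing/RickdiculouslyEasy.md"
));
// -> https://raw.githubusercontent.com/erasabi/trekthroughs/master/pen_testing/RickdiculouslyEasy.md
```

You could then chain it with the fetch itself, e.g. fetch(toRawGitHubUrl(pageUrl)).then(r => r.text()).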
Q:
PLINQ AsParallel() with lower priority?
is it possible to run some of my PLINQ AsParallel() - Queries with a lower priority than others?
(Or some with a higher priority than others)
Is this possible with PLinq or will I have to avoid PLINQ and do all the stuff on my own?
EDIT/UPDATE:
Would it be possible to call
Thread.Sleep(0)
inside the parallel executed method when I want to archive a lower priority?
Or is that a very bad practice/hack?
A:
Unfortunately, this is not directly possible in PLINQ.
You can do it in most of the rest of the Task Parallel Library via creation of a custom TaskScheduler. This would allow you to have custom "priorities" when using Parallel.For or ForEach.
However, the ability to customize the TaskScheduler was not provided with PLINQ, since PLINQ requires very strict guarantees from the TaskScheduler, and the fear was that exposing this would be very problematic.
EDIT/UPDATE:
Would it be possible to call
Thread.Sleep(0)
This would "lower" the priority, but unfortunately it has its own issues, especially when combined with PLINQ. This will potentially cause thread starvation in the ThreadPool, since you'll be "sleeping" on ThreadPool threads.
In addition, there's a fundamental problem with this - PLINQ is designed and intended to handle queries, and is not designed for processing. Introducing logical code to control the flow structure is really against the theory behind PLINQ, and will likely cause unintended performance repercussions you aren't expecting, especially if you're using the default partitioning.
Q:
How do I insert a lot of entities in a Play! Job?
In my application I have to simulate various situations for analysis. Thus insert a (very) large amount of lines into a database. (We're talking about a very large amount of data...several billion)
Model
@Entity
public class Case extends Model {
public String url;
}
Job
public class Simulator extends Job {
public void doJob() {
for (int i = 0; i != n; i++) { // n: placeholder for the (very large) row count
// Somestuff
new Case(someString).save();
}
}
}
After half an hour, there is still nothing in the database. But debug traces show Play inserts some stuff. I suspect it is some kind of cache.
I've tried about everything :
Model.em().flush();
Changes nothing.
Model.em().getTransaction().commit();
throws TransactionRequiredException occured : no transaction is in progress
Model.em().setFlushMode(FlushModeType.COMMIT);
Model.em().setFlushMode(FlushModeType.AUTO);
Changes nothing.
I've also tried @NoTransaction annotations everywhere :
Class & functions in Controller
Class Case
Overriding save method in Model
Class & functions of my Job
Getting quite desperate. Every kind of advice is welcome.
EDIT : After a little research, the first row appears in database. The associated ID is about 550.000. That means about half a million rows are somewhere in between my application and database.
A:
Try
em.getTransaction().begin();
em.persist(model);
em.getTransaction().commit();
You can't commit a transaction before you begin it.
Q:
Any suggestions about which questions to answer?
Greetings fellow stackoverflowers. I'm the only programmer in my company, so I thought stackoverflow might be a "fun" way to get some camaraderie. Also, I like to be helpful. I've been using the system actively for about two weeks, asking the occasional question and answering many (rep now=145). The "problem" I am facing is that I find my mileage varies a lot: sometimes people are very grateful and responsive to help. But a couple of times I have given very detailed answers, only to be ignored: the questioner just vanishes without up-voting or accepting my answer. That makes me feel like I'm wasting my time--and I feel "stupid" posting "please accept my answer" comments. Do y'all have any suggestions about how to discern which questioners are to be taken seriously? Or any other comments about how best to participate as someone who is more of an answerer than an asker? I am not so much interested in maximizing my rep as minimizing my frustration, and maximizing my benefit to serious users.
A:
People who put effort into their questions will generally take care of them. Ways to recognize:
proper spelling
well formatted
includes all the info you need but not more
tends to be longer
If you just want to get reputation there are strategies for that (search here on meta) but they're not too fun. Usually it involves answering easy questions in popular tags very quickly.
A:
Honestly, you kind of answered your own question:
Also, I like to be helpful.
Sometimes very good answers go without upvotes or acceptance. Conversely, sometimes a lot of rep is dished out when half a dozen people quickly give a simple answer to a simple question. There's really no rhyme or reason.
I guess if I were to define my "strategy" it would be that I just like to answer questions. Sure, getting rep is fun. But the real value is in the activity itself. By answering a question, I take a step to refresh (or even further) my own knowledge of a subject. Often I get just as much out of it as the person asking the question.
Though I don't have any stellar answers by measure of upvoting, I like to think I've written some good content on Stack Overflow and have contributed to the community. And, in doing so, have contributed to my own career growth. (I came close to the Unsung Hero badge once, but I think I'm pretty far now.) Maybe I just tend to answer less popular questions? I don't know. But the main thing is to just keep doing it. Answer stuff that's interesting to you. As I said, it helps the person answering as much as it helps the person asking.
Keep in mind also that these questions and answers are saved for posterity. Sure, maybe the person who asked the question has gone on their merry way without any gratitude. But that question and its answer (an answer you provided) is now there for all to see. It's not uncommon for people to search for things on Google and find what they need on Stack Overflow. That answer may help someone else days, weeks, months down the road.
If you enjoy contributing to the community, then by all means keep doing so. There's no shortage of appreciation, even if any given question doesn't indicate as much. If nobody else has said it to you yet... Thank you. Thank you for contributing.
Q:
Coffee choices for Zombie Apocalypse
(This is probably an opinion question.)
With the coronavirus I've been stocking up on foods that last, and that I like to eat. Example, I love figs and dried fruit and eat them daily so I've bought three months worth. I don't like canned sardines so I didn't stock up on that as I would never eat them unless I was under duress.
So the thought went to coffee. I buy my coffee from a local roaster with a store a few blocks from where I live. As a result I don't buy a lot of coffee at one time. In my experience coffee beans are at their best for about two weeks after roasting and start losing their flavor after about a month.
So - and this is an opinion question: How best to provide for a three to six month shortage.
Buy beans and have them get stale OR
Buy packaged coffee: Cafe Bustello or illy?
Yeah. I know. People are dying and hurting and I'm concerned about sitting around in my apartment without any coffee. Chances are this will not be the Zombie Apocalypse; nor will it be a major pandemic outside of China.
But the question remains - how best to stock up for 3-6 months AND not "waste" ones money buying things that one would not use under normal conditions?
TL/DR - After 6 months storage which would be superior - buying cans of illy or freshly roasted beans stored in a bail lid jars?
A:
Freshly roasted beans start degrading after a week or so. Maybe buying cans of beans - ground or not - would be your best choice for drinking not-so-great coffee during the end days. Check that they are vacuum packed; that may help.
I would make another choice: properly stored green coffee beans can last for years. Then I would roast them myself (you're going to have filled propane tanks, right?). You can roast a lot of coffee from one 5 gal propane tank.
That is my selected choice and I am prepared.
Q:
EC2 ELB Wildcard SSL Renewal "Invalid Public Key Certificate"
We have an existing ELB on EC2. It's got a GoDaddy issued wildcard ssl cert. I've downloaded the new .crt and gd_bundle.crt from GoDaddy.
In ec2 I go to the load balancer, click the certificate, choose to upload a new cert. I copy the existing private key into the private key field. The contents of the new .crt into the public certificate field and the contents of gd_bundle.crt into the certificate chain field.
When I try and save it I get the error "Invalid Public Key Certificate."
The certs are in PEM format (or they seem to be)
A:
Turns out I was missing that my key was not an RSA key, I needed to do the following:
openssl rsa -in company.key -out company_rsa.key
Q:
Simulate correlated variables limiting deviations between observed and defined correlation coefficients
dev_allowance <- 0.15 #Deviation in r allowed
within_limit <- FALSE #Initiate
count <- 0 #Loop count
nvar <- 10 #number of variables to simulate
nobs = 50 #number of observations to simulate
#define correlation matrix
M = matrix(c(1., .0, .0, .0, .0, .0, .0, .0, .0, .0,
.0, 1., .0, .0, .0, .0, .0, .0, .0, .0,
.0, .0, 1., .8, .0, .0, .0, .0, .0, .0,
.0, .0, .8, 1., .0, .0, .0, .0, .0, .0,
.0, .0, .0, .0, 1., .2, .0, .0, .0, .0,
.0, .0, .0, .0, .2, 1., .0, .0, .0, .0,
.0, .0, .0, .0, .0, .0, 1., .8, .0, .0,
.0, .0, .0, .0, .0, .0, .8, 1., .0, .0,
.0, .0, .0, .0, .0, .0, .0, .0, 1., .2,
.0, .0, .0, .0, .0, .0, .0, .0, .2, 1.), nrow=nvar, ncol=nvar)
L = chol(M) # Cholesky decomposition
#Loop while not within limit
while (!within_limit) {
# Generate random variables
r = t(L) %*% matrix(rnorm(nvar*nobs), nrow=nvar, ncol=nobs)
r = t(r)
# Check if within limit
within_limit <- all(abs(cor(r) - M) < dev_allowance)
# Count loop
count <- count + 1
}
cat(paste0("run count: ", count))
I am trying to simulate some 10 random normal variables with defined correlations. Meanwhile, I want the correlation of the simulated variables to be within a certain range centered at the defined correlation.
But the run time is unacceptably, if not infinitely, long.
For now, I want to do nobs=50 and nobs=200. While I planned to set dev_allowance=0.05, what I have now is it can take more than a minute when dev_allowance is less than approx. 0.16 for nobs=50 and approx. 0.08 for nobs=200. Not dare to try smaller dev_allowance...
Is there a workaround if I am to stick to this current scheme of parameters?
A:
Well... half-way through typing the question this came into my mind:
sim_nvar <- matrix(rnorm(nobs), ncol=nobs)
for (i in 2:nvar) {
within_limit <- FALSE
while (!within_limit) {
#Generate random variables
sim_var <- t(L)[i, 1:i] %*% rbind(sim_nvar, matrix(rnorm(nobs), ncol=nobs))
sim_var <- t(rbind(sim_nvar, sim_var))
#Check if within limit
within_limit <- all(abs(cor(sim_var) - M[1:i, 1:i]) < dev_allowance)
}
sim_nvar <- t(sim_var)
}
sim_nvar <- t(sim_nvar)
all(abs(cor(sim_nvar) - M) < dev_allowance)
[1] TRUE
It seems okay to me. But is there any flaw if I separate the simulation this way? Or is it the best way yet?
Q:
Animating checkbox replace SVG with Font Awesome
I have created a CodePen here that animates a checkbox. Currently it is using an SVG for the tick. How can I replace it with a font? example.
input[type=checkbox] {
opacity: 0;
float: left;
}
input[type=checkbox] + label {
margin: 0 0 0 20px;
position: relative;
cursor: pointer;
font-size: 16px;
font-family: monospace;
float: left;
}
input[type=checkbox] + label ~ label {
margin: 0 0 0 40px;
}
input[type=checkbox] + label::before {
content: ' ';
position: absolute;
left: -35px;
top: -3px;
width: 25px;
height: 25px;
display: block;
background: white;
border: 1px solid #A9A9A9;
}
input[type=checkbox] + label::after {
content: ' ';
position: absolute;
left: -35px;
top: -3px;
width: 23px;
height: 23px;
display: block;
z-index: 1;
background: url('data:image/svg+xml;base64,PHN2ZyB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciIHZpZXdCb3g9IjE4MS4yIDI3MyAxNyAxNiIgZW5hYmxlLWJhY2tncm91bmQ9Im5ldyAxODEuMiAyNzMgMTcgMTYiPjxwYXRoIGQ9Ik0tMzA2LjMgNTEuMmwtMTEzLTExM2MtOC42LTguNi0yNC04LjYtMzQuMyAwbC01MDYuOSA1MDYuOS0yMTIuNC0yMTIuNGMtOC42LTguNi0yNC04LjYtMzQuMyAwbC0xMTMgMTEzYy04LjYgOC42LTguNiAyNCAwIDM0LjNsMjMxLjIgMjMxLjIgMTEzIDExM2M4LjYgOC42IDI0IDguNiAzNC4zIDBsMTEzLTExMyA1MjQtNTI0YzctMTAuMyA3LTI1LjctMS42LTM2eiIvPjxwYXRoIGZpbGw9IiMzNzM3MzciIGQ9Ik0xOTcuNiAyNzcuMmwtMS42LTEuNmMtLjEtLjEtLjMtLjEtLjUgMGwtNy40IDcuNC0zLjEtMy4xYy0uMS0uMS0uMy0uMS0uNSAwbC0xLjYgMS42Yy0uMS4xLS4xLjMgMCAuNWwzLjMgMy4zIDEuNiAxLjZjLjEuMS4zLjEuNSAwbDEuNi0xLjYgNy42LTcuNmMuMy0uMS4zLS4zLjEtLjV6Ii8+PHBhdGggZD0iTTExODcuMSAxNDMuN2wtNTYuNS01Ni41Yy01LjEtNS4xLTEyLTUuMS0xNy4xIDBsLTI1My41IDI1My41LTEwNi4yLTEwNi4yYy01LjEtNS4xLTEyLTUuMS0xNy4xIDBsLTU2LjUgNTYuNWMtNS4xIDUuMS01LjEgMTIgMCAxNy4xbDExNC43IDExNC43IDU2LjUgNTYuNWM1LjEgNS4xIDEyIDUuMSAxNy4xIDBsNTYuNS01Ni41IDI2Mi0yNjJjNS4yLTMuNCA1LjItMTIgLjEtMTcuMXpNMTYzNC4xIDE2OS40bC0zNy43LTM3LjdjLTMuNC0zLjQtOC42LTMuNC0xMiAwbC0xNjkuNSAxNjkuNS03MC4yLTcxLjljLTMuNC0zLjQtOC42LTMuNC0xMiAwbC0zNy43IDM3LjdjLTMuNCAzLjQtMy40IDguNiAwIDEybDc3LjEgNzcuMSAzNy43IDM3LjdjMy40IDMuNCA4LjYgMy40IDEyIDBsMzcuNy0zNy43IDE3NC43LTE3Ni40YzEuNi0xLjcgMS42LTYuOS0uMS0xMC4zeiIvPjwvc3ZnPg==') no-repeat center center;
-ms-transition: all .2s ease;
-webkit-transition: all .2s ease;
transition: all .3s ease;
-ms-transform: scale(0);
-webkit-transform: scale(0);
transform: scale(0);
opacity: 0;
}
input[type=checkbox]:checked + label::after {
-ms-transform: scale(1);
-webkit-transform: scale(1);
transform: scale(1);
opacity: 1;
}
<fieldset>
<input id="ham" type="checkbox" name="toppings" value="ham">
<label for="ham">Yay or Nay</label>
</fieldset>
A:
You could do this, and adjust the left, top and font-size values as needed.
input[type=checkbox] + label::after {
font-family: FontAwesome;
content: '\f00c';
}
input[type=checkbox] {
opacity: 0;
float:left;
}
input[type=checkbox] + label {
margin: 0 0 0 20px;
position: relative;
cursor: pointer;
font-size: 16px;
font-family: monospace;
float: left;
}
input[type=checkbox] + label ~ label {
margin: 0 0 0 40px;
}
input[type=checkbox] + label::before {
content: ' ';
position: absolute;
left: -35px;
top: -3px;
width: 25px;
height: 25px;
display: block;
background: white;
border: 1px solid #A9A9A9;
}
input[type=checkbox] + label::after {
font-family: FontAwesome;
content: '\f00c';
position: absolute;
left: -35px;
top: -3px;
width: 23px;
height: 23px;
display: block;
z-index: 1;
-ms-transition: all .2s ease;
-webkit-transition: all .2s ease;
transition: all .3s ease;
-ms-transform: scale(0);
-webkit-transform: scale(0);
transform: scale(0);
opacity: 0;
}
input[type=checkbox]:checked + label::after {
-ms-transform: scale(1);
-webkit-transform: scale(1);
transform: scale(1);
opacity: 1;
}
<link rel="stylesheet" href="//maxcdn.bootstrapcdn.com/font-awesome/4.3.0/css/font-awesome.min.css">
<form method="post" action="/">
<fieldset>
<input id="ham" type="checkbox" name="toppings" value="ham">
<label for="ham">Yay or Nay</label>
</fieldset>
</form>
Follow this link for how to include Font Awesome in your project.
Q:
Can I add attributes to class methods in Python?
I have a class like this:
class A:
def __init__(self):
self.size=0
def change_size(self,new):
self.size=new
I want to add an attribute to the change_size method to say what it changes - i.e. so that
A(blah)
blah.change_size.modifies
returns
'size'
is this possible? I have tried:
class A:
def __init__(self):
self.size=0
def change_size(self,new):
self.change_size.modifies = 'size'
self.size=new
nope
class A:
def __init__(self):
self.size=0
self.change_size.modifies = 'size'
def change_size(self,new):
self.size=new
nope
class A:
def __init__(self):
self.size=0
def change_size(self,new,modifies='size'):
self.size=new
none of which seem to work.
A:
That's simple enough. It goes basically the same way you'd add attributes to any other function:
class A:
def __init__(self):
self.size=0
def change_size(self,new):
self.size=new
change_size.modifies = 'size'
print(A.change_size.modifies) # prints size
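For what it's worth, the attribute also survives Python's bound-method wrapper, since attribute reads on a method fall through to the underlying function. So you can read it through an instance too (a quick sketch):

```python
class A:
    def __init__(self):
        self.size = 0

    def change_size(self, new):
        self.size = new
    # attach the attribute to the function object inside the class body
    change_size.modifies = 'size'

a = A()
a.change_size(5)
print(a.change_size.modifies)  # reads 'size' via the bound method
print(a.size)                  # 5
```

Note the attribute is read-only through the wrapper: assigning a.change_size.modifies = ... raises AttributeError; set it on A.change_size (or in the class body, as above) instead.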
Q:
Concatenating strings retrieved from form $_Post in php
I am posting a string through an HTML form with the following code:
<html>
<body>
<form name="form" enctype="multipart/form-data" action="test.php" method="post">
<input name="message"
type="text" value=""><br/><br/>
<input type="submit" value="Upload"/><br/>
</form>
</body>
</html>
The code for test.php is the following:
<?php
$string1 = '$_POST["message"]';
$og_url = "http://thepropagator.com/facebook/testapp/issue.php?name=".$string1;
echo $og_url;
?>
The problem I'm having is that the posted string "$string1" does not seem to be showing at the end of the URL "http://thepropagator.com/facebook/testapp/issue.php?name=" that I am trying to concatenate it with. Can anyone please explain what I'm doing wrong?
A:
I think you want $string1 = $_POST['message']; with no quotes around it: PHP does not interpolate variables inside single-quoted strings, so you were assigning the literal text. As written, I'd expect your code to come up with the http://thepropagator.com/facebook/testapp/issue.php?name=$_POST["message"] url.
Q:
jquery - count li in each of multiple ul and return value
Using jQuery, how do I go through each nested ul.item, count the number of LI, and return that size/length in a parent span.someClass on that page?
<ul>
<li><span class="someClass">3</span>
<ul class="item" style="display:none">
<li>count me</li>
<li>count me</li>
<li>count me</li>
</ul>
</li>
<li><span class="someClass">1</span>
<ul class="item" style="display:none">
<li>count me</li>
</ul>
</li>
<li><span class="someClass">2</span>
<ul class="item" style="display:none">
<li>count me</li>
<li>count me</li>
</ul>
</li>
</ul>
A:
$("ul.item").each(function() {
// reusabilty
var context = $(this);
// count and populate
context.prev(".someClass").text(context.children().length);
});
Omit .someClass in the prev() call if these span elements are always immediately before your ul elements; in that case the filter in prev() is superfluous.
A:
Try this,
Live Demo
$('.someClass').each(function(){
$(this).text($(this).next('ul').find('li').length);
})
A:
<html>
<head>
<script type="text/javascript" src="jquery.js"></script>
<script type="text/javascript">
$(function() {
var count = 0;
$('ul').each(function(){
if(count != 0){
var len = $(this).find('li').length;
$(this).parent().find('.someClass').html(len);
}
count++;
})
});
</script>
</head>
<body>
<ul>
<li><span class="someClass"></span>
<ul class="item" style="display:none">
<li>count me</li>
<li>count me</li>
<li>count me</li>
</ul>
</li>
<li><span class="someClass"></span>
<ul class="item" style="display:none">
<li>count me</li>
</ul>
</li>
<li><span class="someClass"></span>
<ul class="item" style="display:none">
<li>count me</li>
<li>count me</li>
</ul>
</li>
</ul>
</body>
</html>
Q:
postgresql: any on subquery returning array
I have a user_lists table that contains a user_list column of type integer[].
I'm trying to do this query, which seems basic enough:
select id, alias from users where id = ANY(select user_list from user_lists where id = 2 limit 1);
It gives this error:
ERROR: operator does not exist: integer = integer[]
HINT: No operator matches the given name and argument type(s). You might need to add explicit type casts.
I'm using postgres 8.3.11. Upgrading is not an option.
What am I missing?
A:
Try this instead:
select id, alias from users
where (select user_list from user_lists where id = 2 limit 1)
@> ARRAY[id];
Q:
Third Reich eagle on a pots & pans shop in Osaka - why?
During my visit to Osaka I came across a shop's door with this Third Reich eagle:
If I can remember correctly, it was most likely supposed to be a plain pots & pans shop.
Can anyone explain why would that eagle exist (and be kept) on the shop's door and what do the inscriptions say?
I only found those references to this door on the Internet, but they don't seem to offer much explanation:
http://osakarchit.exblog.jp/17717326/
https://twitter.com/kemta/status/209872323573071872/photo/1
https://www.flickr.com/photos/cthulhuswolves/15603605168
A:
In Japan and other Asian countries there is no stigma on Nazi culture, and until the last 25 years or so, Nazi culture was considered in many Asian communities to be fascinating or chic. Therefore, especially in older places, like the shop in your photo, Nazi symbology can sometimes be found as a marketing element. American sociologists call it "Nazi chic". Time magazine published an article about it in 2000. This fad has declined over the years and it is rare to see it in any new marketing campaigns or advertising.
Also, note that the swastika is a commonly used symbol in Japan because it is associated with Buddhism. When used like this it has no Nazi connotation to the Japanese, but just means Buddhist.
A:
That first link has some interesting information / speculation.
The first half of the top line 世界の冠たる can be taken as a translation of the German phrase "Über alles in der Welt", which has Nazi connotations. The company was founded in 1918, which invites the possibility that the logo was adopted during WWII.
All of that indicates that it is clearly more than "Nazi chic" like you would see hawked to edgy teenagers. On the other hand I wouldn't assume that the one-time owner was a Jew-hating ideologue; more likely he admired German culture, adopted the logo because that represented Germany at the time, and saw no reason to change it even after the war.
The second half of the first line says "Kawanishi's products". The text below the logo is just the company name, what they are, address, and phone. The address includes a district that was abolished in 1989, so the sign was made before then.
Q:
could vs might, different or same
Is the difference between could and might in a) much more significant than in b)?
a) You need to discuss with them how they could vs might help you.
b) You could vs might try calling the help desk.
Can we put might=would perhaps in a)?
A:
a) You need to discuss with them how they could help you.
This would imply the question, "Do they have the ability or capacity to help you?"
a) You need to discuss with them how they might help you.
This implies that they have the ability, but the question is, "In what ways may they help you?"
b) You could/might try calling the help desk.
In this instance, the implied meanings are similar enough to be almost identical; the difference, however would be in the tone--"could" implying a simple suggestion or even a mild command, while "might" is a polite suggestion with no implied command.
Can we put might=would perhaps in a)?
I wouldn't. The phrase "would perhaps" is unnecessarily wordy and weak.
Q:
Unloading ViewControllers from Apple's PageControl Example + UINavigationController's Relationship to its RootViewControllers
So I modified Apple's PageControl example to dynamically load various navigation controllers (along with their root view controllers) into the scroll view. I also added a technique that attempts to unload a navigation controller when it's no longer needed. I've only been at ObjC for a little over a month, so I'm not sure if I'm doing the unloading correctly. Please see my code below, followed by my questions.
First I create a mutable array and fill it with nulls, just like Apple does:
// Create dummy array for viewControllers array, fill it with nulls, and assign to viewControllers
NSMutableArray *array = [[NSMutableArray alloc] init];
for (unsigned i = 0; i < kNumberOfPages; i++)
{
[array addObject:[NSNull null]];
}
self.viewControllers = array;
[array release];
...Later, I fill the array with UINavigationController objects like so (this is just partial code, please excuse the missing parts...the main idea is that I alloc a couple of things, assign them and then release):
id controller = [[classForViewController alloc] initWithNibName:NSStringFromClass(classForViewController) bundle:nil];
navController = [[UINavigationController alloc] initWithRootViewController:controller];
[controller release];
[self.viewControllers replaceObjectAtIndex:page withObject:navController];
[navController release];
...Finally, if a page doesn't need to be loaded anymore I do this:
[self.viewControllers replaceObjectAtIndex:i withObject:[NSNull null]];
Questions:
My understanding is that once I replace the navigation controller in my viewControllers array with null, the array releases the navigation controller. Thus the navigation controller's retain count hits zero and it no longer takes up memory. Is this correct?
What about the root view controller inside the navigation controller? Do I need to do anything with it or does it get released automatically once the navigation controller's retain count hit zero?
Thanks!
A:
Yes. Any object put into a collection is sent a retain message. Likewise any object removed from a collection is sent a release message, the cause of the removal is irrelevant.
Yes, all objects will release all the objects it owns when they are released.
This all boils down to the simple principle of ownership that Cocoa defines:
You own the object if you received it as return value by calling a method that:
Is named alloc or new.
Contains the word copy, such as copy and mutableCopy.
You own the object if you call retain.
You may only call release and autorelease on objects you own.
You must release all owned objects in your dealloc methods.
There is just one exception; delegates are never owned. This is to avoid circular references and the memory leaks they cause.
As a side effect this also means that when you yourself are implementing a method, you must return an auto released object unless you are implementing new, or a method with copy in it's name. Objects returned as out arguments are always autoreleased.
Follow this strictly and Objective-C can be treated as if it is garbage collected 95% of the time.
Q:
Wrong value with double.Parse(string)
I'm trying to convert a string to a double value in .Net 3.5. Quite easy so far with
double.Parse(value);
My problem is that values with exponential tags are not right converted.
Example:
double value = double.Parse("8.493151E-2");
The value should be 0.08493151, right?
But it isn't!
The value is 84931.51!!!
How can that be?
I'm totally confused!
I read the reference in the msdn library and it confirms that values like "8.493151E-2" are supported. I also tried overloads of double.Parse() with NumberStyles, but no success.
Please help!
A:
It works for me:
double.Parse("8.493151E-2");
0.08493151
You're probably running in a locale that uses , for the decimal separator and . for the thousands separator.
Therefore, it's being treated as 8,493,151E-2, which is in fact equivalent to 84,931.51.
Change it to
double value = double.Parse("8.493151E-2", CultureInfo.InvariantCulture);
Q:
How to explode two array fields to multiple columns in Spark?
I was referring to How to explode an array into multiple columns in Spark for a similar need.
I am able to use that code for a single array field dataframe, however, when I have a multiple array fields dataframe, I'm not able to convert both to multiple columns.
For example,
dataframe1
+--------------------+----------------------------------+----------------------------------+
| f1 |f2 |f3 |
+--------------------+----------------------------------+----------------------------------+
|12 | null| null|
|13 | null| null|
|14 | null| null|
|15 | null| null|
|16 | null| null|
|17 | [[Hi, 256, Hello]]| [[a, b], [a, b, c],[a, b]]|
|18 | null| null|
|19 | null| null|
+--------------------+----------------------------------+----------------------------------+
I want to convert it to below dataframe:
dataframe2
+--------------------+----------------------------------+----------------------------------+----------------------------------+
| f1 |f2_0 |f3_0 |f3_1 |
+--------------------+----------------------------------+----------------------------------+----------------------------------+
|12 | null| null| null|
|13 | null| null| null|
|14 | null| null| null|
|15 | null| null| null|
|16 | null| null| null|
|17 | [Hi, 256, Hello]| [a, b]| [a, b, c]|
|18 | null| null| null|
|19 | null| null| null|
+--------------------+----------------------------------+----------------------------------+----------------------------------+
I tried with the following code:
val dataframe2 = dataframe1.select(
col("f1") +: (0 until 2).map(i => col("f2")(i).alias(s"f2_$i")): _* +: (0 until 2).map(i => col("f3")(i).alias(s"f3_$i")): _*
)
But it is throwing an error saying it is expecting a ")" after the first "_*".
A:
+: is used in Scala to add a single element to a list. It can't be used to concatenate two lists together. Instead, you can use ++ as follows:
val cols = Seq(col("f1")) ++
(0 until 1).map(i => col("f2")(i).alias(s"f2_$i")) ++
(0 until 2).map(i => col("f3")(i).alias(s"f3_$i"))
val dataframe2 = dataframe1.select(cols: _*)
Note that to use this approach, you need to know the number of elements of the lists in advance. Above, I changed 2 to 1 for the f2 column.
Q:
WPF, possibly ribbon control: Adding icon to title bar
I believe that the little icon in the title bar in the image below is added using a ribbon control? I've been googling for the last while and I'm not getting much help.
There's a WPF ribbon control library from Microsoft, but the same from that does not set anything in the title bar.
I've also found a microsoft shell library which integrates some windows 7 bits into wpf and allows you to fiddle with the chrome. It hasn't seen updates in a year, so not sure if it's current yet.
Any ideas?
Related question Does it only work on Windows 7? I don't have any XP machines to see what Office 2010 looks like. I'd prefer a XP/Windows 7 WPF solution if possible.
A:
That area is known as the Quick Access ToolBar as described here: http://msdn.microsoft.com/en-us/library/dd940502(v=vs.85).aspx
I'm not sure on the specifics in regards to the Ribbon control library that you referenced, but we are using one from Actipro and it works great and both xp and win7 are supported.
A:
That is called the Quick Access Toolbar and there is an example at the end of this MSDN page.
It should work the same on both XP and Windows 7 (and Vista for that matter).
Q:
C programming-temperature conversion issue
Question from book:
Write a program that does temperature conversion from Fahrenheit to Celsius, your program should :
prompt the user for which type of conversion they want to do.
prompt the user for the temperature they want to convert.
I am getting incorrect output.I'm not sure where i'm going wrong.I'm new to c language. Any help is greatly appreciated.
Here is my code:
#include <stdio.h>
#include <stdlib.h>
int main()
{int f, c, f_or_c;
printf("Would you like to convert Fahrenheit (1) or Celsius (2)?\n");
scanf("%d", &f_or_c);
if(f_or_c==1)
{
printf("Enter the temperature in Fahrenheit to convert?\n");
scanf("%d", &c);
f = 1.8*c + 32.0;
printf("Celsius of %d is %d degrees.\n");
}
if(f_or_c==2)
{
printf("Enter the temperature in Celsius to convert?\n");
scanf("%d", &f);
c = (f-32)*5/9;
printf("Fahrenheit of %d is %d degrees.\n");
}
return 0;
}
A:
My guess is you just aren't printing the values out, but everything else looks pretty good.
printf("Fahrenheit of %d is %d degrees.\n");
You're not printing any variables.
This might work for you
printf("Fahrenheit of %d is %d degrees.\n", f, c);
You can take a look at general usage of printf here
http://www.cplusplus.com/reference/cstdio/printf/
Q:
MySQL query optimization for searching records
I am learning to use MySQL, so I have run into some problems.
I have three tables: users, movie, and movie watching history (moviewh).
the structure of movie table is:
movie_id[key] movie_title
and users is:
user_id[key] user_name
and moviewh is:
user_id[key] movie_id[key] watching_date[key]
and my sql query is:
EXPLAIN
SELECT m.movie_id,m.movie_name_cn FROM movie AS m
LEFT JOIN moviewh AS mwh
ON m.movie_id = mwh.movie_id
WHERE date_format(mwh.watching_date,'%Y-%m-%d') = '2010-11-01' AND mwh.user_id = 1
So, how can I optimize the table and query?
A:
Why are you doing a left join if you are only interested in seeing what movies a user has watched on a particular date?
SELECT m.movie_id,m.movie_name_cn
FROM movie AS m, moviewh AS mwh
WHERE m.movie_id = mwh.movie_id
AND date_format(mwh.watching_date,'%Y-%m-%d') = '2010-11-01'
AND mwh.user_id = 1
assuming of course that a user can't have watched movies that don't exist in your db ...
Q:
What white point temperature should I set my LCD monitor to?
I've got the Xrite Eye One Display 2, and tried the advanced calibration today for the first time. It asked what white point I wanted, and had a default of 6500K. I didn't realize that was up for discussion! I would have assumed the daylight temperature, around 5500K, would be desired.
What temperature do I want my whites to be? Why can't white just be white?!
A:
This can be a complex answer, and quite often, the outcome is that it depends what you print on, meaning you might need to change it or recalibrate often.
On White Point
White point from the perspective of the human eye is a very subjective thing, as the eye automatically "recalibrates" itself to differing white points depending on the kind of light that dominates a scene. To start the discussion, lets start in the middle: Sunlight has a white-point of about 5500k (although it tends to range in reality from between 5000k and 6000k). As you noted, most screens these days are calibrated by default to a white point of 6500k, which appears to be more white than lower values. Some screens often come with a built-in range of settings, such as 5000k, 5500k, 6500k, and some even as high as 7500k and 9300k or around there (which have a bluish tinge to them.)
Why Set a White Point?
The key reason why we set a white point is not so that it appears "white" to our eyes. The main reason we set a white point is to match the "white" on screen to the "white" of the material and environment in which your photos will be viewed. There is no single correct, standard viewing environment, and depending on how you normally publish your images, the white-point you select may be different than other photographers. A couple of the most common viewing mediums are on a computer screen (i.e. you publish your work to Flickr, 1x.com, etc.) and print.
White Point for Screen Display
If you really don't care much about print, and only really exhibit your work online, you might want to stick with a white point of 6500k. That is a very common white point, and the default for many computer screens, particularly lower-end ones. The color profile sRGB, a standard and very widely used color gamut, is also aligned to a 6500k white point. Most images saved for viewing on a computer screen should be saved using the sRGB color profile when possible (using a wider gamut, such as Adobe RGB, may be necessary if your image has very vibrant colors, particularly greens, but also reds and violets, as sRGB is a more limited gamut. Adobe RGB also uses a 6500k white point.) Using 6500k will mean that what you see with your calibrated display will generally be very similar to what your viewers see when they browse your work online. There is no guarantee of this, of course, as screens vary in minute to major ways for a very wide variety of reasons, but it is a reasonable baseline.
White Point for Prints
Things get more complicated when you involve print. Papers tend to have warmer white points much of the time, so the common default of 6500k makes white on a computer screen look quite a bit whiter than it does on print. Papers also come in an extremely wide variety, from very very warm (4800k or sometimes even warmer), to very bright, almost blue white (7500k or cooler.) This is where screen calibration is really important, as having your screen matched as closely to the papers you print on will make it easier to generate properly calibrated and color balanced prints.
When it comes to paper, the story is extremely complex. Paper is a very old enterprise that extends back over 600 years. There are some general buckets that you can put papers into, however: fine art paper, canvas, and coated/brightened. For me, and for many photographers, there is nothing quite like a good fine art paper. These papers come in a huge variety, from many material sources, including the common wood, but also uncommon sources like cotton, bamboo, and sometimes even mixed blends that may include animal fibers. The tones and textures of fine art papers are amazing, and can have a huge impact on the appearance and appeal of a final print. Fine art papers tend to be warmer, and its best to calibrate your display to a white point of 5000k. Canvas is another type of printable paper these days. There are also a variety of canvas papers, however much less variety than fine art papers. Canvas is also a warmer type of media, and can range from 5000k to 5500k. The third major bucket of paper includes coated papers. Many fine art papers are uncoated, non-brightened, letting the natural fibers produce the tone and texture of the paper. Coated papers cover the natural fiber base with one or more coatings to provide a smoother surface, surfaces that are more receptive and ideal for ink jet printing (or other types of printing), and protected from the elements allowing a longer-lasting print. Coated papers often also include optical brighteners to make the white point of the paper brighter and "whiter". Such papers often have much higher white points than natural papers, up to as high as 7500k or so. A white point of 7500k is extremely bright, bordering on blueish. Papers with optical brighteners are sometimes difficult to calibrate for, as the brighteners often depend on the type of light they are viewed with. 
Many brighteners use UV reactive components, and produce their brilliant white by reflecting UV rays from natural sunlight (or artificial gas lighting like flourescent tubes.) As such, their white point can change depending on the lighting.
Choosing a White Point
So, what should your white point be when you calibrate your display? It depends, and it may change frequently if you publish to a variety of media types. I myself use the DataColor Spyder3 system to calibrate all of my hardware. I normally calibrate to a white point of 5000k, for a few reasons. First, most of my work I print on my Canon 9500 II, on fine art papers. I am a big fan of Hahnemuhle, Moab, and a few others. All of the papers I use, such as Photo Rag and Canvas, have warmer 5000k white points. I also publish a lot of my work online, and every so often I recalibrate to 6500k to preview my images online and see how they look. (With the Spyder3 Pro, it is very easy and very quick to change white point and do a short recalibration that takes about 5 minutes.)
Another reason to use 5000k as your base white point is if you use Photoshop. Adobe Photoshop has its own color management system, and internally by default it is calibrated to a white point of 5000k (often abbreviated D50, along with D55/5500k, D65/6500k, etc.) By calibrating your display to 5000k, you sync your hardware to Adobe Photoshop's default settings, which makes it a little easier to convert and/or apply color profiles and see accurate results.
Finally, the light you view your prints under has a direct effect on what "white" looks like. If viewing prints in sunlight, they will generally be lit by "normal" white light, at a temperature of around 5500k. Artificial lighting can vary. Common light bulbs range in temperature from about 2500k to 4200k, which is quite warm. Flourescent lighting, which is less common in homes, is hard to nail down. Often cooler, 6500k to 7200k or so, they also output greener or violet lighting. Sometimes they come in warmer variants that are more similar to standard bulbs. Calibrating to a warmer white point helps balance out differences between what you see on screen, and what you see in print.
More to the Story
The calibration story does not stop at white point. If you really want to have accurate color calibration throughout your entire workflow, there are additional factors, such as luminance (how bright your screen is), gamma, environment lighting, etc. If you have additional questions about calibration, feel free to ask other questions, and I'll see if I can provide a useful answer.
A:
The ambient light temperature will affect how you perceive colours on your monitor, since, unlike a reflective medium, the colour of light falling on it has no effect on the colour it displays. The colours of everything else you see around your monitor, and when you look away, will differ though, and your eyes will adjust to the ambient colour. Hence the need to adjust colour balance.
I'm not sure about the exact values you quote -- I think 6,500K is pretty close to indirect sunlight, so it seems a reasonable default. I doubt many people use their monitors in direct sunlight!
A:
You should use D65, not 6500K. The difference is about 3% in the green channel, and it's noticeable. D65 has a "CCT" (correlated color temperature) of just under 6504K.
Reason is: the sRGB standard uses D65 and so do most other common display white points.
Some more information on the difference: 6500K is an "ideal" Planckian radiator, and D65 is based on sunlight, so it has dips for oxygen and water and stuff in the atmosphere. People confuse the two frequently. But once again, it's the standard and the math everyone does to "convert" colors to sRGB (e.g., what your camera does to display colors on the screen) is a conversion to D65.
Your eyes can see the largest numbers of colors with the most contrast at this whitepoint, so that's why it was chosen as the standard. Our visual system finds it particularly difficult to distinguish shades of blue, for instance, so a bluer color temperature gives us a better discrimination of blue colors than does D50.
Q:
Creating iPhone-like badge notification on Android
ALL,
Everywhere I look I see a reply on how to make it work for application icon. My situation is a little different.
In my program I have a ListView which displays images. Every image is associated with the object underneath.
What I want to do is create a design like in the iPhone badge notification, but for all those images in the view.
Searching Google, I found this link. The problem is, it does not work. I'm testing on an LG Android phone running Android 2.2, and all I see is a small red dot which is not even located on the image itself, but a couple of pixels higher and to the left.
Here's my code:
<?xml version="1.0" encoding="utf-8"?>
<RelativeLayout xmlns:android="http://schemas.android.com/apk/res/android"
android:layout_width="match_parent"
android:layout_height="match_parent" >
<ImageView
android:id="@+id/user_image"
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:contentDescription="@string/user_image_description"
android:src="@drawable/placeholder"
/>
<TextView
android:id="@+id/user_messages"
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:layout_alignLeft="@id/icon"
android:layout_alignBottom="@id/icon"
android:background="@color/red"
android:textColor="@color/silver"
android:layout_marginLeft="-4dp"
android:layout_marginBottom="-4dp"
android:textSize="12sp"
android:textStyle="bold"
android:gravity="center"
/>
</RelativeLayout>
Can someone please look?
I tried to change the margins as well as text size, but it didn't change anything.
Thank you.
A:
Try this way
<?xml version="1.0" encoding="utf-8"?>
<RelativeLayout xmlns:android="http://schemas.android.com/apk/res/android"
android:id="@+id/main_widget"
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:layout_gravity="center_horizontal"
android:layout_marginTop="20dip"
android:focusable="true" >
<ImageView
android:id="@+id/icon"
android:layout_width="60dip"
android:layout_height="60dip"
android:layout_marginTop="8dp"
android:background="@drawable/logo"
android:contentDescription="image"
android:scaleType="center" />
<TextView
android:id="@+id/title"
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:layout_below="@+id/icon"
android:gravity="center"
android:paddingLeft="3dp"
android:paddingTop="10dp"
android:shadowColor="#000000"
android:shadowDx="1"
android:shadowDy="1"
android:shadowRadius="1.5"
android:text="@string/app_name"
android:textColor="#FFF" />
<TextView
android:id="@+id/txt_count"
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:layout_marginLeft="-10dip"
android:layout_toRightOf="@+id/icon"
android:background="@drawable/badge_count2"
android:contentDescription="badge"
android:gravity="center"
android:text="1"
android:textColor="@color/White"
android:textStyle="bold"
android:visibility="visible" />
</RelativeLayout>
Create drawable/badge_count2.xml file like below
<shape xmlns:android="http://schemas.android.com/apk/res/android"
android:shape="rectangle" >
<solid android:color="@color/red" >
</solid>
<stroke
android:width="2dp"
android:color="#FFFFFF" >
</stroke>
<padding
android:bottom="2dp"
android:left="7dp"
android:right="7dp"
android:top="3dp" />
<corners android:radius="10dp" >
</corners>
Output:
Q:
Firefox add-ons
What Firefox add-ons do you use that are useful for programmers?
A:
I guess it's silly to mention Firebug -- doubt any of us could live without it. Other than that I use the following (only listing dev-related):
Console2: next-generation error console
DOM inspector: as the title might indicate, allows you to browse the DOM
Edit Cookies: change cookies on the fly
Execute JS: ad-hoc Javascript execution
IE Tab: render a page in IE
Inspect This: brings the selected object into the DOM inspector
JSView: display linked javascript and CSS
LORI (Life of Request Info): shows how long it takes to render a page
Measure IT: a popup ruler.
URL Params: shows GET and POST variables
Web Developer: a myriad of tools for the web developer
A:
Here are mine (developer centric):
FireBug - a myriad of productivity enhancing tools, includes javascript debugger, DOM inspector, allows you to edit the CSS/HTML on the fly which is highly valuable for troubleshooing layout and display problems.
Web Developer - again another great developer productivity tool. I mostly use it for quickly validating pages, disabling javascript (yes I disable javascript sometimes, don't you?), viewing cookies, etc.
Tamper Data - lets you tamper with http headers, form values, cookies, etc. prior to posting back to a page, or getting a page. Incredibly valuable for poking and prodding your pages, and seeing how your web app responds when used with slightly malicious intent.
JavaScript Debugger - has a few more features than javascript debugger provided by firebug. Although I must admit, I sparingly use this one since firebug has largely won me over.
Live HTTP Headers - invaluable for troubleshooting, use it frequently. Lets you spy on all HTTP headers communicated back and forth between client and server. It has helped me track down nefarious problems, especially when debugging issues when deploying your web app between environments.
Header Spy - nice addon for the geeky types, shows you the web server and platform a web site runs on in the status bar.
MeasureIt - I don't use this all too frequently, but I've still found it valuable from time to time.
ColorZilla - again, not something I use all that frequently, but when I need it, I need it. Valuable when you want to know a color and you don't want to dig through a CSS file, or open up a graphics editing app to get a color embedded in some image.
Add N Edit Cookies - this has been a great debugging tool in web farms where the load balancer writes a cookie, and uses the cookie value to keep your session "sticky". It allowed me to switch at will between servers to track down problems on specific machine. Also a good tool if you want to try to mess with a site that uses cookies to track your login status/account, and you want to see how your code responds to malformed or hacked info.
Yellowpipe Lynx Viewer Tool - yeah, I know what you're thinking: Lynx, who needs it, it's so 1994. But if you are developing a site that needs to take web accessibility into account (meaning accessible to users with visual impairments who use screen readers), or if you need to get a sense of how a web spider/indexer "sees" your site, this tool is invaluable. Granted, you could always just go out and grab Lynx for yourself; here's the Windows XP port that I use.
I've got a handful of other addons that I've used from time to time that I'll just quickly mention: FireFTP (the one I installed wasn't stable and I've not tried a newer release), Html Validator (also found this one unstable, at least back when I installed it about a year ago), IE Tab (I usually just have both IE and FireFox open concurrently, but that is just me, I know many others that find this addon useful).
A:
I'd also recommend the Web Developer extension by Chris Pederick.
Q:
Bootstrap navbar links don't work on mobile
Could anyone help me with navbar links in Bootstrap 3?
http://www.mebleroberto.co.uk/fabrics
Why are the links working fine in desktop view, but not in mobile view (nothing happens on click)?
A:
I inspected your click events; it seems the handler doesn't retrieve the nav-section, and the event.preventDefault(); and return false; calls stop the click event.
Besides, the homepage has links like <a href="#" data-nav-section="meble">FURNITURE</a> and also sections that they scroll to.
My conclusion is that your mobile menu should not conflict with the home menu; fix this by either changing the id #gtco-offcanvas or disabling this part of the code on the other pages.
Replacing the class linki_menu_b with external on the menu of other pages except the homepage seemed to work. Hope this helps.
Q:
Any tools to export .NET assembly members' metadata to an Excel sheet?
Is there a tool that can reflect over a .NET assembly and get its metadata, viz. types, type members, access modifier details (comments?), into an Excel sheet (or sheets)?
A:
Thanks for taking the time to answer this. However, the following approach met my requirement:
Enabled XML documentation for the project
Imported the XML documentation into Excel
This not only brought in all types, methods with signatures, properties, and fields, but also all the code comments that I had written for them.
Q:
Does python list(set(a)) change its order every time?
I have a list of 5 million string elements, which are stored as a pickle object.
a = ['https://en.wikipedia.org/wiki/Data_structure','https://en.wikipedia.org/wiki/Data_mining','https://en.wikipedia.org/wiki/Statistical_learning_theory','https://en.wikipedia.org/wiki/Machine_learning','https://en.wikipedia.org/wiki/Computer_science','https://en.wikipedia.org/wiki/Information_theory','https://en.wikipedia.org/wiki/Statistics','https://en.wikipedia.org/wiki/Mathematics','https://en.wikipedia.org/wiki/Signal_processing','https://en.wikipedia.org/wiki/Sorting_algorithm','https://en.wikipedia.org/wiki/Data_structure','https://en.wikipedia.org/wiki/Quicksort','https://en.wikipedia.org/wiki/Merge_sort','https://en.wikipedia.org/wiki/Heapsort','https://en.wikipedia.org/wiki/Insertion_sort','https://en.wikipedia.org/wiki/Introsort','https://en.wikipedia.org/wiki/Selection_sort','https://en.wikipedia.org/wiki/Timsort','https://en.wikipedia.org/wiki/Cubesort','https://en.wikipedia.org/wiki/Shellsort']
To remove duplicates, I use set(a), then make it a list again through list(set(a)).
My question is:
Even if I restart python, and read the list from the pickle file, will the order of list(set(a)) be the same every time?
I'm eager to know how this hash -> list ordering works.
I tested with a small dataset and it seems to have a consistent ordering.
In [50]: a = ['x','y','z','k']
In [51]: a
['x', 'y', 'z', 'k']
In [52]: list(set(a))
['y', 'x', 'k', 'z']
In [53]: b=list(set(a))
In [54]: list(set(b))
['y', 'x', 'k', 'z']
In [55]: del b
In [56]: b=list(set(a))
In [57]: b
['y', 'x', 'k', 'z']
A:
I would suggest an auxiliary set() to ensure uniqueness when adding items to the list, thus preserving the order of your list(), and not storing the set() per se.
First, load your list and create a set with the contents
Before adding items to your list, check that they are not in the set (a much faster search using "in" on the set rather than the list, especially if there are many elements)
Pickle your list, the order will be exactly the one you want
Drawback: takes twice as much memory as handling only a set()
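A minimal sketch of this auxiliary-set approach (the function and variable names are my own, not from the question):

```python
def dedup_keep_order(items):
    """Remove duplicates while preserving first-seen order.

    Membership tests go through a set, so lookups are O(1) on average
    even with millions of elements.
    """
    seen = set()     # auxiliary set used only for fast "in" checks
    result = []      # the ordered, duplicate-free list you would pickle
    for item in items:
        if item not in seen:
            seen.add(item)
            result.append(item)
    return result

urls = ["a", "b", "a", "c", "b", "a"]
print(dedup_keep_order(urls))  # ['a', 'b', 'c']
```

Since the order here comes from the original list rather than from set iteration, it is stable across pickling and interpreter restarts. For strings in particular, list(set(a)) is not stable across runs, because Python 3.3+ randomizes string hashing per process (see PYTHONHASHSEED).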
Q:
How can I turn off DwmExtendFrameIntoClientArea?
After I've called DwmExtendFrameIntoClientArea on a window, how can I turn it off again while remaining in Aero mode?
I've tried to call DwmExtendFrameIntoClientArea with all margins set to 0, which seems to work partially except that the background of the client area of my window is all black and exhibits redrawing artifacts.
I'm using Qt, and I call setAttribute(Qt::WA_TranslucentBackground, false) on my window after doing this, but it seems to have no effect. How can I get the client area to be redrawn correctly after resetting the window frame?
A:
Simple mistake - I also needed to set Qt::WA_NoSystemBackground to false so Qt would actually draw the window...
So, the procedure to turn off an extended frame is:
Call DwmExtendFrameIntoClientArea (misleading, right?) with all margins set to 0
Set WA_TranslucentBackground = false on the window (or non-Qt equivalent)
Set WA_NoSystemBackground = false on the window (or non-Qt equivalent)
Q:
Generating an Array of Vectors
I'm working a lot with nested lists of $(x,y)$ coordinates (of a FEM-like mesh, but that's not important), like so:
nodes = { {{0,0}, {0,1}, {0,2}}, {{1,0}, {1,1}, {1,2}} }
This has the nice convenient structure that nodes[[i,j]] gives me the $(x,y)$ coordinates of my "$(i,j)$th" node, so that nodes[[i,j]][[1]] is my $x$ coordinate and nodes[[i,j]][[2]] is my $y$ coordinate. This works nicely for my purposes.
How would I construct a variable-size array of this kind using Array[] et al.? If I use, for instance,
nodes = Array[Array[X, 2], {4, 4}]
I get a 4-by-4 array where nodes[[i,j]] is of the form {X[1],X[2]}[1,1]. This is close to what I want, and I could change a bunch of my code to work with this, but I'd rather not - I'd love to have something where nodes[[i,j]] looks like this:
{X[i,j][1], X[i,j][2]}
The reason I'd like to do this is I have defined some functions that take an array of nodes as input and computes some quantity, i.e. F[nodes_]:= (*something*), and I'd like to compute derivatives of this function with respect to each of its variables, i.e. $F_{x_i}$ or $F_{y_j}$.
I've tried various things with no success, and I've also tried looking for a similar construct in the documentation and here on SE, but I didn't see anything. Any ideas?
Edit: I had too many square braces in some of my code before. Fixed.
A:
A simpler way to give an array whose $(i,j)$ element is {X[i,j][1], X[i,j][2]} is to just use Table:
m = 3;
n = 4;
Table[X[i, j][k], {i, m}, {j, n}, {k, 2}]
Output is:
{{{X[1, 1][1], X[1, 1][2]}, {X[1, 2][1], X[1, 2][2]}, {X[1, 3][1],
X[1, 3][2]}, {X[1, 4][1], X[1, 4][2]}}, {{X[2, 1][1],
X[2, 1][2]}, {X[2, 2][1], X[2, 2][2]}, {X[2, 3][1],
X[2, 3][2]}, {X[2, 4][1], X[2, 4][2]}}, {{X[3, 1][1],
X[3, 1][2]}, {X[3, 2][1], X[3, 2][2]}, {X[3, 3][1],
X[3, 3][2]}, {X[3, 4][1], X[3, 4][2]}}}
Is this what you are looking for?
Q:
how to create chart control in code behind wpf
I have a chart control in XAML and everything works fine, but now I want to create this chart using code-behind.
This is my XAML:
<chart:ClusteredColumnChart>
<chart:ClusteredColumnChart.Series>
<chart:ChartSeries
Name = "chart"
DisplayMember = "Date"
ItemsSource = "{Binding}"
ValueMember = "Scores" />
</chart:ClusteredColumnChart.Series>
</chart:ClusteredColumnChart >
I wrote this code, but the data is not generated:
ClusteredColumnChart chart = new ClusteredColumnChart();
ChartSeries series = new ChartSeries
{
DisplayMember = "Date",
ItemsSource = "{Binding}",
ValueMember = "Scores"
};
series.ItemsSource = dt;
chart.Series.Add(series);
maingrid.Children.Add(chart);
What am I missing? In the XAML, it looks to me like three controls are nested inside each other:
chart:ClusteredColumnChart --> chart:ClusteredColumnChart.Series -->
chart:ChartSeries
but in code-behind I couldn't find these three controls and only used two:
ClusteredColumnChart --> ChartSeries
A:
You cannot use "{Binding}" in code.
You have to create a Binding using
new System.Windows.Data.Binding(...)
see: https://docs.microsoft.com/en-us/dotnet/api/system.windows.data.binding.-ctor?view=netframework-4.7.2
Update:
And to answer your second question: <chart:ClusteredColumnChart.Series> is property element syntax, not a separate object.
Update 2:
Binding example:
var b = new System.Windows.Data.Binding {Source = dt};
series.SetBinding(ChartSeries.ItemsSourceProperty, b);
Or if you want to set the ItemsSource directly just use this without any Bindings:
series.ItemsSource = dt;
Q:
Multimeter to check car battery current
Short background: my car battery will often be too flat to start the car if I don't use it for about 48 hours. The worst was the other day after leaving it at the airport car park for 4 days; come 10pm, ready to go home, it was flat. The AA came and it was reading 7 volts. I took the car to Kwik Fit the next day for a battery check and it came back with no problems.
So, given the battery's health seems to be fine, after doing some reading I thought I would get a multimeter to check how many amps the battery is using when the car isn't running, as my assumption is there must be a drain from somewhere. I managed to use it to check the volts late last night (12.56) and this morning, it was around 12.26. Following some guides (https://www.youtube.com/watch?v=zdIKNnwEjIs | http://www.popularmechanics.com/cars/how-to/a5859/how-to-stop-car-battery-drains/ | http://www.wikihow.com/Find-a-Parasitic-Battery-Drain) I tried to check the amps but when I do it, I believe I see a 3.something come up on the multimeter but when putting the test probe on the end of the black cable onto the negative battery post, it creates a spark, and so naturally I just took the probe off it straight away as to not cause any (further) damage.
This is the multimeter I bought - http://www.argos.co.uk/static/Product/partNumber/7015603.htm
Car is a 2004 Subaru Impreza GX Sport.
When checking the amps I put the black cable in the COM port and the RED cable in the 10A port and set the dial to A_ (DC) 20m/10A (just to the bottom left of the dial)
Just don't know if those settings are right and why there would be sparks? Don't know if it is a problem with the car/battery terminals/posts or just the settings on the multimeter.
Any help would be greatly appreciated as I really want to just be able to measure the amps so I can find a paristic battery drain.
A:
To check the battery voltage a meter is connected in parallel. This involves touching the black lead to the negative battery terminal and the red lead to the positive battery. In this configuration the meter has a very high resistance (usually over 2 million ohms).
Checking current is done differently, in series. If you did the procedure above you may have damaged your meter or blown the meter fuse. To check current disconnect the negative battery cable. Connect the black lead to the now naked battery post. Connect the red lead to the battery cable. Current will now flow through the meter allowing it to be measured. In this configuration the meter has a very low resistance (usually just a couple of ohms).
Set the meter to the 10A range. After the battery is disconnected and the meter is connected there will be a sudden inrush. If you use the 20mA setting the meter fuse may be blown during the inrush. After the inrush allow the car to fall asleep. This may take some time, sometimes upwards of an hour. A rough rule of thumb is less than 25mA of draw when the car is fully asleep.
The absolute best way to check it is to use a battery disconnect. Install this between the negative terminal and the negative battery post. Drive the car for a day or two. In the morning before going somewhere connect the meter across the green nob. Make sure that the leads are well connected and don't fall off. Now twist the green nob to disconnect the battery and read the current consumption.
A:
The safest way to test if you're not SURE of what you're doing (and to avoid other issues like bad lead connections) is to use a clamp meter. These are capable of measuring AC and DC voltage and amperage by simply clamping the wire. You don't have to disconnect anything for this to work. Not to mention that measuring this ONE wire coming right off of the battery probably isn't going to tell you much... even if you do determine that there is a reading of several amps coming off of your battery, you still have to track that down. The clamp meter would be better suited for the needle-in-a-haystack search than your traditional probe meter anyway.
Here's an example of one such meter:
http://www.northerntool.com/shop/tools/product_200460552_200460552
If you go this route, make sure you buy a clamp meter advertised to measure amperage on both AC AND DC. Some of them are AC only, for some reason.
EDIT: After I answered this I started thinking about why some clamps can't read DC amperage and looked it up. This is somewhat anecdotal as it does not answer the question, but it might be interesting to some people. Apparently, clamp meters use electromagnetic induction to read AC amperage, but this is not possible on DC. The clamps are constructed slightly different to make this possible. See this link for a more in-depth explanation:
http://www.kew-ltd.co.jp/en/support/mame_02.html
Q:
How can I tell Mobile Safari to stop remembering to never remember my password?
I've changed my mind about having Safari/iCloud Keychain never remember my password for a given website. I'd like to remove that setting and store the password, but I can't find any ability to delete/change it anywhere. I've already gone into Settings->Safari->Passwords & Autofill -> Saved Passwords, and the website in question does not show up in the list with a 'never' entry to delete.
How can I make Safari forget to keep forgetting this login info?
A:
Manually enter your username and password. Tap "Passwords" right above your keyboard. Tap "Save This Password." It still won't autofill when you first load the page, but now you can tap in the username field and hit "Autofill Password." There seems to be no way to actually reset the "Never for this site" setting. Sounds like a job for http://apple.com/feedback.
A:
I’ve actually figured out a way to undo this. It’s klunky AF, but it works. Go to the login page you’re looking to undo this for, and tap the “passwords” label on top of the onscreen keyboard. From the pop up that appears, select “suggest new password”. Attempt to login with the new password. While you won’t successfully login with it, a login entry for the site will now be saved in your account password settings. Edit the entry with the correct information, and you’re done.
Q:
How to tell if a InstrumentedAttribute is a relation in sqlalchemy?
Given a sqlalchemy model class
class Table(Base):
__tablename__ = 'table'
col1 = Column(...)
rel = relationship(...)
If you look at Table, both col1 and rel are of type InstrumentedAttribute. How can I distinguish these two? I tried looking at their attributes, but there are too many, and there's no documentation about this.
A:
Check the type of Table.<column>.property which can commonly be either instance of ColumnProperty or RelationshipProperty:
>>> type(Table.col1.property)
<class 'sqlalchemy.orm.properties.ColumnProperty'>
>>> type(Table.rel.property)
<class 'sqlalchemy.orm.relationships.RelationshipProperty'>
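Building on that, here is a minimal sketch of splitting a model's mapped attributes into columns vs. relationships. The Parent/Child model is made up for illustration, and it assumes SQLAlchemy 1.4+ (where declarative_base lives in sqlalchemy.orm):

```python
from sqlalchemy import Column, ForeignKey, Integer, String
from sqlalchemy.orm import declarative_base, relationship
from sqlalchemy.orm.properties import ColumnProperty
from sqlalchemy.orm.relationships import RelationshipProperty

Base = declarative_base()

class Parent(Base):
    __tablename__ = "parent"
    id = Column(Integer, primary_key=True)
    name = Column(String)
    children = relationship("Child", back_populates="parent")

class Child(Base):
    __tablename__ = "child"
    id = Column(Integer, primary_key=True)
    parent_id = Column(Integer, ForeignKey("parent.id"))
    parent = relationship("Parent", back_populates="children")

def split_attrs(model):
    """Return (column_names, relationship_names) for a mapped class."""
    cols, rels = [], []
    for name in model.__mapper__.attrs.keys():
        prop = getattr(model, name).property
        if isinstance(prop, RelationshipProperty):
            rels.append(name)
        elif isinstance(prop, ColumnProperty):
            cols.append(name)
    return cols, rels

cols, rels = split_attrs(Parent)
print(cols, rels)
```

Using isinstance (rather than comparing types directly) also catches subclasses of these property types.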
Q:
Know the types of properties in an object c#
I know how to get an object properties using reflection:
var properties = typeof(T).GetProperties();
Now how do I know if properties[0] is a string? Or maybe it is an int? How can I know?
A:
Each element of properties will be a PropertyInfo, which has a PropertyType property, indicating the type of the property.
So for example, you might use:
if (properties[0].PropertyType == typeof(string))
or if you wanted to check something in an inheritance-permitting way:
if (typeof(Stream).IsAssignableFrom(properties[0].PropertyType))
Q:
How many days of internship am I allowed to do with a F1 visa (foreign PhD student in the US)?
I (French citizen) am doing a PhD in computer science in the US: over the course of my PhD, how many days of internship am I allowed to do? I have a F1 visa. I plan to do several internships and would like to know the maximum amount of time I can work as an intern.
A:
There's no maximum amount of time you can work as an intern. As long as you're complying with the F1 requirements (including being in good standing with your school and various degree-related and course-work related requirements) - you can work as many days as you wish.
However, there's a difference between full-time and part-time internships. Part time (up to 20 hours/week) does not affect your OPT. Anything more than that comes from your OPT quota (which is up to a year, so if you have a year or more of full-time CPT - you lose your OPT).
Here's a detailed explanation on the CPT (foreign students' internship) program from UPenn. Your school probably has a similar page, or an international students' office where you can inquire.
Q:
Divs that float within container div
I'm working on creating a series of card-type objects on screen that would be in different groupings. Within their own groups, they need to be float: left;
However, when I do this, I get them all floating regardless of their grouping.
example (in jade syntax):
div(ng-init="") // my angular view div
div(class="card-container")
div(ng-repeat="i in instance", class="card")
p {{i.instanceName}}
div(class="card-container")
div(ng-repeat="d in database", class="card")
p {{d.databaseName}}
So I'm hoping to have 2 divs, stacked on top of each other in the flow. Then within those divs, have divs that float left only within their container div.
My css is as follows:
.card {
height: 100px;
width: 100px;
float: left;
position:relative;
border-radius: 5px;
text-align: center;
margin: 10px;
font-weight: bold;
display:-ms-flexbox;
-ms-flex-pack:center;
-ms-flex-align:center;
/* Firefox */
display:-moz-box;
-moz-box-pack:center;
-moz-box-align:center;
/* Safari, Opera, and Chrome */
display:-webkit-box;
-webkit-box-pack:center;
-webkit-box-align:center;
/* W3C */
display:box;
box-pack:center;
box-align:center;
}
.card-container {
width:100%;
}
Thanks very much for the help.
A:
try adding .card-container { clear:both; }
Q:
warning: Duplicated ref when git push
When I push the master branch to the remote tmp
git push tmp master
I get this message:
warning: Duplicated ref: refs/heads/master
The push still succeeds.
But what does this message mean?
How can I find more detailed log info about this?
this is my .git/config
[core]
repositoryformatversion = 0
filemode = false
bare = false
logallrefupdates = true
symlinks = false
ignorecase = true
hideDotFiles = dotGitOnly
[remote "origin"]
fetch = +refs/heads/*:refs/remotes/origin/*
url = [email protected]:testuser/myproject.git
[branch "master"]
remote = origin
merge = refs/heads/master
[remote "tmp"]
url = [email protected]:testuser/myproject.git
fetch = +refs/heads/*:refs/remotes/tmp/*
and my git version is 1.7.11.msysgit.1
show-ref and ls-remote info
$ git show-ref
1696d17186db41cc70876f76f943e18ea4708ad3 refs/heads/master
3c51688bf27e712001db1b6e9f316748634643c4 refs/remotes/origin/HEAD
3c51688bf27e712001db1b6e9f316748634643c4 refs/remotes/origin/master
1696d17186db41cc70876f76f943e18ea4708ad3 refs/remotes/tmp/master
$ git ls-remote tmp
warning: Duplicated ref: refs/heads/master
1696d17186db41cc70876f76f943e18ea4708ad3 HEAD
1696d17186db41cc70876f76f943e18ea4708ad3 refs/heads/master
$ git ls-remote origin
3c51688bf27e712001db1b6e9f316748634643c4 HEAD
3c51688bf27e712001db1b6e9f316748634643c4 refs/heads/master
output of git show-ref on tmp
$ git show-ref
warning: Duplicated ref: refs/heads/master
1696d17186db41cc70876f76f943e18ea4708ad3 refs/heads/master
content of packed-refs on tmp
# pack-refs with: peeled
3c51688bf27e712001db1b6e9f316748634643c4 refs/heads/master
3c51688bf27e712001db1b6e9f316748634643c4 refs/heads/master
output of find . in bare repo myproject.git. The objects folder has too many subfolders, so I don't paste them.
$ find .
.
./branches
./packed-refs
./objects
./HEAD
./info
./info/exclude
./config
./description
./refs
./refs/tags
./refs/heads
./refs/heads/master
./hooks
./hooks/commit-msg.sample
./hooks/update.sample
./hooks/pre-commit.sample
./hooks/prepare-commit-msg.sample
./hooks/post-update.sample
./hooks/pre-rebase.sample
./hooks/post-receive
./hooks/pre-applypatch.sample
./hooks/update
./hooks/applypatch-msg.sample
A:
IIRC, it means that you somehow ended up creating another ref that has the name master, but doesn't live in the normal location. The few times I've seen this it's because I was messing with a plumbing command and didn't provide the full path to the ref (refs/heads/master) when the command expected one. First, let's get your local repository up-to-date with the remote with:
git fetch --all
Check your local repository first with:
git show-ref | grep -i master
The -i is in there because you're on Windows, and case sensitivity can be an issue. I suspect you might see something like refs/master in the list. The idea is that there's a name that could be resolved two ways. refs/remotes/origin/master and refs/remotes/tmp/master are okay since they're namespaced properly.
If that doesn't turn up anything, check the remote:
git ls-remote url://to/remote/repo.git | grep master
I suspect the issue is in your local repository. For the local repository, you can remove the ref via update-ref:
git update-ref -m 'remove duplicate ref' -d <duplicate ref>
Where <duplicate ref> is the extra one you found from the show-ref command. Branches are stored under refs/heads. Be careful not to delete refs/heads/master.
If it's on the remote, you should be able to remove the duplicate via:
git push origin :<duplicate ref>
Where <duplicate ref> is the extra one found by the ls-remote command above. Again, be careful here. Do not use master or refs/heads/master.
If possible, update your question with the output of git show-ref and the git ls-remote. Also, I can walk you through it in the comments to help make sure you don't lose any data.
Now that we see packed-refs is the culprit
So the problem is that packed-refs has more than one line referring to master. I'm not entirely sure how that came to be, but I suspect there was a version of git that allowed it to slide through. A git gc will cause packed-refs to be re-written, so I'd just do that on your tmp remote.
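If you want to confirm the duplicate by parsing packed-refs directly before running git gc, here is a minimal sketch in Python. This is not part of git, and the parsing is simplified - it just skips the `# pack-refs` header and `^`-prefixed peeled-tag lines:

```python
from collections import Counter

def find_duplicate_refs(packed_refs_text):
    """Return ref names that appear more than once in packed-refs content."""
    names = []
    for line in packed_refs_text.splitlines():
        line = line.strip()
        # Skip blank lines, the header comment, and peeled-tag lines.
        if not line or line.startswith("#") or line.startswith("^"):
            continue
        sha, _, name = line.partition(" ")
        if name:
            names.append(name)
    return [name for name, count in Counter(names).items() if count > 1]

# The packed-refs content from the question, with master listed twice.
sample = """# pack-refs with: peeled
3c51688bf27e712001db1b6e9f316748634643c4 refs/heads/master
3c51688bf27e712001db1b6e9f316748634643c4 refs/heads/master
"""
print(find_duplicate_refs(sample))  # ['refs/heads/master']
```

You would run this against the packed-refs file in the bare repository (myproject.git/packed-refs) on the remote.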