Not able to access stored procedure from my application
|
I have written a few stored procedures and executed them on my server, which is managed through phpMyAdmin. Now, while calling a stored procedure from my application, it throws the
following exception:
SQLException1 java.sql.SQLException: User does not have access to metadata required to
determine stored procedure parameter types. If rights can not be granted, configure
connection with "noAccessToProcedureBodies=true" to have driver generate parameters that
represent INOUT strings irregardless of actual parameter types.
While searching, I found the following query to resolve this exception:
GRANT SELECT,INSERT,UPDATE ON mysql.proc TO 'user'@'localhost';
My problem is that while running this query I am getting the following error:
#1142 - SELECT,INSERT,UP command denied to user 'qqq'@'localhost' for table 'proc'
but I have already given all permissions to the user:
GRANT ALL ON *.* TO 'qqq'@'localhost';
So how can I access the stored procedure from my application?
I am using a shared hosting server.
|
The error you provided is generated by your driver. See this question, which points you to a blog entry giving an answer. To sum it up, you have two options:
1. the GRANT statement you tried
2. adding noAccessToProcedureBodies=true to your database connection string (which is also suggested by the error message)
Since you are using a shared hosting server it is likely that you don't have privileges on mysql.proc and you don't have root access to the server, so you can't GRANT them. Therefore you should try the second approach.
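For example, the flag goes straight into the JDBC URL (the host, port, and database name here are placeholders, not from the question):

```
jdbc:mysql://localhost:3306/yourdb?noAccessToProcedureBodies=true
```

With this set, the driver no longer needs to read mysql.proc to determine parameter types, at the cost of treating parameters as INOUT strings.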
| 1 |
The error you provided is generated by your driver
|
Framing section headings
|
I'm attempting to put a frame around my section headings. I am using the package mdframed which adds functionality to the framed package. It basically draws a box around an object, in an environment. So I wonder how I can do this. Can I use \renewcommand, for example? I'd really like to use mdframed to create the frame.
|
The titlesec package provides this functionality built-in. Here's an example:
\documentclass{article}
\usepackage{titlesec}
\titleformat{\section}[frame]
  {\normalfont}
  {}
  {5pt}
  {\Large\bfseries\filcenter\thesection.\quad}
\begin{document}
\section{A section}
\end{document}
If this isn't what you want, you should probably still use titlesec in conjunction with mdframed to modify your sections, since it provides a simple yet powerful system for modifying all section headings.
| 1 |
The titlesec package provides this functionality built-in .
|
Let a and b be relatively prime integers and let k be any integer. Show that a + bk and b are also relatively prime.
|
Let a and b be relatively prime integers and let k be any integer. Show that a + bk and b are also relatively prime.
If $a+bk$ and $b$ are relatively prime, that means their gcd is $1$. But how do I prove that $\gcd(a+bk,b)=1$?
|
HINT:
Let the integer $d$ divide both $a+bk$ and $b$
$\implies d$ divides $a+bk-k(b)=a$
$\implies d$ divides both $a$ and $b$, hence $d$ divides $\gcd(a,b)=1$
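The hint can also be checked numerically; a small sketch (not a substitute for the proof):

```python
from math import gcd

# Check: whenever gcd(a, b) = 1, gcd(a + b*k, b) = 1 as well,
# over a grid of small values of a, b and k.
results = [
    gcd(abs(a + b * k), b)
    for a in range(1, 20)
    for b in range(1, 20)
    if gcd(a, b) == 1
    for k in range(-5, 6)
]
all_one = all(g == 1 for g in results)
print(all_one)  # True
```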
| 1 |
Let the integer $d$ divide both $a+bk, b$ $\implies d$
|
Invisible Physics to Certain Objects in the BGE
|
I'm making a first-person view game. I have an invisible collision body that I use for interaction with other objects. I have it set up so that the player can "shoot" cubes. However, if the player looks down and tries to shoot (for example, if an enemy is coming close), the bullet gets bounced off the character's collision body. Is it possible to make my bullets collide with everything except the collision body? Thanks!
|
You can do this with collision mask and collision group.
These options are found in the physics panel. To fix your particular problem, select the invisible collision box for the player and put it in collision group 2.
Next, select the bullet and while holding Shift click on the 2nd box in collision mask. This will stop the bullet from colliding with anything in collision group/layer 2.
| 0.888889 |
Invisible collision box and collision group
|
Why do you multiply one way and divide the other way with these trig problems?
|
I am practicing finding a side of an angle on Khan Academy. I understand SOH CAH TOA and which sin, cos, tan to choose from. But, I don't understand why they multiply sometimes to find the side and divide other times. I am using a calculator.
Here is a multiply example
and a divide example
thanks.
|
In the 1st problem, I think you understand it as far as the next-to-last line, $$\sin A={BC\over\sqrt{97}}$$ Also, you are given the value $$\sin A={9\sqrt{97}\over97}$$ So you have $${9\sqrt{97}\over97}={BC\over\sqrt{97}}$$ Since you want $BC$, you multiply both sides of this equation by $\sqrt{97}$ --- that gets $BC$ all by itself on one side of the equation, which is what you want.
In the second problem, you understand $$\tan A={10\over AC}$$ You want to isolate $AC$. This can be done by multiplying both sides by $AC$, getting $$(AC)(\tan A)=10$$ and then dividing both sides by $\tan A$ to get $$AC={10\over\tan A}$$
Now you are given $$\tan A={10\over3}$$ so you have $$AC={10\over10/3}$$ which simplifies to $3$.
OK?
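Both manipulations can be verified numerically; a quick sketch:

```python
import math

# Problem 1: sin A = 9*sqrt(97)/97 and sin A = BC/sqrt(97),
# so multiply both sides by sqrt(97) to isolate BC.
sin_A = 9 * math.sqrt(97) / 97
BC = sin_A * math.sqrt(97)   # = 9

# Problem 2: tan A = 10/AC and tan A = 10/3, so AC = 10 / tan A.
tan_A = 10 / 3
AC = 10 / tan_A              # = 3
```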
| 0.888889 |
In the 1st problem, you understand $$\sin A={BC\over\sqrt{97}}$$
|
Scope and SQL query in Rails 4
|
In app/controllers/ElementTypesController.rb, I have this:
l = params[:element_type_name]
@element_types = ElementType.by_name(l).page(params[:page])
and in the model app/models/ElementType.rb, I have this:
scope :by_name, (lambda do |name| { :conditions => ['name LIKE ?', "%#{name}%"]} end )
However, this throws an error, complaining that pagination (Kaminari) can't happen on a hash. What is wrong with my scope?
|
I believe a scope like that is just an alias for a class method, so it's also possible to write it like this:
def self.by_name(name)
  where('name LIKE ?', "%#{name}%")
end
Returning a where relation (rather than a conditions hash) gives Kaminari an ActiveRecord::Relation it can paginate. Note the trailing % so the match isn't anchored to the start of the name.
| 1 |
scope is just an alias, so its also possible to do it like this def self.by_name
|
Codeigniter login page missing
|
I was copying my friend's project that was created using CodeIgniter.
I copied all files from her www folder, including the MySQL data, and pasted them into my www folder and MySQL data directory, but when I try to run it on my localhost it shows "Not Found: The requested URL simrm/login was not found on this server."
I'm a newbie with CodeIgniter, so how do I fix this problem?
|
It sounds like you are new to CodeIgniter.
You have to use an .htaccess file to get rid of index.php in your URLs.
Or you can redirect to it explicitly by using redirect(base_url().'index.php/login');
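A commonly used CodeIgniter rewrite rule looks like this (a sketch: it assumes Apache with mod_rewrite enabled and AllowOverride permitting rewrites, and the RewriteBase assumes the app lives under /simrm as in the error message):

```apache
RewriteEngine On
RewriteBase /simrm/
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule ^(.*)$ index.php/$1 [L]
```

Place it next to index.php, and make sure $config['index_page'] in application/config/config.php is empty if you want URLs without index.php.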
| 0.888889 |
Use the .htaccess file to get rid of index.php
|
Can I use Self-Modification in Android?
|
While reading the Google Play policy, I have a question about this sentence:
An app downloaded from Google Play may not modify, replace or update its own APK binary code using any method other than Google Play's update mechanism.
Does this mean developers can't use self-modifying code?
For example, when users run my app for the first time, the app modifies its own binary code.
And if I use such a technique, can I still upload my app using self-modifying code?
|
Although the policy doesn't allow it, if Google doesn't detect you doing it, you are safe.
There are a couple of ways to dynamically load binary code (a DEX file): at runtime you download an updated dex file and run it. It is a pretty well-known technique. You can find some info here: http://android-developers.blogspot.sg/2011/07/custom-class-loading-in-dalvik.html
The Facebook app does use dynamic class loading, as the number of methods in its code exceeds 64K, so they had to split their code into multiple dex files and load them dynamically at run time. So if Facebook can, then I guess we are also good to go.
The following blog by Facebook states the problem they faced and how they solved it https://m.facebook.com/notes/facebook-engineering/under-the-hood-dalvik-patch-for-facebook-for-android/10151345597798920
| 0.888889 |
How to dynamically load binary code AKA DEX file
|
Change top's sorting back to CPU
|
A former coworker did something to top so that whenever it runs as root, the data is sorted by MEM usage instead of the default CPU usage. According to multiple searches, the man page, and even the options within the top console itself (O), pressing k should sort by CPU, but instead when I hit k it asks me for a PID to kill.
So how can I get back default sorting to CPU?
|
To add to the answers already given: P (upper case) makes top sort by CPU. Then hit W (again upper case) to write the configuration to ~/.toprc so the setting persists; a saved ~/.toprc for root is most likely what your coworker changed.
M goes back to sorting by memory.
| 0.666667 |
To add to the answers already P (upper case P) makes top order by CPU, Hit W (again upper case W)
|
Do delegates defy OOP
|
I'm trying to understand OOP so I can write better OOP code and one thing which keeps coming up is this concept of a delegate (using .NET). I could have an object, which is totally self contained (encapsulated); it knows nothing of the outside world... but then I attach a delegate to it.
In my head, this is still quite well separated, as the delegate only knows what to reference, but this by itself means it has to know about something else outside its world! That a method exists within another class!
Have I got myself in a total muddle here, or is this a grey area, or is this actually down to interpretation (and if so, sorry, as that will be off topic I'm sure)? My question is: do delegates defy/muddy the OOP pattern?
|
If OOP means each object is responsible for its own interface, then logically delegates could break OOP if it were not used properly. For example, if I made a FileStream class which, when asked to write, called a delegate to perform the write on behalf of the FileStream class, then it would breach OOP. This is because the responsibility of the write operation of the FileStream is purely upon FileStream if we're following OOP to the letter.
However, this doesn't mean that using delegates always goes against OOP. Using the same example, if I used a delegate to handle events before and after writes, this is still following OOP because it is not the responsibility of FileStream to know what should happen before and after writes. It should merely notify the classes which are responsible.
If passing delegates to an object were breaching OOP because it knows something outside its world, then also passing variables would breach OOP since you could think of delegates as passing code rather than data to classes. Acknowledging that there exists code outside of a single OOP class is not breaching anything.
| 0.666667 |
OOP means each object is responsible for its own interface, then delegates could breach OOP
|
Display relative date just for today, with "today" as prefix
|
I am trying to prefix today's (blog)posts with "Today". So that it would read:
Today, January 12, 2015
From tomorrow on the prefix should not be displayed anymore. I want to achieve an eye-catching effect for the latest posts like that.
I have looked into EE's relative dates but nothing outputs exactly what I need, and I don't know how to stop it after today either.
Has anybody achieved this in the past? It seems to be a pretty basic thing, actually...
|
You could do this:
{if '{entry_date format="%Y%m%d"}' == '{current_time format="%Y%m%d"}'}Today, {/if}{entry_date format="%F %j, %Y"}
| 0.888889 |
{if '{entry_date format="%Y%m%d"}'
|
What is my lcd density?
|
How can I figure out my tablet's LCD screen density?
It is a cheap one with no official site...
Is there any settings option I could check to see the display resolution configuration?
Is there any other way to do it?
|
If you are rooted, the best way to check is to go to /system and open build.prop, and then check if there is a line called ro.sf.lcd_density. This will give you the current LCD density, and you can even change it if you want.
| 0.888889 |
Check if there is a line called ro.sf.lcd_density
|
Msg 1833, File 'ABC.ndf' cannot be reused until after the next BACKUP LOG operation
|
I have a database. Its size is about 4 GB (the .mdf file only).
In that database only a few tables hold most of the data. The biggest of them is Table1. This is a huge table: every day about 80,000 rows are inserted and the same number of records deleted, so about 30 days * 80,000 records are present in the table. This table is used very frequently, hence it takes time to query the large data. So my development team decided to partition the table. Table1 has four columns in its primary key.
Primary Key Columns :
ChequeDate Date
RunNo Int
SorterNo Int
SequenceNo Int
We decided to partition the table on the basis of SequenceNo column.
I created a partition script and ran it which partitions an existing table. This works perfectly.
Next I run a "rollback" script to remove the partitioning. This also runs perfectly.
When I try to reapply partitioning on the same table it gives me the error mentioned in title. I found a manual solution that worked for me.
My problem is this process should run from a script only. No manual work should be necessary.
I assume that the problem is in the rollback script:
ALTER TABLE [dbo].[Table1] DROP CONSTRAINT [PK_Table1] with (move to [primary]);
DROP PARTITION SCHEME Table1PartSch
DROP PARTITION FUNCTION Table1PartFunc
ALTER DATABASE [MyDataBase] REMOVE FILE Table125LData;
ALTER DATABASE [MyDataBase] REMOVE FILE Table117LData;
ALTER DATABASE [MyDataBase] REMOVE FILE Table17LData;
ALTER DATABASE [MyDataBase] REMOVE FILE Table12LData;
ALTER DATABASE [MyDataBase] REMOVE FILE Table1RestData;
GO
ALTER DATABASE [MyDataBase] REMOVE FILEGROUP Table125LFG;
ALTER DATABASE [MyDataBase] REMOVE FILEGROUP Table117LFG;
ALTER DATABASE [MyDataBase] REMOVE FILEGROUP Table17LFG;
ALTER DATABASE [MyDataBase] REMOVE FILEGROUP Table12LFG;
ALTER DATABASE [MyDataBase] REMOVE FILEGROUP Table1RestFG;
GO
If someone has a better solution than partitioning for this situation, it is welcome.
|
File '%ls' cannot be reused until after the next BACKUP LOG operation.
SQL Server insists that a backup is taken before a file is reused so it can guarantee database recovery in case of disaster.
The minimum backup needed is a log backup, but the documentation recommends a full database backup after any operation that adds or removes files from a database:
By adding a file, removing it, and attempting to add it again, you run into this requirement.
If you have a genuine need to add and remove files, you will need to add a BACKUP DATABASE or BACKUP LOG statement to your script.
This backup will form part of the recovery chain, so it is important you keep it safe in case you need to recover the database.
See BACKUP (Transact-SQL) for syntax details and further information.
| 0.888889 |
File '%ls' cannot be reused until after the next BACKUP LOG operation
|
ASA5520 stops sending to splunk syslog
|
I have an ASA5520 that is set up to send logs to a Splunk syslog server. The setup works for a while, usually around 24 hours or so, but then stops until either the logging is reconfigured (twiddling the ports) or the ASA is restarted.
What should I be looking at to resolve this issue? I'm not sure if it's the Splunk syslog daemon ignoring connections or the ASA getting messed up and no longer sending.
I'd like to enable the 'don't pass traffic without logging working' option, but without a stable connection to syslog, that's a non-starter.
Tried so far:
TCP and UDP, different ports, changing the logging level
|
Put a packet sniffer on the Splunk host and set up a capture filter that captures only packets originating from the ASA. Start a capture; when Splunk stops "seeing" data from the ASA, look at the capture and see whether traffic is still coming in. If it is, the problem is with Splunk; if it's not, the problem is with the ASA or the network between the two.
| 1 |
Set up a capture filter to capture only packets that originate from the ASA
|
Are estimates of regression coefficients uncorrelated?
|
Consider a simple regression (normality not assumed): $$Y_i = a + b X_i + e_i,$$ where $e_i$ has mean $0$ and standard deviation $\sigma$. Are the least squares estimates of $a$ and $b$ uncorrelated?
|
This is an important consideration in designing experiments, where it can be desirable to have no (or very little) correlation among the estimates $\hat a$ and $\hat b$. Such lack of correlation can be achieved by controlling the values of the $X_i$.
To analyze the effects of the $X_i$ on the estimates, the values $(1,X_i)$ (which are row vectors of length $2$) are assembled vertically into a matrix $X$, the design matrix, having as many rows as there are data and (obviously) two columns. The corresponding $Y_i$ are assembled into one long (column) vector $y$. In these terms, writing $\beta = (a,b)^\prime$ for the assembled coefficients, the model is
$$\mathbb{E}(Y) = X \cdot \beta$$
The $Y_i$ are (usually) assumed to be independent random variables whose variances are a constant $\sigma^2$ for some unknown $\sigma \gt 0$. The dependent observations $y$ are taken to be one realization of the vector-valued random variable $Y$.
The OLS solution is
$$\hat\beta = \left(X^\prime X\right)^{-1} X^\prime y,$$
assuming this matrix inverse exists. Thus, using basic properties of matrix multiplication and covariance,
$$\text{Cov}(\hat\beta) = \text{Cov}\left(\left(X^\prime X\right)^{-1} X^\prime Y\right) = \left(\left(X^\prime X\right)^{-1} X^\prime\sigma^2
X \left( X^\prime X \right)^{-1\prime} \right) = \sigma^2 \left(X^\prime X\right)^{-1}. $$
The matrix $\left(X^\prime X\right)^{-1}$ has just two rows and two columns, each corresponding to the model parameters $(a,b)$. The correlation of $\hat a$ with $\hat b$ is proportional to the off-diagonal elements of $X^\prime X$, which are just the dot products of the two columns of $X$. Since one of the columns is all $1$s, whose dot product with the other column (containing the $X_i$) is their sum, we find
$\hat a$ and $\hat b$ are uncorrelated if and only if the sum (or equivalently the mean) of the $X_i$ is zero.
This condition frequently is achieved by recentering the $X_i$ (by subtracting their mean from each). Although this will not alter the estimated slope $\hat b$, it does change the estimated intercept $\hat a$. Whether or not that is important depends on the application.
All this analysis applies to multiple regression: the design matrix will have $p+1$ columns for $p$ independent variables (remember, an additional column consists of $1$s) and $\beta$ will be a vector of length $p+1$, but otherwise everything goes through exactly as before. In conventional language, two columns of $X$ are called orthogonal when their dot product is zero. Thus,
Two multiple regression coefficient estimates $\hat\beta_i$ and $\hat\beta_j$ are uncorrelated if and only if the corresponding columns of the design matrix are orthogonal.
Many standard experimental designs consist of choosing values of the independent variables to make the columns orthogonal. This "separates" the resulting estimates by guaranteeing--before any data are ever collected!--that the estimates will be uncorrelated. (When the responses have Normal distributions this implies the estimates will be independent, which greatly simplifies their interpretation.)
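The orthogonality criterion above is easy to verify numerically; a minimal sketch with NumPy (the data values are arbitrary):

```python
import numpy as np

# Design matrix: a column of 1s plus a recentred (mean-zero) predictor.
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
xc = x - x.mean()                        # recentring makes the columns orthogonal
X = np.column_stack([np.ones_like(xc), xc])

# Cov(beta_hat) = sigma^2 (X'X)^{-1}; the off-diagonal entry is the
# (unscaled) covariance of the intercept and slope estimates.
cov_unscaled = np.linalg.inv(X.T @ X)
print(cov_unscaled[0, 1])                # 0: the estimates are uncorrelated
```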
| 0.777778 |
Multiple regression coefficient estimates are uncorrelated
|
SQL Server. Join/Pivot/Performance help please
|
I've created this view that is painfully slow to query and creating performance issues. I think I need to utilize PIVOT but I can't really wrap my head around it...
Here's what I've got currently.
select
p.instrumentsettingsid,
sum(case when p1.id is null then 0 else 1 end) as 'Cash',
sum(case when p2.id is null then 0 else 1 end) as HighInterest,
sum(case when p3.id is null then 0 else 1 end) as 'ML Loan',
sum(case when p4.id is null then 0 else 1 end) as 'Investment Note',
sum(case when p5.id is null then 0 else 1 end) as 'Trading',
sum(case when p6.id is null then 0 else 1 end) as 'DD Trading',
sum(case when p7.id is null then 0 else 1 end) as 'ML Trading',
sum(case when p8.id is null then 0 else 1 end) as 'NMLTrading',
sum(case when p9.id is null then 0 else 1 end) as 'Individual Custody',
sum(case when p10.id is null then 0 else 1 end) as 'ML Individual Custody',
sum(case when p11.id is null then 0 else 1 end) as 'Custody',
sum(case when p12.id is null then 0 else 1 end) as 'ML Custody Trading',
sum(case when p13.id is null then 0 else 1 end) as 'Portfolio',
sum(case when p14.id is null then 0 else 1 end) as 'ML Personal Portfolio',
sum(case when p15.id is null then 0 else 1 end) as Other,
i.status
from
(select distinct instrumentsettingsid from res_db..instrumentproducttypepermissions) p
inner join res_db..instrumentsettings i on p.instrumentsettingsid = i.id
left join res_db..instrumentproducttypepermissions p1 on p.instrumentsettingsid = p1.instrumentsettingsid and p1.enabled = 1 and p1.producttypeid = 1
left join res_db..instrumentproducttypepermissions p2 on p.instrumentsettingsid = p2.instrumentsettingsid and p2.enabled = 1 and p2.producttypeid = 2
left join res_db..instrumentproducttypepermissions p3 on p.instrumentsettingsid = p3.instrumentsettingsid and p3.enabled = 1 and p3.producttypeid = 3
left join res_db..instrumentproducttypepermissions p4 on p.instrumentsettingsid = p4.instrumentsettingsid and p4.enabled = 1 and p4.producttypeid = 4
left join res_db..instrumentproducttypepermissions p5 on p.instrumentsettingsid = p5.instrumentsettingsid and p5.enabled = 1 and p5.producttypeid = 5
left join res_db..instrumentproducttypepermissions p6 on p.instrumentsettingsid = p6.instrumentsettingsid and p6.enabled = 1 and p6.producttypeid = 6
left join res_db..instrumentproducttypepermissions p7 on p.instrumentsettingsid = p7.instrumentsettingsid and p7.enabled = 1 and p7.producttypeid = 7
left join res_db..instrumentproducttypepermissions p8 on p.instrumentsettingsid = p8.instrumentsettingsid and p8.enabled = 1 and p8.producttypeid = 8
left join res_db..instrumentproducttypepermissions p9 on p.instrumentsettingsid = p9.instrumentsettingsid and p9.enabled = 1 and p9.producttypeid = 9
left join res_db..instrumentproducttypepermissions p10 on p.instrumentsettingsid = p10.instrumentsettingsid and p10.enabled = 1 and p10.producttypeid = 10
left join res_db..instrumentproducttypepermissions p11 on p.instrumentsettingsid = p11.instrumentsettingsid and p11.enabled = 1 and p11.producttypeid = 11
left join res_db..instrumentproducttypepermissions p12 on p.instrumentsettingsid = p12.instrumentsettingsid and p12.enabled = 1 and p12.producttypeid = 12
left join res_db..instrumentproducttypepermissions p13 on p.instrumentsettingsid = p13.instrumentsettingsid and p13.enabled = 1 and p13.producttypeid = 13
left join res_db..instrumentproducttypepermissions p14 on p.instrumentsettingsid = p14.instrumentsettingsid and p14.enabled = 1 and p14.producttypeid = 14
left join res_db..instrumentproducttypepermissions p15 on p.instrumentsettingsid = p15.instrumentsettingsid and p15.enabled = 1 and p15.producttypeid = 15
group by p.instrumentsettingsid, i.status
Can someone who knows about SQL please tell me (show me?) how I can make this faster/less hideous?
Also those 15 account names are hardcoded, in a perfect world it would get the product name from the table producttypes
Sorry to ask such an annoying question.
Thank you!
|
Even using SQL Server PIVOT you will have to know the column names beforehand, so no luck there, unless you make use of pivots with dynamic columns, i.e. a dynamic PIVOT built with dynamic SQL.
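An alternative worth trying before dynamic PIVOT is conditional aggregation: one pass over instrumentproducttypepermissions instead of fifteen left self-joins. A sketch of the pattern, with SQLite standing in for SQL Server and made-up sample rows (only two product types shown for brevity):

```python
import sqlite3

# One GROUP BY with CASE expressions replaces the chain of self-joins.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE instrumentproducttypepermissions (
        instrumentsettingsid INTEGER,
        producttypeid INTEGER,
        enabled INTEGER);
    INSERT INTO instrumentproducttypepermissions VALUES
        (1, 1, 1), (1, 2, 1), (1, 2, 1), (2, 1, 0), (2, 3, 1);
""")
rows = con.execute("""
    SELECT instrumentsettingsid,
           SUM(CASE WHEN producttypeid = 1 AND enabled = 1 THEN 1 ELSE 0 END) AS Cash,
           SUM(CASE WHEN producttypeid = 2 AND enabled = 1 THEN 1 ELSE 0 END) AS HighInterest
    FROM instrumentproducttypepermissions
    GROUP BY instrumentsettingsid
    ORDER BY instrumentsettingsid
""").fetchall()
print(rows)  # [(1, 1, 2), (2, 0, 0)]
```

On SQL Server you would additionally join producttypes and build the column list with dynamic SQL if the names should come from the table instead of being hardcoded.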
| 0.888889 |
Pivots with Dynamic Columns or Dynamic PIVOT
|
Source where you can find if a CVE has a patch or not
|
I'm wondering if there is a source available which has a list of CVE numbers and shows whether they have been patched or not (and maybe a link to the relevant patch). I know Secunia has something like this, but I was wondering if there are any others.
|
I'm not aware of any good out-of-the-box services, but I usually cross-reference with sites like:
SecurityFocus: It usually references CVE. Patch status filed under "Solution".
PacketStormSecurity: Allows fulltext search (can search for CVE ID). Patch status filed under "Mitigation".
OSVDB: Has an option to search with CVE ID. Patch status filed under "Solution".
IBM XForce: Allows CVE ID search. Patch status filed under "Remediation".
| 0.333333 |
SecurityFocus: Can search with CVE ID
|
Lyx itemize returning to last number
|
I would like to have the following output:
Text
(a) Text
Text
Text
That is, after a subitem, I want to return to the last number without creating a new one. Is there any way to do this without ERT?
|
Insert a "custom item" for the oddly-placed entry
and leave the entry blank
This leaves you with the desired output (screenshot omitted).
| 0.888889 |
Insert "custom item" for oddly-placed entry
|
Expectation of drawing the second color from an urn
|
My urn contains 3 red, 2 green and 1 white ball. I pick a ball with replacement until I pick a second color.
What is the average number of picks for picking the second color?
With the expected value formula I got the following.
$EX=\sum\limits_{k=2}^\infty k[\frac{1}{2}r^{k-1}+\frac{2}{3}g^{k-1}+\frac{5}{6}w^{k-1}]$
Where r, g and w are the probabilites of drawing a red, green, or white ball.
I don't know how to calculate this sum, and I am not sure this is the right way to solve this exercise.
|
Condition on the color of the first ball you pick. You then continue to pick the same color, for a total of $k-1$ picks, until you pick a different color on the $k$th draw.
$E(X)=\sum\limits_{k=2}^\infty k[(\frac{1}{2})^{k-1}\frac{1}{2}+(\frac{2}{6})^{k-1}\frac{4}{6}+(\frac{1}{6})^{k-1}\frac{5}{6}]$
To calculate the sum, the easiest way is to make use of the fact: for $|r|<1$, $\sum\limits_{k=0}^\infty r^k=\frac{1}{1-r}$. Differentiate both sides, $\sum\limits_{k=1}^\infty kr^{k-1}=\frac{1}{(1-r)^2}$. Then $\sum\limits_{k=2}^\infty kr^{k-1}=\frac{1}{(1-r)^2}-1$
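A quick numeric check of this closed form (a sketch):

```python
# tail(r) = sum_{k>=2} k * r^{k-1} = 1/(1-r)^2 - 1, as derived above.
def tail(r):
    return 1 / (1 - r) ** 2 - 1

# E(X) weighted by the colour of the first ball: red 3/6, green 2/6, white 1/6.
E = (1/2) * tail(1/2) + (4/6) * tail(2/6) + (5/6) * tail(1/6)

# Compare against a direct partial sum of the series.
E_series = sum(
    k * ((1/2) ** (k - 1) * (1/2)
         + (2/6) ** (k - 1) * (4/6)
         + (1/6) ** (k - 1) * (5/6))
    for k in range(2, 200)
)
print(E)  # 2.7
```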
| 1 |
Calculate the probability based on the color of ball you first picked up
|
android magento - customer login(authentication) using SOAP API
|
We are developing an Android app for a Magento site.
I am facing a problem when I try to log in (authenticate) as a Magento customer.
I can't find a proper way to log in as a customer using email and password.
How can I authenticate or log in as a customer?
I followed this link's answer: http://stackoverflow.com/questions/25055700/android-magento-customer-loginauthentication-using-using-soap-api
According to that, I created this file: http://pastebin.com/RJAGxTDS under http://hotwheelstoys.in/testing/testing/Mage3.php
But when we pass the values using JSON, we get "null".
I saw this link: http://www.magentocommerce.com/api/soap/introduction.html
But there is no API for login through Android.
|
The Magento SOAP API has no method to authenticate customers.
What you can do is fetch the customer info and then check the password. The problem is that doing this client-side is a really bad idea.
The alternative is to implement your own API method that just takes the email and password and checks them server-side.
| 0.5 |
How to authenticate Magento SOAP Api customers?
|
What is the cardinality of the family of unlabelled bipartite graphs on n vertices?
|
I have attempted to calculate the number of unlabelled bipartite graphs as follows:
Let $G = (V_1, V_2, E)$ be a bipartite graph on $n$ vertices with $|V_1| = m$ and $|V_2| = n-m$. Assume without loss of generality that $|V_1| \leq |V_2|$ so $m \leq \left\lfloor \frac{n}{2} \right\rfloor$. If $G$ is complete bipartite then it has $m(n-m)$ edges since each of the vertices in $V_1$ is connected to each in $V_2$. Thus, the total number of bipartite graphs with parts of size $m$ and $n-m$ is $2^{m(n-m)}$. In order to find the total number of possible bipartite graphs on $n$ vertices we sum over all possible $m$:
\begin{align}
\sum^{\left\lfloor \frac{n}{2} \right\rfloor}_{m=1} 2^{m(n-m)}
\end{align}
However, I notice that I have counted labelled bipartite graphs where I need the number of unlabelled graphs. I'm struggling to see how to account for this.
|
The answer to your problem is detailed in the thesis of Ji Li (2007),
Counting Prime Graphs and Point-Determining Graphs Using Combinatorial Theory of Species:
http://people.brandeis.edu/~gessel/homepage/students/jilithesis.pdf
See Section 4.4, p. 112.
The formulae are to be found on p. 115.
| 0.777778 |
Thesis of Ji Li (2007) Counting Prime Graphs
|
Measuring Resistance of a Wire With an ADC
|
I'm trying to design a circuit which can measure small resistances, down to 0.1 Ohm and up to a max of 10 Ohms. I won't be measuring actual resistors but rather large coils of wire, up to 500 m long (as you can imagine, these wires are quite thick).
Here's the circuit I came up with:
The circuit works by maintaining a constant current through the device under test, R2. With a current of 100 mA, R2 would develop a voltage between 10 mV to 50 mV.
I think in an ideal world this would work but in practice I may have a hard time measuring 0.1 Ohms with this - mainly due to the ADC. Let's assume the ADC is 10-bit with VREF of 5V. This translates to 5mV per step. If R2 = 0.1 and Iout = 100 mA, then the voltage present at the ADC would be 50 mV - but I'm not sure how buried under noise this would be.
My question is, should I increase the gain to, say, 50. If the gain is 50, then the voltage present at the ADC would be 500 mV - but the max. measurable resistance would be 1 Ohms. To measure 10 Ohms, I would need to lower the current to 10 mA instead of 100 mA. A way to do that would be use an FET to switch out R1 and connect a 20 Ohm resistor at Iout.
I don't need the circuit to measure the resistance precisely - a tolerance of +/- 10% is fine.
|
Ok, you asked for my version of the circuit.
This uses an opamp+BJT current source with a three-decade range. The range of the current source is selected by grounding one of three resistors. You can probably achieve your accuracy goals by using AVR outputs to switch the three resistors. Switch between output low (for enable) or input (for disable). Analog input is better, but the voltage will be an unambiguous high, so digital input is OK. For better accuracy, connect the 4K resistor to two pins. The output resistance of an AVR digital out is about 25 ohms:
The +5V line is used for the reference of both the current source and ADC. Variations in supply voltage will cancel. The alternative would be to have a reference in the current source and a reference in the ADC... not necessary here. Microcontroller ADCs are generally happy to use the supply rails as reference.
You must make four connections to the device under test. Two of the connections deliver the current, and two of the connections present the voltage across the device under test to the measurement circuit. Four-wire connection is necessary to measure low resistances ( < 1 ohm )! Otherwise you are measuring your probe resistance by accident.
The opamp's offset voltage is the most important parameter. Use a chopper amp and don't worry about it. I've spec'd OPA2333, which is a nice slow amplifier that's always worked well for me.
If your probe resistance is higher than about an ohm, you should go for the full instrumentation amplifier. But with reasonable probes this should meet spec as-is.
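To put numbers on the resolution trade-off discussed in the question, here is a sketch assuming a 10-bit ADC with a 5 V reference and the question's baseline amplifier gain of 5:

```python
VREF, BITS = 5.0, 10
lsb = VREF / 2 ** BITS                        # about 4.88 mV per ADC step

def adc_counts(r_ohms, i_amps, gain):
    """ADC counts produced by the amplified sense voltage I * R * gain."""
    return r_ohms * i_amps * gain / lsb

low_end = adc_counts(0.1, 0.100, gain=5)      # ~10 counts: coarse but usable
boosted = adc_counts(0.1, 0.100, gain=50)     # ~102 counts at the low end
full_scale = adc_counts(1.0, 0.100, gain=50)  # 1024 counts: 1 ohm saturates
```

This shows why gain 50 needs the lower 10 mA current range to reach 10 ohms without clipping.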
| 1 |
opamp+BJT current source with three-decade range
|
Windows Server 2003 SP2 - JRNL_WRAP_ERROR (Sysvol)
|
SETUP:
Windows Server 2003 SP2 with a replication error (this is the only DC in the domain): Journal Wrap Error on the SYSVOL share
Is there a way to find out what is broken/corrupt/malformed in this DC's sysvol?
Is there a way to fix the sysvol share other than from a system-state restore point (which we don't have), or from another domain controller (which we also don't have)?
|
There isn't any "corruption". Rather, too many changes happened in the NTFS volume that hosts the SYSVOL in a short period of time, overflowing the NTFS change journal. Rather than explain it in detail, I'll direct you to some background on what causes journal wrap errors in this blog posting.
The two recovery methods are described in this Microsoft KB article.
Since you have no replication partners you should be fine performing an authoritative restore (as described in that KB article). In multi-DC scenarios this is not the preferred method.
Basically, you'll stop the File Replication Service (FRS), set the "burflags" registry value to "D4" (the full path is in the article), and restart the FRS. That should get you back up and running.
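For reference, that sequence — with the service name and registry path as documented in the Microsoft KB article — amounts to three commands from an elevated prompt (a sketch only; double-check the key path against the article before running it):

```bat
net stop ntfrs
reg add "HKLM\SYSTEM\CurrentControlSet\Services\NtFrs\Parameters\Backup/Restore\Process at Startup" /v BurFlags /t REG_DWORD /d 0xD4 /f
net start ntfrs
```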
Before you try any of that, though, I'd at least make a current System State backup to disk.
You might want to think about getting a second DC after all of this is said and done. It's cheap insurance, makes disaster recovery a LOT easier, and in this scenario would've preserved client access to Group Policy during the outage.
| 0.777778 |
What causes NTFS change journal errors?
|
HP EVA4400 Vraid5 Virtual Disks Failure
|
I have an EVA4400 with 8 disks, six of them in a RAID6 group.
I have 2 VRAID5 virtual disks in this group.
I have lost 3 disks.
Since I got the administrator post after the disaster happened, I have no clue how the disks were distributed on the EVA, but I notice that I have 2 failed disks in the disk group, 1 failed disk in Ungrouped disks, and 1 good drive also in Ungrouped disks.
I don't know if the 2 disks (the good and the bad) were in the Ungrouped disks from the start.
I bought 3 new disks from HP and put them in, so now I have 4 good disks in Ungrouped disks and 1 bad disk, and also still 2 bad disks in the disk group.
Since I have no support from HP, is there anything I can do to save the data from the failed VRAID5 disks?
Any help is much appreciated.
|
is there anything I can do to save the data from failed vraid5 disks?
No, you'll have to restore from backup sorry.
That said I did just want to clarify that you meant you have 8 disks, 1 disk group with two spares and two virtual disks using VRAID5 - is that right? I just wanted to get the terminology right as your question wasn't as clear as it could be.
Can I suggest that when you get this back up and running you configure the disk group to only have a single spare, not double - not for such a small array. Then use either VRAID 6 or 10 (0+1 actually), they're much more resilient than VRAID 5.
Oh and on one final note - these aren't the 1TB FATA disks are they? a few years ago we saw a ridiculous number of these fail, many at the same time - due to miscommunicated duty-cycle requirements - we abandoned using those disks. I'm just keen to help you avoid a repeat of this issue.
| 0.888889 |
Is there anything I can do to save data from failed vraid5 disks?
|
Get latitude/longitude of friend's current_location using Graph FQL
|
I'm trying to get the latitude/longitude of all of a user's friends using a single API call. I believe I need to write a multi-query FQL statement, but I can't get the syntax correct.
I believe the two queries need to be something like the following:
'"friends":"SELECT uid,current_location FROM user WHERE uid IN (SELECT uid2 FROM friend WHERE uid1 = me())"'
'"location":"SELECT location FROM page WHERE id IN (SELECT current_location_id FROM #friends)"';
My problem is that somehow I need to get the ID of the current_location in the first query so that I can reference it in the second query, but I only know how to get an array that contains the id.
Thanks for any help!
Clarification:
I'm not trying to get check-ins of friends. Instead I want to graph the "current_location" (i.e. where they live) of all of a user's friends on a Geo map. I can get a user's "current_location" with the following FQL query:
"SELECT uid,sex,current_location FROM user WHERE uid IN (SELECT uid2 FROM friend WHERE uid1 = me())"
This query returns the following array:
[current_location] => Array
(
[city] => New York
[state] => New York
[country] => United States
[zip] =>
[id] => 108424279189115
[name] => New York, New York
)
Which does not contain the longitude and latitude of the current_location (New York in the above example).
Therefore I now need to take the ID of the current_location (e.g. 108424279189115) and run a second FQL query such as:
"SELECT location,name FROM page WHERE page_id = 108424279189115)"
To get the latitude and longitude of the current_location. Currently, I have a working code that runs the first FQL query, then extracts page_id in php, and then runs a second FQL query to get the current_location latitude/longitude. However, for performance reasons I would like to run only one multi-query if this is possible.
|
facebook.batch_fql({
'query1': "SELECT uid, name, current_location.id FROM user WHERE uid IN (SELECT uid2 FROM friend WHERE uid1=me())",
'query2': "SELECT page_id, name, description, location FROM page WHERE page_id IN (SELECT current_location.id FROM #query1)"
})
will return two responses
one will be a list of friends UIDs, names and their current locations IDs
the other will be a list of unique location IDs with their name, description and geolocation info
EDIT: if you query for current_location using FQL, you'll get the full latlong in the first response and you don't even need the second one
| 0.777778 |
if you query for current_location using FQL, you don't even need the second one
|
Tikz: how to change the order of overlapping objects?
|
I want to draw some overlapping rectangles, for example:
\documentclass{article}
\usepackage{tikz}
\usetikzlibrary{shapes,backgrounds}
\begin{document}
\begin{tikzpicture}
\draw [fill=blue,ultra thick] (0.4,0.5) rectangle (0,1);
\draw [fill=red,ultra thick] (0,0.3) rectangle (1,1);
\draw [fill=green,ultra thick] (0,0) rectangle (0.5,0.5);
\end{tikzpicture}
\end{document}
But I want (in this example) the green rectangle under the red and the red under the blue. Is that possible (without changing the order of commands)?
More explanation: the latex commands are generated by a C++ program which computes coordinates of some rectangles. I want to draw them and they can overlap. The program computes the rectangles in some order and I want to show them in the reverse order. (It seems difficult to me to write the lines of C++ output to a file in reverse order.)
Here is what I have and what I want:
|
A simple sequence of \draw commands can also be reversed by TeX. In the following example, the macro \reversedraws goes before the sequence of \draw commands that should be reversed. It stores them in the macro \collect@draws in reverse order. After the last \draw, the now reversed list is output:
\documentclass{article}
\usepackage{tikz}
\usetikzlibrary{shapes,backgrounds}
\usepackage{etoolbox}% provides \preto
\makeatletter
\newcommand*{\collect@draws}{}
\newcommand*{\reversedraws}{%
\renewcommand*{\collect@draws}{}% initialize
\look@for@draw
}
\newcommand*{\look@for@draw}{%
\@ifnextchar\draw{%
\catch@draw
}{%
\collect@draws % output the result
}%
}
\def\catch@draw\draw#1;{%
\preto\collect@draws{\draw#1;}%
\look@for@draw
}
\makeatother
\begin{document}
\begin{tikzpicture}
\reversedraws
\draw [fill=blue,ultra thick] (0.4,0.5) rectangle (0,1);
\draw [fill=red,ultra thick] (0,0.3) rectangle (1,1);
\draw [fill=green,ultra thick] (0,0) rectangle (0.5,0.5);
\end{tikzpicture}
\end{document}
Of course, fixing the C++ tool to output the commands in the correct order is the better approach, and can be done more efficiently there, in linear time. In the example above, the run-time behavior is quadratic.
| 1 |
A simple sequence of draw commands can also be reverted by TeX
|
What would be the best way to design a real time clock for the MSP430?
|
Basically that. The way I am doing it now is with the TimerA set to 1 second interrupts. But I think that it's very annoying. Are there any other ways to do it?
I want to basically set timers on that clock, like, shutdown until 40 seconds have passed...
|
There are MSP430 devices with a low-power oscillator that use a standard 32.768 kHz watch crystal and are intended specifically for that sort of application. A typical one is the MSP430F1101.
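As a sketch of what that looks like in practice (register names are from the TI msp430 headers; the interrupt-vector syntax is compiler-specific, so treat this as illustrative rather than drop-in): Timer_A clocked from the 32.768 kHz ACLK fires one interrupt per second while the CPU sleeps in LPM3, and a counter in the ISR implements "wake after 40 seconds".

```c
#include <msp430.h>

volatile unsigned int seconds = 0;

int main(void) {
    WDTCTL = WDTPW | WDTHOLD;            // stop the watchdog
    TACCR0 = 32768 - 1;                  // 32768 ACLK ticks = 1 second
    TACCTL0 = CCIE;                      // interrupt on compare match
    TACTL = TASSEL_1 | MC_1;             // ACLK source, count up to TACCR0
    __bis_SR_register(LPM3_bits | GIE);  // sleep; ACLK keeps the timer alive
    for (;;) { /* reached after wake-up */ }
}

#pragma vector = TIMERA0_VECTOR
__interrupt void one_second_tick(void) {
    if (++seconds >= 40)                       // 40 s elapsed
        __bic_SR_register_on_exit(LPM3_bits);  // wake the main loop
}
```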
| 1 |
MSP430F1101.
|
How to disable bash drivers in windows 8.1?
|
I'm trying to follow these instructions to stop Bash Driver BHDrvx86 or BHDdrvx64, but device manager in Windows 8.1 no longer displays non plug and play drivers (such as the Bash drivers), even as a hidden option. I'm unsure how else to disable them.
|
Device Manager generally should be run elevated, it doesn't show many things otherwise.
However, the same can be done from an elevated command line:
sc config BHDrvx86 start= disabled
sc config BHDdrvx64 start= disabled
(A space only after the = sign, exactly as shown in the example.)
| 0.777778 |
Device Manager should be run elevated
|
Is there any performance difference between ++i and i++ in C#?
|
Is there any performance difference between using something like
for(int i = 0; i < 10; i++) { ... }
and
for(int i = 0; i < 10; ++i) { ... }
or is the compiler able to optimize in such a way that they are equally fast in the case where they are functionally equivalent?
Edit:
This was asked because I had a discussion with a co-worker about it, not because I think its a useful optimization in any practical sense. It is largely academic.
|
In addition to other answers, there can be a difference if your i is not an int. In C++, if it is an object of a class that has the operators ++() and ++(int) overloaded, then it can make a difference, and possibly have a side effect. Performance of ++i should be better in this case (dependent on the implementation).
| 0.888889 |
Performance of ++i should be better in C++
|
How do I find the most populous cities in the world?
|
I am attempting to plot the most populous cities on a world map.
Currently, I have the line of code:
numberofpeople =
Map[{#, Length[CityData[#, "Population"]]} &, CityData[All]];
which should give me the City name, and the population. However, when I type in the line of code
Reverse[SortBy[numberofpeople, Last]][[1 ;; 20]]
I am getting odd data, which is that the most populous cities have 1 person.
Could somebody debug my code, and answer any future questions I have on the matter?
Much appreciated.
EDIT: How do I plot these points on a World Map? Currently I have
Graphics[{EdgeForm[Black], , CountryData[#, "SchematicPolygon"]} & /@
CountryData[]]
but I have no idea what goes between the points.
|
Leave out Length in Length[CityData[#, "Population"]], and delete cases of Missing data:
numberofpeople = Map[{#, CityData[#, "Population"]} &, CityData[All]];
Reverse[SortBy[DeleteCases[numberofpeople, {_, _Missing}], Last]] ~Take~ 20 // Grid
| 1 |
Leave out Length in length and delete cases of Missing data
|
Notification from timed-out batch query?
|
I have a batch process where it's intermittently failing due to what looks like the batch query timing out (details are here if you're interested).
The error ends up looking like this in my log.
I would like for me or the org's admin to receive an email notification if this occurs. Currently nothing's being sent by SF, I assume because this is a low-level unhandled exception in SF, and the batch is "Aborted" rather than "Failed".
Furthermore, since it's happening on the query, the execute method is never getting run, so I don't think I can do a try/catch in my batch to catch this error and report it.
How can I provide a notification on this sort of timeout?
Thanks!
|
Your best option right now would be to run a scheduled class that periodically checks your batches (AsyncApexJob) for any jobs that hit that certain type of error message, and sends a notification about the failures. You'll want to keep track of which notifications you've already sent out, since you can't simply "delete" the failed operations or affect their status in any way. Any time we see an error with an ORA code, it is an internal bug that somehow floated past the layers of code designed to catch those errors.
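A rough sketch of such a scheduled class (the class name, time window, and notification body are placeholders; the fields queried are from the standard AsyncApexJob object):

```apex
global class BatchFailureMonitor implements Schedulable {
    global void execute(SchedulableContext sc) {
        // Look for batch jobs that died today with an abnormal status.
        List<AsyncApexJob> failed = [
            SELECT Id, ApexClass.Name, Status, ExtendedStatus, CompletedDate
            FROM AsyncApexJob
            WHERE JobType = 'BatchApex'
              AND Status IN ('Aborted', 'Failed')
              AND CompletedDate = TODAY
        ];
        if (!failed.isEmpty()) {
            // Build a Messaging.SingleEmailMessage to the admin here,
            // skipping job Ids you have already notified about.
        }
    }
}
```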
| 0.777778 |
Run a scheduled class that periodically check your batches for any jobs that meet that certain type of error message
|
Convert vector<int> to delimited string
|
As I see here, there is a fast and short way to convert a vector to a string separated with a character in C#:
var result = string.Join(";", data);
var result = string.Join(";", data.Select(x => x.ToString()).ToArray());
I want to know: is there a similar way to do this in C++?
|
#include <sstream>
#include <string>
#include <vector>
#include <iterator>
#include <iostream>
int main()
{
std::vector<int> data = {42, 1, 2, 3, 4, 5};
std::ostringstream oss;
std::copy(data.begin(), data.end(), std::ostream_iterator<int>(oss, ";"));
std::string result( oss.str() );
std::cout << result << "\n";
}
N.B. In C++11, you can use the more general form
using std::begin;
using std::end;
std::copy(begin(data), end(data), std::ostream_iterator<int>(oss, ";"));
Where the using-declarations are not required if ADL can be used (like in the example above).
Also possible, but maybe a bit less efficient:
std::string s;
for(auto const& e : v) s += std::to_string(e) + ";";
which can be written via std::accumulate in <algorithm> as:
std::string s = std::accumulate(begin(v), end(v), std::string{},
[](std::string r, int p){ return std::move(r) + std::to_string(p) + ";"; });
(IIRC there was some method to eliminate the copying, maybe by taking the lambda-parameter by reference std::string& r.)
A version w/o the trailing semi-colon (thanks to Dietmar Kühl):
std::vector<int> data = {42, 1, 2, 3, 4, 5};
std::ostringstream out;
if (!data.empty())
{
    std::copy(data.begin(), data.end() - 1, std::ostream_iterator<int>(out, ";"));
    out << data.back();
}
std::string result( out.str() );
std::cout << result << "\n";
| 0.888889 |
In C++11, you can use the more general form using std::begin .
|
Server 2008 DNS forwarders
|
I am trying to add a static DNS forward under DNS in Windows Server 2008 R2 however I can't figure it out.
Under Server 2003 you simply add an DNS domain name with it's forwarder IP list.
However, under Server 2008 you only seem to be able to add IP addresses. The IPs I enter will not resolve/validate, plus it doesn't appear to allow you to enter a matching DNS domain name.
Can someone please help?
|
You are trying to set up a "conditional forwarder" - a feature which has been moved to a different place in the MMC snapin GUI. It has an own node within the DNS server tree at the same level as the forward and reverse lookup zones now:
(Screenshot from http://alicain.blogspot.de/2008/09/windows-2008-conditional-forwarding.html)
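The same forwarder can also be created from the command line with dnscmd; the zone name and forwarder address below are placeholders:

```bat
rem Conditional forwarder for example.com pointing at 10.0.0.53
dnscmd /ZoneAdd example.com /Forwarder 10.0.0.53
```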
| 1 |
"conditional forwarder" in MMC snapin GUI
|
why wavelet is more suitable for image compression compared to DCT?
|
Everyone knows that JPEG2000 can obtain much better compression result than JPEG. The fundamental transform behind JPEG2000 is wavelet while the basic transform in JPEG is DCT. So my question is why wavelet is more suitable for image compression compared to DCT. Thanks.
|
Both JPEG and JPEG 2000 use the change of basis compression type.
Namely, we transform the data into a different representation assuming in this representation the number of parameters needed to describe to data is lower.
Or to the least, most of the information is gathered within few parameters.
Now, if you look at the energy level of the DCT coefficients of real world images you'd see most of the energy is limited to the very few coefficients.
This is what JPG does, keeps only the few dominant coefficients and throws the rest.
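That energy compaction is easy to see numerically. Here is a small sketch (pure NumPy, with an orthonormal DCT-II built by hand; the smooth test signal is just a stand-in for real image content): for a smooth 64-sample signal, a handful of DCT coefficients already carry almost all of the energy.

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II basis as an n x n matrix (rows = frequencies)."""
    k = np.arange(n)
    M = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    M[0] /= np.sqrt(2)
    return M * np.sqrt(2.0 / n)

def energy_fraction(signal, num_coeffs):
    """Fraction of total energy in the largest num_coeffs DCT coefficients."""
    c = dct_matrix(len(signal)) @ np.asarray(signal, dtype=float)
    c2 = np.sort(c ** 2)[::-1]
    return c2[:num_coeffs].sum() / c2.sum()

x = np.linspace(0.0, 1.0, 64)
smooth = np.sin(2 * np.pi * x) + 0.5 * x  # smooth "real world" signal
print(energy_fraction(smooth, 8))  # close to 1: a few coefficients suffice
```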
How efficient is DCT?
It depends on how few coefficients are needed to describe the image with acceptable quality.
It turns out that for Wavelets the situation is better.
Namely, on real-world images fewer coefficients are needed to describe the image with the same perceived quality.
This property is called the ability to decorrelate the data / being energy-dense, etc.
In the end, it all comes down to how many fewer parameters are needed to describe the same data.
P.S.
Using SVD / KLT (they are equivalent in this sense) would be even more efficient in the sense of how few coefficients are needed for the same quality (in the $ {\ell}_{2} $ sense).
The problem is those bases are adaptive to the data and hence if you use them you have to include them to be able to decompress the data.
A nice question to ask is: which fixed basis, in general, approximates the SVD better?
It turns out that Wavelets are better at doing so.
| 0.888889 |
How efficient is DCT?
|
"Where Amazing Happens", is this poster slogan a grammatically correct expression?
|
China's Guangzhou Evergrande FC set up a mouthwatering FIFA Club World Cup semi-final against European champions Bayern Munich after seeing off Egypt's Al Ahly 2-0. And Guangzhou Evergrande FC challenges Bayern Munich on their newly released posters with this slogan: Where Amazing Happens.
I am wondering if this slogan is a correct expression. "Amazing" is without doubt an adjective, so how can it be used alone as a subject? I have thought this over, and now I guess it may be acceptable to use simplified, less grammatical expressions for news titles and slogans. Is that right?
|
Ignoring the fact that it's not a full sentence, I think it should be grammatically correct because you could say
[This is] Where evil happens
so it's just a different adjective, but using amazing there just sounds ugly/wrong to me.
Where amazing things happen
sounds much better to me.
| 1 |
Where evil happens sounds ugly/wrong to me
|
Why can't I manage to get post data from html form send to express.js?
|
I have this server.js code; I am really new to this.
I am trying to get what the HTML form sends to the node.js server (localhost).
server.js
//initialing the website
var express = require('express');
var app = express();
app.get('/',function(req, res){//get,put,post,delete
res.sendFile('C:\\work\\translator\\startbootstrap-freelancer-1.0.3\\index.html');
app.use("/css", express.static(__dirname + '/startbootstrap-freelancer-1.0.3/css'));
app.use("/img", express.static(__dirname + '/startbootstrap-freelancer-1.0.3/img'));
app.use("/font-awesome", express.static(__dirname + '/startbootstrap-freelancer-1.0.3/font-awesome'));
app.use("/js", express.static(__dirname + '/startbootstrap-freelancer-1.0.3/js'));
app.use("/less", express.static(__dirname + '/startbootstrap-freelancer-1.0.3/less'));
app.use("/mail", express.static(__dirname + '/startbootstrap-freelancer-1.0.3/mail'));
app.use("/fonts", express.static(__dirname + '/startbootstrap-freelancer-1.0.3/fonts'));
});
var server = app.listen(3000, function () {
var host = server.address().address;
var port = server.address().port;
console.log('Example app listening at http://%s:%s', host, port);
});
//sending mail with gmail
app.post('/contact', function (req, res) {
var nodemailer = require('nodemailer');
var transporter = nodemailer.createTransport({
service: 'gmail',
auth: {
user: 'my_email',
pass: 'my_password'
}
});
transporter.sendMail({
from: '[email protected]',
to: '[email protected]',
subject: 'contact',
text: 'hello world'
});
});
The Html form
<form name="sentMessage" action="/contact" method="post" id="contact" novalidate>
<input type="text" class="form-control" name="name">
<input type="email" class="form-control" name="email">
<input type="tel" class="form-control" name="phone">
<textarea rows="5" class="form-control" name="message"></textarea>
<button type="submit" class="btn btn-success btn-lg">send</button>
I tried req.params, req.body.email and all kinds of things to get the email in the email field, but it says "email undefined"; it seems that somehow server.js doesn't get the values/parameters or something...
|
Express is pretty minimal out of the box. You need to add body-parser middleware to handle forms, like this:
https://github.com/expressjs/body-parser
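A minimal sketch of the wiring, assuming the body-parser package from that link has been installed (npm install body-parser):

```javascript
var express = require('express');
var bodyParser = require('body-parser');

var app = express();
// Parse application/x-www-form-urlencoded bodies -- what an HTML form posts.
app.use(bodyParser.urlencoded({ extended: false }));

app.post('/contact', function (req, res) {
    // req.body is now populated from the form's name attributes.
    console.log(req.body.name, req.body.email, req.body.phone, req.body.message);
    res.send('ok');
});

app.listen(3000);
```

Register the middleware before your route handlers so req.body is populated by the time they run.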
| 1 |
Express is pretty minimal out of the box.
|
What are the disadvantages of "puncture proof" (or puncture resistant) tires?
|
Some people refer to tires that are more resistant to punctures as "puncture proof" tires.
For example: Specialized All Condition Armadillo Tyre
Even though these tires are not really puncture proof, they do greatly reduce the chances of puncturing, so the advantage is obvious.
Apparently, they're not too expensive compared to regular tires. So I was wondering why people would choose not to use these more resistant tires. What are their disadvantages?
|
They are typically heavier than a "normal" tire, so the rotating mass will slow you down.
| 1 |
The rotating mass will slow you down
|
How to remember 10 last read articles with timestamp in session for a user in Codeigniter?
|
I would like to make a PHP if condition code that will check if the last 10 articles or 10 minutes from the article reading by the user have already elapsed.
E.g.
A user open a page with id = 235 (this id value is in the url localhost/article/235 )
and this id value will be saved in session with a current timestamp and maybe his IP address
Then he read another article and the same will happen.
I need to remember the clicked stuff for another ten clicks and then reset that only for the first row. E.g. after the 10th click the id and timestamp will not became 11th row but will replace the 1st row in the list.
The php condition in CodeIgniter will then check these values and will update the article hit counter value in the articles table and column counter like this:
$this->db->where('id', $id);
$this->db->set('counter', 'counter+1', FALSE);
$this->db->update('articles');
But before calling this code I need to make this check from the session?
How to do that?
I think storing e.g. 10 entries in the session with timestamps per user will be enough.
Just don't save the same page in the session twice.
And the condition will check the current timestamp with the saved one and if it is more than e.g. 10 minutes or the user have read/clicked another 10 articles it will allow the update counter php code.
I don't need to have this bulletproof. Just to disable the increment using browser's refresh button.
So, if he wants to increment the counter he will need to wait ten minutes or read another 10 articles ;)
|
I assume that $clicks is an array with up to ten visited articles. The id is used as key and the timestamp as value. $id is the id of the new article.
$clicks = $this->session->userdata('article_clicks');
//default value
$clicks = ($clicks)? $clicks : array();
//could be loaded from config
$maxItemCount = 10;
$timeToLive = 600;
//helpers
$time = time();
$deadline = $time - $timeToLive;
//add if not in list
if(! isset($clicks[$id]) ){
$clicks[$id] = $time;
}
//remove old values
$clicks = array_filter($clicks, function($value) use ($deadline) { return $value >= $deadline; });
//sort newest to oldest
arsort($clicks);
//limit items, oldest will be removed first because we sorted the array
$clicks = array_slice($clicks, 0, $maxItemCount);
//save to session
$this->session->set_userdata('article_clicks', $clicks);
Usage:
//print how much time has passed since the last visit
if (isset($clicks[$id])) {
echo "visited ".($time - $clicks[$id])." seconds ago.";
} else {
echo "first visit";
}
EDIT: you have to use arsort not rsort or the keys will be lost, sorry
| 0.888889 |
$clicks is an array with up to ten visited articles
|
How do I connect to a remote MySQL database through an android device?
|
public static final String URL="jdbc:mysql://10.0.2.2/mydatabase";
public static final String USER="root";
public static final String PASSWORD="apple";
public static final String Driver_Class="com.mysql.jdbc.Driver";
The above URL works fine when I connect to it locally... My question is: what change do I make in order to have remote access from a device? (It's to do with "http", but how do I write the URL?)
|
Make sure root is configured to be able to login remotely. Quick tutorial
Also make sure 3306 tcp port for MySQL server is open.
I've used this tutorial in past, give it a try.
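On the MySQL side, the grant from that tutorial looks roughly like this on a 5.x server (the '%' host wildcard, user, database name, and password here are placeholders):

```sql
-- Run on the MySQL server; '%' accepts connections from any host.
GRANT ALL PRIVILEGES ON mydatabase.* TO 'root'@'%' IDENTIFIED BY 'your_password';
FLUSH PRIVILEGES;
```

The JDBC URL then points at the server's public IP or hostname instead of the emulator alias, e.g. jdbc:mysql://your-server-ip:3306/mydatabase (placeholder host).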
| 1 |
Make sure root is configured to be able to login remotely
|
Why are straight lines sloping in render?
|
Why are the walls in my architectural renders sloping and how do I fix this?
|
The lens shift controls on the camera exist in Blender precisely to solve this problem. Photographers use tilt-shift lenses to do the same thing.
Rather than rotating the camera upwards to the building, you can keep it pointing straight ahead, and increase the camera Y shift. This will preserve straight vertical lines.
A good visualization of that is here:
http://digitized-life.blogspot.com.es/2013/02/blender-26.html
| 0.777778 |
Blender tilt-shift lenses to keep camera pointing straight ahead
|
So gravity turns things round
|
It makes sense, since gravity tends to push the surface of a body towards its center. Unless I'm mistaken, everything with mass has its own gravity; every atom and, for instance, our own bodies should also have their own gravity. The question is: how strong is our own gravitational pull? I know it must be extremely weak, but is there actually anything at all that gets attracted to us, like maybe, bacteria or molecules?
And finally (this will sound ridiculous, but I'd really want to get an answer or at least a way of calculating it myself): What size would a human body have to reach in order for it to collapse into a sphere?
|
It makes sense, since gravity tends to push the surface of a body towards its center
Yes, gravity tends to pull towards the center of mass. I think you're comparing human body to celestial bodies - for instance, stars. Stars collapse within themselves after their lifetime because, their internal (thermal) pressure is not sufficient enough to sustain the gravitational collapse.
How strong is our own gravitational pull? I know it must be extremely weak, but is there actually anything at all that gets attracted to us, like, maybe, bacteria or molecules?
Yes, it's extremely weak. The tiny value of the constant $G$ already makes the force very weak, and when you mention bacteria or molecules, you are reducing the effect further because of their tiny mass. Gravity doesn't depend upon size; only MASS matters here. As nitrogen is mostly what surrounds us, you can calculate the force between you and a single nitrogen molecule at a distance of an angstrom, which is barely about $F\simeq 10^{-15}\,\text{N}$.
What size would a human body have to reach in order for it to collapse into a sphere?
Wiki has a nice quote on gravitational collapse:
Because gravity is comparatively weak compared to other fundamental forces, gravitational collapse is usually associated with very massive bodies or collections of bodies, such as stars (including collapsed stars such as supernovae, neutron stars and black holes) and massive collections of stars such as globular clusters and galaxies.
Human body is very small compared to any celestial object (just like the size of an atom in a large city). So, you'd probably need the size of some bigger (massive) celestial object to have a significant effect on the gravitational collapse.
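The quoted order of magnitude is easy to check with Newton's law of gravitation. A rough point-mass sketch (the 70 kg body mass is an assumption, and treating a body as a point mass at 1 Å is a large idealization, so the exact power of ten shifts with the assumptions):

```python
G = 6.674e-11            # gravitational constant, m^3 kg^-1 s^-2
m_body = 70.0            # kg, assumed mass of a person
m_n2 = 28 * 1.66054e-27  # kg, one N2 molecule (28 u)
r = 1e-10                # m, one angstrom

F = G * m_body * m_n2 / r**2  # Newton's law: F = G m1 m2 / r^2
print(F)                      # vanishingly small, around 1e-14 N
```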
| 0.777778 |
How strong is our own gravitational pull?
|
Prove that any real function defined on a real open interval has at most countably many simple discontinuities.
|
This is problem 17 in baby Rudin's chapter on continuity. He has a hint to use triplets of rationals that bound each simple discontinuity on the left, right, and in between the values of the limits from the left and right. It seems like this can be weakened to just rationals to the left and right.
Simple discontinuities are those in which the limits from the left and right exist, so there must be intervals to the left and right of a simple discontinuity on which no other simple discontinuity can exist. More precisely, let $c$ and $c'$ be simple discontinuities for $f$ on $(a,b)$ and consider the limit, $l$, of $f$ approaching $c$ from the left:
$\forall \epsilon >0, \exists \delta>0 : c-x<\delta \Rightarrow |l-f(x)|<\epsilon$
but $\exists \epsilon '>0\ \forall \delta '>0 : |c'-x|<\delta ' \Rightarrow |f(c')-f(x)|>\epsilon '$
$\therefore\ \epsilon = \epsilon ',\ \delta ' = \delta \rightarrow\leftarrow$
Therefore, you can make an injection from the set of simple discontinuities to a subset of rationals by associating each simple discontinuity with one rational in the aforementioned "free" interval to its left; from there compose with the map from rationals to integers to show countable. Is this argument correct?
|
so there must be intervals to the left and right of a simple discontinuity on which no other simple discontinuity can exist.
That is incorrect. Consider an enumeration $r_n$ of the rationals, and let
$$f(x) = \sum_{\substack{n \in \mathbb{N}\\r_n < x}} 2^{-n}.$$
Then $f$ is a strictly monotonic function that has a jump discontinuity in every rational number.
However, as the points of discontinuity approach any fixed $y \in \mathbb{R}$, the jumps tend to $0$, in fact, the sum of all jumps in the discontinuities $\neq y$ contained in a neighbourhood of $y$ tends to $0$ when the neighbourhood shrinks to a point.
| 0.777778 |
So there must be intervals to the left and right of a simple discontinuity.
|
With the hook entry_submission_end, is it possible to know if the entry is inserted or updated?
|
I'm working on an extension that indexes the content of the entry when publish form is submitted, but I need to know if the entry is a new one or an updated one, and in both case, I need the entry_id (the new one juste created or the old one).
Do you know a way to do that? Should I use an other hook?
--Solution--
In fact, entry_submission_end comes with 3 parameters: entry_id, meta and data. If the entry is a new one, entry_id contains the new entry_id but $data['entry_id'] is 0, so you can put a condition on that to check whether you are editing or creating an entry.
|
If anyone is looking for an answer here still then there is a simple test.
Along with the post data will be a value for entry_id, this is related to the originating entry id. So if this value is 0 then it is a new entry, if this value is > 0 then it is an update to an existing entry.
eg,
if ((int) $_REQUEST['entry_id'] === 0) { /* new post */ } else { /* existing post */ }
(Request values arrive as strings, so cast to int first — a strict === 0 against the raw string would always be false.)
| 0.777778 |
if entry_id value is 0 then it is a new entry
|
Should I tell other interviewers where else I've interviewed?
|
I am currently travelling for faculty interviews. Some professors and other interviewers have asked me where else I have interviewed. Should I tell interviewers where else I've interviewed?
Intuitively I would like to give them less information, but I also don't want to appear guarded and defensive as a person, either.
|
It may depend on your discipline, but I've noticed that faculty candidates for Computer Science that interview at my university tend to list their job talks on their CVs as invited talks.
Therefore, being cagey about where you're interviewing, or trying to obfuscate it, would be counterproductive. You're probably already leaving a fairly obvious trail from your job search, people in your field probably know and talk with others in your field, and it wouldn't be difficult information for anyone to find out, so being up front and honest about it seems like the best path.
| 1 |
What's the best path to obfuscate a job interview?
|
How to extend a field in the newsletter in Magento
|
I need to add some checkboxes to the Magento newsletter.
In the newsletter I need to add 3 checkboxes:
Men, Women, Kids, and these will be pre-selected.
Without changing the core files, I need to add 1 field and also save data in that field.
I need to know how I can do this locally,
i.e. what we need and how.
|
Your question is pretty vague but I'll give it a try.
You can change the app/design/frontend/base/default/template/newsletter/subscribe.phtml by copying it to your theme directory and adding the required checkboxes.
To actually process the data you would need to overwrite the controller. Inchoo (who else) has a great article about this. From a custom extension it would look something like this
config.xml
<config>
<frontend>
<routers>
<tag>
<args>
<modules>
<[yourmodule]_newsletter before="Mage_Newsletter">[Yourmodule]_Newsletter</[yourmodule]_newsletter>
</modules>
</args>
</tag>
</routers>
</frontend>
</config>
SubscriberController.php
require_once(Mage::getModuleDir('controllers','Mage_Newsletter').DS.'SubscriberController.php');
class [Yourmodule]_Newsletter_SubscriberController extends Mage_Newsletter_SubscriberController
{
/**
* New subscription action
*/
public function newAction()
{
// edit the code from the original Mage_Newsletter_SubscriberController::newAction
}
}
From there on out what you want to do with the data is up to you, if you want to store it in the Magento newsletter table you will need to include an install script in your extension that modifies the existing table.
For that, again, we have an Inchoo article about installer scripts.
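For the installer-script part, a rough sketch of what such a script might look like (the module name, setup resource, version, and column definition here are illustrative assumptions; the Inchoo article covers the details):

```php
<?php
// app/code/local/Yourmodule/Newsletter/sql/yourmodule_newsletter_setup/install-0.1.0.php
$installer = $this;
$installer->startSetup();

// Add one flag column per checkbox to the core newsletter table
$installer->getConnection()->addColumn(
    $installer->getTable('newsletter/subscriber'),
    'subscriber_men',
    "TINYINT(1) NOT NULL DEFAULT 1 COMMENT 'Men checkbox'"
);

$installer->endSetup();
```

The setup resource itself (the `yourmodule_newsletter_setup` node under `<resources>`) still has to be declared in your module's config.xml for Magento to run the script.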
| 0.666667 |
Inchoo: How to change the controller?
|
How to add blogger's RSS feed to facebook profile/page
|
I have a blog and I want that whatever is posted in the blog should get directly posted on my facebook profile/page. Basically I wish to post my blog's RSS feed to my facebook profile/page. I used RSS Graffiti to accomplish the task but somehow it seems like that isn't working.
|
At this point, Facebook does not let you add an RSS feed to automatically do that.
I would recommend using a tool like IFTTT which does it very well!
| 1 |
Facebook does not let you add an RSS feed to automatically do that
|
Test-Path returns different results in 64-bit and 32-bit PowerShell
|
I am developing a script which should run under both 64-bit and 32-bit PowerShell. Unfortunately it seems that Test-Path returns different results in the 64-bit and 32-bit environments. Both sessions are running under the same user, and this user has full access to the specific registry key.
64Bit Powershell
>test-path HKLM:\SOFTWARE\Citrix\ProvisioningServices
True
32Bit Powershell(x86)
>test-path HKLM:\SOFTWARE\Citrix\ProvisioningServices
False
Any Idea?
|
32-bit programs default to the WOW64 node in the registry, but see it as "normal". If the key does not exist in WOW64 then it is correctly returning false.
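As a workaround sketch (my suggestion, not part of the original answer): from a 32-bit process you can still open the 64-bit registry view explicitly through .NET, which behaves like a view-explicit Test-Path:

```powershell
# Open HKLM in the 64-bit view, even from a 32-bit PowerShell session
# (OpenBaseKey requires .NET 4 / PowerShell 3.0 or later)
$base = [Microsoft.Win32.RegistryKey]::OpenBaseKey(
    [Microsoft.Win32.RegistryHive]::LocalMachine,
    [Microsoft.Win32.RegistryView]::Registry64)
$key  = $base.OpenSubKey('SOFTWARE\Citrix\ProvisioningServices')
$key -ne $null    # $true if the key exists in the 64-bit hive
```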
| 1 |
If the key does not exist in WOW64 then it is correctly returning false.
|
How do I automate the handling of a problem (no network device found) in Ubuntu 10.04 w/ preseed?
|
I have a preseed file that is doing some automation for an installation of Ubuntu 10.04. At the point where the network hardware is auto-detected, however, it fails to find hardware and displays a message, "No network interfaces detected". To make a long story short, I don't care if it can detect my network interface. How do I do one of the following:
Skip that step altogether.
Handle the error page automagically.
PS. I found somewhere where it suggested this:
netcfg/no_interfaces seen true
That didn't work.
Thanks
|
If you want to skip the network interface step, just comment out the lines containing the word "netcfg".
Warning: this will not configure any network interface (as you are skipping it).
Check with this and let me know the result.
Regards,
S.Ragavendra Ganesh
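If editing the preseed file is preferable to deleting lines, there is also a preseed knob for this (a hedged suggestion: `netcfg/enable` is documented for newer debian-installer versions, so verify that the Ubuntu 10.04 installer honors it):

```
# Disable network configuration entirely instead of auto-detecting hardware
d-i netcfg/enable boolean false
```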
| 1 |
Comment out the lines with word "netcfg"
|
What are the benefits of owning a physical book?
|
I have seen this question about updates of the D&D 4th Edition books, and it got me thinking.
Since I got my Kindle I have not read a single paper novel; for novels, digital copies have fewer drawbacks than they do for RPG rulebooks.
Dead-tree types have some benefits like looking good on a bookshelf, but any ebook reader weighs less with 100 novels than the usual hard-cover book.
If you want to look for the damage of Ares Alpha, even with a half-decent tablet it takes less than 2 seconds.
Digital copies do not get worn, they never get unwanted earmarks, but you can bookmark them.
Rulebooks do get updates, and unless you are willing to take a pen to your book, your hard copies will never contain them. The pdfs can be edited and resent to the buyers.
Even better is the WotC approach with the DDI, you can look up any monster or item or (almost any) rule, in the most recent form, for 3 years at the cost of seven books.
I think this is the way to go, even considering the horribly slow character builder, although I must admit that good illustrations can help build the atmosphere.
So what am I missing? Why are people buying RPG rulebooks in paper format? Why are books even published? I do not need to know whether feats are supposed to be on the right page and skills on the left; I just want a list of them, filterable any way I want.
Is this just a necessary part of earning money? I understand that pdfs are copied illegally, but the Compendium is not.
|
As others have said - there's definitely something to be said about tactile navigation.
While digital formats (assuming they're text-parseable) can be searched, if you don't know the specific spelling or the specific term, they can be difficult to parse by hand. Quadruply so if the publisher did not provide bookmarks to the different chapters. (Which is really annoying, imo.)
By contrast, with a physical copy, you can pick up a general sense of where the desired content is physically located fairly quickly. For example: Combat Rules are towards the middle. Spell lists are towards the end. Character classes are near the beginning. The more familiar one is with the book, the quicker this is to process (and more accurate one tends to get).
Additionally - I stare at a digital monitor all day at work; then for most of the evening. So if I actually have to read something, I'll opt for ink on paper, just to save my eyes that itty little bit. :)
(Kindles are awesome, but they can be slow; plus, some books have art work and/or tables that I am unsure would translate over very well.)
| 1 |
Digital formats can be searched, but can be difficult to parse by hand
|
Permissions changing on few files under /etc/
|
It appears that new permissions on /etc/issue and /etc/motd are reverting back to the original even if we change them. This is on systems running RHEL 5 and RHEL 6. Is there any rc script which controls the permissions on /etc files?
|
Debian
If you're using a Debian based distro then this is likely what's causing your issue.
motd - Debian Wiki
excerpt
/etc/motd in Debian
Debian has a peculiar way of handling /etc/motd. The motd is updated at every reboot, in a boot script (/etc/init.d/bootmisc.sh in lenny and below, /etc/init.d/bootlogs in squeeze and above), which basically runs the following:
uname -snrvm > /var/run/motd
[ -f /etc/motd.tail ] && cat /etc/motd.tail >> /var/run/motd
Since /etc/motd is a symlink to /var/run/motd in Debian, this works.
How to update your /etc/motd
Since /etc/motd basically gets overwritten at every reboot, you need to instead update /etc/motd.tail and either reboot (!!) or also edit /etc/motd.tail or run the above commands. There is a bug report (437176) to provide an easier command to allow you to update only /etc/motd.tail.
Red Hat based distros (Fedora/CentOS/RHEL)
For these types of distros I'm not aware of any automated system that would revert these files back to known versions as part of a reboot. These files are often times statically included on these systems in RPM packages such as these:
CentOS 5.x
$ rpm -qf /etc/issue /etc/motd
centos-release-5-9.el5.centos.1
setup-2.5.58-9.el5
CentOS 6.x
$ rpm -qf /etc/issue /etc/motd
centos-release-6-5.el6.centos.11.2.x86_64
setup-2.8.14-20.el6_4.1.noarch
Fedora 19
$ rpm -qf /etc/issue /etc/motd
fedora-release-19-8.noarch
setup-2.8.71-1.fc19.noarch
Furthermore, a simple search for /etc/issue or /etc/motd within /etc turns up no such mechanism.
$ sudo grep -r /etc/issue /etc/*
| 0.888889 |
How to update /etc/motd
|
Why does SnCl2 occur despite the octet rule?
|
Shouldn't reaching an octet be any atom's "goal"?
However, I've recently learned about cases that are either expanding octets, or have lesser than "enough" electrons for an octet abiding. e.g.:
S in Sulfur hexafluoride (Expanding octet with a $\text{3d}$)
Source
B in Boron trifluoride (It's "hexet", instead of octet)
Source
However, both $\ce{SnCl2}$ and $\ce{SnCl4}$ are existent. The latter is explainable with octet, but not the former. Surprisingly, $\ce{SnCl2}$ is more stable!
How?
Why is the phenomenon happening?
Dave pointed out that the octet nonabidingness is happening in the vapor phase. So I redirect the question to ask about the vapor phase; since that's what I'm looking for.
|
There is another rather more simple answer to the question:
$\ce{SnCl2}$ can be thought of as a salt to a much greater extent than $\ce{SnCl4}$. In fact, tin is so often thought of as a metal, that it takes some people time to understandt that $\ce{SnCl4}$ is in fact not a salt but much more a molecule.
Then, when looking at $\ce{Sn(II)}$, it becomes clear that the unaccompanied ion would have the electron configuration $\ce{[Kr] 4d^{10} 5s^2}$ — doesn’t that look stable to you? Add two chloride ions that took the electrons originally in the $\ce{5p}$-orbitals, and voilà.
I know that this answer simplifies a lot of things that maybe shouldn’t be simplified in all cases. But to the best of my knowledge it explains the compound and its stability well, and should therefore be considered. Once the question turns into ‘why is $\ce{Sn(II)}$ ionic, while $\ce{Sn(IV)}$ is better described as molecular?’, it of course becomes invalid.
| 0.888889 |
Why is tin ionic?
|
Very simple particle filters algorithm (sequential monte carlo method) implementation
|
I'm interested in the simple particle filter algorithm (sequential Monte Carlo method) given here. It seems very simple but I have no idea how to do it practically. Any idea on how to implement it (just to better understand how it works)?
Edit:
This is a great simple example that explains how it works. I've tried to implement it in C++: see my code here, but I'm not sure if I did it the right way. Can you please check whether I understood it well, or whether there are some misunderstandings in my code?
|
@user995434 I do not program in C++ and don't think I would understand your code well enough to indicate how well you understood my point. Maybe you can explain what you are doing and that way I can comment. All that I was saying is that you know the value of the process at time t-1 and then conditioned on that value you generate the value for the process at time t by drawing a random value from the conditional distribution. You must specify an initial value X0 but after that each X is sampled from its distribution conditional on the value of the previous X.
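The procedure described above (condition on the previous value, sample the next, weight by the observation) can be sketched as a minimal bootstrap particle filter in Python; the 1-D random-walk model, parameter values, and multinomial resampling below are illustrative assumptions, not taken from the asker's C++ code:

```python
import random
import math

def particle_filter(observations, n_particles=500, process_std=1.0, obs_std=1.0, seed=0):
    """Minimal bootstrap particle filter for a 1-D random-walk model.

    State model:       x_t = x_{t-1} + N(0, process_std)
    Observation model: y_t = x_t + N(0, obs_std)
    Returns the filtered mean estimate at each time step.
    """
    rng = random.Random(seed)
    particles = [0.0] * n_particles          # X0 = 0 for every particle
    estimates = []
    for y in observations:
        # 1. Propagate: draw x_t from p(x_t | x_{t-1}) for each particle
        particles = [x + rng.gauss(0.0, process_std) for x in particles]
        # 2. Weight: likelihood of the observation under each particle
        weights = [math.exp(-0.5 * ((y - x) / obs_std) ** 2) for x in particles]
        total = sum(weights) or 1e-300       # guard against all-zero weights
        weights = [w / total for w in weights]
        # 3. Estimate: weighted mean of the particles
        estimates.append(sum(w * x for w, x in zip(weights, particles)))
        # 4. Resample: multinomial resampling proportional to the weights
        particles = rng.choices(particles, weights=weights, k=n_particles)
    return estimates

# Track a noiseless ramp observed directly: the estimates should follow it.
obs = [float(t) for t in range(1, 11)]
est = particle_filter(obs)
print(est[-1])  # close to 10
```

Each pass through the loop is exactly the "sample X_t conditional on X_{t-1}" step from the answer, with the weighting and resampling steps added so the particle cloud tracks the observations.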
| 0.888889 |
How well you understood C++ code?
|
Words for meat differ from the words for the corresponding animal
|
In English we have:
"beef" for "cow", "cattle"
"veal" for "calf"
"pork" for "pig"
"mutton" for "sheep"
I'm not aware of this separation for "fish", "goat" or "chicken" (Spanish has "pollo" and "gallina") and other poultry. Are these words used simply to distinguish the meat from the animal (i.e. to avoid saying "cow meat") or is there a psychological separation to avoid the association? I doubt the latter since these words developed when people were likely less squeamish than some are today.
Why are there not meat words for some animals?
What are some others I didn't list?
|
I believe that many of these come from the use of French in England amongst the aristocracy after the Norman conquest. Thus 'pork' (porc) is the posh word, 'pig' is the vulgar peasant (or English) word. I don't have any reference for this, but I heard it somewhere in my travels. Correct me if I'm wrong, but it does sound like a convincing story.
| 1 |
Correct me if I'm wrong, but it does sound like a convincing story
|
Office 365 SharePoint v1.0 API Authorization Issue
|
I have a client app that uses the Office 365 SharePoint preview API. Recently (as of October 2014), Microsoft published version 1.0 of that API. The authentication steps used with the preview API no longer work with version 1.0.
To demonstrate the problem I have created a short node.js script. The script does the following:
Authorizes by launching a browser. Gives a redirect URL to localhost and launches a server to catch the redirect post-authorization
POST to https://login.windows.net/common/oauth2/token to get an access token
GET to the Office 365 discovery service to get the SharePoint API endpoint
POST to https://login.windows.net/common/oauth2/token with a refresh token to get a new access token
GET to the SharePoint API endpoint to get a list of files
The script can be used with the preview API and version 1.0 of the API. It is able to get a JSON list of files from the preview API, but fails with the following for version 1.0 (on the last call):
{
"error": {
"code": "-2147024891, System.UnauthorizedAccessException",
"message": "Access denied. You do not have permission to perform this action or access this resource."
}
}
Does anybody see anything wrong with the sequence of calls?
Please take a look at the sample script for more details.
|
Thanks for getting in touch, and we appreciate the feedback. A fix is being rolled out to address non-admins' access to Files/Folders through the Files API. If you are still in the development/exploration phase, you could consider the following measures to unblock:
a. Temporarily add the user as the admin on the my site host web
b. Temporarily get AllSites permissions for the app
I'll update this thread once the issue is patched in Production, which should happen very soon.
| 0.888889 |
Fix to unblock non-admin's access to Files/Folders
|
Right way to translate view Titles?
|
As far as I see the view titles are not translatable. As a quick fix I just created a new views-view.tpl.php file and changed the line 32 from:
<?php print $title; ?>
to
<?php print t($title); ?>
What do you think? Is this a right method?
|
I think the right, standard, and easy way to translate view titles is to install the Internationalization Views module, which uses entity translation, a newer type of translation in Drupal. I verified that it works well for this purpose. I hope it will also be possible to translate view paths in the future.
| 0.666667 |
Translate view title in Drupal
|
Weren't there originally going to be nine Star Wars films?
|
I seem to remember reading that there were going to be three trilogies originally in Star Wars, i.e. they would add episodes 7 - 9.
Was that ever the case? What happened to that plan?
Update: looks like that original plan might be back after all!
|
Depending on his plans at the time and when you asked him, George Lucas has variously said there would be one film, three films, six films, nine films or 12 films.
A trilogy of trilogies
It was widely reported in 1980 that there would be nine films: a trilogy of trilogies.
Al Walentis wrote in the May 25, 1980 Reading Eagle:
Lucas originally envisioned "Star Wars" as a single feature, but his 200-page screenplay proved too unwieldy. He then began tinkering with his story line, cutting it apart, sorting our all the various subplots. The script finally was pieced together as three distinct trilogies.
"There are essentially nine films," Lucas said. "The first trilogy is about the young Ben Kenobi and the early life of Luke Skywalker's father when Luke was a young boy. The first trilogy takes place some 20 years before the second. About a year elapses between each story of the first trilogy. The whole adventure - encompassing the three trilogies - spans about 40 years."
Irvin Kershner, director of The Empire Strikes Back (1980), was also on-script with the nine-film plan. Tom Buckley of the N.Y. Times wrote on May 25, 1980's The Spokesman-Review:
"I told George I didn't want to do a sequel," Kershner said."He said, 'I don't blame you. Neither would I, but this isn't. It's the second act of the second trilogy of nine films I plan to make in this theme. I want it to be better than mine.' ..."
On the same page of the same paper, Richard Freeman of Newhouse News wrote:
Three years ago, Hamill signed up for three "Star Wars" films, of which "The Empire Strikes Back" is the second. The third - still in the planning stage - will be called "The revenge of the Jedi," and Hamill worries that these titles will suggest the various Pink Panther sequels to audiences.
If things work out, these three movies will eventually constitute only a third of the projected nine-movie "Star Wars" saga, but Hamill doesn't plan to be in any of the others.
On the following page of the same paper, Aljean Harmetz of the New York Times wrote:
The "Star Wars" George Lucas has created in his mind will take nine movies to tell. "Star Wars" is actually "Star Wars, Episode IV: A New Hope," the first movie of the second trilogy. "The Empire Strikes Back" is "Star Wars, Episode V," while "The Return of the Jedi' is episode VI. The first trilogy deals with the young Darth Vader and the young Ben Kenobi. At the end of the first trilogy, Luke Skywalker is four years old. Only the robots - R2D2 and C-3PO - will be characters in all the movies.
He chose to start in the middle because the first trilogy is, he says, "more plot-oriented, more soap-operaish." He adds that the "central core problem" of "Star Wars" hasn't even been stated yet. Although he originally saw Star Wars as six movies, his "dream" was only for "Star Wars" to do well enough so that he could finish the three movies in the second trilogy. "If people had laughed 'Star Wars' off the screen, I'd have been less surprised than I was at what did happen," he says. "Until the day it opened, I felt it would do $16 million and, if I pushed hard, I could make 'Empire.'"
An interview with Harrison Ford in Lakeland Ledger of July 4, 1980:
Like Mark Hamill (Luke Skywalker) and Carrie Fisher (Princess Leia), Ford has already signed up for the third "Star Wars" film, which is tentatively entitled "The Revenge of the Jedi." This will conclude the middle trilogy of the nine-part series, and Ford does not know whether it signals the end of Han Solo. It is up to George Lucas, the creator of the saga.
"He has an idea of doing one (involving what happened to Solo) about 15 years from now, when I'll be 53. That's something I'd like to do."
The Phoenix of June 21, 1980 also mentions the three trilogies spanning 40 years, and:
Thereafter, Lucas will go back to the first trilogy, starting with a story so far back that it does not include Darth Vader.
If interest sustains, Star Wars could be in production well into the 1990s. David Prowse figures he'll get killed off (by Luke Skywalker?) in the seventh or eighth story.
At least one trilogy
And it seems at least one trilogy was planned from the start. One week after Star Wars came out, The Leader-Post of June 3, 1977 says:
[Mark] Hamill said he believes Lucas plans a Star Wars trilogy because all the actors are under contract for two more films.
Reporting on the film's record-breaking success, an AP story in May 26, 1978's Schenectady Gazette says:
Lucas had originally conceived of "Star Wars" as a trilogy. Work on the first sequel is well under way...
12 films
An article by Charles Champlin of the Los Angeles Times, printed in The Tuscaloosa News of June 15, 1979, quotes The Empire Strikes Back producer Gary Kurtz:
Kurtz says that there are on paper the makings of a grand design of 12 films, including three stories that would historically precede the Skywalker takes (prequels, as they are dreadfully know these days).
A May 5, 1980 article quotes David Prowse as saying "They plan to do 12 movies."
13 movies
Finally, this 1979 promo advert for Kenner Star Wars claims:
"That's right, Star Wars is forever. George Lucas and 20th Century Fox have plans for twelve more block-busting chapters to the Star Wars story.
So does that make 13?
15 and more
Lucas is selling Lucasfilm and the Star Wars rights to the Walt Disney Company, which according to CNN (October 31, 2012) will make at least three sequel films, and then another film every two years (which may or may not be sequels), and also possibly a television series:
"It's now time for me to pass Star Wars on to a new generation of filmmakers," George Lucas said in a written statement. "I've always believed that Star Wars could live beyond me, and I thought it was important to set up the transition during my lifetime."
Lucas said he will work as a creative consultant on Star Wars Episode 7, the first of a planned new trilogy of live-action Star Wars movies. It is targeted for release in 2015, Disney said.
Disney hopes to essentially relaunch the Star Wars film franchise, which had its last installment in 2005 with Revenge of the Sith. Following the three planned sequels, the company envisions releasing even more Star Wars movies at a rate of a new film every two to three years.
Future movies may not be sequels but movies that focus on fringe characters. Disney also believes there is potential for a television series.
| 1 |
"Star Wars, Episode IV: A New Hope," "The Return of the Jedi" is episode VI .
|
Is SourceSafe really safe?
|
Having spent all morning trying to check something in - I now realise I've lost a couple of days worth of work.
Its happened before - and is apparently common occurrence with SourceSafe. Can SourceSafe be used successfully, without problems, and if so, how?
|
After 3 years of using it, complaining off and on to my manager because of all the more advanced/rational alternatives out there, I've never really had a problem with VSS, but I've never had an option either.
My views are that it both sucks and blows.
The most annoying part about it is not its awful versioning and confusing branching ability, but that the list box on the file menu doesn't let you hit the right arrow key to expand.
Truly painful.
| 0.888889 |
VSS sucks and blows
|
How to create a local webapp userscript
|
I'd like to write my own local webapps userscripts, but I don't want to compile and install the main webapps source code every time.
I would like to keep the original Ubuntu Webapps package installed, but use my local userscripts as well. Is this possible?
|
What you could do is create your userscript and symlink it to /usr/share/unity-webapps/userscripts/unity-webapps-$NAME/$NAME.user.js:
($MYSCRIPTPATH is the full path to your script, probably somewhere in your home directory; $NAME is the name of your script)
sudo mkdir /usr/share/unity-webapps/userscripts/unity-webapps-$NAME
sudo ln -s $MYSCRIPTPATH /usr/share/unity-webapps/userscripts/unity-webapps-$NAME/$NAME.user.js
You'll also need to manually create a manifest file on /usr/share/unity-webapps/userscripts/unity-webapps-$NAME/manifest.json - you can just copy one from the other apps in the userscripts dir and modify the values according to your webapp.
| 0.888889 |
Create your userscript and symlink it to /usr/share/unity-webapps/userscripts
|
How can I disable a region with the Context module on all pages but the <front> (home) page?
|
How can I disable a region with the Context module on all pages but the <front> (home) page?
I've been trying to disable a region on all pages, except the front page, without having to write all the paths.
I can't make the wildcards * ~ work on the <front> page.
In the condition field, I've tried to put a lot of different combos I can't show here, because my post is deemed spam:
Imagine me using * ~ wildcards in front and in back of all the ways you can write the front page path, e.g. http://www.domain.com/
Same goes for <front> with all the different combos.
Which one is supposed to work? Hope you get the idea.
Thanks!
Best Regards Elias
|
Set ~<front> in path. This will exclude only the front page.
I just tried it and it works properly.
| 1 |
Set <front>
|
Without a map or miniatures, how to best determine line-of-sight, etc?
|
I think the title says it all. Let's say you're mastering a game without the benefit of miniatures, maps, or any kind of physical representation of the environment. How would you keep track of details like line-of-sight, ranged attack viability, and all the other small nuances which go with creating a "believable-enough" environment in which your PCs live and thrive?
(I admit that @LoganMacRae, who is my own DM, is an inspiration for this question. He does this with panache, and in my one-off, I feel like I have some mighty big shoes to fill!)
|
Not all RPGs work like D&D. The best you can do is to choose a game which does not rely on wargame-like details such as line-of-sight or movement-per-round. First couple examples off the top of my head:
Storming the Wizard's Tower (by Vincent Baker, still in development, unfinished, play at your own risk), which is basically "D&D done a different way", abstracts advantageous positioning and battlefield stunts into a supplementary die roll, leaving players the freedom to describe the battlefield in detail.
Anima Prime (by Christian Griffen, available free as a playtest release, but still very solid) is all about flashy moves and interacting with the environment in cinematic battles, but abstracts maneuvering into changing dice pools. Describing your actions with great flare is the heart of the game.
Beast Hunters (by Christian and Lisa Griffen, available for sale or as a free SRD) is the immediate precursor to Anima Prime and it clearly shows that such a line of thought also works for more gritty, down-to-earth battles, and not just for uber-cinematic superhero showdowns.
And all of the above are fantasy games with heroic PCs, monsters and lots of fights. Should you be out for a real change in pace and tone, instead, the possibilities are (literally) thousands.
| 0.666667 |
Anima Prime is about flashy moves and interacting with the environment in cinematic battles .
|
Who should pursue a Ph.D degree?
|
I am asking myself the question "Should I do PhD or should I leave academia and go for an industrial career?"
My life-goal is being a professor. And I love to do research.
PhD is surely a bite that not everyone can chew.
But I wonder who can chew it?
I never was good at tests and exams. My BSc. GPA was 2.84/4.00 but finished my MSc. with 3.50/4.00
However, currently I am working on a conference paper and I feel like even that is too much for me. It has been nearly 3 months and still, the paper draft is to be improved (not the wording but the content).
I am surely a hard-worker but not always. Sometimes, I let go of my work and absorbed in other stuff (composing, amateur radio etc). If this period is too wide, I have to spend double effort to warm-up and remember where I left.
I don't know how things work in PhD. It usually is 5-6 years. It is the one of two most-challenging milestones in academic career (the other is getting the title Assoc. Prof).
Should I completely be a "nerd" and work on my thesis systematically (something I could never make in my entire life) or working periodically but with extra effort is still sufficient?
So, here's my question: If I say "I'm considering to do PhD" and ask your advice, what would you ask me? What kind of skills/characteristics do you look for a potential academician?
I know it is way too late for me to ask this kind of question, as a person who almost finished his master's degree. But better lose the saddle than the horse.
|
So, here's my question: If I say "I'm considering to do PhD" and ask
your advice, what would you ask me? What kind of
skills/characteristics do you look for a potential academician?
The first questions I would ask is: are you really interested in the subject? Can you imagine spending the next 5+ years thinking about pretty much nothing else?
Why do you want to do a PhD in the first place?
However, currently I am working on a conference paper and I feel like
even that is too much for me. It has been nearly 3 months and still,
the paper draft is to be improved (not the wording but the content).
Now imagine the same thing, but replace 3 months with 3 years.
Sometimes, I let go of my work and absorbed in other stuff.
This is also not very helpful for pursuing a PhD.
I think in most fields (that might be different for some fields of science) getting a PhD is only for people who want to do research. Apart from that it is only a waste of time and money. So the question you should ask yourself is not: Am I able to get a PhD? But rather: Do I want to get a PhD?
However, if you decided that you really want to give it a try: talk to someone from your university about it, maybe the supervisor of your thesis. Grades are not always a good indicator of the quality as a researcher.
EDIT:
My life-goal is being a professor. And I love to do research.
That answers most of the questions.
But I don't know if I'm capable of doing a Ph.D.
No one here can answer this question. You should try to talk to a professor at your university, the supervisor of your Master's Thesis, or someone who is doing research you are interested in. However, do not let your grades discourage you; I know many students who had pretty bad grades but are great PhD students, and many excellent students who struggle with their research.
| 1 |
What skills/characteristics do you want to do a PhD?
|
C# save/retrieve an array of structures from settings file
|
I have a simple question (I think) that I'm not making much progress finding an answer to with Google. I have a structure as follows:
/// <summary>
/// A class to represent the sync settings for a single camera.
/// </summary>
public class CameraSyncSettings
{
public string ID { get; set; }
public string SyncPath { get; set; }
public bool OverwriteExisting { get; set; }
};
And then an array of these in the program, one for each camera:
List<CameraSyncSettings> MyCameraSettings = new List<CameraSyncSettings>();
Now, what I want to do is have a property in my settings such that I can read/write this array into it to persist the information between sessions.
How can I do this and what is the best/most efficient way?
|
You can achieve it by using Properties.Settings of type ListDictionary
Example:
Properties.Settings.Default.Example.Add("Setting1", new CameraSyncSettings());
Properties.Settings.Default.Example.Add("Setting2", new CameraSyncSettings());
Properties.Settings.Default.Example.Add("Setting3", new CameraSyncSettings());
Properties.Settings.Default.Save();
see link for more information : http://msdn.microsoft.com/en-us/library/aa730869(v=vs.80).aspx
NB: You can set the Scope of Properties.Settings.Default.Example to Application or User
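If the ListDictionary approach doesn't fit, another commonly used option (a hedged sketch, not from the linked article; the class and member names here are illustrative) is to serialize the list to XML and store the resulting string in an ordinary string-typed setting:

```csharp
// Sketch: round-trip the camera list through XML so it can live in a
// plain string setting (setting name and class name are assumptions).
using System.Collections.Generic;
using System.IO;
using System.Xml.Serialization;

public static class CameraSettingsStore
{
    static readonly XmlSerializer serializer =
        new XmlSerializer(typeof(List<CameraSyncSettings>));

    public static string Serialize(List<CameraSyncSettings> settings)
    {
        using (var writer = new StringWriter())
        {
            serializer.Serialize(writer, settings);
            return writer.ToString();   // store this string in a user-scoped setting
        }
    }

    public static List<CameraSyncSettings> Deserialize(string xml)
    {
        using (var reader = new StringReader(xml))
        {
            return (List<CameraSyncSettings>)serializer.Deserialize(reader);
        }
    }
}
```

XmlSerializer only needs the public parameterless constructor and public properties that CameraSyncSettings already has, so no attributes are required.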
| 0.888889 |
Scope of Properties.Settings.Default.Example to Application or User
|
Unable to shut down laptop after Logging Out of Ubuntu 12.04
|
I logged into the Ubuntu 12.04 LTS and instead of installing it... I pressed the Log Out button, thinking of installing it some other time.
But then a screen appeared asking for Username and Password.
I typed in my name as the username and a password of my choice. I was halfway through typing my password when I remembered that I had to log in as 'root', so I hurriedly pressed the Back button... but instead the username and half-typed password got submitted.
Now it is asking for a username and password, and when I type the ones I had initially typed, it's showing a login error :(
But the problem doesn't end here... I'm unable to shut down my laptop. It's been more than 12 hours and I'm stuck at this login screen. Please help me out!
|
If you’re trying Ubuntu in live mode, the username is “ubuntu” without any password, i.e. just hit Enter when prompted for a password.
Meanwhile, in such situations simply press and hold the power button to turn the computer off.
| 0.888889 |
If you’re trying Ubuntu in live mode, the username is “ubuntu” without any passwords
|
Nuclear fusion - Hydrogen isotopes
|
What is the isotope composition of hydrogen atoms in the sun? Are the ratios of protium:deuterium:tritium similar to those we find on earth?
What does the nuclear fusion of hydrogen atoms in the sun look like? Can protiums participate too? Where does, then, the helium atom get a neutron from, as there are no neutrons in protiums?
|
The main fusion reaction in the sun is the proton-proton chain reaction, which takes six protons and produces two protons, one alpha particle, two anti-electrons, and two electron neutrinos.
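The bookkeeping in that sentence can be written as a net reaction (standard pp-chain accounting; the two gamma rays, not listed above, come from the intermediate deuterium-burning steps):

$$6\,\mathrm{p} \;\longrightarrow\; {}^{4}\mathrm{He} + 2\,\mathrm{p} + 2\,e^{+} + 2\,\nu_e + 2\,\gamma$$

Equivalently, the net effect is $4\,\mathrm{p} \rightarrow {}^{4}\mathrm{He} + 2\,e^{+} + 2\,\nu_e$, releasing about 26.7 MeV per helium-4 nucleus once the positrons annihilate.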
The deuterium nucleus is only barely bound and can be destroyed — dissociated into a proton and neutron — by absorbing a gamma ray with energy more than 2 MeV. This means that all of the sun's "primordial" deuterium, formed in the first minutes after the Big Bang, was destroyed before the sun got hot enough to begin fusing on its own. Essentially all of the deuterium in the sun exists as an intermediate step in the proton-proton fusion cycle.
Tritium, furthermore, is naturally unstable, with a half-life of only 12 years. This means that any tritium in the sun (or anywhere else) has been mostly produced in the past few dozen years. On Earth tritium is produced by cosmic rays interacting with stable matter. In the sun most of the tritium would come from neutron capture on deuterium (where I suppose the neutrons would have come from dissociated deuterium) or nucleon-exchange or proton-knockout reactions on helium-3 and helium-4.
A relatively authoritative solar model I grabbed some time ago from this paper (or more properly, from here) lists the helium-3 fraction, but not the deuterium fraction or tritium fraction. If we assume they are comparable (or at least related) then you can see that it still depends on where you are in the sun:
For what it's worth, more than 99% of the sun's energy is produced deeper in than that peak at 0.27 R_sun.
| 0.777778 |
The main fusion reaction in the sun is proton-proton chain reaction
|
Reading from /dev/random does not produce any data
|
I often use the command
cat /dev/urandom | strings --bytes 1 | tr -d '\n\t ' | head --bytes 32
to generate pseudo-random passwords. This doesn't work with /dev/random.
Specifically
cat /dev/urandom | strings --bytes 1 | tr -d '\n\t ' produces output
cat /dev/random | strings --bytes 1 produces output
cat /dev/random | strings --bytes 1 | tr -d '\n\t ' does not produce output
NB: When using /dev/random you may have to wiggle your mouse or press keys (e.g. ctrl, shift, etc.) to generate entropy.
Why does the last example not work? Does tr have some kind of large internal buffer that /dev/urandom fills quickly but /dev/random doesn't?
P.S. I'm using CentOS 6.5
cat /proc/version
Linux version 2.6.32-431.3.1.el6.x86_64 ([email protected]) (gcc version 4.4.7 20120313 (Red Hat 4.4.7-4) (GCC) ) #1 SMP Fri Jan 3 21:39:27 UTC 2014
|
Generating random numbers for many security applications requires sufficient entropy — entropy measures how unpredictable the randomness is. A deterministic processor cannot generate entropy, so entropy must come from the outside — either from a hardware component with non-deterministic behavior, or from other factors that are sufficiently hard to reproduce such as the timing of user actions (that's where wiggling the mouse comes in). Once sufficient entropy is available, cryptography can be used to generate a practically unlimited stream of random numbers.
Linux works by accumulating entropy in a pool, then using cryptography to produce acceptable random numbers through both /dev/random and /dev/urandom. The difference is that /dev/random applies an extremely conservative entropy calculation that reduces the estimate of the entropy in the pool for every byte that it generates, whereas /dev/urandom does not concern itself with the amount of entropy in the pool.
If the estimate of entropy in the pool is too low, /dev/random blocks until more entropy can be accumulated. This can severely cripple the rate at which /dev/random can produce output. This is what you're observing here. It has nothing to do with tr; rather, strings reads its input with buffering, so it has to read a full buffer (a few kB) from /dev/random just to produce its first byte of output.
/dev/urandom is perfectly acceptable for generating a cryptographic key, because entropy does not in fact decrease in any perceptible way. (If you keep your machine running for longer than the universe has existed, you can't neglect these considerations, but otherwise you're good.) There is only one case where /dev/urandom is not good, which is on a freshly installed system that hasn't had time to generate entropy yet, or a freshly booted system that boots from read-only media.
Eliminating strings from your pipeline will probably speed it up:
</dev/random LC_ALL=C tr -dc '!-~' | head --bytes 32
But you can use /dev/urandom here, as long as you take care not to generate passwords on a system that hasn't had time to accumulate sufficient entropy. You can check the level of Linux's entropy pool in /proc/sys/kernel/random/entropy_avail (if you use /dev/random, the figure in this file will be conservative, possibly very much so).
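If scripting is an option, the same idea can be expressed with Python's secrets module, which draws from the OS CSPRNG (on Linux, the same non-blocking source as /dev/urandom). A sketch; the function name and the choice of alphabet are my own:

```python
import secrets
import string

def random_password(length=32):
    """Generate a password from printable non-space ASCII.

    secrets uses the operating system's CSPRNG, which is perfectly
    fine for passwords (the same point made about /dev/urandom above).
    """
    alphabet = string.ascii_letters + string.digits + string.punctuation
    # secrets.choice is unbiased, unlike `byte % len(alphabet)` tricks.
    return ''.join(secrets.choice(alphabet) for _ in range(length))
```

Unlike the /dev/random pipeline, this never blocks waiting for entropy.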
| 1 |
entropy measures how unpredictable the randomness is
|
Why IntegerPart[x/(x/2)]=1?
|
Consider this code
x = 0.109354682484;
IntegerPart[x/(x/2)]
(* 1 *)
Precision[x]
(* MachinePrecision *)
Why does it give 1 ?
Version number: 9.0 on Mac 10.9.2
screenshot
Update:
If we use an undefined variable, IntegerPart[x0/(x0/2)] gives 2. Since Mathematica never gives warnings about this x0, I'm assuming for any x0 it is true.
If we calculate the same integer part using fortran, we get 2 instead of 1.
program main
implicit none
real(8):: x=0.109354682484
real(8):: y=1.4
write(*,*) int(x/(x/2))
write(*,*) nint(x/(x/2))
write(*,*) int(y)
end program main
compiled with ifort -O0 main.f90
output of above fortran code is
2
2
1
according to here, int is a fortran intrinsic function that calculate the integer part.
Is this a bug?
|
The explanation is interesting here. I tried the same in C++, and worked a bit extra to make sure the compiler won't optimize away the divisions (looking at the assembly output, it may optimize it away if you're not careful). Indeed, I get 2 with C++.
And here's why:
C++ does the equivalent of
IntegerPart@Divide[x, Divide[x, 2]]
(* ==> 2 *)
while in Mathematica you're computing the equivalent of
x*(1/((1/2)*x))
which gives a different result.
If I do the calculation as (x* (1/ (0.5*x))) in C++, then the result is less than 2 and the integer part is 1.
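The same two evaluation orders can be reproduced in Python, which uses the same IEEE 754 double arithmetic (a sketch to illustrate the point; the variable names are my own):

```python
x = 0.109354682484

# Dividing directly: x/2 is exact in binary floating point (only the
# exponent changes), so the true quotient x/(x/2) is exactly 2.
direct = x / (x / 2)

# Multiplying by a rounded reciprocal instead, as Mathematica does:
# 1/(0.5*x) is rounded, and the final product lands just below 2.
via_reciprocal = x * (1 / (0.5 * x))

print(int(direct))          # 2
print(int(via_reciprocal))  # 1, matching the C++ result quoted above
```

This confirms that the discrepancy comes from the order of operations, not from a bug in IntegerPart.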
Relevant reading:
Is there a difference between Divide[a,b] and a/b?
| 0.666667 |
Is there a difference between Divide[a,b] and a/b?
|
Why is this type of hesitation considered unethical?
|
In two previous questions, it was clearly agreed that hesitating before playing a singleton specifically, and hesitating during play as a bluff generally constitute "wrongful/unethical hesitation." As an advanced beginner, I don't understand this rule at all.
I know that at the highest level, the best way to play and not give away any information about your holding to the opponents is to play each card at a nice even tempo. I am not a world class player; I am not even an advanced player (if I endplay you, I promise it wasn't intentional). As such, I frequently need to pause and calculate what to play, even in situations where the correct play should be obvious.
Knowing that I will inevitably need to pause and consider a play and that that pause will provide my opponents with the information that I had a potentially difficult decision to make, why is it considered unethical for me to occasionally insert a similar pause before a routine play?
It isn't as if I could use such a pause to reliably signal something to my partner. In fact, this strategy removes the temptation for partner to draw unauthorized inferences from my pauses.
|
If you routinely take 2-3 seconds before each play of the cards, that is ethical. If you usually play cards more quickly, but take a pause when you have a problem, that is ethical.
If, on the other hand, you usually play cards more quickly, take a pause when you have a problem, and sometimes take a pause when you don't have a problem to throw declarer off, this is unethical. Even though your partner is not entitled to know when you have a problem, declarer is entitled to know. Pausing in order to mislead declarer is called "coffee-housing." It is unethical because in bridge only your actual plays should mislead declarer. It is related to the idea that you and your partner may not have secret bidding agreements. Bridge is not poker; you are not trying to bluff your opponents based on your demeanor.
| 1 |
If you take 2-3 seconds before each play of the cards, that is ethical.
|
SSD Drive - external vs Internal
|
I have 2 computers, a faster iMac at work (i5) and an old macbook (C2DUO 2.0Ghz) at home.
The macbook at home has a SSD drive in it, I upgraded it and it's performance is really great.
Now I was wondering: instead of having to bring my MacBook to work every day, would it be worth it to take the SSD out and boot from it in an external case? This way I only need to take the drive and just plug it in.
I only have a USB 2 case.
|
Yes, you can do it. Depending on the quirks of OS X you may need two different OS installations on the SSD. That would use slightly more space, but it's not a real problem.
However, USB 2 is slow. Sequential transfer rates over the USB 2 bus are about 35 MB/second, far slower than an HDD or an SSD. You would still gain some performance on random I/Os, but I expect the net result to be a lot slower.
Now if you can replace the external USB casing with something faster (e.g. eSATA or Thunderbolt) then: yes, it will work and be a lot faster.
| 1 |
USB2 is slow, far slower than an HDD or SSD
|
How to remember 10 last read articles with timestamp in session for a user in Codeigniter?
|
I would like to write a PHP condition that checks whether the user has read another 10 articles, or whether 10 minutes have elapsed, since a given article was read.
E.g.
A user open a page with id = 235 (this id value is in the url localhost/article/235 )
and this id value will be saved in session with a current timestamp and maybe his IP address
Then he read another article and the same will happen.
I need to remember the clicked articles for another ten clicks and then start overwriting from the first row. E.g. after the 10th click the id and timestamp will not become an 11th row but will replace the 1st row in the list.
The php condition in CodeIgniter will then check these values and will update the article hit counter value in the articles table and column counter like this:
$this->db->where('id', $id);
$this->db->set('counter', 'counter+1', FALSE);
$this->db->update('articles');
But before calling this code I need to make this check from the session?
How to do that?
I think storing e.g. 10 entries in the session with timestamps per user will be enough.
Just don't save the same page in the session twice.
And the condition will check the current timestamp with the saved one and if it is more than e.g. 10 minutes or the user have read/clicked another 10 articles it will allow the update counter php code.
I don't need to have this bulletproof. Just to disable the increment using browser's refresh button.
So, if he wants to increment the counter he will need to wait ten minutes or read another 10 articles ;)
|
You should definitely go for sessions. It saves you bandwidth and is much easier to handle. Unless, of course, you need the data on the client side, which, by your explanation, I assume you don't. Assuming you went for sessions, all you gotta do is store an array with the data you have. The following code should do it:
$aClicks = $this->session
->userdata('article_clicks');
// Initialize the array, if it's not already initialized
if ($aClicks == false) {
$aClicks = array();
}
// Now, we clean our array for the articles that have been clicked longer than
// 10 minutes ago.
$aClicks = array_filter(
$aClicks,
function($click) {
return (time() - $click['time']) < 600; // Less than 10 minutes elapsed
}
);
// We check if the article clicked is already in the list
$found = false;
foreach ($aClicks as $click) {
if ($click['article'] === $id) { // Assuming $id holds the article id
$found = true;
break;
}
}
// If it's not, we add it
if (!$found) {
$aClicks[] = array(
'article' => $id, // Assuming $id holds the article id
'time' => time()
);
}
// Store the clicks back to the session
$this->session
->set_userdata('article_clicks', $aClicks);
// If we meet all conditions
if (count($aClicks) < 10) {
// Do something
}
| 0.666667 |
How do you store an array with the data you have?
|
Diff between backups: see permissions, group and owner and sym link differences
|
Question
Using only the command line, is it possible to tell if a rsync backup (on a NAS) and the original folder tree (on a web server) are exactly the same? I mean in terms of file sizes, attributes, and symbolic links?
I fear the most for symbolic links. In case of a hard-drive failure – let's say I can solve all the booting issues – will I be able to put back all the files with all the symbolic links done right and their permissions / attributes?
Context
I am setting up an rsync backup system in which my NAS (arm processor) connects to my dedicated webserver hosted far far away and does a backup of the / partition.
In case of a disk crash, I plan on making the exact same partition setup on the new disk and installing the same operating system (14.04) so I get the MBR / boot loader in place.
Then I will rsync back the backup from the NAS to the server, reboot and cross my fingers.
The issue is that I will only know if everything works the day I need the backup… But I can take measures to know if it will work beforehand. One of them is checking that every single file, folder, symlink… in the backup has the same permissions and attributes as the original. I also need to check that the symlinks and similar are correctly done.
The options I use with rsync are :
rsync -rlptgoDXvxzH --numeric-ids --exclude={"/dev/*","/proc/*","/sys/*","/tmp/*","/run/*","/mnt/*","/media/*","/lost+found"}
The -A option is causing issues.
EDIT: ADDITIONAL INFO
What I am looking for is a working backup strategy that will allow me:
– to perform blazing fast backups.
Because I do backup the / system partition, I take most services down on the server (apache, postfix, dovecot, mysql, etc.) so I can have a safe backup. Any solution such as rsync is good, because I can update a mirror directory on the NAS. Then, offline, I can take care of the backup strategy (incremental, etc…) from the updated mirror.
– to have fast restore (I don't want to transfer a 100 GB disk image).
I have a really really fast connection between the NAS and the server. I mean really fast. Transferring a 10 GB disk image is a matter of minutes.
What I can't do: change the physical setup of the hosted server. I have one 500 GB disk. That's all.
What I can do: boot the server in a rescue mode in which the disk isn't mounted at all. Rescue mode even works with a dead disk. I use this mode to perform binary images of partitions.
Current setup:
/ 10 GB, 17 % full
SWAP 512 Mo
/var 80 GB
/bkup the rest… to 500 GB.
|
My experiment
Setup
At first I created an Ext4 Partition on my PC using gparted.
Then I booted into a small Arch GNU/Linux installation in the hope that this would give better performance. Then I attached my Raspberry Pi, with a slightly modified Raspbian GNU/Linux installation, to the PC via an Ethernet switch.
Then I mounted the Ext4 partition.
Copying
After this I copied the / partition of the Pi to my PC.
rsync -rlptgoDXvxzH --numeric-ids --exclude={"/dev/*","/proc/*","/sys/*","/tmp/*","/run/*","/mnt/*","/media/*","/lost+found"} root@raspberrypi:/ rsync_mount/
This took about 20 minutes. If you copy your server to your NAS you may need:
a flat rate (or a lot of money ;) )
a lot of time
Bricking the Raspberry Pi
Then I bricked my Raspberry Pi using
ssh root@raspberrypi
rm -r /run /var /etc /usr
Then I had to unplug the power cable because nothing was working anymore.
Repairing the Raspberry Pi
Finally I plugged the SD-Card of the Rpi into my computer, mounted it and repaired it using
cp -rpvxH rsync_mount/* rpi_mount/
after unmounting the SD-Card and booting the Rpi everything worked fine again.
Conclusion
The part with rsync worked fine, but I do not know if it will work with a completely new installation. The kernel might then not match what you copy back onto the server.
I will make another attempt, in which I install a newer kernel version after rsyncing.
| 1 |
Ext4 Partition on my PC
|
WIFI USB adapter not detected
|
I'm using a WIFI USB adapter : TP-Link TL-WN725N Nano Adaptateur USB wireless N 150 Mbps. I plug it to my raspberry and configured it in /etc/network/interfaces . I added this :
allow-hotplug wlan0
iface wlan0 inet dhcp
wpa-ssid "MY NETWORK SSID"
I dont need password to connect.
then sudo reboot
when I launch ifconfig I got this (no wlan detected).
eth0 Link encap:Ethernet HWaddr b8:27:eb:d5:44:b8
inet addr:192.168.0.2 Bcast:192.168.0.255 Mask:255.255.255.0
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:53 errors:0 dropped:0 overruns:0 frame:0
TX packets:90 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:7022 (6.8 KiB) TX bytes:9278 (9.0 KiB)
lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
UP LOOPBACK RUNNING MTU:65536 Metric:1
RX packets:14 errors:0 dropped:0 overruns:0 frame:0
TX packets:14 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:1330 (1.2 KiB) TX bytes:1330 (1.2 KiB)
|
Check out this tutorial
I think you need to install the drivers. Since URL-only answers may disappear, here is what it basically says:
Now, plug your USB wifi adapter to one of the ports and issue: lsusb. You should see something along the lines of:
$ lsusb
...
Bus 001 Device 004: ID 0ace:1215 ZyDAS ZD1211B 802.11g
...
Okay, it looks like the chipset we have here is 'zd1211' (have a look at http://wiki.debian.org/WiFi for a list of supported chipsets)
Let’s see if there are any firmware packages we can install to get this up and running:
$ apt-cache search zd1211
zd1211-firmware - Firmware images for the zd1211rw wireless driver
Sweet, let’s install that:
$ sudo apt-get install zd1211-firmware
We should be good to go now. Unplug the adapter, plug it back in again and have a look at the output of lsmod:
$ lsmod
Module Size Used by
arc 4764 2
zd1211rw 40444 0
mac80211 171628 1 zd1211rw
cfg80211 123084 2 zd1211rw,mac80211
fuse 49036 1
You should see mention of zd1211.
dmesg should also give you an indication whether things are loaded or not:
$ dmesg
...
usb 1-1.2: new high speed USB device number 4 using dwc_otg
usb 1-1.2: New USB device found, idVendor=0ace, idProduct=1215
usb 1-1.2: New USB device strings: Mfr=16, Product=32, SerialNumber=0
usb 1-1.2: Product: USB2.0 WLAN
usb 1-1.2: Manufacturer: ZyDAS
cfg80211: Calling CRDA to update world regulatory domain
usb 1-1.2: reset high speed USB device number 4 using dwc_otg
ieee80211 phy0: Selected rate control algorithm 'minstrel_ht'
zd1211rw 1-1.2:1.0: phy0
usbcore: registered new interface driver zd1211rw
zd1211rw 1-1.2:1.0: firmware version 4725
zd1211rw 1-1.2:1.0: zd1211b chip 0ace:1215 v4810 high 00-1a-ee UW2453_RF pa0 -7---
...
Awesome, looks like the adapter is up and running! To see which networks are available, do:
$ iwlist wlan0 scan
…which should give you a list of wireless networks around you.
Good luck
| 0.777778 |
Plug USB wifi adapter to one of the ports and issue: lsusb
|
Good backup options for Mac pre-TimeMachine
|
I have a friend with an iBook G4 who is looking for a cheap backup option for her Mac running OS 10.4. Money is tight, so getting 10.5 is not really an option (in addition to buy a backup drive etc, yes money is really that tight).
What suggestions can you offer for backups that's better than trying to remember to burn a CD once a month?
|
For free incremental backup on a Mac - or almost any *nix box for that matter - rsync is hard to beat. It should already be installed, and at its simplest will do an incremental copy - set up Launchd or cron to run it automatically at a convenient interval and it'll copy only what has changed over to the backup disk (or a remote machine over the network for that matter).
With a bit of prodding it can even be set up to emulate Time Machine pretty closely, with the exception of the fancy restore interface.
| 0.777778 |
rsync is hard to beat for free incremental backup on a Mac
|
Is it ok to mix 2 GB and 4 GB DIMMS in an iMac (mid 2011)?
|
I've been told it is important to use matching DIMMs when installing memory to take advantage of the 128 bit throughput. I'm looking to buy a new 27" iMac (mid 2011) which has 4 memory slots. If I put matching 2x2GB in 2 of the slots and matching 2x4GB in the other two slots, will that reduce its throughput or cause any problems?
Do all 4 have to match or do each set of 2 need to match?
|
Yep, totally okay to do. All Intel-based macs support dual-channel memory, so in a nutshell: You'll get the best performance with equal-size RAM sticks, but it won't kill it if you don't. Luckily, most RAM is sold in pairs these days.
| 1 |
All Intel-based macs support dual-channel memory
|
Does tagging content affect SEO?
|
Tagging content (i.e. categorizing content using "tags") is a sensible way to group content on a web site, but does tagging actually affect SEO?
From an SEO standpoint, how does tagging content affect search ranking? Is there any way to optimize the set of tags for SEO? Should I limit the number of tags in any way?
|
The tags won't affect the pages directly by being on that page (although I am sure someone will say having the tag will increase your keyword density but I wouldn't even give too much thought to that). So adding tags in the hopes that they are directly a ranking factor would not be a good use of time.
How it can help SEO is when you have a dedicated page(s) for each tag: that page will be naturally optimized for that keyword. That page will (theoretically) potentially rank well for it. That page can then drive traffic and potential link givers to pages tagged with that keyword. Also, the interlinking by your pages will help to increase each page's PR as well as boost each other's rankings due to the advantages of anchor text and everything else incoming links have to offer. (Yes, internal links are very good to have. Just ask Wikipedia).
| 0.888889 |
How to add tags in the hopes that they are directly a ranking factor?
|
Splicing tiny wire inside broken mouse cable
|
I don't know a lot about electronics repairs, but I've got a relatively expensive laser mouse that got a frayed connection on the wire:
I'm wanting to repair it as it's out of warranty. I've cut the cable on either side of the "stopper", isolated each of the individual wires, and stripped the ends off in preparation for splicing.
I've read some instructions that indicate I should do an inline wrap and then apply some solder.
Is there a better way for wires this small?
Is there a particular type of heat shrink wrap I should put on this after it's spliced? Or will electrical tape suffice?
Inside the mouse, the cable is connected to a little plug. To me, it looks a lot like the fan plugs inside a PC. It's got 5 pins each 1mm apart. If there were a replacement plug I could buy and crimp the wires into, that'd be great!
|
As neat multi-wire splices are hard to make, I'd recommend that instead of splicing, you simply shorten the cable by discarding the inside piece and connect the outside end to the circuitry inside. Try to replicate the function of the strain relief somehow.
| 1 |
Using multi-wire splices
|
Iterator Implementation
|
I had to write my own iterator implementation for my structure which has ArrayList<Vector> field. An iterator is supposed to iterate over mentioned List. Any suggestions on improving it, or anything else?
public class ExamplesIterator implements Iterator<Vector> {
private List<Vector> examples; //ArrayList<Vector> will be set here
private int index;
public ExamplesIterator(List<Vector> examples) {
this.examples = examples;
index = 0;
}
@Override
public Vector next() {
if(hasNext()) {
return examples.get(index++);
} else {
throw new NoSuchElementException("There are no elements size = " + examples.size());
}
}
@Override
public boolean hasNext() {
return !(examples.size() == index);
}
@Override
public void remove() {
if(index <= 0) {
throw new IllegalStateException("You can't delete element before first next() method call");
}
examples.remove(--index);
}
}
|
I think your implementation is overall very good, two small comments:
Improving readability for return statement in hasNext to return examples.size() != index;
Making the examples field final: private final List<Vector> examples;
However, if the Vector class here is java.util.Vector, you should know that it is considered a legacy class; ArrayList is preferred.
Also, since you create your iterator from a List<Vector> you could get an Iterator<Vector> by calling list.iterator();
| 0.777778 |
Improving readability for return statement in hasNext
|
Blender game engine camera
|
My problem is setting up a camera in the game engine.
I wanted 4 camera views instead of one.
Front, Back, eyes view and side view; Something like minecraft game.
It will be very helpful if you can give me a tutorial of it.
Please include how to change camera view; for example, pressing f1 = front view , f2 = eyes view.
When I try to set my camera on eyes view I want the camera to see my hand when I walk or run, but the character body is blocking the view. Is there anyway to solve this or make the camera look through the body?
|
Your question covers a lot of ground.
For starters, I recommend you set up 4 separate cameras. This is easier than having your camera move to the different desired positions.
Switching between cameras requires a spot of code (at least the way I've done it).
However, if you wish to display your separate views simultaneously, a tutorial to check out is https://www.youtube.com/watch?v=-TtTBGSOii4.
As far as seeing through your own head: make sure your render engine is set to 'Blender Game', then go to the material properties of your object and click the checkbox next to Backface Culling. This should allow you to view the world through your own head.
| 0.888889 |
How to set up 4 separate cameras
|
How should I design a table with nested categories for users?
|
I am really confused with user interface design of the website.
I want to design the Detailed view of the student.
So on the page there will be information about the students like name , class roll number etc.
Then i will have many rows for his smemesters , then each semester will have many subjects , then each subject will have many assignments.
Now i am not able to figure out how can i design the layout of the page.
can anyone give me idea or show me something on , how should i go
|
So, I wish I had the time to sketch this out but I'll see if I can describe it. First off, you're going to need more than just a table.
The student information is the parent of all the other. Have that be the head section of your page, it should stay constant and as the main "breadcrumb" of the data. Then, use the semesters (if you're tracking years, use those first) as containers for the classes.
For the semesters, you could use tabs. Under the student information you could tab across each semester to see the class listing. This could be a table list view or an accordion. There are lots of options here.
From the list view, I would have a detail view that you could bring in with a number of different interactions for the assignments (modals, accordions, etc). Depending on if the assignments are actionable or not; are they an archive of completed assignments or a way to get to current assignment information? That would change how you show them I think.
I know that's hard to process in words, so I'd take a look at other UIs that have similar issues. Ones that come to mind are project tracking or high-end todo list apps. I'd look at those and see what patterns they use to help guide you. @Michael Lai also had great advice about grabbing paper and pencil and quickly starting to explore ideas. Just breaking out of the constraint of the original idea for a bit to sketch can often produce great answers.
| 0.888889 |
How do I show assignments in UI?
|
Is there any word for "small hand written note/letter" in English?
|
In some languages there is a name for an amicable note that you hand-write to a lover or to someone to show appreciation; it's shorter and less formal than a letter. Does English have an equivalent for that?
|
In America, if a note is sent in appreciation, it is often written on a Thank You card.
These are fold-out cards (similar to Birthday Cards or Christmas Cards, but maybe half the size.) They typically have a decorated front side preprinted with "Thank You" (or some variation), and a totally blank inside, in which the writer pens a short note. They are often sold in a pack of 10, 12, 20 or so, with little envelopes. Here's what they look like:
http://www.google.com/search?q=thank+you+card+images&client=safari&hl=en&source=lnms&tbm=isch&sa=X&ei=ZAYlVfeZENGzoQSqq4C4CA&ved=0CAYQ_AUoAQ&biw=320&bih=356
| 0.888889 |
In America, if a note is sent in appreciation, it is written on a Thank You card
|
Adding parent_id attribute value in model
|
I have created a module and EAV tables for it. The module's tables are very similar to the customer_address entity, and I have defined attributes AttrA, AttrB and AttrC for this table.
So table structure is very similar to as follows:
+------------------+----------------------+------+-----+---------------------+----------------+
| Field | Type | Null | Key | Default | Extra |
+------------------+----------------------+------+-----+---------------------+----------------+
| entity_id | int(10) unsigned | NO | PRI | NULL | auto_increment |
| entity_type_id | smallint(5) unsigned | NO | | 0 | |
| attribute_set_id | smallint(5) unsigned | NO | | 0 | |
| increment_id | varchar(50) | YES | | NULL | |
| parent_id | int(10) unsigned | YES | MUL | NULL | |
| created_at | timestamp | NO | | 0000-00-00 00:00:00 | |
| updated_at | timestamp | NO | | 0000-00-00 00:00:00 | |
| is_active | smallint(5) unsigned | NO | | 1 | |
+------------------+----------------------+------+-----+---------------------+----------------+
and AttrA, AttrB and AttrC values are stored in tables with respective data type.
Now using the basic syntax, I am able to add value of attributes AttrA, AttrB and AttrC. Which is:
$model = Mage::getModel('customer/address');
$model->setAttrA('ValA')->setAttrB('ValB')->setAttrC('ValC');
doing this correctly sets the values of AttrA, AttrB and AttrC but value of parent_id in customer_entity sets to NULL by default.
Here I want to be able to set the parent_id to some other value.
For this I tried the suggestion given in a different post, using setParentId(<parent-id-value>), which is not working in my case. Any idea how I can do this?
|
Even if parent_id is a column in the main table, you still need it as an attribute in the eav_attribute table.
Just create one attribute called parent_id (the same as the column) with the backend_type "static".
This is how it works for the sku attribute for products.
| 0.888889 |
Create parent_id with backend_type "static"
|
Non Qt console app in Qt Creator
|
I want to create the simple console app below in Qt Creator:
#include <iostream>
int main(int argc, char* argv[])
{
std::cout << "Hello WOrld";
return 0;
}
I've seen some possible duplicates on SO, I have ticked the "Run in Terminal" option in Run Settings. A console window does pop up on CTRL+R, but it does not display "Hello World", simply "Press Enter to exit".
The above is by creating an Empty Project.
I have tried creating a "Qt Console Application" which generates the code below. This does work fine, but I want the simple non Qt version above.
#include <QtCore/QCoreApplication>
#include <iostream>
int main(int argc, char *argv[])
{
QCoreApplication a(argc, argv);
std::cout << "Hello World";
return a.exec();
}
|
Besides ticking "Run in Terminal" you need to add "CONFIG += console" to your .pro file (if you are using qmake).
TEMPLATE = app
CONFIG += console
SOURCES += main.cpp
| 0.777778 |
Add "CONFIG += console" to .pro
|
iPhone app - Persistent hamburger menu vs last page visited
|
I'm wondering which option is best for an iPhone app using a hamburger menu (placed at the top left):
The menu is persistent on every single page even when the user goes
to a sub-level.
When the user goes to a sub-level, the hamburger menu is replaced by a back button or a button whose label is the name of last page visited.
Both. The menu is persistent on every page and a back button appears when needed.
Thanks for your help :)
|
I'd recommend iOS standards unless you really need to break from convention. Standard iOS expectation is that you always have a back / up-level nav in the upper left corner. If you put 2 nav icons side-by-side in the upper left (Hamburger and Back Nav) that makes them both less usable and creates challenges - which is the one on the left most side? Your #2 is the most common implementation. Do users need to get back to the top most menu frequently enough to require having the hamburger nav on all screens? Then consider...
iOS standards also call for the use of the Tab Bar across the bottom - it is an option if you have a small (7) number of items in your nav (though it does persistently take up screen real-estate so isn't ideal unless the user needs to quickly access a few key screens).
The big question for any use of Hamburger Menus is... what are you putting in there? @Majo0od incorrectly states the reasons why hamburger menus are problematic. Hamburger menus have been maligned considerably lately and most of the critiques have zero to do with user experience. List Menus (Hamburger Menus) work great when they provide a consistent list of items clearly delineated. The majority of user testing with List Menus shows that users very easily understand what they are and often go there to find stuff if the thing they're looking for isn't obvious elsewhere on the screen. This has led to some bad practice among designers who started tossing everything in there - this leads to a garbage pile of items which makes it difficult for users to find what they're looking for or to know what to expect on subsequent visits to that menu. A similar List Menu metaphor used all over both iOS and Android is 3 dots on the right (doesn't look like a hamburger so it isn't as maligned but has identical problems when not used consistently). This is usually used to show a contextual list based on where the user is rather than an omnipresent top left list menu. Going back to my point for this paragraph... what are you using it for? Is it a clearly delineated list of concise items that is contextually relevant to where the user is in the app? Could it be better implemented using a different design pattern? Use the right tool for the job.
| 0.888889 |
What are the reasons why hamburger menus are problematic?
|
Replace my recurring task scheduler with some existing product
|
I have a couple tasks that I like to run periodically:
Push all git repositories (5 times a day)
Do a virus scan on files (every 90 days)
Download backup from server (one a day)
For all those tasks, I have a script. To orchestrate those, I wrote a script called maintenance which is part of my maintenance project. The current version of the script.
It has a JSON config file for the tasks:
"git-autopush": {
"disk": true,
"internet": true,
"interval": 0.3,
"local": false
},
I would like to replace my script with some existing task scheduler, since I would like to cut down on custom software on my system. However, I have the following requirements:
Do not run when running on battery.
Provide some visible status. That could be through kdialog even.
Run only one disk heavy task at a time.
The maintenance script takes care of that, and I can run it whenever I have sufficient time to let the tasks run. If not, I can just abort all the tasks and restart it later.
Plain cron would not be good, since it would not work when my computer is powered off. anacron might be better, but I assume that it would run all the tasks right after booting. Also, if I happen to need my disk for virtual machine, how would I tell the scheduler to stop?
Is there some existing program out there, or should I just continue to use my own?
|
if I happen to need my disk for virtual machine, how would I tell the scheduler to stop?
Cron and anacron are (normally) daemon services. You can stop and start them the way you stop and start any other service.
cron would not be good, since it would not work when my computer is powered off
That's a pretty tough criterion to get around. What is it you want the software to do while the computer is powered off?
anacron might be better, but I assume that it would run all the tasks right after booting.
Apparently. Here's an idea though: start and stop anacron with cron.
Run only one disk heavy task at a time
Anacron has this option:
-s
Serializes execution of jobs. Anacron does not start a new job before the previous one finished.
This just leaves you to implement these two:
Do not run when running on battery.
Provide some visible status. That could be through kdialog even.
The first could be accomplished by having cron check the battery status before it starts anacron. The second could be done in a wide variety of ways; you've already suggested one yourself.
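A minimal sketch of such a cron wrapper (the sysfs path and adapter name vary by machine; AC0/ACAD/ADP1 are common, and POWER_DIR is overridable so you can point it elsewhere):

```shell
#!/bin/sh
# Hypothetical cron wrapper: start anacron only when the machine is on AC power.
POWER_DIR="${POWER_DIR:-/sys/class/power_supply}"

on_ac_power() {
    # look for an adapter entry (AC, AC0, ACAD, ADP1, ...) reporting "online"
    for supply in "$POWER_DIR"/A*/online; do
        if [ -r "$supply" ] && [ "$(cat "$supply")" = "1" ]; then
            return 0
        fi
    done
    return 1    # nothing reported "online" -> assume battery
}

if on_ac_power; then
    # -s serializes jobs: anacron won't start a job before the previous one
    # finishes, which also covers the "one disk-heavy task at a time" rule
    command -v anacron >/dev/null 2>&1 && anacron -s
fi
```

Run it from a cron entry instead of starting anacron directly; when on battery, it simply does nothing until the next cron tick.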
| 0.888889 |
How do I stop and start anacron with cron?
|
Unable to make Zsh not to suggest a fix for an alias
|
I have an alias for
alias tdA='todo -a'
I get the following in Zsh
tdA
zsh: correct 'tdA' to 'tda' [nyae]?
How can you make Zsh not suggest a fix for an alias?
|
I've been using zsh for about 18 years now, and I must say I don't like the accepted solution. Here's why:
You need to figure out the source of the problem - determine why 'tda' is being offered as a correction option. What you have done is completely disabled spelling correction globally. This denies you some very nice functionality while trying to get rid of a tactical problem. It's like wanting to kill a fly in your house by setting off some dynamite just because you're too lazy to figure out where the fly swatter is: It may get rid of the problem, but you sacrifice a whole lot in return. :)
Until you can determine your zsh's current spell correction configuration, you should consider setting the special shell variable $CORRECT_IGNORE to 'tda'.
Here is the entry in the zsh man page:
CORRECT_IGNORE
If set, is treated as a pattern during spelling correction. Any
potential correction that matches the pattern is ignored. For
example, if the value is `_*' then completion functions (which,
by convention, have names beginning with `_') will never be
offered as spelling corrections. The pattern does not apply the
correction of file names, as applied by the CORRECT_ALL option
(so with the example just given files beginning with `_' in the
current directory would still be completed).
This should help get you by until you can determine the source of where 'tda' is actually coming from.
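Concretely, the stopgap is one line in your ~/.zshrc (the value is a zsh pattern, so you can cover several aliases at once; tdl below is a hypothetical second alias for illustration):

```shell
# ~/.zshrc -- suppress 'tda' (and nothing else) as a spelling correction
CORRECT_IGNORE='tda'

# or, if several aliases are affected, a pattern works too:
# CORRECT_IGNORE='(tda|tdl)'
```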
Also note that you can use the precommand modifier 'nocorrect' to disable spelling correction on a per-command basis. You could use this to do something a little hacky, but effective:
alias tdA="nocorrect tda"
alias tda="todo -a"
aliases are just tokens substituted into the command-line by zsh, and those substitutions are re-scanned for additional aliases. So the above should work.
Hopefully these alternatives give you a more selective approach to solving your problem, while still giving you the benefit of zsh's rich spelling correction features.
Good luck!
| 1 |
How to disable spelling correction on a per-command basis
|
Weird jquery 302 error in Firebug
|
I am posting data to a servlet using jQuery's .post() method. The data is being posted and the servlet is able to catch the data. But in Firebug I see 302 once the post request is done.
Also, I am redirecting the user to a success page in my servlet. But the redirect is not happening!!
Shouldn't I be getting 200 on the post request?
And why is a simple response.sendRedirect("success.jsp") not happening?
When I click on the response tab I can see the success.jsp html tags.
Please throw light on what is happening? How do I correct it?
|
That is not weird. Because you have called the sendRedirect method on the server, the server sends back a 302 response telling the client browser to redirect to another URL (which is sent in the Location response header).
You can use the forward method instead of the redirect method if you want to do the redirection from the server side.
| 1 |
sendRedirect method in server
|
Do black holes have charges?
|
Do black holes have charges? If so, how would they be measured? Also, does electricity behave the same way? Black holes affect photons, which are carriers of EM radiation, so do black holes have any effect on the electric force?
|
These are two separate questions. It's better if you don't try to combine two questions into one.
In answer to the first question, yes, a black hole can have a measurable charge. You measure it the same way you'd measure any other charge. This is all purely theoretical, however. Many real-life black holes have been observed and characterized by their masses, but there is no realistic chance that any black hole will ever have enough charge to be measurable.
The way we know that black holes can be charged is that there exist solutions to the Einstein field equations such as the Reissner–Nordström metric, in which a charged test particle experiences a force not experienced by a neutral test particle. The no-hair theorems for black holes explicitly allow electric charge as an observable type of "hair" that it is possible for a black hole to have.
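For reference, the Reissner–Nordström line element (a sketch in geometrized units G = c = 1; conventions for the charge term vary by unit system) is:

```latex
ds^2 = -\left(1 - \frac{2M}{r} + \frac{Q^2}{r^2}\right) dt^2
     + \left(1 - \frac{2M}{r} + \frac{Q^2}{r^2}\right)^{-1} dr^2
     + r^2 \, d\Omega^2
```

Setting Q = 0 recovers the Schwarzschild metric; the accompanying electromagnetic potential A_t ∝ Q/r is what exerts the extra force on a charged test particle that a neutral one does not feel.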
In answer to the second question, Maxwell's equations can be expressed in tensor form, and in this form, they work equally well in general relativity.
| 0.777778 |
a black hole can have a measurable charge
|
Fill: SelectCommand.Connection property has not been initialized. what to do?
|
When I try to use the method GetCities(), it says:
Fill: SelectCommand.Connection property has not been initialized.
Can anyone please advise on what to do?
public class CitiesService
{
public DataSet DS;
public OleDbConnection myConnection;
public OleDbDataAdapter adapter;
public OleDbDataAdapter adapter2;
public CitiesService()
{
}
public DataSet GetCities()
{
OleDbCommand myCmd = new OleDbCommand("SPtblCities", myConnection);
myCmd.CommandType = CommandType.StoredProcedure;
OleDbDataAdapter Adapter = new OleDbDataAdapter();
Adapter.SelectCommand = myCmd;
DataSet dataSet = new DataSet();
try
{
Adapter.Fill(dataSet, "tblCities");
dataSet.Tables["tblCities"].PrimaryKey = new DataColumn[]
{
dataSet.Tables["tblCities"].Columns["CityID"]
};
}
catch (OleDbException ex)
{
throw ex;
}
return dataSet;
}
}
|
I would recommend you use this pattern, and remember that some of these objects are disposable (ConnectionString below stands in for your actual connection string):
public class CitiesService
{
public DataSet DS;
public OleDbConnection myConnection;
public OleDbDataAdapter adapter;
public OleDbDataAdapter adapter2;
public CitiesService()
{
}
public DataSet GetCities()
{
using (DataSet dataSet = new DataSet())
{
using (OleDbConnection myConnection = new OleDbConnection(ConnectionString))
{
myConnection.Open();
using (OleDbCommand myCmd = myConnection.CreateCommand())
{
myCmd.CommandText = "SPtblCities"; // the original code never told the command which procedure to run
myCmd.CommandType = CommandType.StoredProcedure;
using (OleDbDataAdapter Adapter = new OleDbDataAdapter())
{
Adapter.SelectCommand = myCmd;
try
{
Adapter.Fill(dataSet, "tblCities");
dataSet.Tables["tblCities"].PrimaryKey = new DataColumn[] { dataSet.Tables["tblCities"].Columns["CityID"] };
}
catch (OleDbException)
{
throw; // rethrow without resetting the original stack trace
}
}
}
myConnection.Close();
}
return dataSet;
}
}
}
| 0.888889 |
Use this pattern and remember that some of these objects are disposable
|
Persistent Connection Attempts By Akamai Technologies
|
I have seen this question asked in different places, but have never seen a satisfactory answer:
On different computer builds, ASUS/GIGABYTE/MSI, I constantly have outgoing connections to XXX-XXX-XXX.deploy.static.akamaitechnologies.com over HTTP Port 80, and if the connection attempts are denied, the network card (or at least internet connectivity) stops working... LAN connections still appear to work. The outgoing connections occur on every restart, and must be allowed for the network card to function properly. If the attempts are denied, the system must be restarted so that the connections will be attempted again and can be allowed.
This post suggests narrowing down an offending application, but I don't see how akamai technologies can be so intertwined in so many systems, cutting off connectivity if not allowed, etc. Something about it feels like unauthorized (or at least unwanted) information transfer, at the very least a phone-home, but I have no Wireshark captures to confirm this :)
My primary guess after looking at all the symptoms: the connection attempt is trying to see if there is internet connectivity, and if the attempt fails Windows assumes there is none... But this sounds really, really dumb.
|
CDNs commonly serve their pages based on the geographical allocation of the IP addresses of the connecting user. You could use a proxy server for all your HTTP requests, directed at a country that is nowhere close to where you are located. At the end of it, if the requests still head off towards those IP addresses, you'll know that something isn't quite right.
However, that was a while back. Now CDNs also serve content such as updates, etc. So if you can corroborate the connection as a valid service for something like Apple updates, especially with real life users, then you can be sure that it is a valid request.
| 1 |
if you can corroborate the connection as a valid service for something like Apple updates, then you can be sure that it is
|
LinkedIn / OAuth question
|
I have a client that has an (I think) odd request: I know there are modules like Social Login Pro that connect someone's Facebook / LinkedIn to a member account, but what my client wants to do is:
Allow users to submit "tips". To submit a tip, you don't need to register, you just enter your tip, after which you're prompted to either "enter your personal information yourself" or "use your linkedin profile". This should be stored with the entry, not in a member-profile, and people should be allowed to submit multiple tips.
How would this even work OAuth-wise? Are you allowed to request someone's personal info again?
Any input would be greatly appreciated!
Thanks!
|
With Social Login Pro, you can let a user add a connection to their LinkedIn profile after they have already registered and logged in to your system.
If that is what you're asking, as the question is not too clear :)
| 0.888889 |
Social Login Pro lets user add connection to LinkedIn profile
|
call/cc implementation?
|
I'm trying to find how call/cc is implemented. The best I've found is this Haskell snippet:
callCC f = Cont $ \k -> runCont (f (\a -> Cont $ \_ -> k a)) k
Although this is not as simple as I want due to the Cont and runCont. I've also found descriptions of what it does, although never as clear as actual code.
So how is it implemented in its simplest form? I am tagging this with Scheme and Haskell as those are two languages I prefer.
|
"Implementing call/cc" doesn't really make sense at the layer you're working in; if you can implement call/cc in a language, that just means it has a built-in construct at least as powerful as call/cc. At the level of the language itself, call/cc is basically a primitive control flow operator, just like some form of branching must be.
Of course, you can implement a language with call/cc in a language without it; this is because it's at a lower level. You're translating the language's constructs in a specific manner, and you arrange this translation so that you can implement call/cc; i.e., generally, continuation-passing style (although for non-portable implementation in C, you can also just copy the stack directly; I'll cover continuation-passing style in more depth later). This does not really give any great insight into call/cc itself — the insight is into the model with which you make it possible. On top of that, call/cc is just a wrapper.
Now, Haskell does not expose a notion of a continuation; it would break referential transparency, and limit possible implementation strategies. Cont is implemented in Haskell, just like every other monad, and you can think of it as a model of a language with continuations using continuation-passing style, just like the list monad models nondeterminism.
Technically, that definition of callCC does type if you just remove the applications of Cont and runCont. But that won't help you understand how it works in the context of the Cont monad, so let's look at its definition instead. (This definition isn't the one used in the current Monad Transformer Library, because all of the monads in it are built on top of their transformer versions, but it matches the snippet's use of Cont (which only works with the older version), and simplifies things dramatically.)
newtype Cont r a = Cont { runCont :: (a -> r) -> r }
OK, so Cont r a is just (a -> r) -> r, and runCont lets us get this function out of a Cont r a value. Simple enough. But what does it mean?
Cont r a is a continuation-passing computation with final result r, and result a. What does final result mean? Well, let's write the type of runCont out more explicitly:
runCont :: Cont r a -> (a -> r) -> r
So, as we can see, the "final result" is the value we get out of runCont at the end. Now, how can we build up computations with Cont? The monad instance is enlightening:
instance Monad (Cont r) where
return a = Cont (\k -> k a)
m >>= f = Cont (\k -> runCont m (\result -> runCont (f result) k))
Well, okay, it's enlightening if you already know what it means. The key thing is that when you write Cont (\k -> ...), k is the rest of the computation — it's expecting you to give it a value a, and will then give you the final result of the computation (of type r, remember) back, which you can then use as your own return value because your return type is r too. Whew! And when we run a Cont computation with runCont, we're simply specifying the final k — the "top level" of the computation that produces the final result.
What's this "rest of the computation" called? A continuation, because it's the continuation of the computation!
(>>=) is actually quite simple: we run the computation on the left, giving it our own rest-of-computation. This rest-of-computation just feeds the value into f, which produces its own computation. We run that computation, feeding it into the rest-of-computation that our combined action has been given. In this way, we can thread together computations in Cont:
computeFirst >>= \a ->
computeSecond >>= \b ->
return (a + b)
or, in the more familiar do notation:
do a <- computeFirst
b <- computeSecond
return (a + b)
We can then run these computations with runCont — most of the time, something like runCont foo id will work just fine, turning a foo with the same result and final result type into its result.
So far, so good. Now let's make things confusing.
wtf :: Cont String Int
wtf = Cont (\k -> "eek!")
aargh :: Cont String Int
aargh = do
a <- return 1
b <- wtf
c <- return 2
return (a + b + c)
What's going on here?! wtf is a Cont computation with final result String and result Int, but there's no Int in sight.
What happens when we run aargh, say with runCont aargh show — i.e., run the computation, and show its Int result as a String to produce the final result?
We get "eek!" back.
Remember how k is the "rest of the computation"? What we've done in wtf is cunningly not call it, and instead supply our own final result — which then becomes, well, final!
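The snippets above can be stitched into a single self-contained file and run directly. (The Functor and Applicative instances below are additions of mine, since modern GHC requires them for any Monad; everything else is taken from the definitions in the text.)

```haskell
newtype Cont r a = Cont { runCont :: (a -> r) -> r }

instance Functor (Cont r) where
  fmap f m = Cont (\k -> runCont m (k . f))

instance Applicative (Cont r) where
  pure a    = Cont (\k -> k a)
  mf <*> ma = Cont (\k -> runCont mf (\f -> runCont ma (\a -> k (f a))))

instance Monad (Cont r) where
  m >>= f = Cont (\k -> runCont m (\a -> runCont (f a) k))

callCC :: ((a -> Cont r b) -> Cont r a) -> Cont r a
callCC f = Cont (\k -> runCont (f (\a -> Cont (\_ -> k a))) k)

-- wtf ignores the rest of the computation and supplies its own final result
wtf :: Cont String Int
wtf = Cont (\_ -> "eek!")

aargh :: Cont String Int
aargh = do
  a <- return 1
  b <- wtf
  c <- return 2
  return (a + b + c)

main :: IO ()
main = putStrLn (runCont aargh show)  -- prints "eek!"
```

Running it shows the short-circuit in action: the additions around wtf never happen, and "eek!" is the final result.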
This is just the first thing continuations can do. Something like Cont (\k -> k 1 + k 2) runs the rest of the computation as if it returned 1, and again as if it returned 2, and adds the two final results together! Continuations basically allow expressing arbitrarily complex non-local control flow, making them as powerful as they are confusing. Indeed, continuations are so general that, in a sense, every monad is a special case of Cont. Indeed, you can think of (>>=) in general as using a kind of continuation-passing style:
(>>=) :: (Monad m) => m a -> (a -> m b) -> m b
The second argument is a continuation taking the result of the first computation and returning the rest of the computation to be run.
But I still haven't answered the question: what's going on with that callCC? Well, it calls the function you give with the current continuation. But hang on a second, isn't that what we were doing with Cont already? Yes, but compare the types:
Cont :: ((a -> r) -> r) -> Cont r a
callCC :: ((a -> Cont r b) -> Cont r a) -> Cont r a
Huh. You see, the problem with Cont is that we can't sequence actions from inside of the function we pass — we're just producing an r result in a pure manner. callCC lets the continuation be accessed, passed around, and just generally be messed around with from inside Cont computations. When we have
do a <- callCC (\cc -> ...)
foo ...
You can imagine cc being a function we can call with any value inside the function to make that the return value of callCC (\cc -> ...) computation itself. Or, of course, we could just return a value normally, but then calling callCC in the first place would be a little pointless :)
As for the mysterious b there, it's just because you can use cc foo to stand in for a computation of any type you want, since it escapes the normal control flow and, like I said, immediately uses that as the result of the entire callCC (\cc -> ...). So since it never has to actually produce a value, it can get away with returning a value of any type it wants. Sneaky!
Which brings us to the actual implementation:
callCC f = Cont (\k -> runCont (f (\a -> Cont (\_ -> k a))) k)
First, we get the entire rest of the computation, and call it k. But what's this f (\a -> Cont (\_ -> k a)) part about? Well, we know that f takes a value of type (a -> Cont r b), and that's what the lambda is — a function that takes a value to use as the result of the callCC f, and returns a Cont computation that ignores its continuation and just returns that value through k — the "rest of the computation" from the perspective of callCC f. OK, so the result of that f call is another Cont computation, which we'll need to supply a continuation to in order to run. We just pass the same continuation again since, if everything goes normally, we want whatever the computation returns to be our return value and continue on normally. (Indeed, passing another value wouldn't make sense — it's "call with current continuation", not "call with a continuation other than the one you're actually running me with".)
All in all, I hope you found this as enlightening as it is long. Continuations are very powerful, but it can take a lot of time to get an intuition for how they work. I suggest playing around with Cont (which you'll have to call cont to get things working with the current mtl) and working out how you get the results you do to get a feel for the control flow.
Recommended further reading on continuations:
The Mother of all Monads (linked previously)
The continuation passing style chapter of the Haskell Wikibook
Quick and dirty reinversion of control (showing a "real-world" use of continuations)
The continuations and delimited control section of Oleg Kiselyov's site
| 1 |
What's the "rest of the computation"?
|
Open Source Scripting Languages that work well with Microsoft SQL Server
|
I have been using PHP for a number of years now and have found it to be (relatively) well documented, easy to work with, to setup and to use.
One of its main advantages is its capability of working, without many modifications, similarly on Windows and *nix platforms.
It also has good support for connecting to Microsoft SQL Servers, in particular 2005 + 2008 through the mssql module + ntwdblib.dll or FreeTDS.
That being said, the more time I spend developing PHP, the more I feel the need to pick up another scripting language to broaden my skill set and to develop better web-based applications.
Because of this I've spent some time exploring alternative scripting languages in an effort to evaluate their suitability.
At present the biggest hurdle I have come accross is support provided by recent versions of the Open Source scripting languages, specifically: Python 3.2.2, Ruby 1.9.2, Node 0.5.7 for the Windows OS (WinXP in my case) and Microsoft SQL Server (2005 + 2008).
My existing working environ necessitates the requirement for connecting to a MSSQL database.
I'm looking for answers from developers who have experience working with either Python, Ruby or Node.js and using them to interact with a MS SQL Server.
What would be your recommended Open Source scripting language with good support for MS SQL Server?
Ideas welcome.
-P.
|
Ruby on Rails / ActiveRecord has support for mssql, using the activerecord-sqlserver-adapter gem.
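A minimal sketch of the wiring (the gem and adapter names are as published; the host, database, and credentials below are placeholders):

```ruby
# Gemfile
gem 'activerecord-sqlserver-adapter'

# config/database.yml (shown here as a comment for reference)
# development:
#   adapter:  sqlserver
#   host:     sqlserver.example.local
#   database: myapp_development
#   username: myapp
#   password: secret
```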
| 1 |
Ruby on Rails / ActiveRecord supports mssqlserver
|
What is the distinction between a city and a sprawl/metroplex... between downtown and a commercial district?
|
I am trying to understand what kinds of places the spam values on p 231 refer to in the 5th Edition main book for Shadowrun.
Per p 15, a sprawl is a plex, and a plex is a "metropolitan complex, short for metroplex". Per Google, a metroplex is "a very large metropolitan area, especially one that is an aggregation of two or more cities". A city downtown and a sprawl downtown would tend to have similar densities, but for some reason the sprawl (which includes suburbs?) has a higher spam zone noise rating (p 231). Similarly, I'd think of a downtown as being more dense and noisy (e.g. office buildings and street vendors) than a commercial district, e.g. an outdoor mall. The noise ratings make me think that I am thinking about this incorrectly. What is a better way of thinking of them?
|
It might be helpful to look into the definition of spam zone:
(p.216) spam zone: An area flooded with invasive and/or viral AR advertising, causing noise.
Because a metroplex has so many marketing targets, it seems a safe assumption that marketers would drown the plex with spam. Spam from the less dense areas would bleed into the urban cores. A smaller city with less urban/suburban territory surrounding it ostensibly wouldn't have as much spam.
| 1 |
Spam from the less dense areas would bleed into the urban cores
|
pre hung doors to fit into old frame
|
I have purchased some pre-hung doors at 30" by 80".
The jambs are 4 5/8" wide.
The house is 50 years old and the existing door frame is 5 1/4" wide.
Any solution as to how to fit these doors into the old frame?
|
All you need to do is install the door with what is called a "jamb extension". Just get some strips of 5/8" x 3/4" molding (the 5/8" makes up the difference between your 5 1/4" frame and the 4 5/8" jamb), then glue them to the non-hinge side of the new door jamb. On a painted jamb these will for all intents and purposes disappear with sanding and some wood putty. On a stained jamb you'll be able to see the extension, but you can minimize how much it stands out by picking wood with a similar grain pattern.
| 1 |
Install the door with what is called a "jamb extension"
|
What does biting off the Pitum have to do with Pregnancy?
|
I recently read that it's a Segulah (others put it as a minhag) for a pregnant woman to bite off the Pitum of the esrog on Hoshana Rabbah for an easy labor.
(please correct me if I misunderstood it)
What is the connection between biting off the pitum and childbirth?
I read in the Halacha For Today email for Wednesday, 21 Tishrei 5772 (October 19, 2011). The original email read:
It is a Segulah for pregnant women to bite off the Pitum of the Esrog on Hoshana Rabbah, and to give Tzedakah and daven for an easy labor. (See Likutei Maharich Sukkos page 106a. See also Elef Hamagen Siman 660:6 and Sefer Moed L'Kol Chai Siman 24:25 where a special Tefilah text is printed for the woman to say)
|
You may be mixing two separate customs together.
Quoted in Taamei Minhagim (pg. 521 paragraph 68), saying he saw it in some Sefer:
There is a custom for women to bite off the pitum of the Etrog. The reason is because our sages say that (according to one opinion) the forbidden fruit Adam and Chava ate was an Etrog. Therefore she bites the Pitum in order to show that "just as I have no benefit/pleasure from biting a Pitum so too I had no benefit/pleasure from the sin"
[Notice he does not say anything about Hoshanah Rabba]
Two parter brought in the Kaf HaChaim 664:60, and was mentioned in my answer here:
There is a custom to take the Etrog used on Sukkot, turn it into Etrog Jelly, and eat it on Tu B'Shevat.
There is a custom for pregnant women to eat Etrog Jelly while in labor, as a Segulah for an easy labor, and children that grow up with a good and peaceful life.
Or perhaps not, since the custom is mentioned explicitly in Likutei Maharich Part 3 (pg 106A) by Yisroel Chaim Friedman. (as linked to on the webpage quoted by yydl in the comments to the question.)
| 0.666667 |
There is a custom for women to bite off the pitum of the Etrog .
|
How to increase saturation or intensity of this printed pattern
|
The image on the left is what I see on my monitor ( CMYK Vector )
The image on the right was produced using flexography for a packaging design
The image on the right is dull and murky and doesn't 'pop' as much as it should. I think the yellow is the issue. Is it a possibility that my printer is at fault? Could I ask them to increase the % of yellow?
Would a matte or glossy finishing technique help?
OR is the design itself the issue with too many gradients making it hard to accurately reproduce through print?
|
What you see on your monitor is RGB. Yes, it's a CMYK file, and the software is likely trying to compensate, but it's still using an RGB model.
As such, what you see on screen is almost always more vibrant/saturated than what CMYK can produce. This is due to the reproducible colors (the gamut) being different in each color space.
Using coated paper and/or applying a top-coat can certainly help your CMYK print look brighter, but ultimately, you are at the mercy of what CMYK can do.
At this point, you need to talk to your printer and get their feedback. They may be able to tweak the file to accommodate. Alternatively, if it is yellow that is the issue, perhaps they can swap Y for a more vibrant Pantone Spot Yellow.
| 0.777778 |
What you see on your monitor is RGB
|
Brute force login attempt from spoofed IP's
|
I see that many of my WordPress installs are being hit with 1000+ failed login attempts using a non-existent 'admin' account name. The requests come from different IPs every time, and I see IPs such as 8.8.8.8 (google's public dns) as the origin of some of the login attempts.
I use WordFence to detect and block these attempts, but the block is based on IP, so it's not so efficient.
My question is:
Is it 'normal' for low profile WordPress sites to get these 'attacks'? I've noticed an increase in the logs during the first days of 2013.
Is it something to worry about, and is it possible to detect/verify if a login request is coming from a spoofed IP?
|
It's impossible to spoof the source IP address of a TCP connection, due to the 3-way handshake... unless of course the application is vulnerable to CWE-291: Trusting a Self Reported IP address.
Sure enough in ./wordfence/lib/wfUtils.php on line 77:
public static function getIP(){
$IP = 0;
if(isset($_SERVER['HTTP_X_FORWARDED_FOR'])){
$IP = $_SERVER['HTTP_X_FORWARDED_FOR'];
So yes, the reason why you are seeing brute force attempts from 8.8.8.8 is that WordFence is vulnerable to CWE-291. I am reporting this vulnerability to WordFence, but to be honest this vulnerability is painfully obvious. If the developer doesn't understand even the most basic flaws of trusting attacker input, then they have probably made other serious mistakes that impact security. I smell blood.
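For contrast, a safer pattern (sketched here in Python against a WSGI-style environ dict, not WordFence's actual code) only honours X-Forwarded-For when the direct TCP peer is one of your own trusted proxies:

```python
def client_ip(environ, trusted_proxies=frozenset()):
    """Return the client address for a WSGI-style environ dict.

    X-Forwarded-For is attacker-controlled, so it is only honoured when
    the direct TCP peer (REMOTE_ADDR) is one of our own trusted proxies.
    """
    remote = environ.get("REMOTE_ADDR", "")
    if remote in trusted_proxies:
        forwarded = environ.get("HTTP_X_FORWARDED_FOR", "")
        if forwarded:
            # take the address appended by the last (trusted) hop
            return forwarded.split(",")[-1].strip()
    return remote
```

With this scheme, an attacker who sends "X-Forwarded-For: 8.8.8.8" directly to the server still gets logged (and blocked) under the real address of their TCP connection.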
It's possible that a security system can make your system as a whole less secure. This is nothing new: remote code execution vulnerabilities have been found in anti-virus software. Complexity is the worst enemy of security.
| 0.888889 |
CWE-291: Trusting a Self Reported IP address
|
How to move questions from stackoverflow to dba
|
I know some questions were migrated from SO to dba.SE. My question is: how can one flag that a particular question be moved to DBA? I have around 1400 reputation on SO. How can one do it?
I found this question and I thought it was DBA-related, but it might not be:
Roles vs Schema
Also, I want to know how strictly the rule is followed that DBA-related questions should be migrated to DBA, given that SO is also a programming-related forum.
|
Think of it this way:
On which site would the question have the best chance of getting a good answer? That is... should the question be answered by a developer or a DBA?
That is what the mod will consider when reviewing your flag to move it. If you don't know which would be better, I would suggest leaving it be (if a question truly crosses the boundaries between dev and DBA, for instance).
| 0.888889 |
Should the question be answered by a developer or a DBA?
|
XPATH Selecting by position returns incoherent results
|
I need some help with an issue I can't figure out.
I have the following xml:
<?xml version="1.0" encoding="ISO-8859-1"?>
<?xml-stylesheet type="text/xsl" href="prueba.xsl"?>
<ficha>
<titulo></titulo>
<bloque>
<texto></texto>
<pregunta id="1" tipo="checkSN">
<texto>Acredita curso bienestar animal minimo 20 h</texto>
</pregunta>
<pregunta id="2" tipo="texto">
<texto>Sistemática inspección</texto>
</pregunta>
<grupo>
<texto>trato adecuado enfermos</texto>
<pregunta id="3" tipo="desplegableSNP">
<texto>Recetas correspondientes</texto>
</pregunta>
<pregunta id="4" tipo="multiple">
<texto>Disponen de comida y bebida</texto>
</pregunta>
</grupo>
<grupo>
<texto>
Heridos/Enfermos
</texto>
<pregunta id="5" tipo="multiple">
<texto>Se aprecian heridos o enfermos momento inspeccion</texto>
</pregunta>
<pregunta id="6" tipo="multiple">
<texto>Separados del resto</texto>
</pregunta>
<pregunta id="7" tipo="multiple">
<texto>Disponen de comida y bebida</texto>
</pregunta>
<pregunta id="8" tipo="multiple">
<texto>Disponen de comida y bebida</texto>
</pregunta>
</grupo>
</bloque>
<bloque>
<texto>Condiciones específicas de alojamiento y manejo</texto>
</bloque>
</ficha>
And the following XSL sheet:
<?xml version="1.0" encoding="ISO-8859-1"?>
<xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
<xsl:template match="/">
<html>
<head>
<link rel="stylesheet" type="text/css" href="prueba.css" />
</head>
<body>
<h2><xsl:value-of select="/ficha/titulo"/></h2>
<h3>1: <xsl:value-of select="//pregunta[1]/@id"/></h3>
<h3>2: <xsl:value-of select="//pregunta[2]/@id"/></h3>
<h3>3: <xsl:value-of select="//pregunta[3]/@id"/></h3>
<h3>4: <xsl:value-of select="//pregunta[4]/@id"/></h3>
<h3>5: <xsl:value-of select="//pregunta[5]/@id"/></h3>
<h3>6: <xsl:value-of select="//pregunta[6]/@id"/></h3>
<h3>7: <xsl:value-of select="//pregunta[7]/@id"/></h3>
<h3>8: <xsl:value-of select="//pregunta[8]/@id"/></h3>
<h3>c: <xsl:value-of select="count(//pregunta)"/></h3>
</body>
</html>
</xsl:template>
</xsl:stylesheet>
When I load them, I get this result:
1: 1
2: 2
3: 7
4: 8
5:
6:
7:
8:
c: 8
I don't understand why it's ignoring some nodes. If I include new nodes or move them, it always shows four results; for positions 5 to 8 it never shows anything. I need to use this kind of selection because it comes from a Java application; the stylesheet is just for testing.
|
Put //pregunta in parentheses: change your XPath expressions to (//pregunta)[1]/@id, (//pregunta)[2]/@id, and so on.
Without parentheses, //pregunta[4] evaluates to every pregunta element that is the fourth pregunta child of its own parent element.
With parentheses, (//pregunta)[4] first builds the sequence of all pregunta elements in document order and then takes the fourth element of that sequence.
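The two semantics can be reproduced outside XSLT as well. A small Python sketch against a trimmed copy of the document (Python's xml.etree applies the same per-parent positional rule as an unparenthesized //pregunta[n]):

```python
import xml.etree.ElementTree as ET

# A trimmed version of the document from the question (same nesting of ids):
xml = """<ficha><bloque>
  <pregunta id="1"/><pregunta id="2"/>
  <grupo><pregunta id="3"/><pregunta id="4"/></grupo>
  <grupo><pregunta id="5"/><pregunta id="6"/>
         <pregunta id="7"/><pregunta id="8"/></grupo>
</bloque></ficha>"""
root = ET.fromstring(xml)

# //pregunta[4]: keeps each pregunta that is the 4th pregunta child of its
# own parent -- only id 8 qualifies, hence the "missing" results.
per_parent = [p.get("id") for p in root.findall(".//pregunta[4]")]

# (//pregunta)[4]: build the full document-order sequence first,
# then take its fourth member.
doc_order = [p.get("id") for p in root.findall(".//pregunta")]
fourth = doc_order[3]
```

Here per_parent is ["8"] while fourth is "4", matching the behavior observed in the question.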
| 0.888889 |
Put //pregunta in parentheses
|
How to Set GPO to Update Compatibility View List in IE9
|
We recently started upgrading people in our organization to IE 9. One of the websites that our employees visit on a daily basis does not display the scroll bars correctly in IE 9. It does, however, display them correctly when the site is added to the compatibility view list in IE 9.
How can I set this through a group policy? Has this option been released yet?
In Windows Components\Internet Explorer\Compatibility View I only see options for IE7. Also, in Preferences\Control Panel\Internet Settings it only lets me create new group policy settings for IE 5-8.
The DC is running on W2K8R2 and is fully up to date.
Thanks for your help!
|
From a Google search (which you should have done before posting your question) it appears that if you install IE9 on the computer where you have the GPMC installed (presumably your domain controllers), the inetres.admx template will be updated with the IE9 settings.
| 0.888889 |
Inetres.admx template will be updated with IE9 settings
|
How are electromagnetic waves differentiated?
|
I would like to know how the signals for remote-controlled cars, radios, and other devices that use radio waves are told apart from each other. I know that radio waves are modulated to encode data by changing the frequency or amplitude; the waves then propagate through the air and are picked up at another location by a receiver tuned to the frequency at which they were emitted. But by now, at most places in the world, there must be numerous waves traversing any given point, so why doesn't the receiver of such a device happen to catch another wave of the same frequency instead of the one that was intended? And is there anything that stops me from building a device that emits a wide range of frequencies or amplitudes and would manipulate nearby electronics?
|
why doesn't the receiver of this device happen to catch another wave of the same frequency instead of the one that was intended?
It does catch other waves at the same frequency. This is called noise. Communications are engineered so that the signal is significantly stronger than the potential noise such that it can still be reliably demodulated.
More generally, all wireless communications use the same medium (the electromagnetic field), and the problem of allowing multiple users access to this shared medium is multiplexing. While frequency division multiplexing is most common, it is not the only way: for example ultra-wideband communications may use time-division multiplexing, and spread-spectrum may use code-division multiplexing.
These methods can also be used in combination. For example, we may tune a receiver to a particular frequency (frequency-division multiplexing), and simultaneously use a directional antenna to reduce sensitivity in directions where there is only noise (space-division multiplexing). A system's ability to distinguish the desired signal from noise is called selectivity, and the more known about the desired signal (frequency, timing, phase, polarization, etc), the more selective a receiver can be.
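A minimal numeric sketch of frequency-division multiplexing (the carrier frequencies, sample rate, and one-second window are all arbitrary assumptions): two users share the "medium" at different frequencies, and a receiver recovers one of them by keeping only its band.

```python
import numpy as np

fs = 1000                        # sample rate, Hz (assumed)
t = np.arange(fs) / fs           # one second of samples
wanted = np.sin(2 * np.pi * 50 * t)   # desired user's carrier at 50 Hz
other = np.sin(2 * np.pi * 200 * t)   # a second user at 200 Hz
medium = wanted + other               # both share the same medium

# A "tuned" receiver: keep only the band around 50 Hz in the spectrum.
spectrum = np.fft.rfft(medium)
freqs = np.fft.rfftfreq(len(medium), d=1 / fs)
mask = np.abs(freqs - 50) < 5         # crude band-pass around 50 Hz
recovered = np.fft.irfft(spectrum * mask)

# The recovered waveform closely matches the wanted signal.
err = np.max(np.abs(recovered - wanted))
```

The second user's 200 Hz signal falls outside the pass band and is rejected, which is the essence of frequency selectivity; a real receiver does the equivalent with analog or digital filters rather than an FFT.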
is there anything that stops me from having a device that emits a wide range of frequencies or amplitudes that would manipulate nearby electronics?
Regulatory bodies, such as the FCC, in conjunction with international bodies such as the ITU, establish licenses and laws which grant specific users or classes of users exclusive access to allocated spectrum. Violators are fined, or forced to stop transmitting if necessary.
A device that transmits over some range of frequencies with the intent of disabling other radio devices would be called a jammer. There's no technical reason you couldn't build one, but ultimately the government's monopoly on violence will discourage you. A military, having no reason to fear these consequences, makes jammers all the time.
| 0.888889 |
Why does the receiver catch another wave of the same frequency instead of the one intended?
|