Q:
Apache wildcard at domain level
I have a few sites, and they all have an identical setup on a single server. Instead of a separate configuration file for each of them in the sites-enabled directory, I want to have a common file.
Idea is this:
www.abc.com should have /var/www/abc as DocumentRoot,
www.xyz.com should have /var/www/xyz as DocumentRoot, etc.
All other parameters like log files and contact emails should also follow the same pattern (abc.com should have [email protected] as admin email, xyz.com should have [email protected] as admin email, etc.).
I couldn't find any tutorial on how to backreference wildcards.
regards,
JP
A:
Aha. Found the solution. VirtualDocumentRoot is the answer.
A single line like:
VirtualDocumentRoot /var/www/%0
does the job. Haven't really figured out the logging part yet, but it should be similar and easy.
See https://serverfault.com/questions/182929/wildcard-subdomain-directory-names for a nice related thread.
You need to enable the vhost_alias module for this (sudo a2enmod vhost_alias on Ubuntu).
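For reference, a fuller (hypothetical) catch-all virtual host built around that directive; mod_vhost_alias must be enabled, and the %N placeholders select dot-separated parts of the requested hostname:
```apache
# Catch-all vhost sketch; requires mod_vhost_alias
# (sudo a2enmod vhost_alias on Debian/Ubuntu).
<VirtualHost *:80>
    ServerAlias *
    # %0 is the whole hostname (www.abc.com -> /var/www/www.abc.com);
    # %2 picks the second dot-separated part (www.abc.com -> /var/www/abc),
    # matching the layout asked for in the question.
    VirtualDocumentRoot /var/www/%2
</VirtualHost>
```
Note that directives like ServerAdmin are not interpolated by mod_vhost_alias, so per-domain admin addresses would still need generated configs or something like mod_macro.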
| {
"pile_set_name": "StackExchange"
} |
Q:
How to perform a multi-path data update on Firebase when data is changed from the Firebase Console?
I am currently working on an iOS app and I'm using Firebase to power it.
Since my app is still relatively small, I often perform manual amendments on the data directly in the database. My users can submit places (that I display on a map) and I review entries manually to ensure the data is complete and correct.
I recently started using GeoFire and thus had to start denormalizing the coordinates (lat & lon) of each place.
As a result I have coordinates at 2 locations in my database
under /places/place_key/...
under /geofire/place_key/...
I'm currently looking for a way to automatically update the /geofire side of my database when I update the latitude or longitude of a place on the /places side, directly from the Firebase Console.
I'm looking for tips on how to do that. Could Firebase Functions help me for this?
Cheers,
Ed
A:
If someone happens to look for an answer to this question in the future: I followed @J. Doe's advice and used Firebase Cloud Functions.
The setup is super simple, steps here.
Here is sample code that lets me update several endpoints of my database when one of my objects is updated.
const functions = require('firebase-functions');
const admin = require('firebase-admin');
admin.initializeApp(functions.config().firebase);
exports.placeDataUpdated = functions.database.ref('/places/{placeId}').onUpdate(event => {
const place = event.data.val();
const key = event.params.placeId;
console.log("Updated place data for key: ", key);
var dataToUpdate = {};
dataToUpdate["places_summary/"+key+"/city"] = place.city;
dataToUpdate["places_summary/"+key+"/country"] = place.country;
dataToUpdate["places_summary/"+key+"/latitude"] = place.latitude;
dataToUpdate["places_summary/"+key+"/longitude"] = place.longitude;
dataToUpdate["places_summary/"+key+"/name"] = place.name;
dataToUpdate["places_summary/"+key+"/numReviews"] = place.numReviews;
dataToUpdate["places_summary/"+key+"/placeScore"] = place.placeScore;
dataToUpdate["places_summary/"+key+"/products"] = place.products;
dataToUpdate["places_summary/"+key+"/visible"] = place.onMap;
dataToUpdate["places_GeoFire/"+key+"/l/0"] = place.latitude;
dataToUpdate["places_GeoFire/"+key+"/l/1"] = place.longitude;
return event.data.ref.parent.parent.update(dataToUpdate);
});
It's super convenient and took next to no time to setup.
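As a side note, the long run of dataToUpdate assignments is just building a fan-out map, and that construction can be factored into a plain, testable helper (field names copied from the answer's code; buildFanOut is a hypothetical name, not a Firebase API):
```javascript
// Build a multi-path ("fan-out") update object for one place.
// Passing the object to ref.update() applies all paths atomically.
function buildFanOut(key, place) {
  const summaryFields = ['city', 'country', 'latitude', 'longitude',
                         'name', 'numReviews', 'placeScore', 'products'];
  const update = {};
  for (const field of summaryFields) {
    update['places_summary/' + key + '/' + field] = place[field];
  }
  update['places_summary/' + key + '/visible'] = place.onMap;
  // GeoFire stores each location under "l" as a [latitude, longitude] pair
  update['places_GeoFire/' + key + '/l/0'] = place.latitude;
  update['places_GeoFire/' + key + '/l/1'] = place.longitude;
  return update;
}
```
Inside the Cloud Function, return event.data.ref.parent.parent.update(buildFanOut(key, place)); would then replace the block of assignments.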
Q:
Why does `log_slow_queries` break `my.cnf`?
Why can't I use slow_query_log on MySQL 5.6 on CentOS 6.4?
[mysqld]
datadir=/var/lib/mysql
socket=/var/lib/mysql/mysql.sock
...
## Logging
## *** THESE LOGS WORK JUST FINE ***
log_error = /var/log/mysql/error.log
general_log_file = /var/log/mysql/mysql.log
general_log = 1
## *** THESE LOGS BREAK MYSQL ***
#log_slow_queries = /var/log/mysql/slow.log
#long_query_time = 5
#log-queries-not-using-indexes
Here's the /var/log/mysql directory:
$ ls -lh
total 100K
-rw-r----- 1 mysql root 47K Nov 22 06:02 error.log
-rw-rw---- 1 mysql root 42K Nov 22 06:05 mysql.log
-rw-rw---- 1 mysql mysql 0 Nov 22 06:01 slow.log
If I uncomment the log_slow_queries lines in /etc/my.cnf I receive the following error:
$ /etc/init.d/mysql restart
Shutting down MySQL.. SUCCESS!
Starting MySQL..... ERROR! The server quit without updating PID file (/var/lib/mysql/server.domain.com.pid).
What am I missing?
A:
Looks like MySQL renamed the option: log_slow_queries was deprecated in MySQL 5.1 and removed in 5.6, where the variable is slow_query_log (with slow_query_log_file for the path). That explains why the old name makes the server refuse to start.
This works:
slow_query_log = 1
slow_query_log_file = /var/log/mysql/slow.log
long_query_time = 5
Q:
Random order & pagination Elasticsearch
In this issue there is a feature request for ordering with an optional seed, allowing recreation of a random order.
I need to be able to paginate randomly ordered results.
How could this be done with Elasticsearch 0.19.1?
Thanks.
A:
This should be considerably faster than both answers above and supports seeding:
curl -XGET 'localhost:9200/_search' -d '{
"query": {
"function_score" : {
"query" : { "match_all": {} },
"random_score" : {}
}
}
}';
See: https://github.com/elasticsearch/elasticsearch/issues/1170
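For pagination specifically, the same seed has to be sent with every page request so that each page sees the same ordering; a hedged sketch (the seed option on random_score comes from the linked issue and later releases, not from the 0.19.1 in the question):
```json
{
  "query": {
    "function_score": {
      "query": { "match_all": {} },
      "random_score": { "seed": 987654 }
    }
  },
  "from": 20,
  "size": 10
}
```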
A:
You can sort using a hash function of a unique field (for example id) and a random salt. Depending on how truly random the results should be, you can do something as primitive as:
{
"query" : { "query_string" : {"query" : "*:*"} },
"sort" : {
"_script" : {
"script" : "(doc['_id'].value + salt).hashCode()",
"type" : "number",
"params" : {
"salt" : "some_random_string"
},
"order" : "asc"
}
}
}
or something as sophisticated as
{
"query" : { "query_string" : {"query" : "*:*"} },
"sort" : {
"_script" : {
"script" : "org.elasticsearch.common.Digest.md5Hex(doc['_id'].value + salt)",
"type" : "string",
"params" : {
"salt" : "some_random_string"
},
"order" : "asc"
}
}
}
The second example will produce more random results but will be somewhat slower.
For this approach to work the field _id has to be stored. Otherwise, the query will fail with NullPointerException.
A:
Good solution from imotov.
Here is something much simpler, where you don't need to rely on a document property:
{
"query" : { "query_string" : {"query" : "*:*"} },
"sort" : {
"_script" : {
"script" : "Math.random()",
"type" : "number",
"params" : {},
"order" : "asc"
}
}
}
if you want to set a range that would be something like:
{
"query" : { "query_string" : {"query" : "*:*"} },
"sort" : {
"_script" : {
"script" : "Math.random() * (myMax - myMin) + myMin",
"type" : "number",
"params" : {},
"order" : "asc"
}
}
}
replacing the max and min with your proper values. Note that Math.random() produces a new value on every request, so this approach won't keep a stable order across pages; for pagination you still need a fixed seed or salt, as in the answers above.
Q:
REACT Todo-List : How to check if an item already exist in the array
I was trying a simple approach, but it doesn't seem to be working.
I want to check whether the item already exists when the button is clicked, using an if statement.
//Adding Items on Click
addItem = () =>
{
let newValue = this.state.inputValue;
let newArray = this.state.inputArray;
if (newValue === newArray) {
console.log("Exist"); // this part doesnt work
} else {
newArray.push(newValue); //Pushing the typed value into an array
}
this.setState({
inputArray: newArray //Storing the new array into the real array
});
console.log(this.state.inputArray);
};
A:
Change your function like below:
addItem = () =>
{
let newValue = this.state.inputValue;
let newArray = this.state.inputArray;
if (newArray.includes(newValue)) {
console.log("Exist");
return;
}
this.setState(previousState => ({
inputArray: [...previousState.inputArray, newValue]
}), () => console.log(this.state.inputArray));
};
and don't push the new value to state directly; instead update it like below:
this.setState(previousState => ({
inputArray: [...previousState.inputArray, newValue]
}), () => console.log(this.state.inputArray));
or
let inputArray= [...this.state.inputArray];
inputArray.push("new value");
this.setState({inputArray})
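Stripped of React state handling, the duplicate check is just array logic, and it can be kept as a small pure function (addUnique is an illustrative name, not a React API):
```javascript
// Return the array itself if value is already present; otherwise return
// a new array with the value appended. Never mutates the input.
function addUnique(array, value) {
  return array.includes(value) ? array : [...array, value];
}
```
addItem could then do this.setState(prev => ({ inputArray: addUnique(prev.inputArray, this.state.inputValue) })).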
Q:
OpenDKIM starts before MariaDB on Ubuntu 18.04
I installed OpenDKIM on Ubuntu Server 18.04, using it with Modoboa, so the config file contains DSN for KeyTable and SigningTable to connect to MariaDB.
I noticed that the service always fails to start on reboot, but afterwards I can start it manually with no problem, so I checked syslog and saw these lines:
Jul 31 10:28:35 mail opendkim[897]: opendkim: /etc/opendkim.conf: dsn:mysql://opendkim:[email protected]/modoboa/table=dkim?keycol=domain_name?datacol=id: dkimf_db_open(): Can't connect to MySQL server on '127.0.0.1' (111)
Jul 31 10:28:35 mail opendkim[991]: opendkim: /etc/opendkim.conf: dsn:mysql://opendkim:[email protected]/modoboa/table=dkim?keycol=domain_name?datacol=id: dkimf_db_open(): Can't connect to MySQL server on '127.0.0.1' (111)
Jul 31 10:28:37 mail mysqld[1688]: 2018-07-31 10:28:35 139849791634560 [Note] /usr/sbin/mysqld (mysqld 10.1.29-MariaDB-6) starting as process 868 ...
Jul 31 10:28:41 mail /etc/mysql/debian-start[2018]: Upgrading MySQL tables if necessary.
From this, you can see that the mysqld is starting right after OpenDKIM, I tried to switch the sequence using:
update-rc.d mysql defaults 50 and update-rc.d opendkim defaults 95. This moved mysql right before opendkim in the order, but it still didn't have time to initialize, so OpenDKIM didn't start either.
For now I have worked around it with a custom startup script that runs sleep 10 && systemctl start opendkim, but I would like a proper solution to fix the startup order.
Thank you.
A:
On systemd you can change the boot order for a service by editing its unit file and setting, in the After= option, which service it should start after. The files for that are usually in:
/lib/systemd/system/nameofservice.service
The line should look similar to this (update with the proper name):
After=mariadb.service
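Editing files under /lib/systemd/system directly works, but a package upgrade can overwrite them; a drop-in override is the more durable variant. A sketch, assuming the MariaDB unit on this system is named mariadb.service:
```ini
# Created with: sudo systemctl edit opendkim
# (systemd writes it to /etc/systemd/system/opendkim.service.d/override.conf)
[Unit]
After=mariadb.service
Wants=mariadb.service
```
Run sudo systemctl daemon-reload afterwards if you created the file by hand.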
Q:
Calculus Complicated Substitution Derivative
When,
$$y=6u^3+2u^2+5u-2 \ , \ u= \frac{1}{w^3+2} \ , \ w=\sin x -1 $$find what the derivative of $ \ y \ $equals when $ \ x = \pi \ . $
Tried it many times, but I still can't seem to get the right answer (81).
A:
$$\frac{dy}{dx}=\frac{dy}{du}\cdot\frac{du}{dw}\cdot\frac{dw}{dx}$$
$\frac{dy}{du}=18u^2+4u+5$, $\frac{du}{dw}=\frac{-3w^2}{(w^3+2)^2}$ , $\frac{dw}{dx}=\cos x$
Can you now make the substitutions?
Note that when $x=\pi $, then $w=-1$, and $u=1$
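Evaluating the chain explicitly at $x=\pi$ confirms the stated answer:
$$\frac{dw}{dx}\Big|_{x=\pi}=\cos\pi=-1,\qquad \frac{du}{dw}\Big|_{w=-1}=\frac{-3(-1)^2}{\big((-1)^3+2\big)^2}=-3,\qquad \frac{dy}{du}\Big|_{u=1}=18+4+5=27,$$
$$\frac{dy}{dx}\Big|_{x=\pi}=27\cdot(-3)\cdot(-1)=81.$$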
Q:
Configure external domain for internet facing hardware balanced CAS on Exchange 2010
I am going to install Exchange 2010 SP1 in a single-site two-server configuration, with both servers running CAS/HUB and MAILBOX roles. I'll use DAG to achieve HA and a hardware balancer to balance the CAS roles. No Edge roles will be installed.
When running Exchange setup, at the "Configure Client Access External Domain" screen, am I right in inserting the same domain (e.g. mail.mydomain.com) on both servers?
Do I need to create a CAS array?
A:
When running Exchange setup, at the "Configure Client Access External Domain" screen, am I right in inserting the same domain (e.g. mail.mydomain.com) on both servers?
Yes, you will need to specify the domain on both CAS servers when you install them. What you'll want to do is set up your load balancer to answer to mail.yourdomain.com and then add your two CAS servers to the load balancer.
Do I need to create a CAS array?
Yes. If you don't create a CAS array, when one of the servers fail your DAG will fail over and clients will still be trying to talk direct to the dead server. With a CAS array, the client session may drop out momentarily, but will re-establish itself pretty quickly, and most importantly automatically with the other DAG member.
As an additional consideration, you only mention one load balancer. It looks like you're trying to achieve a high availability setup, and as it stands your load balancer is currently a single point of failure. Bear in mind that with Exchange 2010, pretty much everything goes through a CAS server so if your load balancer (which is also your CAS array) were to fail, you would have a big problem on your hands. Also worth bearing in mind if you haven't already is your internet connection. If you only have one connection from your building to the outside world, that is also another SPOF you should be aware of.
Q:
Why is the South Pole Telescope located exactly at the South Pole?
I read that there is less atmospheric interference for the telescope at the South Pole because the atmosphere is thin and there is less water vapor in the air. However this seems to be true for many locations on Antarctica? Are there any other reasons that this telescope is located at exactly the South Pole?
A:
Here are extra reasons to the dry air :
During the winter, sunshine does not reach the South Pole; nighttime (or daytime in the summer) extends for months. The lack of daily sunsets and sunrises makes the atmosphere extremely stable. Conducting observations in the winter also removes another contaminant to millimeter/sub-millimeter observations - the sun. All these factors conspire to make the South Pole the perfect place for the South Pole Telescope.
The further north you go from the Pole, the more the sun makes its presence felt, so these extra reasons become important when choosing the South Pole over other spots on the plateau.
A:
Just guessing here, but ...
Compare the regions with really good skies with the places that have infrastructure and people present. Most of the installations are coastal, right? Are those good places to put a telescope? And while the whole inland plateau has good skies, it has few occupied sites, and only one operated by the US.
So what is the case for putting up some other (very expensive to build and maintain) installation, when you could just drop it by South Pole Station where they already maintain a year-round presence.
A:
If the telescope were situated directly on the southern axis of the earth's rotation, the telescope's declination axis would be at zenith. The base for the axis would be level to the ground. In theory you could compensate for the earth's rotation with only one motion of the telescope. Also, it's the only place on earth where the entire southern celestial hemisphere is visible. Now, are these the reasons it was built there? Probably not, but they would be advantages.
Q:
For each value in a table show the corresponding count; if there is none, show zero
What I want to do is the following:
For each row of the vendedores (sellers) table, show the number of passes sold, but if no pass was sold, show 0.
There is an earlier query in the system that does something similar, but it only shows the sellers that have sold a ticket; I want sellers without any ticket sold to appear with 0.
SELECT
/* Selected fields */
vendedores.id,
vendedores.nome,
vendedores.email,
vendedores.telefone,
vendedores.rg,
vendedores.created_at,
count(passes.id) passes,
/* This part just sums the sales value (fees and price) */
REPLACE(REPLACE(REPLACE(format(SUM(preco * (CASE
WHEN modalidade_id IS NOT NULL THEN 100 - desconto
ELSE 100
END) / 100 + taxa * (CASE
WHEN modalidade_id IS NOT NULL THEN 100 - taxes_discounts
ELSE 100
END) / 100), 2), '.','@'),',','.'),'@',',') as Valor_Vendas
/* Joins */
FROM vendedores
INNER JOIN embaixador_passes ON vendedores.id = embaixador_passes.embaixador_id
INNER JOIN passes ON passes.id = embaixador_passes.ingresso_id
INNER JOIN eventos ON passes.evento_id = eventos.id
INNER JOIN pedidos ON pedidos.id = embaixador_passes.pedido_id
LEFT OUTER JOIN pedido_statuses ON pedido_statuses.id = pedidos.pedido_status_id
LEFT JOIN modalidades ON modalidades.id = embaixador_passes.modalidade_id
WHERE pedido_statuses.id IN (5 , 8)
AND passes.evento_id = 40;
A:
Replace:
FROM
vendedores
INNER JOIN
with:
FROM
vendedores
LEFT OUTER JOIN
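One caveat worth adding to this answer: with a LEFT JOIN, filters on the joined tables that stay in the WHERE clause (here pedido_statuses.id IN (5, 8) and passes.evento_id = 40) still throw away the sellers with no sales before the count happens, so the zero rows never appear. Row-limiting conditions generally have to move into the ON clauses, and the aggregation needs a GROUP BY. A simplified sketch of the idea, using the question's table names (the status filter would move into the pedidos join the same way):
```sql
SELECT v.id, v.nome, COUNT(p.id) AS passes_vendidos
FROM vendedores v
LEFT JOIN embaixador_passes ep ON ep.embaixador_id = v.id
LEFT JOIN passes p ON p.id = ep.ingresso_id
                  AND p.evento_id = 40
GROUP BY v.id, v.nome
```
COUNT(p.id) ignores the NULLs produced by unmatched rows, so sellers without passes come out as 0.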
Q:
EF 5.0 - Can't make bi-directional relationship work with Code First
I have a class called WidgetCollection. It has an Items property exposing a List(Of Widget) and a SelectedWidget property. I would expect EF to build the database as follows:
Add a WidgetCollection_Id property in my Widgets table, specifying
which WidgetCollection each widget is in
Add a SelectedWidget_Id property in my WidgetCollection table, specifying which of the
Widgets is selected
Add a 1-to-many relationship from WidgetCollection.Id to Widget.WidgetCollection_Id
Add a 1-to-0-or-1 relationship from Widget.Id to WidgetCollection.SelectedWidget_Id
I can confirm that it does appear to build the database schema correctly, however I get the following error if I ever save the context after assigning to SelectedWidget:
System.Data.Entity.Infrastructure.DbUpdateException occurred
HResult=-2146233087
Message=An error occurred while saving entities that do not expose foreign key > properties for their relationships. The EntityEntries property will return null because a single entity cannot be identified as the source of the exception. Handling of exceptions while saving can be made easier by exposing foreign key properties in your entity types.
With an inner exception of
Unable to determine a valid ordering for dependent operations. Dependencies may exist due to foreign key constraints, model requirements, or store-generated values.
I can prevent this error by never assigning WidgetCollection.SelectedWidget.
I guess the problem is that EF can't work out what to do with relationships in both directions, but I just can't find a way to point it in the right direction. Example code follows; all suggestions welcome!
Public Class Widget
Private miId As Integer
Public Property Id As Integer
Get
Return miId
End Get
Set(value As Integer)
miId = value
End Set
End Property
Private msName As String
Public Property Name As String
Get
Return msName
End Get
Set(value As String)
msName = value
End Set
End Property
End Class
Public Class WidgetCollection
Private miId As Integer
Public Property Id As Integer
Get
Return miId
End Get
Set(value As Integer)
miId = value
End Set
End Property
Private msName As String
Public Property Name As String
Get
Return msName
End Get
Set(value As String)
msName = value
End Set
End Property
Private moSelectedWidget As Widget
Public Property SelectedWidget As Widget
Get
Return moSelectedWidget
End Get
Set(value As Widget)
moSelectedWidget = value
End Set
End Property
Private moWidgets As New List(Of Widget)
Public Property Widgets As List(Of Widget)
Get
Return moWidgets
End Get
Set(value As List(Of Widget))
moWidgets = value
End Set
End Property
End Class
Public Class MyContext
Inherits DbContext
Public Property Widgets As DbSet(Of Widget)
Public Property WidgetCollections As DbSet(Of WidgetCollection)
End Class
Class Application
Public Sub New()
Database.DefaultConnectionFactory = New SqlCeConnectionFactory("System.Data.SqlServerCe.4.0", "", "Data Source=\EFtest.sdf")
Database.SetInitializer(New DropCreateDatabaseIfModelChanges(Of MyContext))
Dim oContext = New MyContext
Dim oWidgetA = New Widget With {.Name = "Widget A"}
Dim oWidgetB = New Widget With {.Name = "Widget B"}
Dim oWidgetCollection = New WidgetCollection With {.Name = "My widget collection"}
oWidgetCollection.Widgets.Add(oWidgetA)
oWidgetCollection.Widgets.Add(oWidgetB)
oWidgetCollection.SelectedWidget = oWidgetA 'Removing this line prevents error
oContext.WidgetCollections.Add(oWidgetCollection)
oContext.SaveChanges()
End Sub
End Class
A:
I think the exception means what it says:
Unable to determine a valid ordering for dependent operations.
These two lines...
oWidgetCollection.Widgets.Add(oWidgetA)
oWidgetCollection.SelectedWidget = oWidgetA
...mean that EF must store the oWidgetCollection before it can set the WidgetCollection_Id foreign key in oWidgetA, but the second line requires to store the objects the other way around, namely that oWidgetA must be stored before EF can set the foreign key SelectedWidget_Id in oWidgetCollection.
To resolve the conflict I believe you must save the changes twice:
oWidgetCollection.Widgets.Add(oWidgetA)
oWidgetCollection.Widgets.Add(oWidgetB)
oContext.WidgetCollections.Add(oWidgetCollection)
oContext.SaveChanges()
oWidgetCollection.SelectedWidget = oWidgetA
oContext.SaveChanges()
By the way: This expectation...
Add a 1-to-0-or-1 relationship from Widget.Id to WidgetCollection.SelectedWidget_Id
...is not correct. EF will create another one-to-many relationship, i.e. the same SelectedWidget can be selected for many WidgetCollections. The default relationship EF will create by convention when you have navigation properties only on one side of the relationship is always one-to-many. You need data annotations or Fluent API to override this default behaviour.
I suggest leaving this relationship as one-to-many. One-to-one relationships are more difficult, and EF only supports one-to-one relationships with shared primary keys, which would mean that you can't select arbitrary widgets as selected. The only possible selected widget would be the one with the same primary key value that the WidgetCollection has.
Q:
convert string to long in LLVM assembly code
I am trying to convert a string to an integer in LLVM assembly code. The code works fine with atoi but I want to switch to strtol.
This is the code:
; initialise a number
@number0 = private unnamed_addr constant [2 x i8] c"5\00"
%str = getelementptr [2 x i8]* @number0, i64 0, i64 0
; the endpointer that indicates an error
%endptr = alloca i8*
; the actual call of strtol
%addr = getelementptr i8* %str, i64 0
%new_long = call i64 @strtol(i8* %addr, i8** %endptr)
; debug printing
%after_casting = getelementptr [18 x i8]* @after_casting, i64 0, i64 0
call i64(i8*, ...)* @printf(i8* %after_casting, i64 %new_long)
Now, the debug printf message prints 0. I guess something is wrong with the endptr passing. What am I doing wrong?
A:
When wondering about things like this, just run Clang with LLVM IR emission (clang -S -emit-llvm). Note in the output below that strtol takes a third argument, the numeric base (the i32 10), which the two-argument call in the question omits. For example this C code:
int main ()
{
char szNumbers[] = "2001";
char * pEnd;
long int li1;
li1 = strtol (szNumbers,&pEnd,10);
printf ("%ld\n", li1);
return 0;
}
Turns into this IR:
@main.szNumbers = private unnamed_addr constant [5 x i8] c"2001\00", align 1
@.str = private unnamed_addr constant [5 x i8] c"%ld\0A\00", align 1
; Function Attrs: nounwind uwtable
define i32 @main() #0 {
entry:
%szNumbers = alloca [5 x i8], align 1
%pEnd = alloca i8*, align 8
%0 = getelementptr inbounds [5 x i8]* %szNumbers, i64 0, i64 0
call void @llvm.memcpy.p0i8.p0i8.i64(i8* %0, i8* getelementptr inbounds ([5 x i8]* @main.szNumbers, i64 0, i64 0), i64 5, i32 1, i1 false)
%call = call i64 @strtol(i8* %0, i8** %pEnd, i32 10) #1
%call1 = call i32 (i8*, ...)* @printf(i8* getelementptr inbounds ([5 x i8]* @.str, i64 0, i64 0), i64 %call) #1
ret i32 0
}
Q:
LuaJ (Java Lua Library): Calling Lua functions in other files from a Lua file
To begin, I'm aware of this question, but I don't think it quite fits what I'm doing. Either way, the answer there is a bit confusing, in my opinion. I'd like to find an answer that's more specific to my situation.
The goal here is for the Lua file chatterToolsTest to successfully print "Test success" to the console. Unfortunately, my current approach isn't quite working. Can someone please help? I'm not the best at Lua, so maybe my Lua code is just wrong in this case. Please check out the snippets below.
Another constraint: I can't enable the use of modules from the Java side. Any referencing between the two Lua files has to be done purely in Lua. This is because I'm developing a modding system for a Java project and need the Lua to work with minimal changes on the Java side.
Please keep in mind that I'm not storing my Lua files inside the JAR file or any packages; they are contained in a folder in the root working directory of the Java program, like a folder of resources.
chatterToolsTest.lua:
function main()
print("Test start.");
local test = require("chatterTools");
chatterTools:test();
end
chatterTools.lua, the class called by chatterToolsTest.lua:
function test()
print("Test success");
end
Both of these files are in the folder world/NOKORIWARE/lua/.
And lastly, here's the Java test class, using LuaJ, that calls them:
public class LuaTest {
public static void main(String args[]) {
new LuaTest().run("NOKORIWARE/lua/chatterToolsTest.lua", "main");
}
private Globals buildGlobals() {
Globals globals = new Globals();
globals.load(new JseBaseLib());
globals.load(new PackageLib());
globals.load(new Bit32Lib());
globals.load(new TableLib());
globals.load(new StringLib());
globals.load(new JseMathLib());
globals.load(new WhitelistedLuajavaLib());
LoadState.install(globals);
LuaC.install(globals);
return globals;
}
/**
* Runs the given lua file. It must be relative to the lua path.
*/
private void run(String luaPath, String functionName, Object... arguments) {
LuaValue[] coercedValues = null;
if (arguments != null) {
//Coerce arguments into LuaValues
coercedValues = new LuaValue[arguments.length];
for (int i = 0; i < arguments.length; i++) {
coercedValues[i] = CoerceJavaToLua.coerce(arguments[i]);
}
}
//Configure lua file
Globals globals = buildGlobals();
globals.get("dofile").call(LuaValue.valueOf("./world/" + luaPath));
//Call the passed-in function of the lua file.
try {
LuaValue call = globals.get(functionName);
if (arguments != null) {
call.invoke(coercedValues);
}else {
call.invoke();
}
} catch (Exception e) {
e.printStackTrace();
TinyFileDialog.showMessageDialog("Caught " + e.getClass().getName(), e.getMessage(), TinyFileDialog.Icon.INFORMATION);
}
}
}
This is the error that's printed when I run the Java program:
org.luaj.vm2.LuaError: @./world/NOKORIWARE/lua/chatterToolsTest.lua:4 module 'chatterTools' not found: chatterTools
no field package.preload['chatterTools']
chatterTools.lua
no class 'chatterTools'
at org.luaj.vm2.LuaValue.error(Unknown Source)
at org.luaj.vm2.lib.PackageLib$require.call(Unknown Source)
at org.luaj.vm2.LuaClosure.execute(Unknown Source)
at org.luaj.vm2.LuaClosure.onInvoke(Unknown Source)
at org.luaj.vm2.LuaClosure.invoke(Unknown Source)
at org.luaj.vm2.LuaValue.invoke(Unknown Source)
at nokori.robotfarm.test.LuaTest.run(LuaTest.java:64)
at nokori.robotfarm.test.LuaTest.main(LuaTest.java:21)
Any help or links to relevant resources is appreciated.
A:
The default LuaJ working directory is the same as Java's. Once I figured that out, I was able to correctly use require().
chatterTools.lua was changed to this:
local chatterTools = {}
function chatterTools.test()
print("Test success");
end
return chatterTools;
And finally chatterToolsTest.lua had to be changed like this:
function main()
print(package.path);
local chatterTools = require("world.NOKORIWARE.lua.chatterTools");
chatterTools:test();
end
Lua handles packages like above, so instead of world/NOKORIWARE/lua/chatterTools.lua it turns into what you see in the require() call.
After these changes, I ran the program and got the following:
?.lua
Test success
All of this considered, this solution is a lot more straight-forward than the answer in the question I linked at the start of this question. Hopefully this will help some of you out there.
To read more on how I figured this out, check these resources out:
how to call function between 2 .lua
https://forums.coronalabs.com/topic/38127-how-to-call-a-function-from-another-lua-file/
Q:
Python: building a CCDF out of a list
I have the following list, where the 1st element is a generic value and the second is the number of occurrences of that value:
mylist=[(2, 45), (3, 21), (4, 12), (5, 7),
(6, 2), (7, 2), (8, 3), (9, 2),
(10, 1), (11, 1), (15, 1), (17, 2), (18, 1)]
and I want to compute the CCDF (Complementary Cumulative Distribution Function) of those values appearing as second element of each tuple.
My code:
ccdf=[(i,sum(k>=i for i in mylist)) for i,k in mylist]
But this is not working as the outcome is void:
ccdf=[(2, 0), (3, 0), (4, 0), (5, 0),
(6, 0), (7, 0), (8, 0), (9, 0),
(10, 0), (11, 0), (15, 0), (17, 0), (18, 0)]
The sum of values in the second position in each tuple is 100. So, I would like to know how many times I have a value >= 2 (100-44=56), how many times I have a value >= 3 (100-44-21=35), and so forth. The result would thus be:
ccdf=[(2, 56), (3, 35), (4, 23), (5, 16),
(6, 14), (7, 12), (8, 9), (9, 7),
(10, 6), (11, 5), (15, 4), (17, 3), (18, 1)]
What is wrong in my list comprehension?
A:
Your inner list comprehension is off.
There are two issues:
The correct syntax for a conditional (list) comprehension is: [x for x in someiterable if predicate(x)]
You are using the same variable name (i) in both iterations. That is confusing and error-prone.
Try this instead:
ccdf=[(i,sum(k2 for i2,k2 in mylist if i2 >= i)) for i,k in mylist]
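As a sanity check, the corrected comprehension can be run directly. Note that with >= the first entry pairs 2 with the full total of 100; the results differ from the asker's hand-computed figures, which are internally inconsistent (the counts in mylist sum to 100, and the count of 2s is 45, not the 44 used in the question):
```python
# For each value i in the list, count total occurrences of all values >= i.
mylist = [(2, 45), (3, 21), (4, 12), (5, 7),
          (6, 2), (7, 2), (8, 3), (9, 2),
          (10, 1), (11, 1), (15, 1), (17, 2), (18, 1)]

ccdf = [(i, sum(k2 for i2, k2 in mylist if i2 >= i)) for i, k in mylist]
print(ccdf[:4])  # [(2, 100), (3, 55), (4, 34), (5, 22)]
```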
Q:
Why does an increased number of cluster nodes speed up queries in Hadoop's MapReduce?
I just started learning Hadoop. The official guide mentions that doubling the number of cluster nodes makes querying double the amount of data as fast as before, whereas a traditional RDBMS would still spend twice the time on the query.
I cannot grasp the relation between the cluster and processing data. I hope someone can give me some idea.
A:
It's the basic idea of distributed computing.
If you have one server working on data of size X, it will spend time Y on it.
If you have 2X data, the same server will (roughly) spend 2Y time on it.
But if you have 10 servers working in parallel (in a distributed fashion) and they all have the entire data (X), then they will spend Y/10 time on it. You would gain the same effect by having 10 times more resources on the one server, but usually this is not feasible and/or doable. (Like increasing CPU power 10-fold is not very reasonable.)
This is of course a very rough simplification and Hadoop doesn't store the entire dataset on all of the servers - just the needed parts. Hadoop has a subset of the data on each server and the servers work on the data they have to produce one "answer" in the end. This requires communications and different protocols to agree on what data to share, how to share it, how to distribute it and so on - this is what Hadoop does.
Q:
Why did Achashveirosh take Esther if she was already married?
According to the opinion that Queen Esther was married to Mordechai.
Achashveirosh was looking for unmarried girls (Esther 2:2–4). Why did he take a married woman? Especially considering that Queen Esther was not looking to get married, why didn't she just make her marital state public?
A:
I know this isn't the most geshmake answer, but Ibn Ezra (2:7 - p. 8 here) is clearly bothered by a similar question and says that perhaps the drasha that Mordechai took Esther for a wife doesn't mean that he actually married her, but that such was his intention. (I would add that in order to keep all the drashos one would have to conclude that he had at least betrothed her, otherwise one would not be able to substantiate the drasha of כאשר אבדתי אבדתי - from where Chazal derive that with her willful relations with Achashverosh she became forbidden to Mordechai.)
A:
It is possible that, though the idea as originally outlined for the king by his advisers did in fact include the criterion of 'besulot', when the king issued the edict that criterion was omitted. Hence we do not find it listed in 2:8 and on, where it lists נְעָרוֹת alone:
וַיְהִי, בְּהִשָּׁמַע דְּבַר-הַמֶּלֶךְ וְדָתוֹ, וּבְהִקָּבֵץ נְעָרוֹת רַבּוֹת אֶל-שׁוּשַׁן הַבִּירָה, אֶל-יַד הֵגָי
Q:
Why does the value of i become negative after some iterations
I have created the following program to find the factors of a number, but I am not able to understand why the value of i becomes negative after a few iterations.
#include <stdio.h>
int main()
{
int a,b,i;
printf("enter the number: ");
scanf("%d", &a);
printf("entered value is %d\n", a);
for(i = 1; i < a; i++)
{
printf("iterating for a = %d\n", a);
b = a % i;
if(b == 0)
{
printf("%d\n", i);
}
else
{
printf("a = %d, i = %d, modulo = %d\n", b);
}
}
return (0);
}
A:
Fix:
printf("a = %d, i = %d, modulo = %d\n", b);
to
printf("a = %d, i = %d, modulo = %d\n", a, i, b);
The original call has three %d specifiers but only one argument, so printf reads garbage for the two missing arguments; that garbage is what shows up looking like a negative i. Passing fewer arguments than format specifiers is undefined behavior in C, and the variable i itself never actually changes sign.
Also, your program doesn't compute a factorial; it prints the divisors of a. A factorial would look like:
b = 1;
for (i = 1; i <= a; i++)
    b *= i;
printf("Factorial for a = %d\n", b);
Q:
image index out of range PIL
This program creates a 600 x 600 image and then initializes four points.
Each of these four points then move 10% of the distance towards the point
closest to them in a clockwise direction. After each move, the program draws
a line between each of the pairs of points. The program stops when the points
are sufficiently close together.
from PIL import Image
from math import *
# Initial white image
n=600
img = Image.new("RGB", (n, n), (255, 255, 255))
# Draws a line between (p1x, p1y) and (p2x, p2y)
def drawLine(p1x, p1y, p2x, p2y):
t = 0.0
while t < 1.0:
x = int (n * (p1x + t * (p2x - p1x)))
y = int (n * (p1y + t * (p2y - p1y)))
img.putpixel((x, y),(0, 0, 255))
t += 0.001
# Initialize four points
P1 = (x1, y1) = (0.0, 0.0)
P2 = (x2, y2) = (1.0, 0.0)
P3 = (x3, y3) = (1.0, 1.0)
P4 = (x4, y4) = (0.0, 1.0)
# Draws lines
for counter in range(600):
x1 = .9 * x1 + .1 * x2
y1 = .9 * y1 + .1 * y2
drawLine(x1, y1, x2, y2)
x2 = .9 * x2 + .1 * x3
y2 = .9 * y2 + .1 * y3
drawLine(x2, y2, x3, y3) # Doesn't work
x3 = .9 * x3 + .1 * x4
y3 = .9 * y3 + .1 * y4
drawLine(x3, y3, x4, y4) # Doesn't work
x4 = .9 * x4 + .1 * x1
y4 = .9 * y4 + .1 * y1
drawLine(x4, y4, x1, y1)
# Saves image in Lab09.png
img.save("Lab09.png")
img.show("Lab09.png")
So basically the lines commented with # Doesn't work cause this error:
Traceback (most recent call last):
File "/Users/e154675/Desktop/Lab09.py", line 41, in <module>
drawLine(x2, y2, x3, y3)
File "/Users/e154675/Desktop/Lab09.py", line 25, in drawLine
img.putpixel((x, y),(0, 0, 255))
File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/PIL/Image.py", line 1518, in putpixel
return self.im.putpixel(xy, value)
IndexError: image index out of range
I'd like to know how to fix this issue and what's causing it.
(I'm on a macbook pro using IDLE)
Thank you guys so much!!! :) <3
A:
With drawLine(x2, y2, x3, y3), you have x2 = .9 * x2 + .1 * x3, where x2 and x3 are initially defined to be 1.0. Thus, at the time of the function call, x2 is 1. The first time through your line-drawing loop, when t=0.0, you'll set x to int (n * (p1x + t * (p2x - p1x))), which works out to 1.0 * 600, or 600. So, you'll end up calling img.putpixel using a pixel coordinate whose x component is 600. On a 600x600 image, the outside corner is at (599,599). The result is going to be an IndexError.
To test this theory, you can try making your image one pixel bigger and see if that helps:
img = Image.new("RGB", (n+1, n+1), (255, 255, 255))
Alternatively, move your points (P1, etc.) further away from the edges of your image, like making them (0.1,0.1) and (0.9,0.9) or something.
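A third, more general option (my own sketch, not from the original answer) is to clamp every computed coordinate into the valid pixel range [0, n-1] before calling putpixel:

```python
def clamp(v, n):
    """Clamp an integer pixel coordinate into the valid range [0, n-1]."""
    return max(0, min(v, n - 1))

# Inside drawLine, use the clamped coordinates instead of x and y directly:
#   img.putpixel((clamp(x, n), clamp(y, n)), (0, 0, 255))

print(clamp(600, 600))  # 599: the out-of-range index that caused the IndexError
```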
Q:
Difference in message-passing model of Akka and Vert.x
Am a Scala programmer and understand Akka from a developer point of view. I have not looked into Akka library's code. Have read about the two types of actors in the Akka model - thread-based and event-based - but not having run Akka at large scale I dont have experience of configuring Akka for production. And am completely new to Vert.x. So, from the choices perspective to build a reactive application stack I want to know -
Is the message-passing model of Akka and Vert.x very different? How?
Are the data-structures behind Akka's actors and Vert.x's verticles to buffer messages very different?
A:
At a superficial level they're really similar, although I personally consider Vert.x's ideas closer to an MQ system than to Akka... the Vert.x topology is flatter: a verticle shares a message with another verticle and receives a response... Akka instead is more like a tree, where you have several actors and you can supervise actors using other actors. For simple projects maybe that's not a big deal, but for big projects you could appreciate a more "hierarchic" system...
Vert.x, on the other hand, offers better interoperability between very popular languages*. For me that is a big point: where you would otherwise need to mix actors with an MQ system and deal with more complexity, Vert.x makes it simple and elegant. So which is better? It depends: if your system will be built only on Scala, then Akka could be the best way; if you need communication with JavaScript, Ruby, Python, Java, etc., and don't need a complex hierarchy, then Vert.x is the way to go.
*(using JSON, which could be an advantage or disadvantage compared to)
Also you must consider that Vert.x is a full solution: TCP, HTTP server, routing, even WebSockets!!! That is pretty amazing because they offer a full stack and the API is very clean. If you choose Akka you would need to use a framework like Play, Xitrum or Spray. Personally I don't like any of them.
Also remember that Vert.x is an unopinionated platform; you can use Akka or Kafka with it, for example, with almost no overhead. The way every part of the system is decoupled inside a verticle makes it very simple.
Vert.x is a big project with an amazing perspective, but it is really new; if you need a solution right now maybe it is not the better option. Fortunately you can learn both and use both in the same project.
A:
After doing a bit of Google searching I figured out that a detailed comparison of Akka vs Vert.x has not yet been done (at least I couldn't find one).
Computation model:
Vert.x is based on Event Driven model.
Akka is based on Actor Model of concurrency,
Reactive Streams:
Vert.x has Reactive Streams builtin
Akka supports Reactive Streams via Akka Streams, and has stream operators (via a Scala DSL) that are very concise and clean.
HTTP Support
Vert.x has builtin support of creating network services ( HTTP, TCP etc )
Akka has Akka HTTP for that
Scala support
Vert.x is written in Java
Akka is written in Scala and its fun to work on
Remote services
Vert.x supports services, so we need to explicitly create services
Akka has Actors which can be deployed anywhere on the network, with support for clustering, replication, load-balancing, supervision etc.
References:
https://groups.google.com/forum/#!topic/vertx/ppSKmBoOAoQ
https://blog.openshift.com/building-distributed-and-event-driven-applications-in-java-or-scala-with-akka-on-openshift/
https://en.wikipedia.org/wiki/Vert.x
http://akka.io/
Q:
IList grouping then create new listarray
There is an IList<> with 12 elements.
Now I make the list wrap around in a circle, end-to-end,
and then group this list with three elements per group, where the beginning of the next group is the end of the previous group, like this:
[0][1][2],[2][3][4],[4][5][6],.......[10][11][0].
How can I accomplish this? C# would be preferred.
A:
Try this code:
static void Main(string[] args)
{
var list = new List<int> { 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11 };
foreach (var b in Batch(list))
{
foreach (var n in b)
Console.Write(n + " ");
Console.WriteLine();
}
}
static IEnumerable<IList<int>> Batch(IList<int> list)
{
for (int i = 0; i < list.Count; i += 2)
{
var batch = new List<int>();
for (int j = i; j < i + 3; j++)
if (j < list.Count)
batch.Add(list[j]);
int count = batch.Count;
for (int j = 0; j < 3 - count; j++)
batch.Add(list[j]);
yield return batch;
}
}
Q:
Hide table rows in angular Js
I have a table, and I have already styled it using ng-class when rows satisfy a condition. Now I want to show only those rows that satisfy the same condition, on a button click. I have written a controller which checks whether the data received is within 24 hours and marks the data cell. Up to this point it's working. Now I need to add a button and show only the rows which have this td marked as not received in time.
<tbody>
<tr ng-repeat ="data in log">
<td>{{data.UniqueId}}</td>
<td>{{data.Name}}</td>
<td ng-class ="{'data-notreceived' : dataNotReceived('data.receivedTime')}">{{data.receivedTime
}}
</tbody>
</table>
A:
I think something like this should work. Basically, clicking the button will toggle between showing all or only the items marked as 'data not received'.
<tbody>
<tr ng-repeat ="data in log" ng-show="showAll || dataNotReceived(data.receivedTime)">
<td>{{data.UniqueId}}</td>
<td>{{data.Name}}</td>
<td ng-class ="{'data-notreceived' : dataNotReceived('data.receivedTime')}">{{data.receivedTime}}
</tr>
</tbody>
// in controller
$scope.showAll = true;
$scope.onButtonClick = function() {
$scope.showAll = !$scope.showAll;
return false;
}
Q:
confusing with Queryslice.setrange
I am new to Cassandra and Hector. Now I am trying to retrieve some data which I stored in Cassandra. There are a lot of columns, some of which have a prefix.
column1 column2 column3 prefix1_prefix2_column3 prefix1_prefix2_column4 ....and so on.
Now I want to get all the columns with prefix1_prefix2_.
However, I got more than I wanted; some other columns are also returned.
The CF comparator is BytesType; I also tried UTF8Type, and it doesn't work.
The following is my code:
SliceQuery<UUID, String, ByteBuffer> query = HFactory.createSliceQuery(
keyspace, UUIDSerializer.get(), stringSerializer,
ByteBufferSerializer.get());
String columnPrifx = "prefix1_prefix2";
query.setKey(keyuuid).setColumnFamily("UserLogin");
query.setRange(columnPrifx, columnPrifx, false, Integer.MAX_VALUE);
//query.setRange(columnPrifx, null, false, Integer.MAX_VALUE);
//i also tried null above
ColumnSliceIterator<UUID, String, ByteBuffer> iterator = new ColumnSliceIterator<UUID, String, ByteBuffer>(
query, null, "\uFFFF", false);
while (iterator.hasNext()) {
HColumn<String, ByteBuffer> c = iterator.next();
System.out.println(c.getName());
}
So, that's all; I got more columns than I expected... could anyone help me?
Thank you very much.
A:
Don't concatenate strings while generating the key. Try to split your column name into a composite key like "prefix1_prefix2_" and "column3". Now if you fetch data as shown below you will get your result:
Composite startRange = new Composite();
startRange.addComponent(0, "prefix1_prefix2_",Composite.ComponentEquality.EQUAL);
Composite endRange = new Composite();
endRange.addComponent(0, "prefix1_prefix2_",Composite.ComponentEquality.GREATER_THAN_EQUAL);
query.setRange(startRange, endRange, false, Integer.MAX_VALUE);
Q:
Finding binding constraints of a mixed-integer-program
I want to find constraints which are binding at the optimal solution of an MIP problem, solved by Cplex in c++. By binding, I mean constraint where the value of the LHS is equal to the value of the RHS. For example, if the solution of a problem is:
x = 1, y = 0,
then constraint x + y <= 2 is non-binding (LHS = 1 + 0 < 2 = RHS),
but x - y <= 1 is binding (LHS = 1 - 0 = 1 = RHS).
This could be done for LPs using the getSlack or getDual functions of IloRange: if the slack of a constraint is zero, or the dual value is non-zero, the constraint is binding.
I can't find any function of CPLEX that gives this property or value for IloRange, IloConstraint, or similar objects when the problem is an MIP. I would also prefer not to do this manually in C++ (extracting each variable of a constraint and summing their values per constraint). Is there any way to do this?
A:
I found the answer: IloCplex::getValue(IloNumExprArg) actually gives you the value of an expression (and therefore of a constraint's LHS) given the current solution. Comparing this value to the RHS constant determines whether or not the constraint is binding.
Q:
Printing with delphi
I am facing some difficulties while printing: when I print my reports to a physical printer the texts are perfectly centred, but when I print the same report to a PDF printer (e.g. CutePDF) or the XPS Document Writer the left margin becomes 0. Meanwhile, when I try to adjust the margin it works fine in PDF and XPS, but physical printing prints the pages with some extra left margin. I am not able to find the cause of this difference; I also tried to set the margin only for non-physical printing but was not able to do so.
It would be great if it were possible to set the margin according to the printer selection, e.g. if I select the PDF printer or XPS writer the margin gets changed. I am using the Printer.Canvas.TextOut() procedure to print the text.
Can anybody please help me with this?
A:
Some points which are worth be highligted:
From the Windows (and Delphi's TPrinter.Canvas) POV, there is no such concept as margins during drawing: the whole paper size is available to the canvas - for instance, X=0 will point to the absolute leftmost part of the paper;
There are so called "hardware margins" or "physical margins", depending on the printer capability: this is the non printable area around the paper; that is, if you draw something in this area, it won't be painted - these margins depend on the technology and model of printer used, and in some cases, it is possible to retrieve those "margins" values from the printer driver via GetDeviceCaps API calls;
But, from my experiment, do not trust those "physical margins" as retrieved by the printer driver - it is better (and more esthetical) to use some software defined margins, and let your user change it if necessary (like the "Page layout" options of MS Word);
PDF printers usually are virtual printers, so they do not have any "physical margin";
When you print a PDF document, Acrobat Reader is able to "fit" the page content to the "physical margins" of the physical printer.
So here are some possible solutions:
From Acrobat Reader, if your PDF has no margin, click on Print, then select "Fit to Printable Area" in the "Page Handling / Page Scaling" option - I guess you have "None " as settings here so the result is truncated by the printer;
From your Delphi application, set some "logical" margins (e.g. 1 cm around your paper) when drawing your report - that is, do not start at X=0 and Y=0, but with some offsets, and let the width and height of your drawing area be smaller (see for instance how is implemented our Open Source Report engine);
From your Delphi application, if you use a Report component, there should be some properties to set the margins.
See this article about general printing using Delphi (some info is old, but most is still accurate), or set up properly your report engine.
Q:
Use of join method in CompletableFuture class vs get method
I wanted to implement a functionality where a big file gets broken down into chunks and processing can happen in parallel.
I used CompletableFuture to run tasks in parallel.
unfortunately , it doesnt work unless i use join. Im surprised that this is happening, since according to docs, get is also a blocking methd in the class which returns the result. can someone please help me in figuring out what i am doing wrong.
//cf.join(); if i uncommnet this everything works
in case i uncomment the above line in the method processChunk, everything works fine. my values are printed and everything. however if i remove it, nothing happens. all i get are notifications that futures have copleted but the contents are not printed .
This is my output
i cmpleteddone
i cmpleteddone
i cmpleteddone
i cmpleteddone
i cmpleteddone
My text file is a pretty small file(for now)
1212451,London,25000,Blocked
1212452,London,215000,Open
1212453,London,125000,CreditBlocked
1212454,London,251000,DebitBlocked
1212455,London,2500,Open
1212456,London,4000,Closed
1212457,London,25100,Dormant
1212458,London,25010,Open
1212459,London,27000,Open
12124510,London,225000,Open
12124511,London,325000,Open
12124512,London,425000,Open
12124513,London,265000,Open
12124514,London,2577000,Open
12124515,London,2504400,Open
package com.org.java_trial.thread.executors;
import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
public class ProcessReallyBigFile {
private static final ExecutorService ex = Executors.newFixedThreadPool(5);
private static CompletableFuture<String> processChunk(List<String> lines) {
CompletableFuture<String> cf = CompletableFuture.supplyAsync(() -> {
//just for purposes of testing, will be replaced with major function later
lines.stream().forEach(System.out::println);
return "done";
}, ex);
        //cf.join(); if i uncomment this everything works
return cf;
}
private static void readInChunks(String filepath, Integer chunksize) {
List<CompletableFuture<String>> completable = new ArrayList<>();
try (BufferedReader reader = Files.newBufferedReader(Paths.get(filepath))) {
String line = null;
List<String> collection = new ArrayList<String>();
int count = 0;
while ((line = reader.readLine()) != null) {
if (count % chunksize == chunksize - 1) {
collection.add(line);
completable.add(processChunk(collection));
collection.clear();
} else {
collection.add(line);
}
count++;
}
// any leftovers
if (collection.size() > 0)
completable.add(processChunk(collection));
} catch (IOException e) {
e.printStackTrace();
}
for (CompletableFuture c : completable) {
c.join();
if (c.isDone() || c.isCompletedExceptionally()) {
try {
System.out.println("i cmpleted" + c.get());
} catch (InterruptedException | ExecutionException e) {
// TODO Auto-generated catch block
e.printStackTrace();
}
}
}
ex.shutdown();
}
public static void main(String[] args) {
String filepath = "C:\\somak\\eclipse-workspace\\java_thingies\\java_trial\\account_1.csv";
readInChunks(filepath, 3);
}
}
A:
The reason is this:
collection.clear();
Control returns to the calling method without .join(), and the collection that your task is referring to gets cleared. You are lucky you aren't getting an exception thrown for concurrent access. Concurrent access to shared resources should always be synchronized. I would rather have done this:
synchronized(collection) {
collection.clear();
}
and
synchronized(collection) {
lines.stream().forEach(System.out::println);
}
This will ensure thread safety while accessing the collection object, as a thread will be required to hold the monitor on the collection instance before performing any update on it.
Also, as pointed out by @Holger, do this:
synchronized(collection) {
collection.add(line);
}
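An alternative to locking (my own sketch, not from the original answer) is to hand each task a defensive copy of the buffer, so the subsequent clear() cannot touch the data the task sees:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.CompletableFuture;

public class ChunkDemo {

    // The task captures its own private copy of the lines, so later
    // clear() calls on the caller's buffer cannot affect it.
    static CompletableFuture<Integer> processChunk(List<String> lines) {
        return CompletableFuture.supplyAsync(lines::size);
    }

    public static void main(String[] args) {
        List<CompletableFuture<Integer>> futures = new ArrayList<>();
        List<String> buffer = new ArrayList<>();
        for (int i = 1; i <= 6; i++) {
            buffer.add("line" + i);
            if (buffer.size() == 3) {
                // Defensive copy: the task and the buffer no longer share state.
                futures.add(processChunk(new ArrayList<>(buffer)));
                buffer.clear(); // safe: the task holds its own copy
            }
        }
        int total = futures.stream().mapToInt(CompletableFuture::join).sum();
        System.out.println(total); // 6: all lines reached the tasks
    }
}
```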
Q:
Join with select SQL query?
I'm trying to run a query on a table (the one Wordpress uses) where I want to select the ID and post_type columns from one table, then do a Left Join to another table, two separate times (getting separate data).
This is what I have so far, but it's not cutting the mustard:
SELECT derby_posts.id AS pID,
derby_posts.post_type AS tier
FROM derby_posts
LEFT JOIN (SELECT derby_postmeta.post_id AS dbID1,
derby_postmeta.meta_key AS dbMeta1)
ON pid = dbid1
AND dbmeta1 = 'twitter'
LEFT JOIN (SELECT derby_postmeta.post_id AS dbID2,
derby_postmeta.meta_key AS dbMeta2)
ON pid = dbid2
AND dbmeta2 = 'website'
WHERE tier IN ('local', 'regional', 'national')
I'm sure I'm missing something super simple...
Edit: here's the solution that worked for me. Table alias helped, putting all my SELECT statements together cleaned things up. Also, I realized I could remove items from the SELECT, even though I'm using them in the Join, which cleans up the results a lot.
SELECT
db.ID as id,
db.post_type as tier,
dpm1.meta_value as twitter,
dpm2.meta_value as website
FROM derby_posts db
LEFT JOIN derby_postmeta dpm1 ON (db.ID = dpm1.post_id AND dpm1.meta_key = 'twitter' )
LEFT JOIN derby_postmeta dpm2 ON (db.ID = dpm2.post_id AND dpm2.meta_key = 'website' )
WHERE db.post_type IN ('local','regional','national')
A:
I'm sure I'm missing something super simple...
You are right!
You need to give your selects an alias, and use that alias in the ON clause. You are also missing a FROM <table> - a required part of a SELECT statement that reads from a table:
LEFT JOIN (
SELECT derby_postmeta.post_id AS dbID1,
derby_postmeta.meta_key AS dbMeta1
FROM someTable
) dpm ON pid = dpm.dbid1 AND dpm.dbmeta1 = 'twitter'
I gave the results of your SELECT an alias dpm, and used it to "link up" the rows from the inner select to the rows of your outer select.
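The double self-join pattern in the fix can be exercised end-to-end with an in-memory SQLite database; the table layout below is a simplified stand-in for the WordPress schema, not the real thing:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE posts (id INTEGER, post_type TEXT);
CREATE TABLE postmeta (post_id INTEGER, meta_key TEXT, meta_value TEXT);
INSERT INTO posts VALUES (1, 'local');
INSERT INTO postmeta VALUES (1, 'twitter', '@team1'), (1, 'website', 'http://example.com');
""")

# Join the meta table twice, once per meta_key, each join under its own alias.
rows = conn.execute("""
SELECT p.id, p.post_type, m1.meta_value AS twitter, m2.meta_value AS website
FROM posts p
LEFT JOIN postmeta m1 ON p.id = m1.post_id AND m1.meta_key = 'twitter'
LEFT JOIN postmeta m2 ON p.id = m2.post_id AND m2.meta_key = 'website'
WHERE p.post_type IN ('local', 'regional', 'national')
""").fetchall()

print(rows)  # [(1, 'local', '@team1', 'http://example.com')]
```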
Q:
Get Current slide of SLICK SLIDER and add Classes to inner elements
I have a Slick slider like the example shown in the slick page, I am using the code like this,
<div class="slideshow">
<img src="http://lorempixel.com/500/250" />
<img src="http://lorempixel.com/500/251" />
<img src="http://lorempixel.com/500/249" />
</div>
with a thumbnail carousal
<div class="thumbs">
<img src="http://lorempixel.com/100/100/" />
<img src="http://lorempixel.com/100/100/" />
<img src="http://lorempixel.com/100/100/" />
</div>
Now the Js Code is something like this:
$('.slideshow').slick({
slidesToShow: 1,
slidesToScroll: 1,
arrows: false,
fade: true,
asNavFor: '.thumbs'
});
$('.thumbs').slick({
slidesToShow: 3,
slidesToScroll: 1,
asNavFor: '.slideshow',
dots: true,
centerMode: true,
focusOnSelect: true
});
This works fine, but what I am trying to achieve is to add something (like a class) to the current thumb slide, or to an inner element of the current thumb (here e.g. the img).
I tried code like this:
$('.thumbs').on('beforeChange', function(event, slick, currentSlide, nextSlide){
$('.slick-current img').addClass('works');
});
but it's not working. What is wrong with my code? Is there a way to get this working properly?
A:
Change your beforeChange function as below:
$('.slideshow').on('beforeChange', function(event, slick, currentSlide, nextSlide){
$(".slick-slide").removeClass('works');
$('.slick-current').addClass('works');
});
Please find this fiddle for your reference.
Q:
How to NUnit test for a method's attribute existence
public interface IMyServer
{
[OperationContract]
[DynamicResponseType]
[WebGet(UriTemplate = "info")]
string ServerInfo();
}
How do I write an NUnit test to prove that the C# interface method has the [DynamicResponseType] attribute set on it?
A:
Something like:
Assert.IsTrue(Attribute.IsDefined(
typeof(IMyServer).GetMethod("ServerInfo"),
typeof(DynamicResponseTypeAttribute)));
You could also do something involving generics and delegates or expressions (instead of the string "ServerInfo"), but I'm not sure it is worth it.
For [WebGet]:
WebGetAttribute attrib = (WebGetAttribute)Attribute.GetCustomAttribute(
typeof(IMyServer).GetMethod("ServerInfo"),
typeof(WebGetAttribute));
Assert.IsNotNull(attrib);
Assert.AreEqual("info", attrib.UriTemplate);
Q:
Separating multiple strings in an Excel cell
I'm searching for an excel formula (VBA isn't an option for this scenario) that could split two text strings that appear in a single cell.
Each string can be of any length and contain spaces, but will be separated by two or more spaces. The first string will almost always be preceded by multiple spaces, too.
So for example, you may have cells with values like:
north wing second floor
south korea dosan-park
From which I'd want to extract north wing and second floor as two separate cells from the first line, and from the second line get south korea and dosan-park.
Any thoughts? Many thanks in advance!
A:
From your examples, I realized that there can be many variations of the data set. To write a formula you should understand the stages and try to make it happen.
Method 1: Formulas
If your data is in cell A1 and we assume that the first part always have two separate strings then the formula is as below for the first part (example: north wing, south korea)
=LEFT(TRIM(A1),FIND(CHAR(1),SUBSTITUTE(TRIM(A1)," ",CHAR(1),2))-1)
and as below for the second part (example: second floor, dosan-park)
=RIGHT(TRIM(A1),LEN(TRIM(A1))-FIND(CHAR(1),SUBSTITUTE(TRIM(A1)," ",CHAR(1),2)))
Method 2: Excel tools and formulas
The other solution I can come up with is using Excel Tools. You can take the text in one or more cells, and split it into multiple cells using the Convert Text to Columns Wizard.
Do as below:
Select the cell or column that contains the text you want to split.
Select Data tab, in Data Tools section click on Text to Columns tool.
In the Convert Text to Columns Wizard, select Delimited and then click on Next.
Select the Delimiters for your data. In your case just select Space. You can see a preview of your data in the Data preview window. Click on Next.
Select the Column data format or use what Excel chose for you (General). Select the Destination, which is the address of your converted data. In case you want to keep your original data for any reason like comparing you should change the default destination address. I recommend you to change it to the next column address. For example, if your data is on column A and row 1 (address $A$1), you should change the destination address to $B$1. Click on Finished.
Now you have each string separated in a new cell, you can concatenate them with formulas in a new cell.
After conversion, if you have the words "north", "wing", "second", and "floor" consecutively in cells C1, D1, E1, and F1 then you can use the below formula in cell G1 to concatenate C1 and D1 to make the string "north wing"
=CONCAT(C1," ",D1)
For the second part, I used TRIM because in the case of "dosan-park" the next cell would be empty. This will add extra space at the end of "dosan-park".
=TRIM(CONCAT(E1, " ",F1))
For more clarification look at the screenshot below.
Replacing the formula with value
You can copy (Ctrl + C) the cell with the formula and use paste special (Ctrl + V) then click on the small paste icon that appears near the cell then click on V button.
Q:
Endianness and OpenCL Transfers
In OpenCL, transfer from CPU client side to GPU server side is accomplished through clEnqueueReadBuffer(...)/clEnqueueWriteBuffer(...). However, the documentation does not specify whether any endian-related conversions take place in the underlying driver.
I'm developing on x86-64, and a NVIDIA card--both little endian, so the potential problem doesn't arise for me.
Does conversion happen, or do I need to do it myself?
A:
The transfer do not do any conversions. The runtime does not know the type of your data.
You can probably expect conversions only on kernel arguments.
Q:
Validator with attributes for custom component
I want to create validator for custom component where I want to pass few attributes. This is how code looks like (it's not original code but is implemented in the same way):
Custom component (customComponent.xhtml)
<h:body>
<composite:interface componentType="compositeComponent">
<composite:attribute name="name" required="true" />
<composite:attribute name="value" required="true" />
<composite:attribute name="values" required="true" />
<composite:editableValueHolder name="validator" targets="#{cc.attrs.id}"/>
</composite:interface>
<composite:implementation>
<h:selectOneMenu value="#{cc.attrs.value}" id="#{cc.attrs.id}">
<f:selectItems value="#{cc.attrs.values}" var="item" itemValue="#{item.value}" itemLabel="#{item.label}" />
<composite:insertChildren/>
</h:selectOneMenu>
</composite:implementation>
</h:body>
As you can see I want to pass validator to h:selectOneMenu. Component can be (to be more precisely 'should be' because it currently doesn't work) used in this way:
<ns:customComponent name="myComp" value="#{controller.value}" values="#{controller.values}">
<f:validator validatorId="myValidator" for="validator">
<f:attribute name="param1" value="param1Value"/>
<f:attribute name="param1" value="param1Value"/>
</validator>
</ns:customComponent>
I tested this code and validator is called if i don't pass attributes into it.
<ns:customComponent name="myComp" value="#{controller.value}" values="#{controller.values}">
<f:validator validatorId="myValidator" for="validator"/>
</ns:customComponent>
I found that attributes can be passed in this way:
<ns:customComponent name="myComp" value="#{controller.value}" values="#{controller.values}">
<f:validator validatorId="myValidator" for="validator"/>
<f:attribute name="param1" value="param1Value"/>
<f:attribute name="param1" value="param1Value"/>
</ns:customComponent>
but (as far as I know) only the validator will be injected into the custom component (that's why for="validator" is set on the validator), so I won't be able to get these attributes. How can I pass attributes to this validator?
BTW, if possible I would like to pass parameters as nested elements because it looks cleaner. This one:
<f:selectOneMenu>
<f:validator validatorId="myValidator">
<f:attribute name="param1" value="value1"/>
</f:validator>
</f:selectOneMenu>
instead of this one:
<f:selectOneMenu>
<f:validator validatorId="myValidator"/>
<f:attribute name="param1" value="value1"/>
</f:selectOneMenu>
A:
I found that <f:validator/> can't have nested elements, so this one won't work:
<f:validator validatorId="myValidator">
<f:attribute name="param1" value="value1"/>
</f:validator>
To solve my problem I've created custom validator. To do it I had to:
Create taglib.xml file in WEB-INF dir.
<?xml version="1.0"?>
<!DOCTYPE facelet-taglib PUBLIC
"-//Sun Microsystems, Inc.//DTD Facelet Taglib 1.0//EN"
"http://java.sun.com/dtd/facelet-taglib_1_0.dtd">
<facelet-taglib>
<namespace>http://customtag.com/tags</namespace>
<tag>
<tag-name>uniqueValidator</tag-name>
<validator>
<validator-id>simpleValidator</validator-id>
</validator>
<!-- To show hints on this component add this but it's not required -->
<attribute>
<description>List of elements to check. Validation succeeds if each item is unique (equals() method is used to compare items).</description>
<name>items</name>
<required>true</required>
</attribute>
</tag>
</facelet-taglib>
Register taglib.xml in web.xml
<context-param>
<param-name>javax.faces.FACELETS_LIBRARIES</param-name>
<param-value>/WEB-INF/taglib.xml</param-value>
</context-param>
Write validator code
package validator;
import java.util.List;
import javax.faces.component.UIComponent;
import javax.faces.context.FacesContext;
import javax.faces.validator.FacesValidator;
import javax.faces.validator.Validator;
import javax.faces.validator.ValidatorException;
@FacesValidator("simpleValidator")
public class SimpleValidator implements Validator {
private List<Object> items;
@Override
public void validate(final FacesContext arg0, final UIComponent arg1, final Object arg2) throws ValidatorException {
// use items list
}
public void setItems(final List<Object> items) {
this.items = items;
}
}
This is how it can be used in view / composite component:
<mycomp:custom name="test11">
<myval:uniqueValidator items="#{model.values}" for="validator"/>
</mycomp:custom>
Of course, to use the validator in the custom component I had to define an editableValueHolder and inject it using insertChildren (see my question).
Q:
Glassfish server with a Dockerfile
Good evening, everyone!!
I have the following problem:
I need to bring up a container running the Glassfish server. The image was generated from a Dockerfile; however, when running a container from this image, at the last step, that is, deploying the application, the container finishes its execution after some time.
Here is the body of the Dockerfile:
FROM ubuntu:latest
COPY ./glassfish/ /usr/local/
RUN apt-get update && apt-get install -y make git openjdk-8-jdk
ENV JAVA_HOME /usr/lib/jvm/java-8-openjdk-amd64
ENV GLASSFISH_HOME /usr/local/glassfish4/glassfish
ENV PATH $JAVA_HOME/bin:$GLASSFISH_HOME/bin:$PATH
RUN mkdir /app
COPY ./mercado/target/mercado.war /app
WORKDIR /app
RUN asadmin start-domain domain1 && \
asadmin deploy /app/mercado.war
Anyway, I need the container to keep running so that the application does not stop.
I could not identify the error in the Dockerfile, but if you can help me, thanks in advance for your attention!
A:
Hey @john.sousa, so, a container is made to die; the problem isn't in your last line but in the understanding of how it works, OK? I suggest studying the Docker documentation, and especially how to make it run in daemon mode. Once you learn that, you have already solved your problem. Docker seems difficult, but it is quite simple and will help you a lot later on.
In the last line of your script, use CMD and point it at an entrypoint file; in that file you run asadmin start-domain domain1 and asadmin deploy /app/mercado.war.
Another thing: I noticed in your script that you copy the Glassfish directory and point the environment variable at glassfish4; I believe you will run into a problem there once you get your script running correctly.
Besides that, there are already ready-made images for Glassfish 4 (glassfish) and Java 8 (openjdk:8-jdk-alpine), so you don't need to do all that installation and environment-variable configuration work.
Finally, so that you can rate this answer very well (but even so, consider reading the documentation):
You should run this inside the folder where the Dockerfile is:
docker build -t mercado:1.0 .
This command will build a new image.
docker run --net=host mercado:1.0
This command will run the application and set up all the ports automatically.
Some good links for learning Docker concepts are below:
https://blog.geekhunter.com.br/docker-na-pratica-como-construir-uma-aplicacao
https://www.mundodocker.com.br/o-que-e-docker/
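The CMD + entrypoint approach described in the answer could look something like the sketch below. Note this is illustrative: the file name entrypoint.sh and the tail -f trick for keeping a foreground process alive are my assumptions, not part of the original answer.

```
#!/bin/sh
# entrypoint.sh: start the domain, deploy the app, then keep a
# foreground process running so the container does not exit
asadmin start-domain domain1
asadmin deploy /app/mercado.war
tail -f /usr/local/glassfish4/glassfish/domains/domain1/logs/server.log
```

In the Dockerfile, you would then replace the final RUN with something like `COPY entrypoint.sh /app/`, `RUN chmod +x /app/entrypoint.sh`, and `CMD ["/app/entrypoint.sh"]`.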
Q:
Access key values of dictionary with tuple as key
I have a dictionary dict like this, with tuples as keys:
dict = { (1, 1): 10, (2,1): 12}
and tried to access it like this :
new_dict = {}
for key, value in dict:
new_dict["A"] = key[0]
new_dict["B"] = key[1]
new_dict["C"] = value
But it fails, since key does not seem to resolve to a tuple. What is the correct way?
A:
To iterate over key value pairs, use the .items() method of the dict.
Also, give the dictionary a name like my_dict to avoid overwriting the builtin dict.
new_dict = {}
for key, value in my_dict.items():
new_dict["A"] = key[0]
new_dict["B"] = key[1]
new_dict["C"] = value
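Note that assigning to the same three keys "A"/"B"/"C" on every pass leaves new_dict holding only the last pair. A minimal runnable variant (my illustrative extension of the answer) that keeps every pair collects one dict per entry instead:

```python
my_dict = {(1, 1): 10, (2, 1): 12}

# one row dict per (tuple key, value) pair, instead of overwriting "A"/"B"/"C"
rows = []
for key, value in my_dict.items():
    rows.append({"A": key[0], "B": key[1], "C": value})

print(rows)  # [{'A': 1, 'B': 1, 'C': 10}, {'A': 2, 'B': 1, 'C': 12}]
```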
Q:
How to add printThis() element exception?
Is it possible to not print a div included in a printThis() element?
example :
<div id="print-it">
<div>hello</div>
<div id="no-print-this">
<button>must be show on page but not print</button>
</div>
</div>
<button id="printMe">Print button</button>
AND JQUERY
$("#printMe").click(function () {
$('#print-it').printThis();
});
});
The div with id "no-print-this" always shows up in the print output... is it possible to hide it on the print page with this jQuery printing plugin method?
I can add $('#no-print-this').hide(); to the jQuery click function, but then the div "no-print-this" is not shown again after closing the browser's print window...
The @media print method has no effect here, so I don't know if it's possible with the printThis jQuery plugin.
Thanks !
A:
so the solution : (thanks to Showdev)
$("#printMe").click(function () {
$('#print-it').printThis({
importCSS: true,
importStyle: true,
loadCSS: "/assets/css/print_rules_page1.css"
});
});
You need to use importCSS and loadCSS with the printThis() jQuery plugin.
CSS ("loadCSS" load file print_rules_page1.css)
@media print
{
#no-print-this,#no-print-this2 *{
display: none !important;
}
}
Q:
Godox TT350n/Nikon D610 - How to fire my Godox flash "only" as a rear curtain sync?
I want to use my Godox TT350n as a Rear-Curtain off-camera flash; and not fire my in-built flash at all.
i.e.
I want to take a 15" shot, and at the end of the shutter close, I want to fire my off-camera flash and not my in-built flash.
So far I have tried this combination on my camera and my flash:
The camera flash setting is set to Rear Curtain Sync:
Built-in flash is set to "--", Group A mode is selected as M, and Channel is selected as 1 CH:
On flash, settings are matched with what is set in camera:
Can someone help me find the right settings please?
A:
Commander mode is for Nikon's optical CLS system or their radio WR system. It does not work with Godox's 2.4 GHz radio system. While the Godox TT685-N and V860 II-N full-sized speedlights can be optical CLS commanders/slaves, the TT350-N/V350-N cannot.
Your TT350-N is set into radio master mode [antenna icon in lower left with M], to be used as a transmitter on your camera's hotshoe. To get it to work as a radio slave, you also need a Godox transmitter for the camera hotshoe (e.g., XPro-N, X2T-N, Flashpoint R2 Pro II-N). The transmitter should let you set 2nd curtain; instead of using the camera menus to set wireless control, you'd use the transmitter's UI instead.
If you do get a Godox transmitter, you would need to set the TT350-N to radio slave mode by holding down the SYNC button, and when the radio icon flashes, use the wheel/SET button to cycle through M/S/off.
You can trip the TT350 off-camera with your pop-up flash (out of Commander mode, and in M/TTL) by using the S1/S2 "dumb" optical slave modes (be in M mode, no radio function set, and use the SLAVE button on the TT350), but these are similar to SU-4 mode, and do not allow for TTL/HSS communication, and possibly do not allow 2nd curtain (depends on when the pop-up flash burst fires). You may also need to take the TT350 out of radio slave mode for this.
See also: My Godox flash won't fire off-camera. What should I check?
Q:
Define the content-type of an element together with an enumerable attribute
I am having trouble defining the content-type of an element ("phonenumber") whilst at the same time defining a property ("location") with enumeration restrictions.
Here's where I'm stuck:
<xs:element name="phonenumbers">
<xs:complexType>
<xs:sequence maxOccurs="unbounded">
<xs:element name="phonenumber">
<!--
Try #1:
Define the type of the attribute "location" and the
content-type of the element "phonenumber". But without
enumeration restrictions on "location".
-->
<xs:complexType>
<xs:simpleContent>
<xs:extension base="xs:integer">
<xs:attribute name="location" type="xs:string"/>
</xs:extension>
</xs:simpleContent>
</xs:complexType>
<!--
Try #2:
Enumerables in attribute "location" with no content-type
of the element "phonenumber", thus being unable to put
anything in it.
-->
<xs:complexType>
<xs:attribute name="location">
<xs:simpleType>
<xs:restriction base="xs:string">
<xs:enumeration value="Home"/>
<xs:enumeration value="Work"/>
<xs:enumeration value="Mobile"/>
</xs:restriction>
</xs:simpleType>
</xs:attribute>
</xs:complexType>
</xs:element>
</xs:sequence>
</xs:complexType>
</xs:element>
A:
What about
<xs:element name="phonenumbers">
<xs:complexType>
<xs:sequence>
<xs:element name="phonenumber" minOccurs="0" maxOccurs="unbounded">
<xs:complexType>
<xs:simpleContent>
<xs:extension base="xs:integer">
<xs:attribute name="location" use="required">
<xs:simpleType>
<xs:restriction base="xs:string">
<xs:enumeration value="Home"/>
<xs:enumeration value="Work"/>
<xs:enumeration value="Mobile"/>
</xs:restriction>
</xs:simpleType>
</xs:attribute>
</xs:extension>
</xs:simpleContent>
</xs:complexType>
</xs:element>
</xs:sequence>
</xs:complexType>
</xs:element>
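For illustration, an instance document that should validate against the schema above (assuming no target namespace; the phone numbers are made up) would look like:

```xml
<phonenumbers>
  <phonenumber location="Home">5551234</phonenumber>
  <phonenumber location="Mobile">5559876</phonenumber>
</phonenumbers>
```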
Q:
Remove extra data from get_data string
I am now having a problem with get_data. The website that I used before returned the result in plain text, but now it has quotes ("") around the result. How do I get rid of them?
add_filter( 'mycred_buycred_get_cost', 'adjust_buycred_points_cost', 10, 4 );
function adjust_buycred_points_cost( $cost, $amount, $prefs, $buy_creds ) {
$dogeprice1 = $amount * (get_data('https://www.dogeapi.com/wow/?a=get_current_price'));
$roundedprice = (number_format((float)$dogeprice1, 2, '.', ''));
return $roundedprice + 2.50;
}
Expected Result would be 0.00111617 instead of "0.00111617"
A:
$len = strlen($dogeprice1);
$result = substr($dogeprice1, 1, $len - 2);
This will remove the first and last character (note the length argument must be $len - 2, not $len - 1, or the trailing quote survives). I guess it will work for you.
Q:
Running StyleCopAnalyzers without building solution
I would like to use StyleCopAnalyzers https://github.com/DotNetAnalyzers/StyleCopAnalyzers to find StyleCop violations in my code. However, I can't find a way to run them without building the entire solution. Is that even possible?
A:
Have a look at the StyleCopTester project. That shows off a lot of the groundwork to run StyleCop Analyzers - and any other Roslyn Analyzer - using the Roslyn (Microsoft.CodeAnalysis) APIs.
Q:
Android form validation using reactive programming
I'm fairly new to RxJava and RxAndroid. I have two EditTexts, one for a password and one for password confirmation. Basically I need to check if the two strings match. Is it possible to do this using Observables? I would really appreciate an example so I can grasp it. Cheers.
A:
First, create Observable out of your EditText. You can utilize RxBinding library or write wrappers by yourself.
Observable<CharSequence> passwordObservable =
RxTextView.textChanges(passwordEditText);
Observable<CharSequence> confirmPasswordObservable =
RxTextView.textChanges(confirmPasswordEditText);
Then merge your streams and validate values using combineLatest operator:
Observable.combineLatest(passwordObservable, confirmPasswordObservable,
new BiFunction<CharSequence, CharSequence, Boolean>() {
@Override
public Boolean apply(CharSequence c1, CharSequence c2) throws Exception {
String password = c1.toString();
String confirmPassword = c2.toString();
// isEmpty checks needed because RxBindings textChanges Observable
// emits initial value on subscribe
return !password.isEmpty() && !confirmPassword.isEmpty()
&& password.equals(confirmPassword);
}
})
.subscribe(new Consumer<Boolean>() {
@Override
public void accept(Boolean fieldsMatch) throws Exception {
// here is your validation boolean!
// for example you can show/hide confirm button
if(fieldsMatch) showConfirmButton();
else hideConfirmButton();
}
}, new Consumer<Throwable>() {
@Override
public void accept(Throwable throwable) throws Exception {
// always declare this error handling callback,
// otherwise in case of onError emission your app will crash
// with OnErrorNotImplementedException
throwable.printStackTrace();
}
});
subscribe method returns Disposable object. You have to call disposable.dispose() in your Activity's onDestroy callback (or OnDestroyView if you are inside Fragment) in order to avoid memory leaks.
P.S. The example code uses RxJava2
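The combining function's logic is plain Java and can be checked in isolation, independent of RxJava (a sketch; the class name and sample values are mine):

```java
public class PasswordMatcher {
    // Mirrors the combineLatest predicate: both fields non-empty and equal.
    public static boolean fieldsMatch(CharSequence c1, CharSequence c2) {
        String password = c1.toString();
        String confirmPassword = c2.toString();
        return !password.isEmpty() && !confirmPassword.isEmpty()
                && password.equals(confirmPassword);
    }

    public static void main(String[] args) {
        System.out.println(fieldsMatch("hunter2", "hunter2")); // true
        System.out.println(fieldsMatch("hunter2", "hunter"));  // false
    }
}
```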
Q:
I somehow can't stop my setTimeout() loop
So I have a major function which triggers another function every 2 to 17 seconds, but when I try to stop it with clearTimeout() it just goes on and completely ignores the clearTimeout().
So this is my major function:
var itemTimer;
var stopTimeout;
function major (){
var itemTime = Math.floor(Math.random() * 15000) + 2000;
itemTimer = setTimeout('items()', itemTime);
stopTimeout = setTimeout('major()',itemTime);
}
And this is my stop timeout function:
function stopTimer() {
clearTimeout(itemTimer);
clearTimeout(stopTimeout);
}
Thank you for helping
A:
Your setTimeout() is being called incorrectly; you're invoking items() and major(). Instead, you need to pass them as functions to be invoked.
Don't include the parentheses, and don't wrap the function names in quote marks.
Instead of:
itemTimer = setTimeout('items()', itemTime);
stopTimeout = setTimeout('major()',itemTime);
You're looking for:
itemTimer = setTimeout(items, itemTime);
stopTimeout = setTimeout(major, itemTime);
Hope this helps! :)
Q:
calling sidebar for product detail page only
I want to call my custom sidebar on the product detail page only. Below is the code I am using in Admin > Category > Custom Design:
<reference name="left">
<block type="core/template" name="Designer Sidebar" template="page/design/sidebar.phtml"/>
</reference>
But it adds the sidebar on all pages. How can I restrict it to the product detail page only? Right now my custom sidebar also shows on the product listing page, and I do not want it there.
Based on Marius' answer I updated my code to the below, in Admin > Category > Custom Design:
<catalog_product_view>
<reference name="left">
<block type="core/template" name="Designer Sidebar" template="page/design/sidebar.phtml"/>
</reference>
</catalog_product_view>
But after that I do not see sidebar.phtml there.
A:
You can restrict it to the product pages by entering the xml in the layout update section of each product (but that's wrong and time consuming) or you can add this in one of your layout xml files:
<catalog_product_view>
<reference name="left">
<block type="core/template" name="Designer Sidebar" template="page/design/sidebar.phtml"/>
</reference>
</catalog_product_view>
Q:
Moving from $(0,0)$ to $(5,5)$ without right angles
In a Cartesian coordinate system, we can move from $(a,b)$ to $(a+1,b) , (a,b+1)$ and $(a+1,b+1)$, but there must be no right angle occur if we draw lines during the move. In how many ways can we do that so that we start from $(0,0)$ and end at $(5,5)$?
I found a solution on AoPS, but they just use a brute-force method. Can anyone give some hints please? Thank you!
Also, I think this problem is from the AIME, but I don't know the year.
A:
Generating function approach.
Let $H(x,y)$, $V(x,y)$ and $D(x,y)$ be the generating functions of the paths which end at $(n,m)$ with a horizontal step, a vertical step, or a diagonal step respectively.
Then the following linear relations hold
$$\begin{cases}
H=x(H+D)\\
V=y(V+D)\\
D-1=xy(H+V+D)
\end{cases}
$$
By solving the system, we find that the generating function of all the paths is
$$F(x,y)=H(x,y)+V(x,y)+D(x,y)=\frac{1-xy}{1-x-y+x^2y^2}.$$
Hence the number of paths from the origin to $(n,n)$ for $n\geq 0$ is
$$[x^ny^n]\frac{1-xy}{1-x-y+x^2y^2}=1, 1, 3, 9, 27, 83, 259, 817, 2599, 8323, 26797,\dots.$$
Therefore, for $n=5$, the answer is $83$.
The sequence appears in OEIS as A171155 and, according to the references, the diagonal of $F(x,y)$ is
$$\sqrt{\frac{1-x}{1-3x-x^2-x^3}}.$$
A:
If I understand the problem correctly, and you let $\ n_h(x,y)\ $ be the number of paths from $\ (0,0)\ $ to $\ (x,y)\ $ with final step horizontal from $\ (x-1,y)\ $, $\ n_v(x,y)\ $ the number with final step vertical from $\ (x,y-1)\ $, and $\ n_d(x,y)\ $ the number with final step diagonal from $\ (x-1,y-1)\ $ then $\ n_h, n_v, n_d\ $ must satisfy the following recursions
\begin{align}
n_h(x,y)&=n_h(x-1,y)+n_d(x-1,y)\\ n_v(x,y)&=n_v(x,y-1)+ n_d(x,y-1)\\
n_d(x,y)&=n_h(x-1,y-1)+ n_v(x-1,y-1)+n_d(x-1,y-1)
\end{align}
for $\ 1\le x,y\le5\ $, and the following initial conditons
\begin{align}
n_h(x,0)&=1, n_v(x,0)=n_d(x,0)=0\ \text{ for }\ 1\le x\le5\ ,\\
n_v(0,y)&=1, n_h(0,y)=n_d(0,y)=0 \ \text{ for }\ 1\le y\le5,\ \text{and}\\
n_v(1,1)&=n_h(1,1)=0, n_d(1,1)=1\ .
\end{align}
Applying these recursions to the initial conditions we get the values of $\ n_h(x,y), n_v(x,y), n_d(x,y)\ $ given in the following table:
\begin{array}{c|cccccc}
{}_y\backslash{}^x&0&1&2&3&4&5\\
\hline
0&&(1,0,0) &(1,0,0) &(1,0,0) &(1,0,0) &(1,0,0)\\
1&(0,1,0) & (0,0,1)&(1,0,1)&(2,0,1)&(3,0,1)&(4,0,1)\\
2 &(0,1,0)&(0,1,1)&(1,1,1)&(2,1,2)&(4,1,3)&(7,1,4)\\
3& (0,1,0) &(0,2,1)&(1,2,2)&(3,3,3)&(6,4,5)&(11,5,8)\\
4 &(0,1,0) &(0,3,1)&(1,4,3)&(4,6,5)&(9,9,9)&(18,13,15)\\
5 &(0,1,0) &(0,4,1)&(1,7,4)&(5,11,8)&(13,18,15)&(28,28,27)
\end{array}
The total number of paths from $\ (0,0)\ $ to $\ (5,5)\ $ is therefore $\ n_h(5,5)+n_v(5,5)+n_d(5,5)=28+28+27=83\ $.
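Both answers can be checked numerically. A short dynamic program over the last-step direction (the same recursion as the table above, seeded with a single virtual "no previous step" state at the origin that behaves like a diagonal step, so any first step is allowed) reproduces the sequence:

```python
def count_paths(n):
    # h/v/d[x][y]: number of admissible paths ending at (x, y) whose last
    # step was horizontal, vertical, or diagonal, respectively.
    h = [[0] * (n + 1) for _ in range(n + 1)]
    v = [[0] * (n + 1) for _ in range(n + 1)]
    d = [[0] * (n + 1) for _ in range(n + 1)]
    d[0][0] = 1  # virtual start state
    for s in range(1, 2 * n + 1):        # sweep anti-diagonals x + y = s
        for x in range(0, n + 1):
            y = s - x
            if not (0 <= y <= n):
                continue
            if x >= 1:
                h[x][y] = h[x - 1][y] + d[x - 1][y]   # H may follow H or D
            if y >= 1:
                v[x][y] = v[x][y - 1] + d[x][y - 1]   # V may follow V or D
            if x >= 1 and y >= 1:
                d[x][y] = h[x - 1][y - 1] + v[x - 1][y - 1] + d[x - 1][y - 1]
    return h[n][n] + v[n][n] + d[n][n]

print([count_paths(n) for n in range(6)])  # [1, 1, 3, 9, 27, 83]
```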
Q:
Difficulty with persisting a collection that references an internal property at design time in Winforms and .net
The easiest way to explain this problem is to show you some code:
Public Interface IAmAnnoyed
End Interface
Public Class IAmAnnoyedCollection
Inherits ObjectModel.Collection(Of IAmAnnoyed)
End Class
Public Class Anger
Implements IAmAnnoyed
End Class
Public Class MyButton
Inherits Button
Private _Annoyance As IAmAnnoyedCollection
<DesignerSerializationVisibility(DesignerSerializationVisibility.Content)> _
Public ReadOnly Property Annoyance() As IAmAnnoyedCollection
Get
Return _Annoyance
End Get
End Property
Private _InternalAnger As Anger
<DesignerSerializationVisibility(DesignerSerializationVisibility.Content)> _
Public ReadOnly Property InternalAnger() As Anger
Get
Return Me._InternalAnger
End Get
End Property
Public Sub New()
Me._Annoyance = New IAmAnnoyedCollection
Me._InternalAnger = New Anger
Me._Annoyance.Add(Me._InternalAnger)
End Sub
End Class
And this is the code that the designer generates:
Private Sub InitializeComponent()
Dim Anger1 As Anger = New Anger
Me.MyButton1 = New MyButton
'
'MyButton1
'
Me.MyButton1.Annoyance.Add(Anger1)
// Should be: Me.MyButton1.Annoyance.Add(Me.MyButton1.InternalAnger)
'
'Form1
'
Me.Controls.Add(Me.MyButton1)
End Sub
I've added a comment to the above to show how the code should have been generated. Now, if I dispense with the interface and just have a collection of Anger, then it persists correctly.
Any ideas?
Update 1
I'm sick of this. This problem was specifically about persisting an interface collection but now on further testing it doesn't work for a normal collection. Here's some even simpler code:
Public Class Anger
End Class
Public Class MyButton
Inherits Button
Private _Annoyance As List(Of Anger)
<DesignerSerializationVisibility(DesignerSerializationVisibility.Content)> _
Public ReadOnly Property Annoyance() As List(Of Anger)
Get
Return _Annoyance
End Get
End Property
Private _InternalAnger As Anger
<DesignerSerializationVisibility(DesignerSerializationVisibility.Content)> _
Public ReadOnly Property InternalAnger() As Anger
Get
Return Me._InternalAnger
End Get
End Property
Public Sub New()
Me._Annoyance = New List(Of Anger)
Me._InternalAnger = New Anger
Me._Annoyance.Add(Me._InternalAnger)
End Sub
End Class
The designer screws up the persistence code in the same way as the original problem.
Update 2
I've worked out what is going on. I wondered why sometimes it would work and not others. It boils down to the name that I give to the internal property and the collection.
If I rename the collection property 'Annoyance' to 'WTF', it will serialize correctly, because 'WTF' comes alphabetically after the name of the internal property, 'InternalAnger'.
I can fix this with a rename, but that's a hack and I fear that writing a custom serializer is a big job - which I've never done before.
Any ideas?
Update 3
I've answered the question with a hack. I'm fairly confident in it unless MS change the way that codedom serializers the designer code.
A:
As i said in the OP, the problem boils down to the name that I give to the internal property and the collection.
Without delving into a custom codedom serializer, the simple solution is to make sure the internal property's name is alphabetically before any other property that will reference it.
I do this by retaining the original property name 'InternalProperty', but I disable serialization and refer it to a proxy property, that is cunningly named, and is serialized.
Private _InternalProperty As Object
Public ReadOnly Property InternalProperty() As Object
Get
Return Me._ProxyInternalProperty
End Get
End Property
<Browsable(False), EditorBrowsable(EditorBrowsableState.Never), DesignerSerializationVisibility(DesignerSerializationVisibility.Content)> _
Public ReadOnly Property _ProxyInternalProperty() As Object
Get
Return Me._InternalProperty
End Get
End Property
This is a hack, but it's better than renaming my property to 'AInternalProperty'. Also, the user will never see _ProxyInternalProperty because it's hidden, and even if they did discover it, there is no danger in referencing it.
Q:
Fetching an item from the JSON object of an HTTP response
Well, I am a "beginner" in Android. In my project, I am reading a string from the HTTP response (using the GET method) into a JSONObject.
"{ "status":{ "d3": { "stats" : false }, "a1": { "stats" : false } } }"
But I am not managing to write the function that gets d3/a1.
A:
All you need to do is fetch the JSONObject and then fetch the first element, like this:
JSONObject obj = new JSONObject (stringJson);
JSONObject status = obj.getJSONObject("status");
JSONObject d3 = status.getJSONObject("d3");
Following this pattern, you will always be able to get your values.
However, I suggest you look into converting JSON into objects (with a mapping library), which will make your life much easier.
Q:
How to redirect to previous page in admin?
In my admin controller action I want to redirect to the previous page. How can I do this?
A:
Try this:
$this->_redirectReferer();
it does a little more than redirect to the previous page.
If you specify in the url a parameter uenc it will consider that as the referrer.
A:
Try this code :
Mage::app()->getResponse()->setRedirect($_SERVER['HTTP_REFERER']);
Mage::app()->getResponse()->sendResponse();
exit;
Q:
Make a textbox content editable false in a dynamic table for HTML using Javascript?
I am trying to make the first cell in my dynamic table not editable, but I am having no luck. As far as I know this should be correct, but for some reason it's not working.
var n = 1;
function addRow(tableID,column) {
var table = document.getElementById(tableID);
var rowCount = table.rows.length;
var row = table.insertRow(rowCount);
for(i=0;i<column;i++){
var cell = row.insertCell(i);
var element = document.createElement("input");
element.type = "text";
element.name = n+"0"+i;
element.size = "12";
element.id = n+"0"+i;
element.value = element.id;
if(element.id == n+"00"){
element.contenteditable = "false";
element.value = "false";
//alert("false");
}
cell.appendChild(element);
}
n++;
}
Any ideas on how to do this?
n is the number of the row
I am getting "false" for the value of the first cell, meaning it is entering the if statement, but it's not honoring the contenteditable="false".
Like always, any help is greatly appreciated!
A:
Input elements don't need contenteditable. Just use
element.disabled = true;
to disable it.
Q:
Cleanup of statement handlers in DBI perl
I am using DBI perl to connect with Sybase dataserver. My process does the following in loop that runs throughout the day
Till end of day, do {
$sth = $dbh->prepare
execute query
someAnotherPerlFunction()
someAnotherPerlFunctionOne()
}
someAnotherPerlFunction()
{
$sth = $dbh->prepare (select)
execute query
}
someAnotherPerlFunctionOne()
{
my $sth = undef;
$sth = $dbh->prepare (update)
execute query;
undef $sth;
}
Now, given that this will run throughout the day, what are some of the things that I need to keep in mind in terms of resource cleanup.
Currently, I am doing undef $sth after each function, as shown in someAnotherPerlFunctionOne. Is that necessary?
A:
Perl will clean up for you, but it is a good idea to pass your db handle to the functions instead of recreating it every time and destroying it immediately.
Q:
add string to dictionary without quotes in python
I'm working with json data to post to an API.
I have everything working as expected formatting the data. However when I put it together I get an unexpected single quote around the variable.
My dictionary is as follows.
data = {
"Items": [
out2
],
"TenantToken": "user",
"UserToken": "pass"
}
The data in "out2" looks something like.
{"Code": "123456789", "LocationCode": "OTV-01", "Quantity": 69, "WarehouseId": 6884}, {"Code": "123456789", "LocationCode": "OTV-01", "Quantity": 123, "WarehouseId": 6884},
However when I post the data I get
{'Items': ['{"Code": "805619531972", "LocationCode": "OSWATV-01", "Quantity": 126, "WarehouseId": 6884}, {"Code": "805619531989", "LocationCode": "OSWATV-01", "Quantity": 142, "WarehouseId": 6884}'], 'TenantToken': 'user', 'UserToken': 'pass'}
With the added single quotes
['{ }']
instead of
[{ }]
This is my first post here so I apologize if I missed anything.
Thanks!
Edit: out2 is currently a string created by using pandas and exporting to .txt (it's saved for future use, and because I will be looping over multiple files).
I've imported it using
text_file = open('file.txt', "r")
lines = text_file.readlines()
The goal is to make a json to send that looks something like this.
{
"Items": [
{
"Code": "String",
"LocationCode": "String",
"Quantity": 0,
"WarehouseId": 0
},
{
"Code": "String",
"LocationCode": "String",
"Quantity": 0,
"WarehouseId": 0
}
],
"TenantToken": "String",
"UserToken": "String"
}
A:
Use ast.literal_eval to convert the string into Python objects (here it yields a pair of dicts). Then remove the extra list you're using after "Items": and finally use json.dumps to generate valid JSON output.
import json
import ast
out2 = '{"Code": "123456789", "LocationCode": "OTV-01", "Quantity": 69, "WarehouseId": 6884}, {"Code": "123456789", "LocationCode": "OTV-01", "Quantity": 123, "WarehouseId": 6884}'
data = {
"Items":
ast.literal_eval(out2),
"TenantToken": "user",
"UserToken": "pass"
}
print(json.dumps(data))
Output
{"Items": [{"Code": "123456789", "LocationCode": "OTV-01", "Quantity": 69, "WarehouseId": 6884}, {"Code": "123456789", "LocationCode": "OTV-01", "Quantity": 123, "WarehouseId": 6884}], "TenantToken": "user", "UserToken": "pass"}
Q:
copy data which is allocated in device from device to host
I have a pointer which is dynamically allocated on the device; how can I copy its data from device to host?
#include <stdio.h>
#define cudaSafeCall(call){ \
cudaError err = call; \
if(cudaSuccess != err){ \
fprintf(stderr, "%s(%i) : %s.\n", __FILE__, __LINE__, cudaGetErrorString(err)); \
exit(EXIT_FAILURE); \
}}
#define cudaCheckErr(errorMessage) { \
cudaError_t err = cudaGetLastError(); \
if(cudaSuccess != err){ \
fprintf(stderr, "%s(%i) : %s : (code %d) %s.\n", __FILE__, __LINE__, errorMessage, err, cudaGetErrorString(err)); \
        exit(EXIT_FAILURE); \
}}
struct num{
int *a;
int b;
};
__device__ struct num *gun;
int main()
{
int i;
char c[100];
struct num *dun,*cun;
cudaSafeCall(cudaSetDevice(1));
cun=(struct num*)malloc(10*sizeof(struct num));
cudaSafeCall(cudaMalloc(&dun,10*sizeof(struct num)));
cudaSafeCall(cudaMemcpyToSymbol(gun,&dun,sizeof(struct num*)));
__global__ void kernel();
kernel<<<1,10>>>();
cudaSafeCall(cudaDeviceSynchronize());
cudaCheckErr(c);
cudaSafeCall(cudaMemcpyFromSymbol(&dun,gun,sizeof(struct num*)));
cudaSafeCall(cudaMemcpy(cun,dun,10*sizeof(struct num),cudaMemcpyDeviceToHost));
for(i=0;i<10;i++) cudaSafeCall(cudaMalloc(&csu[i].a,10*sizeof(int)));
cudaSafeCall(cudaGetSymbolAddress((void**)csu[0].a,(void**)gun[0].a));
for(i=0;i<10;i++) cun[i].a=(int*)malloc(10*sizeof(int));
for(i=0;i<10;i++) cudaSafeCall(cudaMemcpy(cun[i].a,dun[i].a,10*sizeof(int),cudaMemcpyDeviceToHost));
printf("%d ",cun[8].b);
printf("%d ",cun[8].a[8]);
cudaSafeCall(cudaFree(dun));
free(cun);
}
__global__ void kernel()
{
int i;
int tid=threadIdx.x;
gun[tid].b=tid;
gun[tid].a=(int*)malloc(10*sizeof(int));/*this is dynamically allocated in device.*/
for(i=0;i<10;i++)
gun[tid].a[i]=tid+i;
}
In this program, it always comes to a "segmentation fault" in
cudaSafeCall(cudaMemcpy(cun[i].a,dun[i].a,10*sizeof(int),cudaMemcpyDeviceToHost))
Why? And what can I do to copy this data from device to host?
A:
The problem you have is that you are trying to use device pointer indirection in host code, which is illegal. In your example
cudaMemcpy(cun[i].a,dun[i].a,10*sizeof(int),cudaMemcpyDeviceToHost)
dun contains a device pointer, so dun[i].a implies indirection of dun[i] to read the value of a. That is not a valid host memory address and so a seg fault results. You have actually already copied the pointers to the heap memory your kernel allocated when you do this:
cudaMemcpy(cun,dun,10*sizeof(struct num),cudaMemcpyDeviceToHost);
so following that code with
int ** a_h = (int **)malloc(10 * sizeof(int *)); // to hold heap pointers
for(i=0;i<10;i++) {
a_h[i] = cun[i].a; // save heap pointer
cun[i].a=(int*)malloc(10*sizeof(int));
cudaMemcpy(cun[i].a,a_h[i],10*sizeof(int),cudaMemcpyDeviceToHost); // copy heap to host
}
should safely copy the heap memory you allocated back to the host.
Q:
C3P0 connection pool gives connection timeout error with this configuration
I am using the Resin server + Spring framework and c3p0 connection pooling. I have configured the connection pool with the following properties file. But somehow, every 24 hours or so, my website faces connection timeout errors and then I have to restart my Resin server to make the website live again. Please tell me what's wrong in the following configuration file and what I'm missing here.
jdbc.driverClassName=com.mysql.jdbc.Driver
jdbc.databaseURL=jdbc:mysql://localhost/my_database1_url
jdbc.StockDatabaseURL=jdbc:mysql://localhost/my_database2_url
jdbc.username=my_username
jdbc.password=my_password
jdbc.acquireIncrement=10
jdbc.minPoolSize=20
jdbc.maxPoolSize=30
jdbc.maxStockPoolSize=30
jdbc.maxStatements=100
jdbc.numOfHelperThreads=6
jdbc.testConnectionOnCheckout=true
jdbc.testConnectionOnCheckin=true
jdbc.idleConnectionTestPeriod=30
jdbc.prefferedTestQuery=select curdate();
jdbc.maxIdleTime=7200
jdbc.maxIdleTimeExcessConnections=5
A:
So, a bunch of things.
c3p0 has built-in facilities for observing and debugging Connection leaks. Please set the configuration parameters unreturnedConnectionTimeout and debugUnreturnedConnectionStackTraces. Set unreturnedConnectionTimeout to a period of time after which c3p0 should presume a Connection has leaked, and so close it. Set debugUnreturnedConnectionStackTraces to ask c3p0 to log the stack trace that checked out the Connection that did not get checked in properly. See "Configuring to Debug and Workaround Broken Client Applications" in the c3p0 docs.
You are configuring c3p0 in a nonstandard way. That might be fine, or not, but you want to verify that the config that you intend to set is the config c3p0 gets. c3p0 DataSources dump their config at INFO on pool initialization. Please consider checking that to be sure you are getting the config you intend. Alternatively, you can check your DataSource's runtime config via JMX.
Besides the nonstandard means of configuration, several of your configuration properties seem amiss. prefferedTestQuery should be preferredTestQuery. numOfHelperThreads should be numHelperThreads.
The following are not c3p0 configuration names at all. Perhaps you are internally mapping them to c3p0 configuration, but you'd want to verify this. Here are the not-c3p0-property-names:
jdbc.driverClassName=com.mysql.jdbc.Driver
jdbc.databaseURL=jdbc:mysql://localhost/my_database1_url
jdbc.StockDatabaseURL=jdbc:mysql://localhost/my_database2_url
jdbc.username=my_username
jdbc.maxStockPoolSize=30
In a standard c3p0.properties form, what you probably mean is
c3p0.driverClass=com.mysql.jdbc.Driver
c3p0.jdbcURL=jdbc:mysql://localhost/my_database1_url
# no equivalent -- jdbc.StockDatabaseURL=jdbc:mysql://localhost/my_database2_url
c3p0.user=my_username
# no equivalent -- jdbc.maxStockPoolSize=30
Please see Configuration Properties. Again, c3p0 knows nothing about jdbc.-prefixed properties, but perhaps something in your own libraries or middleware picks those up.
Note: I love to see @NiSay's way of checking for Connection leaks, because I love to see people using the more advanced c3p0 API. It will work, as long as you don't hot-update your DataSource's config. But you don't need to go to that much trouble, and there's no guarantee this approach will continue to work in future versions: c3p0 makes no promises about ConnectionCustomizer lifecycles, and ConnectionCustomizers are intended to be stateless. It is easier and safer to use c3p0's built-in leak-check facility, described in the first bullet point above.
A:
As there could be a possibility of connection leaks in the program (the probable cause of the connection timeouts), you need to follow the steps below in order to identify the leaks.
Make an entry in your c3p0.properties file:
c3p0.connectionCustomizerClassName = some.package.ConnectionLeakDetector
Create a class named 'ConnectionLeakDetector' and place it in an appropriate package. Below is the content of the class.
import java.sql.Connection;
import java.util.concurrent.atomic.AtomicInteger;
public class ConnectionLeakDetector implements com.mchange.v2.c3p0.ConnectionCustomizer {
static AtomicInteger connectionCount = new AtomicInteger(0);
@Override
public void onAcquire(Connection c, String parentDataSourceIdentityToken)
throws Exception {
}
@Override
public void onDestroy(Connection c, String parentDataSourceIdentityToken)
throws Exception {
}
@Override
public void onCheckOut(Connection c, String parentDataSourceIdentityToken)
throws Exception {
System.out.println("Connections acquired: " + connectionCount.incrementAndGet());
}
@Override
public void onCheckIn(Connection c, String parentDataSourceIdentityToken)
throws Exception {
System.out.println("Connections released: " + connectionCount.decrementAndGet());
}
}
The onCheckOut method will increment the count when a connection is acquired, whereas onCheckIn will decrement it when the connection is released.
Execute some scenarios and observe the statistics on your console. If the count is more than 0, then the scenario executed has a connection leak. Try to fix them and you will observe the difference.
As a side note, you can increase jdbc.maxPoolSize as a temporary measure until you deploy the fix.
| {
"pile_set_name": "StackExchange"
} |
Q:
C function returns wrong value
float a, b;
float sa() { return a;};
int main() {
a = 10;
b = sa();
printf("%f", b);
return 0;
}
This is a simplified version of my code.
I believe the program should print 10 but it gives me really small numbers like -65550 - not always the same, but very alike.
I have used the debugger to check the value of variable a right before it is returned and it is 10, so the function returns 10, but b is set to something like -65550. I don't understand why this happens.
I'd appreciate some intel.
Thanks in advance.
Here is the full code:
#include <stdlib.h>
#include <stdio.h>
#include <math.h>
#include <time.h>
int dimensiuni, nrBitiSolutie, bitiPeDimensiune, gasitInbunatatire, nrRulari;
float limInf, limSup, precizie, valoareFunctie, minim, minimNou, T;
char solutie[100000];
float solutieReala[100];
void generateRandomSolution();
void bitesToFloat();
void rastrigin();
void rosenbrock();
float nextFirstFit();
float nextBestFit();
void main() {
int k;
T = 10;
gasitInbunatatire = 1;
srand ( time(NULL) );
printf("Introduceti numarul de dimensiuni: ");
scanf("%d", &dimensiuni);
printf("Introduceti limita inferioara si cea superioara: ");
scanf("%f%f", &limInf, &limSup);
printf("Introduceti precizia: ");
scanf("%f", &precizie);
//calculam numarul de biti necesari ca sa reprezentam solutia
nrBitiSolutie = dimensiuni * ceil(log(limSup-limInf * pow(10, precizie)))/log(2.0);
bitiPeDimensiune = nrBitiSolutie/dimensiuni;
//generam o solutie random
generateRandomSolution();
bitesToFloat();
rastrigin();
minim = valoareFunctie;
printf("Pornim de la %f\n", minim);
while( (nrRulari < 10000) && (T > 0.001)) {
minimNou = sa(); //error occurs here. sa() returns about 200 but minimNou is set to -65550
if (minimNou < minim) {
printf("Minim nou: %f\n", minimNou);
minim = minimNou;
T *= 0.995;
}
nrRulari++;
}
printf("Minimul aproximat: %f\n", minim);
system("pause");
}
void generateRandomSolution() {
int l;
for (l = 0; l < nrBitiSolutie; l++) solutie[l] = rand()%2;
}
void bitesToFloat() {
int i, parcurse = 1, gasite = 0;
int variabila = 0;
float nr;
for (i = 0; i < nrBitiSolutie; i++) {
variabila = variabila<<1 | (int)solutie[i];
if(parcurse == bitiPeDimensiune) {
nr = (float)variabila / (float)pow(2, bitiPeDimensiune);
nr *= limSup-limInf;
nr += limInf;
nr *= pow(10, precizie);
nr = (int)nr;
nr /= pow(10, precizie);
parcurse = 0;
solutieReala[gasite++] = nr;
variabila = 0;
}
parcurse++;
}
}
void rastrigin() {
int i;
valoareFunctie = 10 * dimensiuni;
for (i = 0; i < dimensiuni; i++) {
valoareFunctie += pow((float)solutieReala[i], 2) - 10 * (float)cos(2 * 3.14 * (float)solutieReala[i]);
}
}
void rosenbrock() {
int i;
valoareFunctie = 0;
for (i = 0; i < dimensiuni - 1; i++) {
valoareFunctie += 100 * pow((solutieReala[i+1] - pow(solutieReala[i], 2)), 2) + pow((1-solutieReala[i]), 2);
}
}
float sa() {
int j;
for (j = 0; j < nrBitiSolutie; j++) {
solutie[j] = solutie[j] == 0 ? 1 : 0;
bitesToFloat();
rastrigin();
if (valoareFunctie < minim) return valoareFunctie;
else if ( (rand()/INT_MAX) < exp((minim - valoareFunctie)/T) )
return valoareFunctie;
else solutie[j] = solutie[j] == 0 ? 1 : 0;
}
return minim;
}
I have marked where the error occurs with an "error occurs here" comment.
A:
You simplified the code incorrectly. In your simplification, you defined sa() before calling it. But in your full program, you call sa() before defining it. In the absence of a declaration, functions are assumed to return int. Since your function actually returns a float, the result is undefined. (In this case, you will read a garbage value from the top of the floating point stack and then the floating point stack will underflow, and things go downhill from there.)
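A minimal sketch of the fix - the names below are hypothetical; the point is only that a prototype appears before the first call site, so the compiler knows the function returns float instead of assuming int:

```c
/* hypothetical sketch: declare before use to avoid the implicit-int assumption */
float stored;

float get_stored(void);       /* prototype in scope before any call */

float get_stored(void) { return stored; }

float demo(void) {
    stored = 10.0f;
    return get_stored();      /* read correctly as a float */
}
```

Without the prototype (and with the definition placed after the caller, as in the full program), the call site would read an int return value and the result is undefined.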
Q:
Qt5 to Qt4 UI File Compatibility
Are Qt5 generated UI/form files backwards compatible with Qt4? As in, can I take the source, headers, and UI files and recompile with Qt4 without issue?
A:
The files generated by uic in Qt 5 will have #include <QtWidgets/QFoo>, which of course doesn't work in Qt 4.
Or are you talking about .ui files generated by Qt Designer / Creator? Those will instead work without changes (modulo using Qt 5-only or Qt 4-only classes, of course).
Q:
What does version attribute mean in .xsl declaration
I am new to XSLT and some fundamental questions bother me. One of them is:
What does version="1.0" mean in my stylesheet when I am using an XSLT 2.0 processor? Even if I have (in my stylesheet) a non-1.0 function, it is processed despite my having explicitly declared the stylesheet version as 1.0.
To me, it seems that the version attribute serves no purpose beyond being informative.
It doesn't configure the processor - so what is it for?
The other question is:
Is there any relation between the versions of (xslt processor), (xslt stylesheet) and (xpath)?
Thank you in advance.
A:
See http://www.w3.org/TR/xslt20/#backwards, if the XSLT 2.0 processor supports it then version="1.0" enables backwards compatible processing, one major difference then is that <xsl:value-of select="foo"/> outputs a text node with the string value of the first selected foo element while version="2.0" would output the values of all selected foo elements.
As an example see http://xsltransform.net/6r5Gh2R, it processes the input
<?xml version="1.0" encoding="UTF-8"?>
<root>
<items>
<item>foo</item>
<item>bar</item>
</items>
</root>
with the stylesheet
<?xml version="1.0" encoding="UTF-8" ?>
<xsl:transform xmlns:xsl="http://www.w3.org/1999/XSL/Transform" version="2.0">
<xsl:output indent="yes"/>
<xsl:template match="/">
<result>
<result version="2.0">
<xsl:apply-templates/>
</result>
<result version="1.0">
<xsl:apply-templates mode="backwards"/>
</result>
</result>
</xsl:template>
<xsl:template match="@*|node()" mode="#all">
<xsl:copy>
<xsl:apply-templates select="@*|node()" mode="#current"/>
</xsl:copy>
</xsl:template>
<xsl:template match="items">
<xsl:copy>
<xsl:value-of select="item"/>
</xsl:copy>
</xsl:template>
<xsl:template match="items" version="1.0" mode="backwards">
<xsl:copy>
<xsl:value-of select="item"/>
</xsl:copy>
</xsl:template>
</xsl:transform>
where there are two templates with match="items" in different modes and one template uses version="1.0"; the result of the stylesheet is
<?xml version="1.0" encoding="UTF-8"?>
<result>
<result version="2.0">
<root>
<items>foo bar</items>
</root>
</result>
<result version="1.0">
<root>
<items>foo</items>
</root>
</result>
</result>
which demonstrates the difference of the value-of select="item" evaluation depending on the version.
Q:
how to select only unique record in mysql
I have used the following mysql query:
select q.question,q.formnumber,g.forename,a.guardiancode from tblfeedbackquestions q
join tblfeedbackanswers a on q.id=a.questionid join tblguardian g on g.guardiancode=a.guardiancode where q.formnumber='3'
Output of this query is:
Now what I want is that only the first row for each guardiancode should appear. For example, in the output, rows 1 and 2 both have guardiancode 10025, but I want only the first row, and similarly for each unique guardiancode. Please help me with this.
A:
Try adding GROUP BY:
select q.question,q.formnumber,g.forename,a.guardiancode
from tblfeedbackquestions q
join tblfeedbackanswers a on q.id=a.questionid join tblguardian g
on g.guardiancode=a.guardiancode
where q.formnumber='3'
GROUP BY g.guardiancode
Q:
How can I calculate how many servers do I need?
Assume that I and some of my friends are trying to build a social network for a large number of people, approximately 4,000,000. Let's say a random 100,000 of them will be online every day. The database usage will be, let's say, 200 MB a day!
Now how can I know how many servers I need, and which setup will best suit our goal?
Regards,
A:
Unless you happen to have 4 million users all lined up ready to use your site I would just get one cheap server to start everything with and see where it goes from there. Concentrate, instead, on building a scalable software platform for your service. Test, benchmark, profile everything you can so you know what the limits of your software/hardware is at any time and what effect changes have.
If you are mainly just curious about high end scalability you can search for how the current "big guys" do it and the challenges they faced along the way. For example, Facebook, Wikipedia, High Scalability, MySpace, etc.... You may not be aiming so high but you can learn a lot from how they do things and use the same design patterns. Unless you have experience working with such large systems it is very hard or impossible to guess the scalability issues you are going to have until you have them.
A:
It will depend on how much time / how many page loads the average user has. Social networking tends to be "sticky" according to Facebook, so people hang around for a while. More page loads means more load on the system. The code behind the site will have another huge effect; better code will put a lighter load on the system.
These days if you don't have a good idea how much/how fast your site will grow you might want to consider one of the cloud hosting environments like EC2 or Rackspace Cloud/Slicehost. You can buy two server instances to get started, and add more servers quickly as load changes. Experience with your app is the best way to get a solid idea on how much capacity you will really need. Excess capacity sitting around is expensive, so avoid it if you can.
Having said that, 100,000 users isn't a huge load if they only load the page a few times a day. You should be able to get started on that with as little as a single server and probably no more than 2-3 total.
Q:
CF DESEDE encrypt() Key Length Issue
I am trying to encrypt a string using ColdFusion encrypt() with a 3rd party provided key like this:
encrypteded = encrypt('theString', 'FD52250E230D1CDFD5C2DF0D57E3E0FEFD52250E230D1CDF', 'DESEDE/CBC/NoPadding', 'BASE64', ToBase64('0'));
I get:
"The key specified is not a valid key for this encryption: Wrong key algorithm, expected DESede."
What do I have to do to this key in terms of encoding/decoding to get it into the right format?
A:
Generally, when using provided keys from other languages, you have to do a little gymnastics on it to get it into Base64.
Try this for the key argument:
ToBase64(BinaryDecode('FD52250E230D1CDFD5C2DF0D57E3E0FEFD52250E230D1CDF','hex'))
But, to make this work for me, the input string needed to be a multiple of 8 bytes (because you're specifying NoPadding), and the IV needed to also be a multiple of 8 bytes.
So, this ended up working for me - not sure if you'll be able to decrypt it on the other end, tho, if the IV they're specifying is really what you've got listed there.
encrypteded = encrypt('theStrin', ToBase64(BinaryDecode('FD52250E230D1CDFD5C2DF0D57E3E0FEFD52250E230D1CDF','hex')), 'DESEDE/CBC/NoPadding', 'BASE64', ToBase64('0000'));
No IV also worked as well (with different output, obviously):
encrypteded = encrypt('theStrin', ToBase64(BinaryDecode('FD52250E230D1CDFD5C2DF0D57E3E0FEFD52250E230D1CDF','hex')), 'DESEDE/CBC/NoPadding', 'BASE64');
If you've been given a Hex IV, then you can use it as such:
encrypteded = encrypt('theStrin', ToBase64(BinaryDecode('FD52250E230D1CDFD5C2DF0D57E3E0FEFD52250E230D1CDF','hex')), 'DESEDE/CBC/NoPadding', 'BASE64', BinaryDecode("7fe8585328e9ac7b","hex"));
Hopefully this is enough info to get you on your way!
Q:
moved my mingw installation - now eclipse cdt can't resolve the includes
I went to Project->Properties->C/C++ General->Paths and Symbols and indeed the paths are the ones of my old MinGW install. My question is: is there an easy way to tell Eclipse to reset the toolchain - for the workspace, not per project? Of course when I installed the CDT, having already MinGW in my PATH, I did not configure anything manually and I'd like to avoid it now.
Thanks
A:
Found an answer here. Go to Project > Properties > C/C++ General > Paths and Symbols > GNU C++ - you will notice the paths point to the old installation. Clear them in Project > Properties > C/C++ Build > Discovery Options > GCC C++ Compiler by pressing the Clear button (next to the Clear discovered entries now label) in the Automated discovery of paths and symbols box.
Rebuild your project.
Valid for Eclipse Juno.
If someone could explain how to do this not on a per project basis but for the whole workspace (the clean up of the settings) it would be much appreciated (and accepted). The paths are hardcoded in ${workspace_loc}\.metadata\.plugins\org.eclipse.cdt.make.core\<project_name>.sc files - the usual total epic mess with eclipse prefs being scattered all over the place - grr. Would the answer to my original question be to search and replace the paths there (and only there ?) ?
EDIT : Of note are the items in Project > Index menu - maybe a more appropriate way to do the same thing ?
Q:
meaning of a regexp if ($_ =~ /-\n/)
I am a beginner of perl scripting.
I know hyphen (-) is used to specify the range.
But what if it is mentioned in the beginning of the expression?
Example:
if ($_ =~ /-\n/)
//do something
How to interpret the above code?
"if the parameter is equal to a range of newline" ?
(No, that is weird understanding :-/)
Please help.
A:
Outside of [] - means "-" as far as I know, it only indicates a range within a [] block.
Here is a more complete answer I found
How to match hyphens with Regular Expression? (look at the second answer)
So the expression should match a "-" followed by a newline, i.e. a line ending with "-".
A:
The pattern will match hyphens "-" followed by a newline \n.
The hyphen is treated as a range operator inside character classes, as explained in perldoc perlrequick:
The special character '-' acts as a range operator within character
classes, so that the unwieldy [0123456789] and [abc...xyz] become
the svelte [0-9] and [a-z] :
/item[0-9]/; # matches 'item0' or ... or 'item9'
/[0-9a-fA-F]/; # matches a hexadecimal digit
If '-' is the first or last character in a character class, it is
treated as an ordinary character.
A:
This means:
If there is a hyphen immediately followed by a newline character, no matter where this pair of characters is located inside the string.
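Both behaviours are easy to verify; the semantics are the same in most regex flavours, sketched here in Python rather than Perl for brevity:

```python
import re

# a '-' followed by a newline, anywhere in the string
assert re.search(r"-\n", "well-\nknown")

# inside a character class, a leading or trailing '-' is a literal hyphen
assert re.findall(r"[-ab]+", "a-b") == ["a-b"]

# between two characters inside a class, '-' denotes a range
assert re.findall(r"[0-9]+", "item42") == ["42"]
```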
Q:
Ember Component handle jQuery event handlers
In an Ember app Component, what is the best hook within which to handle jQuery event handlers ?
I want to handle a keyUp event on an input element.
A:
didInsertElement would be the best place to do any jQuery- or DOM-related operation.
didInsertElement() is also a good place to attach event listeners. This is particularly useful for custom events or other browser events which do not have a built-in event handler.
https://guides.emberjs.com/v3.0.0/components/the-component-lifecycle/#toc_integrating-with-third-party-libraries-with-code-didinsertelement-code
Edit
We do not need $(document).ready() since the document will already have loaded by that time. You can access the DOM element globally or locally.
You can access it globally by using Ember.$(), which is similar to normal jQuery and can be used to select any element on the page - even from another component.
The better (preferred) approach is to access locally using this.$() which is scoped to component elements only.
For example:
<h1 class="title">Heading 1</h1>
{{your-component}}
# your-component.hbs
<div> <h2>Component Heading 2</h2></div>
From the above example, you can access both the h1 and h2 tags inside didInsertElement globally by using Ember.$('h1') and Ember.$('h2').
However if you do this.$('h1'), it will return an empty result, as your component template does not have an h1 tag and the existing h1 tag is outside of your component.
In a nutshell, Ember.$() acts like a regular $() and this.$() acts like Ember.$('your component root element').find().
Q:
emberjs checkedBinding on Em.Checkbox doesn't work
Why doesn't this checkedBinding on an Em.Checkbox work?
Here is a code snippet illustrating the problem:
With this template
{{#each person in people}}
<li>{{view Em.Checkbox checkedBinding=person.isSelected}}</li>
{{/each}}
and this controller
App.IndexController = Em.Controller.extend({
count: function(){
return this.get('people').filterBy('isSelected').get('length');
}.property('people.@each.isSelected'),
people: Em.A([
Person.create({firstName: 'Kris', lastName: 'Selden', isSelected: true}),
Person.create({firstName: 'Luke', lastName: 'Melia', isSelected: false}),
Person.create({firstName: 'Formerly Alex', lastName: 'Matchneer', isSelected: false})
])
});
I see all the check boxes unchecked
Here is a fiddle.
A:
You will need to wrap your checkedBinding in quotation marks, like so:
{{#each person in people}}
<li>{{view Em.Checkbox checkedBinding="person.isSelected"}}</li>
{{/each}}
See this working jsFiddle
Q:
How to use the .insert method to add values to a list
I've been working on an algorithm that involves genetic code. I started by associating all 4 genetic bases, A, C, T, G with a list. A is 1,0,0,0. C is 0,1,0,0. T is 0,0,1,0 and G is 0,0,0,1. There are two different genetic codes, one being the original one and the other being one that was genetically mutated. The algorithm is going to come to conclusions of the data given based on the difference between the two genetic codes. But first, I need to sort of preprocess the data before I can work on the algorithm making conclusions.
What I'm trying to do is, when the code sees a letter in the original code, it should look at the letter in the same position in the copy version. If you look at the code below, an example would be seeing if the first letter in each(A & C) or the second letter in each(T & T) are the same. If they are then the list should not change. For example, in the 2nd position, T & T are the same. Which means the list would stay the same and be: 0,0,1,0. However, if it's not the same, so for example A & C, then the algorithm should overlap them and add both letter. So the code would be 1,0,1,0.
So far, this is what the code is looking like:
A = [1,0,0,0]
C = [0,1,0,0]
T = [0,0,1,0]
G= [0,0,0,1]
original = [A,T,T,G,C,T,A]
copy = [C,T,T,A,T,A,A]
final = original # In case you were wondering the purpose of this line is to make a new variable to hold the end result.
for i,v in enumerate(original):
if v == copy[i]:
print(v)
else:
print(final.insert(i,copy[i]))
When I run it I get "list index out of range". I tried to play with it a little and deleted the final = original line, and for some reason it then works, but instead of combining the two different letters when it should, it just says None.
I'm pretty new to programming so this could be a simple issue but I was wondering how I can actually go about making the two letters from two different lists, overlap if they are different.
A:
Lists are "mutable" in python , in your code by final = original your final name is a new 'reference' to the the list named 'original', but not a new list and any changes made to the underlying list using either name will affect both (or rather will be visible using both list names, but change is only in one place). Use of mutable objects is usually the source of coders pains. You can use final = original.copy() to make a copy and operate on it safely. See other discussions on SO of Are Python Lists mutable?. Easy to trip over it when you are starting.
Q:
Django Time_Zone not saving correct values
I have the following settings in my settings.py regarding Time zones.
TIME_ZONE = 'Asia/Kolkata'
USE_TZ = True
Using a datetime field, say I save a particular time as
Date: 2015-04-01
Time: 22:00:00
it still gets stored as 2015-04-01 16:30:00 in my database which on querying returns 04:30 PM 01/04/2015 but in admin still shows
Date: 2015-04-01
Time: 22:00:00
How do I fix this error? Is there a setting that I'm missing in settings.py?
A:
With Windows, you have to actually change the operating system timezone to be able to reliably work:
https://docs.djangoproject.com/en/1.7/ref/settings/#time-zone
Note
Django cannot reliably use alternate time zones in a Windows environment. If you’re running Django on Windows, TIME_ZONE must be set to match the system time zone.
Q:
Google Plus Account Sign-In/Sign-Out not working properly
Not able to select among multiple accounts(second time and above) while sign in via google
I am using Google Plus sign-in in my Android app. Everything is working fine. When I try to log in for the first time (after app installation) it gives me the option to select among different Google accounts (if I have more than one account logged in on my device).
But when I log out and log in again it doesn't give me that selection option; it automatically logs in with the account that was previously selected.
I am using this code for logout.
GoogleSignInOptions gso = new GoogleSignInOptions.Builder(GoogleSignInOptions.DEFAULT_SIGN_IN).build();
GoogleApiClient mGoogleApiClient = new GoogleApiClient.Builder(context)
.addApi(Auth.GOOGLE_SIGN_IN_API, gso)
.build();
Auth.GoogleSignInApi.signOut(mGoogleApiClient);
Auth.GoogleSignInApi.revokeAccess(mGoogleApiClient);
I have also gone through the signOut documentation. What I understand from there is that it "removes the default account set in Google Play services for your app", but it doesn't work. Is there any solution for this?
Can anyone please help me find a solution?
A:
After a long time, I found the answer to the problem, from the answer of Rahul Sonone.
The only thing that did the trick for me is calling signOut just before you try to sign in.
Auth.GoogleSignInApi.signOut(mGoogleApiClient);
Intent signInIntent = Auth.GoogleSignInApi.getSignInIntent(mGoogleApiClient);
startActivityForResult(signInIntent, RC_SIGN_IN);
Q:
git local master branch stopped tracking remotes/origin/master, can't push
Just when I thought I'd got the hang of the git checkout -b newbranch - commit/commit/commit - git checkout master - git merge newbranch - git rebase -i master - git push workflow in git, something blew up, and I can't see any reason for it.
Here's the general workflow, which has worked for me in the past:
# make sure I'm up to date on master:
$ git checkout master
$ git pull # k, no conflicts
# start my new feature
$ git checkout -b FEATURE9 # master @ 2f93e34
Switched to a new branch 'FEATURE9'
... work, commit, work, commit, work, commit...
$ git commit -a
$ git checkout master
$ git merge FEATURE9
$ git rebase -i master # squash some of the FEATURE9 ugliness
Ok so far; now what I expect to see -- and normally do see -- is this:
$ git status
# On branch master
# Your branch is ahead of 'origin/master' by 1 commit.
#
nothing to commit (working directory clean)
But instead, I only see "nothing to commit (working directory clean)", no "Your branch is ahead of 'origin/master' by 1 commit.", and git pull shows this weirdness:
$ git pull
From . # unexpected
* branch master -> FETCH_HEAD # unexpected
Already up-to-date. # expected
And git branch -a -v shows this:
$ git branch -a -v
FEATURE9 3eaf059 started feature 9
* master 3eaf059 started feature 9
remotes/origin/HEAD -> origin/master
remotes/origin/master 2f93e34 some boring previous commit # should=3eaf059
git branch clearly shows that I'm currently on * master, and git log clearly shows that master (local) is at 3eaf059, while remotes/origin/HEAD -> remotes/origin/master is stuck back at the fork.
Ideally I'd like to know the semantics of how I might have gotten into this, but I would settle for a way to get my working copy tracking the remote master again & get the two back in sync without losing history. Thanks!
(Note: I re-cloned the repo in a new directory and manually re-applied the changes, and everything worked fine, but I don't want that to be the standard workaround.)
Addendum: The title says "can't push", but there's no error message. I just get the "already up to date" response even though git branch -a -v shows that local master is ahead of /remotes/origin/master. Here's the output from git pull and git remote -v, respectively:
$ git pull
From .
* branch master -> FETCH_HEAD
Already up-to-date.
$ git remote -v
origin [email protected]:proj.git (fetch)
origin [email protected]:proj.git (push)
Addendum 2: It looks as if my local master is configured to push to the remote, but not to pull from it. After doing `for remote in $(git branch -r | grep -v master); do git checkout --track $remote; done`, here's what I have. It seems I just need to get master pulling from remotes/origin/master again, no?
$ git remote show origin
* remote origin
Fetch URL: [email protected]:proj.git
Push URL: [email protected]:proj.git
HEAD branch: master
Remote branches:
experiment_f tracked
master tracked
Local branches configured for 'git pull':
experiment_f merges with remote experiment_f
Local refs configured for 'git push':
experiment_f pushes to experiment_f (up to date)
master pushes to master (local out of date)
A:
When you did a git pull, did you actually want to do a git push?
For some reason git pull is "pulling" from your current directory, I suspect you want to be pulling from remotes/origin/HEAD.
What output does git push origin produce?
[Addendum by Paul]: This led me to the correct answer, so I'm accepting. The additional steps it took to figure out what was going on were:
# see details of the current config:
$ git config -l
branch.master.remote=. # uh oh, this should point to origin
# to see what it should be ,make a clean clone of the same project
# in a different directory, checkout the master branch and run the
# same command. That showed "branch.master.remote=origin", so...
# then to fix:
$ git config branch.master.remote origin
After that, the local master was tracking remotes/origin/master again. Thanks to Peter Farmer for the clue that got me here!
Q:
Dealing with Boundary conditions / Halo regions in CUDA
I'm working on image processing with CUDA and i've a doubt about pixel processing.
What is often done with the boundary pixels of an image when applying a m x m convolution filter?
In a 3 x 3 convolution kernel, ignoring the 1-pixel boundary of the image is easier to deal with, especially when the code is improved with shared memory. Indeed, in this case, one does not need to check whether a given pixel has all its neighbourhood available (e.g. the pixel at coord (0, 0) has no left, upper-left, or upper neighbours). However, removing the 1-pixel boundary of the original image could generate partial results.
In contrast, I'd like to process all the pixels within the image, also when using shared memory improvements - for example, loading 16 x 16 pixels but computing the inner 14 x 14. In that case too, ignoring the boundary pixels generates clearer code.
What is usually done in this case?
Does anyone usually use my approach ignoring the boundary pixels?
Of course, I'm aware the answer depends on the type of problem, i.e. adding two images pixel-wise has not this problem.
Thanks in advance.
A:
A common approach to dealing with border effects is to pad the original image with extra rows & columns based on your filter size. Some common choices for the padded values are:
A constant (e.g. zero)
Replicate the first and last row / column as many times as needed
Reflect the image at the borders (e.g. column[-1] = column[1], column[-2] = column[2])
Wrap the image values (e.g. column[-1] = column[width-1], column[-2] = column[width-2])
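As a rough sketch (plain Python, hypothetical names), the replicate/reflect/wrap choices amount to remapping out-of-range sample indices, while the constant choice substitutes a value instead of remapping; the sketch assumes the kernel radius is smaller than the image size:

```python
def pad_index(i, n, mode):
    """Map an out-of-range sample index i onto a valid index in [0, n).

    The 'constant' strategy is the odd one out: instead of remapping the
    index, the caller substitutes the constant whenever i is out of range.
    """
    if 0 <= i < n:
        return i
    if mode == "clamp":       # replicate the first/last pixel
        return min(max(i, 0), n - 1)
    if mode == "reflect":     # mirror at the border: [-1] -> [1], [n] -> [n-2]
        return -i if i < 0 else 2 * (n - 1) - i
    if mode == "wrap":        # treat the image as periodic
        return i % n
    raise ValueError(mode)
```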
A:
tl;dr: It depends on the problem you're trying to solve -- there is no solution for this that applies to all problems. In fact, mathematically speaking, I suspect there may be no "solution" at all since I believe it's an ill-posed problem you're forced to deal with.
(Apologies in advance for my reckless abuse of mathematics)
To demonstrate let's consider a situation where all pixel components and kernel values are assumed to be positive. To get an idea of how some of these answers could lead us astray let's further think about a simple averaging ("box") filter. If we set values outside the boundary of the image to zero then this will clearly drag down the average at every pixel within ceil(n/2) (manhattan distance) of the boundary. So you'll get a "dark" border on your filtered image (assuming a single intensity component or RGB colorspace -- your results will vary by colorspace!). Note that similar arguments can be made if we set the values outside the boundary to any arbitrary constant -- the average will tend towards that constant. A constant of zero might be appropriate if the edges of your typical image tend towards 0 anyway. This is also true if we consider more complex filter kernels like a gaussian however the problem will be less pronounced because the kernel values tend to decrease quickly with distance from the center.
Now suppose that instead of using a constant we choose to repeat the edge values. This is the same as making a border around the image and copying rows, columns, or corners enough times to ensure the filter stays "inside" the new image. You could also think of it as clamping/saturating the sample coordinates. This has problems with our simple box filter because it overemphasizes the values of the edge pixels. A set of edge pixels will appear more than once yet they all receive the same weight w=(1/(n*n)).
Suppose we sample an edge pixel with value K 3 times. That means its contribution to the average is:
K*w + K*w + K*w = K*3*w
So effectively that one pixel has a higher weight in the average. Note that since this is an average filter the weight is a constant over the kernel. However this argument applies to kernels with weights that vary by position too (again: think of the gaussian kernel..).
Suppose we wrap or reflect the sampling coordinates so that we're still using values from within the boundary of the image. This has some valuable advantages over using a constant but isn't necessarily "correct" either. For instance, how many photos do you take where the objects at the upper border are similar to those at the bottom? Unless you're taking pictures of mirror-smooth lakes I doubt this is true. If you're taking pictures of rocks to use as textures in games wrapping or reflecting could be appropriate. I'm sure there are significant points to be made here about how wrapping and reflecting will likely reduce any artifacts that result from using a fourier transform. However this comes back to the same idea: that you have a periodic signal which you do not wish to distort by introducing spurious new frequencies or overestimating the amplitude of existing frequencies.
So what can you do if you're filtering photos of bright red rocks beneath a blue sky? Clearly you don't want to add orange-ish haze in the blue sky and blue-ish fuzz on the red rocks. Reflecting the sample coordinate works because we expect similar colors to those pixels found at the reflected coordinates... unless, just for the sake of argument, we imagine the filter kernel is so big that the reflected coordinate would extend past the horizon.
Let's go back to the box filter example. An alternative with this filter is to stop thinking about using a static kernel and think back to what this kernel was meant to do. An averaging/box filter is designed to sum the pixel components then divide by the number of pixels summed. The idea is that this smooths out noise. If we're willing to trade a reduced effectiveness in suppressing noise near the boundary, we can simply sum fewer pixels and divide by a correspondingly smaller number. This can be extended to filters with similar what-I-will-call-"normalizing" terms -- terms that are related to the area or volume of the filter. For "area" terms you count the number of kernel weights that are within the boundary and ignore those weights that are not, then use this count as the "area" (which might involve an extra multiplication). For volume (again: assuming positive weights!) simply sum the kernel weights that fall inside the boundary. This idea is probably awful for derivative filters because there are fewer pixels to compete with the noisy pixels, and differentials are notoriously sensitive to noise. Also, some filters have been derived by numeric optimization and/or empirical data rather than from ab-initio/analytic methods and thus may lack a readily apparent "normalizing" factor.
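To make the renormalization idea concrete, here is a minimal pure-Python sketch (my own illustration, not from the discussion above; the function name and the list-of-lists grayscale image representation are assumptions):

```python
# Box filter that renormalizes at the image boundary: instead of padding,
# each output pixel averages only the kernel taps that fall inside the image.

def box_filter_renormalized(img, radius):
    """img: 2-D grayscale image as a list of lists of floats."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            total, count = 0.0, 0
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    yy, xx = y + dy, x + dx
                    if 0 <= yy < h and 0 <= xx < w:  # skip out-of-bounds taps
                        total += img[yy][xx]
                        count += 1
            out[y][x] = total / count  # "area" = number of in-bounds taps
    return out
```

Unlike zero padding, a constant image passes through unchanged: every pixel, including the corners, keeps its original value, so no dark border appears.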
| {
"pile_set_name": "StackExchange"
} |
Q:
Pick first result from multiple matches when using WHERE
I'm facing a "problem" when performing an area search based on zip codes, longitudes and latitudes with MySQL. But there exist duplicate zip codes within a country.
I'm currently performing a first query, to check for multiple results and pick the first one (as they only have a very small difference in distance). But I want to do it in one query.
Is there a way to ignore multiple matches when using WHERE, or to simply pick the first one?
This is what I have so far:
SELECT
u.name,
dest.zc_zip,
dest.zc_location_name,
ACOS(
SIN(RADIANS(src.zc_lat)) * SIN(RADIANS(dest.zc_lat))
+ COS(RADIANS(src.zc_lat)) * COS(RADIANS(dest.zc_lat))
* COS(RADIANS(src.zc_lon) - RADIANS(dest.zc_lon))
) * 6380 AS distance
FROM zip_coordinates dest
CROSS JOIN zip_coordinates src
CROSS JOIN users u
WHERE src.zc_id = 2 /* searching for id */
AND u.zip = dest.zc_zip
AND u.city = dest.zc_location_name
HAVING distance < 100
ORDER BY distance;
Here I want to change src.zc_id = 2 to something like src.zc_zip = XXXX
Edit:
I also created a sql fiddle: http://sqlfiddle.com/#!2/5fb6a/3
A:
if I understand your question properly, you want to replace the
WHERE src.zc_id = 2 /* searching for id */
line with something like:
WHERE src.zc_id = (SELECT ... FROM zip_coordinates where ...)
And you want the result of the subquery to be a single value. In that case you can just use the MAX function. Something like:
WHERE src.zc_id = (SELECT MAX(zc_id) FROM zip_coordinates where ...)
It would be helpful if you provide more information about the query you are running before this SELECT query to pick one of the results.
| {
"pile_set_name": "StackExchange"
} |
Q:
Cycles in orbifolds
Suppose we have a compact, orientable, $n$-dimensional orbifold $X$, where $n \geq 3$. Suppose that there is a single isolated orbifold point $p_{0} \in X$, with a neighbourhood homeomorphic to $\mathbb{R}^{n} /\{\pm 1\}$, i.e. the cone over $\mathbb{RP}^{n-1}$. In particular $X$ has a single orbifold point with order $2$.
In what follows when I refer to homology, I mean just the singular homology of the underlying topological space of $X$. I think that the following statement should be true and I would like to ask for a proof or a reference for it.
Question: Suppose we have a integral cycle $C \in C_{d}(X,\mathbb{Z})$ in $X$ and $0 < d < \dim(X)$. Then $[2C]$ is represented by a cycle which is (set-theoretically) disjoint from $p_{0} \in X$.
A:
Here are the details of Moishe Kohan's nice solution. We will need the following fact:
Fact: Let $m>0$ be an integer. For $0<i<m$, any $\alpha \in H_{i}(\mathbb{RP}^{m},\mathbb{Z})$ satisfies $2\alpha = 0$.
Let $U$ be a small neighbourhood of the orbifold point, homeomorphic to the cone on $\mathbb{RP}^{n-1}$; in particular $U$ is contractible. Let $V$ be a small neighbourhood of $X \setminus U$, so that $U \cap V$ is homeomorphic to $(0,1) \times \mathbb{RP}^{n-1}$; in particular it is homotopy equivalent to $\mathbb{RP}^{n-1}$. Let us fix $0<d<\dim(X)$ and consider the following part of the Mayer-Vietoris sequence associated to $(X,U,V)$. All homology groups are taken with integral coefficients; we use reduced homology to ensure the case $d=1$ works out.
$$\ldots \rightarrow H_{d}(U) \oplus H_{d}(V) \rightarrow H_{d}(X) \rightarrow H_{d-1}(U \cap V) \rightarrow \ldots $$
Consider $\beta \in H_{d}(X)$, then by the Fact, the image of $2\beta$ in $H_{d-1}(U \cap V)$ is zero, hence $2\beta$ is in the image of $H_{d}(U) \oplus H_{d}(V)$. The inclusion of any cycle in $V$ is clearly disjoint from the orbifold point. Any cycle contained in $U$ can also be made disjoint because $U$ is contractible. Hence, the class $2\beta$ is represented by a cycle disjoint from the orbifold point.
| {
"pile_set_name": "StackExchange"
} |
Q:
How can I convince my boss that investing in Adobe CC is more beneficial than using GIMP?
I have recently been employed by a hotel management company as a creative designer. Apparently, the guy who used to work at the company was using GIMP, Fireworks and Sony Vegas to do most designs and post-production. At this point my employer refuses to spend any money to get Adobe CC, even though I have let her know that it will be more productive for me to work with Adobe software.
She asked me one question at the interview: "Are you familiar with GIMP?" .. and dumb enough, I said YES. Now she's slamming it back to me as "Well you told me that you can use GIMP, so get on with it, plus the previous designer was working pretty well with it".
Obviously I can use GIMP, but the workflow in Adobe products is way better and I can work twice as fast. How can I convince her to pay for the Adobe CC package?
A:
I wouldn't convince her at all right now. She told you it would be with Gimp, you accepted the position, for now don't make waves.
Prove your worth and value, then as the assignments and workload increase you can start to make suggestions.
Instead of asking yourself, "How can I convince my boss to spend money" you should be asking yourself, "What limits am I facing?" Then when those limits are met you can ask for additional software.
This is almost always the case I might add in the business world.
A:
The way to convince people is to show that you are more productive with one piece of software and that the cost-benefit trade-off is worth it.
Download the one-month trial of Photoshop, and show her more productivity, quality or inventiveness (because you have more freedom).
Make her a bet. If she feels the difference, she will buy a licence.
In case you cannot use the trial version because someone already did, buy the Photography plan, which is only 9.99 a month. You can use your own personal licence at the office, with the premise that it is on your computer and it is your licence.
A:
Obviously money is the thing you should be targeting. But first let me explain a critical point. It is not really easy to convince a boss to spend money if the boss feels it is a luxury item. It is perfectly possible that a claim coming from you will sound like an excuse for a better toy. The problem is that any claim about efficiency is easily manipulated by you.
So it would be better to get somebody else to point this out for you. For me the tool in question was SnagIt, and I couldn't get SnagIt until our secretary pointed out that I had spent an extra 10 hours that week gluing together screen captures, and that this was projected to cost the company a few thousand dollars just because the boss wanted to save 50.
In your case this secondary person might be the printing service, or an external collaborator complaining about something. Be careful though not to look incompetent.
| {
"pile_set_name": "StackExchange"
} |
Q:
Deleting files in C#
I can't understand why the file isn't being deleted; the path and name seem to be specified correctly, yet it's still sitting in the folder as before.
private void Main_FormClosed(object sender, FormClosedEventArgs e)
{
DirectoryInfo dir = new DirectoryInfo(@"materials\");
string delNAME = OrderData.deletFiles[0].ToString() + ".jpg";
foreach (FileInfo file in dir.GetFiles())
{
if (file.Name == delNAME)
{
file.Delete();
}
}
}
Maybe I made a mistake somewhere in the code?
A:
The code is fine; the application can't access the file because the file is being used by another application.
| {
"pile_set_name": "StackExchange"
} |
Q:
ExecuteScalar returns 0 executing a stored procedure
I'm refactoring a C# program that calls a stored procedure that ends with:
SELECT @ResultCode AS ResultCode
The C# code looks like this:
SqlDbCommand.CommandType = System.Data.CommandType.StoredProcedure;
SqlDbCommand.CommandText = "PR_Foo";
SqlDbCommand.Parameters.Clear();
SqlDbCommand.Parameters.Add("@Foo", SqlDbType.Char);
SqlDbCommand.Parameters["@Foo"].Value = "Foo";
System.Data.SqlClient.SqlDataAdapter SqlDbAdapter = new System.Data.SqlClient.SqlDataAdapter();
System.Data.DataSet SQLDataSet = new System.Data.DataSet();
SqlDbAdapter.SelectCommand = SqlDbCommand;
SqlDbAdapter.Fill(SQLDataSet);
SQLDataSet.Tables[0].TableName = "PR_Foo";
if (SQLDataSet.Tables.Count != 0) {
Result = int.Parse(SQLDataSet.Tables[SQLDataSet.Tables.Count - 1].Rows[0][0].ToString());
}
With the above code, Result is correctly populated with the value returned by the
stored procedure.
Refactoring the code with a simpler ExecuteScalar:
SqlDbCommand.CommandType = System.Data.CommandType.StoredProcedure;
SqlDbCommand.CommandText = "PR_Foo";
SqlDbCommand.Parameters.Clear();
SqlDbCommand.Parameters.Add("@Foo", SqlDbType.Char);
SqlDbCommand.Parameters["@Foo"].Value = "Foo";
Result = (int)SqlDbCommand.ExecuteScalar();
the Result value is oddly set to 0 while the expected result should be an integer value greater than zero.
Do you know what could be the cause of this strange behavior?
Note:
the stored procedure has several if blocks, returning result values lower than zero in case of particular checks; these cases are correctly handled by the ExecuteScalar().
The problem raises when the stored procedure does its job correctly, committing the transactions of the various updates and returning the Result value at the end.
A:
In the event of multiple tables being returned, your two pieces of code aren't doing the same thing. Your original code takes the first field of the first row of the last table, whereas the execute scalar will take the first field of the first row of the first table. Could this be where your problem lies?
A:
I also encountered this problem.
In my opinion it is very relevant.
So I decided to give a correct code sample here.
SqlCommand cmd2 = new SqlCommand();
cmd2.Connection = conn;
cmd2.CommandType = System.Data.CommandType.StoredProcedure;
cmd2.CommandText = "dbo.Number_Of_Correct";
SqlParameter sp0 = new SqlParameter("@Return_Value", System.Data.SqlDbType.SmallInt);
sp0.Direction = System.Data.ParameterDirection.ReturnValue;
SqlParameter sp1 = new SqlParameter("@QuestionID", System.Data.SqlDbType.SmallInt);
cmd2.Parameters.Add(sp0);
cmd2.Parameters.Add(sp1);
sp1.Value = 3;
cmd2.ExecuteScalar(); // int Result = (int)cmd2.ExecuteScalar(); throws System.NullReferenceException
MessageBox.Show(sp0.Value.ToString());
| {
"pile_set_name": "StackExchange"
} |
Q:
Makefile disable options when running command
I have a make target called test to which I want to be able to pass options and arguments. Something like:
make test -t 'test number 1'
This would in theory run the test called 'test number 1' in my docker container.
My problem is that -t is considered as an option of the make command instead of an option of my test program.
So is there any way to disable the options of the make command so that the options given are considered as options of the makefile program ?
A:
No, Make doesn't work that way and isn't designed for that. The usual way to pass options into a Makefile is by setting Make variables
make test DESCRIPTION='test number 1'
You can set (default) values in the Makefile and reference these variables like any other value
DESCRIPTION := no description set
all: test
test:
echo "$(DESCRIPTION)"
| {
"pile_set_name": "StackExchange"
} |
Q:
changing opacity on hover with jquery
Here's a jsfiddle of my problem: http://jsfiddle.net/bkWaw/
I'm trying to get the text to display when the item-overlay is hovered, but for some reason the opacity isn't changing. What am I doing wrong?
Here's the code
HTML:
<a href="#">
<div class="item-overlay">
<div class="item-hover">text</div>
</div>
<img src="http://placehold.it/500x300" />
</a>
CSS:
.item a { display: block; color: #666; font-style: italic; font-weight: bold;}
.item a:hover { text-decoration: none; }
.item-overlay { position: absolute; left: 0; top: 0; width: 100%; height: 100%; background: transparent;
-webkit-transition: all 0.2s ease-in-out;
-moz-transition: all 0.2s ease-in-out;
-o-transition: all 0.2s ease-in-out;
transition: all 0.2s ease-in-out;
}
.item-overlay:hover { background-color: rgba(233, 115, 149,0.8);}
.item-hover { color: white; z-index: 999; position: absolute; opacity: 0; width: 100px; height: 100px;}
JS:
$('.item-overlay').hover(
function () {
$(this).find('item-hover').css("opacity","1");
},
function () {
$(this).find('item-hover').css("opacity","0");
}
);
A:
DEMO
Use . for a CLASS
$(this).find('.item-hover').css("opacity","1");
P.S. In jsFiddle you did not set the jQuery library, and what you want is doable in pure CSS, but I think you know that as soon as you use transitions...
| {
"pile_set_name": "StackExchange"
} |
Q:
How do I write conditional logic for a string using Java
We have retail industry data. In it, we need to convert each unit SKU to a case SKU by using a conversion factor (that is, column 4).
Input data
We have input data for
Col1 COL2 COL3 COL4 col5
ABHS-SMH-4OZ-01 EA CS 12 1
ABHK-SMH-01 EA CS 24 1
Expected data after transformation :
Col1 COL2 COL3 COL4 col5
ABHS-SMH-4OZ-12 EA CS 12 1
ABHK-SMH-24 EA CS 24 1
We are trying to write the transformation/conditional logic in Java language .
We tried the following regex so far.
I want to search for something,
e.g. "ABHS-SMH-4OZ-01",
search for "-01",
and return "ABHS-SMH-4OZ-12".
Any help would be much appreciated
This is my regex so far
"ABHS-SMH-4OZ-01".matches(".-01.");
Thanks In advance.
A:
Description
^(?=(?:(?:(\S+))\s+){4})(\S+-)01(?=\s)
** To see the image better, simply right click the image and select view in new window
This regular expression will do the following:
Look ahead and capture the value in COL4 into capture group 1
Match the leading characters in COL1 upto the last -01
Replaces the value in COL1 with the leading characters followed by the value from COL4
Example
Live Demo
Sample text
Col1 COL2 COL3 COL4 col5
ABHS-SMH-4OZ-01 EA CS 12 1
ABHK-SMH-01 EA CS 24 1
After Replacement
Col1 COL2 COL3 COL4 col5
ABHS-SMH-4OZ-12 EA CS 12 1
ABHK-SMH-24 EA CS 24 1
Explanation
NODE EXPLANATION
----------------------------------------------------------------------
^ the beginning of the string
----------------------------------------------------------------------
(?= look ahead to see if there is:
----------------------------------------------------------------------
(?: group, but do not capture (4 times):
----------------------------------------------------------------------
(?: group, but do not capture:
----------------------------------------------------------------------
( group and capture to \1:
----------------------------------------------------------------------
\S+ non-whitespace (all but \n, \r,
\t, \f, and " ") (1 or more times
(matching the most amount
possible))
----------------------------------------------------------------------
) end of \1
----------------------------------------------------------------------
) end of grouping
----------------------------------------------------------------------
\s+ whitespace (\n, \r, \t, \f, and " ")
(1 or more times (matching the most
amount possible))
----------------------------------------------------------------------
){4} end of grouping
----------------------------------------------------------------------
) end of look-ahead
----------------------------------------------------------------------
( group and capture to \2:
----------------------------------------------------------------------
\S+ non-whitespace (all but \n, \r, \t, \f,
and " ") (1 or more times (matching the
most amount possible))
----------------------------------------------------------------------
- '-'
----------------------------------------------------------------------
) end of \2
----------------------------------------------------------------------
01 '01'
----------------------------------------------------------------------
(?= look ahead to see if there is:
----------------------------------------------------------------------
\s whitespace (\n, \r, \t, \f, and " ")
----------------------------------------------------------------------
) end of look-ahead
----------------------------------------------------------------------
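For comparison, the pattern above can be sanity-checked in Python, which (like Java) keeps only the last repetition of a group captured inside a quantifier. This is my own sketch with made-up sample rows, not part of the original answer:

```python
import re

# Group 1 (inside the lookahead) ends up holding COL4 (the 4th token);
# group 2 holds COL1 up to its final '-'. Hard-codes the trailing "01",
# as in the answer's pattern.
pattern = re.compile(r'^(?=(?:(?:(\S+))\s+){4})(\S+-)01(?=\s)', re.MULTILINE)

def fix_skus(text):
    # Rebuild COL1 as "<prefix->" + "<COL4 value>"
    return pattern.sub(r'\g<2>\g<1>', text)
```

Running it on the two sample rows from the question rewrites their COL1 suffixes to the COL4 values.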
| {
"pile_set_name": "StackExchange"
} |
Q:
Create a link pointing to #
I am trying to make a link pointing to # but am unable to find a method that works.
The output i would like to see is
<a href="#">Example</a>
I have tried a few variations of the following without success.
$url = Url::fromUserInput('#');
$link_item = Link::fromTextAndUrl(t('Example'), $url);
return $link_item->toRenderable();
I have also tried using ::fromUri etc and am unable to find a method that produces tehd esired output, please help.
A:
It's probably not possible to attach an empty fragment. See this code in UrlGenerator::generateFromRoute():
if (isset($options['fragment'])) {
if (($fragment = trim($options['fragment'])) != '') {
$fragment = '#' . $fragment;
}
}
If the fragment is empty it is ignored and no '#' shows up. This doesn't change if you provide the fragment in Url::fromUserInput(), because the fragment is transfered to the options and later processed by the same or similar code.
So you have to provide an anchor in the fragment:
$url = Url::fromUserInput('#anchor1');
or
$url = Url::fromRoute('<current>', [], ['fragment' => 'anchor2']);
As alternative option you can place a link with an empty fragment
<a href="#">{{ examplevariable }}</a>
either in a twig template or in an inline template in php.
| {
"pile_set_name": "StackExchange"
} |
Q:
If the teacher dies of old age, are students kicked out?
My teacher died of old age and all my students were kicked out. As far as I know, 'uneducated' people can never go back to school.
How can I prevent students from being kicked out? Can I 'queue' a replacement teacher in such a case?
A:
If you do not have any available laborers then the job will remain empty until there is one available.
If you notice, when someone holding a job dies, there are 2 messages in the log:
"So and so the teacher has died"
"What's his name has replaced so and so as a teacher."
Unfortunately you are correct that once the students leave the school they become "Uneducated" laborers and will not return to school. This is due to children and students becoming laborers at the ages of 10 and 17-18 respectively. When a child reaches the age of 10 they have to be either a student or a laborer. If there is no available school, or the school is full, the child goes straight to laborer.
To prevent something like this happening you should always have a few laborers available to mitigate deaths. If you're truly short on workers, try pulling a few people out of jobs that aren't being worked currently. Usually I keep 2 people in builder even when I don't have a build job going and I let my Blacksmiths/Tailors/Woodcutters come and go as my supplies bounce against the cap. Some of these jobs are good to remove in order to have available laborers in times of desperation.
| {
"pile_set_name": "StackExchange"
} |
Q:
React Router DOM can't read location state
I've seen this question before but only applied to Class components so I am not sure how to apply the same approach for functional components.
It's simple, I have a link <Link to={{ pathname: "/first-page", state: { name: "First person" } }}>First Page</Link> and then in the component FirstPage.js I need to read the name state, so I have tried the following:
import React from "react";
export default props => {
React.useEffect(() => {
console.log(props)
}, []);
return (
<div>
<h1>First Page</h1>
<p>Welcome to first page, {props.location.state.name}</p>
</div>
);
};
I have been reading the React Router location documentation, and it says the state should be passed as a component property, but it isn't.
In case you wanna give a try on the whole code, I will leave here a CodeSandbox project to "test" this.
Therefore, any ideas on what am I doing wrong? Thanks in advance.
A:
This isn't an issue of class-based vs. functional component, but rather how Routes work. Wrapped children don't receive the route params, but anything rendered using the Route's component, render, or children prop do.
Route render methods
<Switch>
<Route path="/first-page" component={FirstPage} />
<Route path="/second-page" component={SecondPage} />
</Switch>
The other option is to export a decorated page component using the withRouter HOC, or if a functional component, use hooks.
withRouter
You can get access to the history object’s properties and the closest
<Route>'s match via the withRouter higher-order component. withRouter
will pass updated match, location, and history props to the wrapped
component whenever it renders.
const FirstPage = props => {
React.useEffect(() => {
console.log(props)
}, []);
return (
<div>
<h1>First Page</h1>
<p>Welcome to first page, {props.location.state.name}</p>
</div>
);
};
export default withRouter(FirstPage);
hooks
React Router ships with a few hooks that let you access the state of
the router and perform navigation from inside your components.
const FirstPage = props => {
const location = useLocation();
console.log(location);
return (
<div>
<h1>First Page</h1>
<p>Welcome to first page, {location.state.name}</p>
</div>
);
};
export default FirstPage;
| {
"pile_set_name": "StackExchange"
} |
Q:
Email Alerts (batch jobs) going to users after canceled
Is there a table or a class that shows all emails for all batch jobs? We have users getting emails for jobs that are canceled.
A:
To agree with Jan, there is no email log.
Even further, jobs that are canceled should not continue to send emails...so it sounds like the job isn't actually canceled.
To find alerts setup by the user, in the AOT go to Tables\BatchJobAlerts. There you can see things like user and email. The BatchJobId field is a recId that can be looked up against Tables\BatchJob in the recId field. This will tell you the offending batch jobs.
| {
"pile_set_name": "StackExchange"
} |
Q:
LINQ to SQL context.SubmitChanges - How to get error details?
I'm working on an application where a lot of data is inserted into an SQL database at once. I use LINQ to SQL, and have something like this as my insert operation:
foreach (var obj in objects)
{
context.InsertOnSubmit(obj);
}
context.SubmitChanges();
Here's the problem: If I get an exception (for instance, DuplicateKeyException), I've NO CLUE what object caused the problem. The only information I'm getting is that at least one of the objects contains key values that are identical to some other key in the database.
Is it possible to extract more information about what object(s) caused the conflict?
Of course, I could call SubmitChanges after each and every InsertOnSubmit, but with the amount of data I'm inserting, this is incredibly slow.
Anyone have any tips for me?
Thanks!
A:
Friend, I'm not trying to be a smart alec, and perhaps I am succeeding anyways, but my main suggestion is that you abandon linq for use in data loads. SSIS produces simple, efficient and easy to maintain code for ETL work. Reason being, it was designed to do just that.
Secondly, you don't specify what type of exception is being thrown, nor if that exception when presented to you contains a non null inner exception. That's the first place I would look.
Good luck.
| {
"pile_set_name": "StackExchange"
} |
Q:
How can i look at executed queries?
Can anyone tell me how to view executed queries in SharePoint?
I couldn't find a way to show query logs with SQL Server.
As you know, SharePoint framework hides all queries from programmers.
I would like to look into the queries and understand the mechanisms.
http://www.infoq.com/articles/SharePoint-Andreas-Grabner
In above article, i can see some windows showing methods and arguments(Queries).
But i could not figure out where this window come from...(looks like a window from visual studio)
Does anyone know how to show this window? or any alternative way to display executed queries?
My working environment.
Windows Server 2008 Enterprise
MOSS 2007
SQL Server 2008 Enterprise
Visual Studio 2008 with VSeWSS 1.2
Thank you in advance.
Taiga
A:
SQL Server Profiling will let you log queries at a database level. You'd want to refine your trace to queries executed against the SharePoint database.
| {
"pile_set_name": "StackExchange"
} |
Q:
Protractor Button Click and open page in new tab
I am fairly new to Protractor.
I am trying to automate a scenario where I click on a button and its opens up a page in new tab and then we need to populate form in new page and submit.
Issue: when I click on the button to open the new page, my test does not wait for the new page to load and reports the test as completed with a success message.
I am using simple click event of that button to click the button.
element(by.id("newPlan")).click()
Am I missing something? Do I need to do something so that my tests wait for the new page to load, and then I can perform some functions?
A:
You need to wait until the page opens by using callbacks. Try something in this sense:
element(by.id("newPlan")).click().then(function () {
browser.getAllWindowHandles().then(function (handles) {
newWindowHandle = handles[1]; // this is your new window
browser.switchTo().window(newWindowHandle).then(function () {
// fill in the form here
expect(browser.getCurrentUrl()).toMatch(/\/url/);
});
});
});
A:
This is the solution that worked for me, but I've added a browser.sleep(500) to avoid the error mentioned above (UnknownError: unknown error: 'name' must be a nonempty string).
The problem was that the new handle was not yet available.
Just give it a moment after the click to open the new tab and have its handle available.
Yes, it's adding an ugly sleep, but it's a short one...
element(by.id("newPlan")).click().then(function () {
browser.sleep(500);
browser.getAllWindowHandles().then(function (handles) {
newWindowHandle = handles[1]; // this is your new window
browser.switchTo().window(newWindowHandle).then(function () {
// fill in the form here
expect(browser.getCurrentUrl()).toMatch(/\/url/);
});
});
});
| {
"pile_set_name": "StackExchange"
} |
Q:
Laurent series of a trigonometric function
Find the Laurent series (and residue) around $z_0=0$ of the function
$f(z) = \frac{1}{1-\cos z}$.
Progress:
It looks very trivial but it seems to get complicated so I'll only try with 3-4 terms:
We can use that $\cos z = \sum_{k=0}^{\infty}\frac{(-1)^k z^{2k}}{(2k)!} = 1- \frac{z^2}{2!}+\frac{z^4}{4!}-\frac{z^6}{6!}+\dots$ and hence
$f(z) = \frac{1}{1-\cos z} = \frac{1}{1-\left(1- \frac{z^2}{2!}+\frac{z^4}{4!}-\frac{z^6}{6!}+\dots\right)} = \frac{1}{\frac{z^2}{2!}-\frac{z^4}{4!}+\frac{z^6}{6!}-\dots}$
This is not correct apparently, so I tried to factor out $z^2$ and $\frac{z^2}{2!}$ but I was unable to proceed. I suspect that I'll arrive at a nested sum.
A:
Since we're interested only around $\,z=0\,$ we can try the following using what you've already done:
$$\frac1{1-\cos z}=\frac1{\frac{z^2}2\left(1-\frac{z^2}{12}+\mathcal O(z^4)\right)}=\frac2{z^2}\left(1+\frac{z^2}{12}+\frac{z^4}{12^2}+\ldots\right)=$$
$$\frac2{z^2}+\frac16+\frac{2z^2}{12^2}+\ldots$$
We used, of course, the development
$$\frac1{1-z}=1+z+z^2+\ldots\;\;,\;\;|z|<1$$
A:
Factor out a $z^2/2!$ from the denominator to get
$$\begin{align}\frac{1}{1-\cos{z}} &= \frac{2}{z^2} \frac{1}{\displaystyle1-2 \sum_{k=1}^{\infty} (-1)^{k+1}\frac{z^{2 k}}{(2 (k+1))!}}\\ &= \frac{2}{z^2} + \frac{4}{z^2} \sum_{k=1}^{\infty} (-1)^{k+1}\frac{z^{2 k}}{(2 (k+1))!} + \frac{2}{z^2} \left ( \sum_{k=1}^{\infty} (-1)^{k+1}\frac{z^{2 k}}{(2 (k+1))!}\right )^2+\cdots \\ &\approx \frac{2}{z^2} +\frac{1}{6} +\cdots\end{align}$$
The residue at $z=0$ is the coefficient of $1/z$ and is thus zero.
| {
"pile_set_name": "StackExchange"
} |
Q:
Dynamically calling a SQL Server stored procedure from a C# application
I have a class (dbConnections) with methods for handling types of DB queries. I would like to make calls to a method in this class passing the name of the procedure and an array containing the required parameters for that particular call.
However when executed, it doesn't recognise that any parameters have been passed. If I hard-code them, they work fine so there is obviously something wrong with my application of the loop.
I want to be able to re-use this method passing and getting differing parameters, I just don't know how I should be going about it. I haven't tackled the return parameters yet as I haven't got this working...
Any insight would be greatly appreciated.
This is the method in my dbConnections class:
public void ExecuteProcedure(string procedureName, string[] paramName, string[] procParams)
{
SqlCommand cmd = new SqlCommand(procedureName, con);
cmd.CommandType = CommandType.StoredProcedure;
for (int i = 0; i >= paramName.Length; i++)
{
cmd.Parameters.AddWithValue(paramName[i], procParams[i]);
}
cmd.ExecuteNonQuery();
}
This is a calling method:
private void btn_logIn_Click(object sender, EventArgs e)
{
string uid = txb_userId.Text;
string pwd = txb_password.Text;
string procedureName = "spUsers_Login";
string[] paramName = new string[2];
string[] procParams = new string[2];
paramName[0] = "@Username";
procParams[0] = uid;
paramName[1] = "@Password";
procParams[1] = pwd;
db.OpenConection();
db.ExecuteProcedure(procedureName, paramName, procParams);
}
A:
First of all check your loop
for (int i = 0; i >= paramName.Length; i++)
The condition is false from the start, so the loop body never executes; it should be i < paramName.Length
| {
"pile_set_name": "StackExchange"
} |
Q:
Apex : Count the number of letters in an arbitrary string
The String length() method returns the total length of the string, but I need the count of letters only.
For example, if the string is 'm*tt-', the # of letters will be 3 even though the length of the string is 5
A:
String alphaChars = searchString.replaceAll('[^A-Za-z .]','');
Integer charLength = alphaChars.length();
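Note that the pattern above also keeps spaces and periods, which would then be counted. A letters-only variant, sketched in Python for comparison (the function name is my own):

```python
import re

def count_letters(s):
    # Strip everything that is not an ASCII letter, then count what's left.
    return len(re.sub(r'[^A-Za-z]', '', s))
```

count_letters('m*tt-') gives 3, matching the example in the question.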
| {
"pile_set_name": "StackExchange"
} |
Q:
swift print element in array at the indexPath.row
I want to print the string at an element of the array
var twoDimensionalArray = [
ExpandableNames(isExpanded: false, names: ["Antiques",
"Art",
"Collectables",
"Other Antiques , Art & Collectables"]),
ExpandableNames(isExpanded: false, names: ["Baby Carriers",
"Baby Clothing",
"Baths",
"Safety"]),
]
override func tableView(_ tableView: UITableView, didSelectRowAt indexPath: IndexPath) {
let mm = twoDimensionalArray[indexPath.row]
if indexPath == [1,1]{
print("this is mm:",mm)
}
}
//print statement prints
mm is: ExpandableNames(isExpanded: true, names: ["Baby Clothing","Baths","Safety"])
I just want it to print "Baby Carriers".
A:
Assuming ExpandableNames is defined something like this
struct ExpandableNames {
var isExpanded: Bool
var names: [String]
}
you could retrieve the first item at indexPath.row like this
if let firstName = twoDimensionalArray[indexPath.row].names.first {
//do something with firstName
}
or if you want to access an element at a specific index
var index = 0
let firstName = twoDimensionalArray[indexPath.row].names[index]
Q:
Finite groups and topological spaces
Can we connect topological spaces with groups as:
For a topological space $X$, take bijective continuous maps with continuous inverses (homeomorphisms) $\phi: X\to X$, then divide such maps into equivalence classes: $\phi_1 \equiv\phi_2$ if there exists a continuous homotopy between them. Then it is easy to see that these classes form a group $G_X$ with composition as the operation.
Is this correct? If we regard every finite graph $Gr$ as a topological space (edges are segments in $\mathbb{R}^N$, for large $N$), is $G_{Gr}=Aut(Gr)$?
A:
I can't see anything wrong with your construction. If $X$ is homeomorphic to a circle, then $G_X \cong \mathbb{Z}/2$, but if $Gr$ is an $n$-gon, $Aut(Gr)$ is the dihedral group of order $2n$, so your conjecture doesn't hold in that case. Maybe it would hold under some extra assumption, e.g., no vertices of valency 2.
Q:
parser error : Start tag expected, '<' not found
I am using PHP for the first time. I am using the PHP sample for uploading an image to the eBay sandbox. I am getting the following error on running the PHP file:
PHP Warning: simplexml_load_string(): Entity: line 1: parser error : Start tag expected, '<' not found in /home/nish/stuff/market place/test/php5/UploadImage/UploadSiteHostedPictures.php on line 69
PHP Warning: simplexml_load_string(): HTTP/1.1 200 OK in /home/nish/stuff/market place/test/php5/UploadImage/UploadSiteHostedPictures.php on line 69
PHP Warning: simplexml_load_string(): ^ in /home/nish/stuff/market place/test/php5/UploadImage/UploadSiteHostedPictures.php on line 69
PHP Notice: Trying to get property of non-object in /home/nish/stuff/market place/test/php5/UploadImage/UploadSiteHostedPictures.php on line 92
PHP Notice: Trying to get property of non-object in /home/nish/stuff/market place/test/php5/UploadImage/UploadSiteHostedPictures.php on line 93
PHP Notice: Trying to get property of non-object in /home/nish/stuff/market place/test/php5/UploadImage/UploadSiteHostedPictures.php on line 93
PHP Notice: Trying to get property of non-object in /home/nish/stuff/market place/test/php5/UploadImage/UploadSiteHostedPictures.php on line 94
PHP Notice: Trying to get property of non-object in /home/nish/stuff/market place/test/php5/UploadImage/UploadSiteHostedPictures.php on line 94
Relevant lines are:
69. $respXmlObj = simplexml_load_string($respXmlStr); // create SimpleXML object from string for easier parsing
// need SimpleXML library loaded for this
92. $ack = $respXmlObj->Ack;
93. $picNameOut = $respXmlObj->SiteHostedPictureDetails->PictureName;
94. $picURL = $respXmlObj->SiteHostedPictureDetails->FullURL;
What I can understand is that respXMLObj is not getting set properly. I have checked that SimpleXML support is enabled.
Could someone please help me debug this. Thanks
A:
The code you refer to has this line:
//curl_setopt($connection, CURLOPT_HEADER, 1 ); // Uncomment these for debugging
it seems like you uncommented this. It will result in the HTTP header being included in your response, which is fine for debugging, but it causes an XML parse error in simplexml_load_string, since the response no longer starts with '<'.
Either comment it out again or put 0 as its value.
Q:
How to run nodemon + ts-node + typescript altogether without having to install ts-node or npx globally?
I have the following in my package.json:
"scripts": {
"serve-fake-api": "nodemon fake-api/server.ts --watch 'fake-api/*.*'",
"serve-vue": "vue-cli-service serve",
"serve": "concurrently -k \"npm run serve-fake-api\" \"npm run serve-vue\"",
"build": "vue-cli-service build",
"lint": "vue-cli-service lint"
},
and I would like to rewrite "serve-fake-api": "nodemon --exec 'ts-node' fake-api/server.ts --watch fake-api/*.*", but without having to install ts-node or npx globally.
How can I achieve that?
A:
I managed to run everything with the package.json below:
{
"name": "rm-combo",
"version": "0.1.0",
"private": true,
"scripts": {
"serve-fake-api": "nodemon fake-api/index.ts --watch fake-api/*.*",
"serve-vue": "vue-cli-service serve",
"serve": "concurrently -k \"npm run serve-fake-api\" \"npm run serve-vue\"",
"build": "vue-cli-service build",
"lint": "vue-cli-service lint"
},
"dependencies": {
"@types/node": "^12.12.7",
"axios": "~0.19.0",
"devextreme": "19.2.3",
"devextreme-vue": "19.2.3",
"element-ui": "~2.8.2",
"oidc-client": "~1.9.1",
"vue": "^2.6.10",
"vue-class-component": "^7.1.0",
"vue-property-decorator": "^8.3.0",
"vue-router": "^3.1.3",
"vuetify": "^2.1.10",
"vuex": "^3.1.2",
"vuex-class": "^0.3.2"
},
"devDependencies": {
"@types/express": "^4.17.2",
"@types/json-server": "^0.14.2",
"@vue/cli-plugin-typescript": "^4.0.5",
"@vue/cli-service": "^4.0.5",
"concurrently": "^5.0.0",
"devextreme-cldr-data": "^1.0.2",
"globalize": "^1.4.2",
"json-server": "^0.15.1",
"node-sass": "^4.13.0",
"nodemon": "^1.19.4",
"sass-loader": "^8.0.0",
"ts-node": "^8.5.0",
"typescript": "^3.7.2",
"vue-template-compiler": "^2.6.10"
}
and TypeScript features are supported by doing the trick described there: https://stackoverflow.com/a/59126595/4636721
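A related point: npm prepends the project's node_modules/.bin directory to PATH when running scripts, so a locally-installed nodemon can invoke a locally-installed ts-node explicitly, with no global installs and no npx. A minimal sketch of the relevant package.json fragment (script path and versions taken from the question):

```json
{
  "scripts": {
    "serve-fake-api": "nodemon --exec ts-node fake-api/server.ts --watch fake-api"
  },
  "devDependencies": {
    "nodemon": "^1.19.4",
    "ts-node": "^8.5.0",
    "typescript": "^3.7.2"
  }
}
```

With this in place, npm run serve-fake-api resolves both binaries from node_modules/.bin.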
Q:
console.log issue in a local folder
I just wonder what I am doing with console.log is wrong or not.
I have simple two files as below:
index.html
index.js
and when opening index.html in Chrome (c:\temp\index.html), it does not output the console.log message in the console tab, as below.
Am I missing something?
As you can see, if you run the code below, it shows the console.log output properly.
function doSomething() {
console.log("Work!");
}
doSomething();
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<meta http-equiv="X-UA-Compatible" content="ie=edge">
<title>Document</title>
</head>
<body>
<div>Hi</div>
<script scr='index.js'> </script>
</body>
</html>
A:
Looks like you have a typo:
<script scr='index.js'>
should be
<script src='index.js'>
Q:
How to split Devise user into different types?
How can I split a Devise user into different types? For example: a user goes to the registration page and signs up, but based on whether they're a teacher or a student they will be registered and, on login, see a different navbar. How would I do that using a checkbox?
A:
It is excellently explained in their Wiki.
In my opinion you should consider option number 1. and 3:
Separate model for student and teacher, if they have different attributes
One model for both with additional column role. It will be appropriate if models have the same attributes.
Then on your views just check what is the role / type of user and present proper content.
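If you go with option 3, the core mechanism is just a role value plus per-role predicate methods. A minimal plain-Ruby sketch of the idea (class and method names here are illustrative assumptions; in a real Rails app, role would be a database column added via a migration such as add_column :users, :role, :string):

```ruby
# Sketch of option 3: a single User model carrying a role.
# In Rails/Devise the role would come from a `role` column; here it is
# a plain attribute so the idea can be shown standalone.
class User
  ROLES = %w[student teacher].freeze

  attr_reader :role

  def initialize(role)
    raise ArgumentError, "unknown role: #{role}" unless ROLES.include?(role)
    @role = role
  end

  # Define student? / teacher? predicates for use in views,
  # e.g. render the teacher navbar if current_user.teacher?
  ROLES.each do |r|
    define_method("#{r}?") { role == r }
  end
end
```

On the registration form, the checkbox would set role, and your layout can then branch on current_user.teacher? to render the appropriate navbar.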
Q:
Rigid pentagons and rational solutions of $s^4+s^3+s^2+s+1=y^2$
Gerard 't Hooft, Nobel Prize in Physics laureate, wrote three articles on what he called "Meccano math" (1, 2, 3) – rigid constructions following rules quite similar to my earlier question on doubling the cube with unit sticks, but with the following generalisations:
Sticks can be of any rational length (the formulation in 't Hooft's papers uses idealised Meccano strips of integral length, but they can be trivially scaled)
Hinges can lie anywhere on a stick, not just at the ends, as long as they are at rational distances from the ends
For rigid polygons, the polygon's sides can be extended
One of the given constructions is a rigid pentagon with just two extra sticks. However, it does not look very nice because it requires long extensions of two sides.
So I decided to make it less intrusive (in the sense of "less occupied space outside the pentagon") as follows. Let $r,t,s$ be the lengths of three consecutive sides of a quadrilateral, with $108^\circ=\frac{3\pi}5$ angles between them:
Then it is easy to show that the fourth side length $u$ is
$$\sqrt{\left((r+s)\cos\frac{2\pi}5+t\right)^2+\left((r-s)\sin\frac{2\pi}5\right)^2}$$
We want all four side lengths to be rational (but they can be negative). If $u$ is rational, so is $u^2$, so the expression inside the square root must also be rational. Expanding it gives
$$r^2+s^2+t^2-\frac{rs+rt+st}2+\frac{\sqrt5}2(rt+st-rs)$$
and for this to be rational we must have $rt+st-rs=0$ or $t=\frac{rs}{r+s}$. Making this substitution gives
$$u=\sqrt{\frac{r^4+r^3s+r^2s^2+rs^3+s^4}{r^2+2rs+s^2}}$$
Clearly we can scale any solution $(r,s,t,u)$ by any rational number, so we set $r=1$ arbitrarily:
$$u=\sqrt{\frac{s^4+s^3+s^2+s+1}{s^2+2s+1}}=\frac{\sqrt{s^4+s^3+s^2+s+1}}{|s+1|}$$
Thus, up to scale, all rational solutions correspond one-to-one with solutions of
$$s^4+s^3+s^2+s+1=y^2\qquad s,y\in\mathbb Q,s\not\in\{0,-1\}\tag1$$
The same equation has been posed on this site before, but only with integers, and I could not find any good reference in this answer. By Faltings's theorem there are only finitely many solutions, but have I found all of them?
Is it true that $(1)$ has a solution only if $s$ or $1/s$ is in $\left\{3,\frac{808}{627},-\frac{11}8,-\frac{123}{35}\right\}$? References would be much appreciated.
The solution with $s=-\frac{11}8$ in particular gives a much less intrusive rigid pentagon. (All black sticks below, sides of the pentagon, are of unit length.)
A:
Not true I'm afraid. There are, in fact, an infinite number of rational solutions.
The curve is a quartic with a rational point $(0,1)$, and is thus birationally equivalent to an elliptic curve, which has genus $1$. Faltings' Theorem only applies if the genus is strictly greater than $1$.
The equivalent elliptic curve is $v^2=u^3-5u^2+5u$ with $s=(2v-u)/(4u-5)$. The point $(0,0)$ is the only finite torsion point and we can take $(1,1)$ as a generator.
The rational solutions you give come from small multiples of the generator. Larger examples are $-20965/43993$ and $-761577/1404304$, but you can get larger and larger solutions.
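All four values from the question (and, because the quartic is palindromic, their reciprocals as well — divide both sides by $s^4$) can be verified with exact rational arithmetic. A quick sketch in Python (helper names are mine):

```python
from fractions import Fraction
from math import isqrt

def is_rational_square(x: Fraction) -> bool:
    """A rational in lowest terms is a square iff its numerator and denominator are."""
    n, d = x.numerator, x.denominator
    return n >= 0 and isqrt(n) ** 2 == n and isqrt(d) ** 2 == d

def f(s: Fraction) -> Fraction:
    return s**4 + s**3 + s**2 + s + 1

# The four solutions listed in the question; s -> 1/s maps solutions to solutions.
candidates = [Fraction(3), Fraction(808, 627), Fraction(-11, 8), Fraction(-123, 35)]

for s in candidates:
    assert is_rational_square(f(s)) and is_rational_square(f(1 / s))

print(f(Fraction(-11, 8)))  # 10201/4096 = (101/64)**2
```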
Q:
How to teach brackets?
I was taught in a school that one has to use different brackets in expressions like $\{[(3+4)\cdot 4]^4\}^{1/2}$ to denote the order in which each subexpression is evaluated. But can this be recommended in current mathematics? I guess not, as one can create arbitrarily complex expressions that would require arbitrarily many different bracket notations. Also, I think it is good to teach that $()$ is for evaluation order, $[]$ is for the floor function and matrices, and $\{\}$ is for set notation. So is it wrong if I suggest that students use the notation $(((3+4)\cdot 4)^4)^{1/2}$?
A:
I teach the students to use parentheses only. They are widely acceptable and it is what I prefer to use. One argument against brackets is that some students find it hard to remember how to draw different kinds of brackets and spend inordinate amounts of time shaping their brackets.
However I find that it is important to show students that different brackets may be used as parentheses and are not only used for matrices or the floor function. When students find an expression with different brackets they need to be able to understand it and know how to evaluate correctly.
A:
It is definitely not wrong to teach your students to just use (). I would possibly argue the converse. Your experience at school flies in the face of convention. Teaching your students that way will not prepare them well for meeting the rest of the world.
Q:
can I install multiple versions of php and mysql on xampp
I am using XAMPP on Windows XP. Recently I have felt the need for more than one version of PHP and MySQL.
Is it possible?
A:
Have a look at this posting on SuperUser
Q:
How to iterate visible rows alone using UFT
When I iterate the webtable below, I get a row count of 3 (including the hidden row),
but I can see only 2 rows in my application.
I can get the row count with the help of descriptive programming, but I want to iterate only the rows that are visible.
<table>
<tbody>
<tr class="show">Name</tr>
<tr class="hide">Ticket</tr>
<tr class="show">city</tr>
</tbody>
</table>
I have tried the code below, but it displays the hidden row's text as well:
for i=1 to rowcount
print oWebtable.getcelldata(i,2)
next
Actual output:
Name,
Ticket,
city
Expected output:
Name,
city
A:
UFT has no knowledge of your show/hide class names. If you want to filter out some rows, you need to do it yourself.
Set desc = Description.Create()
desc("html tag").Value = "TR"
desc("class").Value = "show"
Set cells = oWebtable.ChildObjects(desc)
Print "Count: " & cells.Count
For i = 0 To cells.Count - 1
Print i & ": " & cells(i).GetROProperty("inner_text")
Next
Note that I had to add TD elements to your table in order for this to work since it's invalid HTML to have text in a TR element.
Q:
multiple triggered subsystem + algebraic loop, initialisation problem
I have a Simulink diagram which contains multiple triggered subsystems with different timestamps. In this model I also have a feedback loop, inducing an algebraic loop. Therefore the signal must be initialised; in order to do that, I used a Memory block.
The problem is that, on the feedback loop, the value of the signal seems not to be initialised.
I believe the origin of this problem is that the signal is indeed initialised by the Memory block for the first timestamp; however, the trigger on the next subsystem did not occur, and by default this subsystem sets its output signal value to 0. The loop is therefore broken there.
Has someone already encountered this situation? Any tips?
Thank you for your time.
A:
You could add initialization blocks for your trigger values. I don't know what SubSystem0 looks like inside, but its output could use an initialization block as well; this way you guarantee that you have an input to Subsystem.
Q:
JSON Newtonsoft C# Good Practice for Serialize/ Deserialize Lists of Objects
I've read other posts here about this question:
Serializing a list of Object using Json.NET
Serializing a list to JSON
Merge two objects during serialization using json.net?
All very useful. Certainly, I can serialize two lists into one JSON string, but I can't deserialize it.
I'm working with Json Newtonsoft, C#, MVC5, framework 4.5. This is the scenario:
C# CODE
public class User
{
public int id { get; set; }
public string name { get; set; }
}
public class Request
{
public int id { get; set; }
public int idUSer{ get; set; }
}
List<User> UserList = new List<User>();
List<Request> RequestList = new List<Request>();
string json= JsonConvert.SerializeObject(new { UserList, RequestList });
JSON RESULT
{
"UserList":[
{
"id":1,
"name":"User 1"
},
{
"id":2,
"name":"User 2"
},
{
"id":3,
"name":"User 3"
}
],
"RequestList":[
{
"id":1,
"idUSer":1
},
{
"id":2,
"idUSer":1
},
{
"id":3,
"idUSer":1
},
{
"id":4,
"idUSer":2
}
]
}
C# DESERIALIZE
I don't know how to configure the settings of Json.Deserialize<?, Settings>(json) to indicate what types of objects are being deserialized.
Change of approach
So, changing approach, I've created a new class "Cover" in order to put the lists together and serialize a single object:
public class Cover
{
private List<User> user = new List<User>();
private List<Request> request = new List<Request>();
public List<User> User
{
get { return user;}
set { user = value;}
}
public List<Request> Request
{
get {return request;}
set {request = value;}
}
}
SERIALIZE
string json = JsonConvert.SerializeObject(cover);
JSON: the JSON result is the same.
DESERIALIZE
Cover result = JsonConvert.DeserializeObject<Cover>(json, new
JsonSerializerSettings { TypeNameHandling = TypeNameHandling.Auto });
It works fine. My situation is resolved, but I have doubts about the concepts; something is still not clear in my mind:
MY QUESTIONS ARE:
For the first approach:
Do you think there is a way to deserialize a JSON string containing different lists of objects? Is it not good practice?
Second approach: why are the JSON results equal in both situations?
A:
In JSON.NET you need to specify the type you're about to deserialize, by supplying its name as a type argument to DeserializeObject.
However, in this line:
string json= JsonConvert.SerializeObject(new { UserList, RequestList });
you create an anonymous object and then serialize it - new { UserList, RequestList }. So there is the catch: you cannot use an anonymous type as a type argument.
To handle such situations, JSON.NET provides DeserializeAnonymousType<>. It doesn't require you to supply the type argument; actually you can't, as you are going to deserialize an anonymous type. Instead it is inferred from the type of the second argument passed to the method. So you just create a dummy anonymous object, without data, and pass it to this method.
// jsonData contains previously serialized List<User> and List<Request>
void DeserializeUserAndRequest(string jsonData)
{
var deserializedLists = new {
UserList = new List<User>(),
RequestList = new List<Request>()
};
deserializedLists = JsonConvert.DeserializeAnonymousType(jsonData, deserializedLists);
// Do your stuff here by accessing
// deserializedLists.UserList and deserializedLists.RequestLists
}
Of course this all works fine, but this approach presumes that you already know the structure of the serialized data. If that structure doesn't match the structure of the anonymous type you initialized, you'll get nothing back from the DeserializeAnonymousType method. And this is valid not just for the types of the anonymous type's properties, but for their names too.
Q:
capture text output as structured data frame
I have output streamed as text in the following form:
[2] "TWS OrderStatus: orderId=12048 status=PreSubmitted
filled=0 remaining=300 averageFillPrice=0 "
[3] "TWS OrderStatus: orderId=12049 status=PreSubmitted
filled=0 remaining=300 averageFillPrice=0 "
I would like to capture such output and convert it to a data frame with columns: orderId, status, filled, remaining, averageFillPrice.
I am wondering what is the most efficient way to do it.
I tried capturing it with capture.output but then I am not so sure how to convert it to a data frame.
A:
I think you can do this with a few base string functions. If you had your strings stored in a list, as in the example below, you could create a function to extract the information you need and then apply it to the list and output a data frame:
a <- "TWS OrderStatus: orderId=12048 status=PreSubmitted filled=0 remaining=300 averageFillPrice=0 "
b <- "TWS OrderStatus: orderId=12049 status=PreSubmitted filled=0 remaining=300 averageFillPrice=0 "
dat <- list(a, b)
extract <- function(x) {
a <- as.vector(strsplit(x, " ")[[1]])[-(1:2)]
return(sapply(a, function(b) substr(b, gregexpr("=", b)[[1]] + 1, nchar(b))))
}
as.data.frame(t(sapply(dat, extract)))
The output could be prettier but I'm sure you can clean it up a bit. It works if all your data follows the same pattern (i.e. split by spaces and where you don't want the bit before the equals signs).
Q:
Pandas: add (sum) dataframes with some different indices and columns
I'm trying to use Pandas's capabilities to add other dataframes together as well, but the ways I'm trying to do it are not really working out. Generally, the two dataframes will have a few rows that are the same (whose values should be added), and a few rows that are different (and should be concatenated). However, the index may be different as well. As below:
# dataframe 1
pi = pd.PeriodIndex(start=2017, periods=10, freq='M')
a = pd.Series(np.full(shape=10, fill_value=2), pi)
b = pd.Series(np.full(shape=10, fill_value=3), pi)
df1= pd.DataFrame({'data_1': a, 'data_2': b})
# dataframe 2 to be added with longer index & additional data column
pi2 = pd.PeriodIndex(start=2016, periods=30, freq='M')
a = pd.Series(np.full(shape=30, fill_value=2), pi2)
b = pd.Series(np.full(shape=30, fill_value=3), pi2)
c = pd.Series(np.full(shape=30, fill_value=3), pi2)
df2= pd.DataFrame({'data_1': a, 'data_2': b, 'data_3': c})
new_df = df1 + df2
# returns a sum for all indices where there is a union, then nan
# for everything else --> need to preserve values at those other locations
# data_3 should return array/series full of 3s instead of nans
# new_df.iloc[0,0] should return 2 instead of nan
I've tried a few things, but not really getting it to work as any not_null or fill_na stuff gets called after the nans are generated.
A:
new_idx = df1.index.union(df2.index)
new_cols = df2.columns.union(df2.columns)
new_df = df1.loc[new_idx, new_cols].fillna(0) + df2.loc[new_idx, new_cols].fillna(0)
Edit:
Actually you can just use
new_df = df1.add(df2, fill_value=0)
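As a quick check of that fill_value behaviour against the question's data (using pd.period_range, the current spelling of the deprecated pd.PeriodIndex(start=..., periods=...) constructor):

```python
import numpy as np
import pandas as pd

# dataframe 1: 10 months starting 2017-01
pi = pd.period_range(start="2017-01", periods=10, freq="M")
df1 = pd.DataFrame({"data_1": np.full(10, 2), "data_2": np.full(10, 3)}, index=pi)

# dataframe 2: 30 months starting 2016-01, with an extra column
pi2 = pd.period_range(start="2016-01", periods=30, freq="M")
df2 = pd.DataFrame({"data_1": np.full(30, 2), "data_2": np.full(30, 3),
                    "data_3": np.full(30, 3)}, index=pi2)

new_df = df1.add(df2, fill_value=0)

# Overlapping month: values are summed.
assert new_df.loc[pd.Period("2017-03", freq="M"), "data_1"] == 4
# Month present only in df2: df2's value survives instead of NaN.
assert new_df.loc[pd.Period("2016-01", freq="M"), "data_1"] == 2
# Column present only in df2: filled with df2's values rather than NaN.
assert new_df.loc[pd.Period("2016-06", freq="M"), "data_3"] == 3
```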
Q:
Information propagating at less than the speed of light
At what speed does information propagate in a medium?
For example, suppose we live in a pool that is a cube one light-year on a side, and somebody drains the pool at a point one light-year away from an observer. Will the observer experience the effect of the event after one year or longer? To put the question another way: will the observer experience the effect of the event first, and only afterwards realize that someone drained the pool?
So now I extend my question. Suppose the graviton propagates at the speed of light in any medium, and I know that the photon propagates at less than c (let's say it propagates at v) in some specific medium. Then I could build two detectors to find where objects emitting electromagnetic and gravitational waves are located. It would be like earthquake detectors, right?
A:
Actually he will see that something is happening in the other part of the pool (with light) before he will feel anything, because "dewatering" will send a sound wave in the medium, which is much slower than the speed of light in the medium.
Anyhow, one can send information faster than light in a medium, if the light is slow enough, high energy particles (like neutrons, electrons, neutrino, etc...) can travel faster and deliver information faster than light. If a charged particle does that, Cherenkov radiation is emitted by those particle.
So to sum up: information will never travel faster than $c$ because that breaks some rules from special relativity. But if the speed of light is lower than $c$, particles with a velocity bigger than light can transfer information faster. However, information concerning pressure, density and other flow parameters travel at the speed of sound via the interactions between molecules in the medium and that is very slow, meaning you will see someone dewater and only then feel anything due to the water flowing.
Q:
Ideas for creating a "Did you mean XYZ" feature into website
I'd like to give users the ability to search through a large list of businesses, but still find near matches.
Does anyone have any recommendations on how best to go about this when you're not targeting simple dictionary words, but instead complex names like ABC Business Name?
Regards.
A:
Check out the wikipedia article on Levenshtein distance. It's a fairly simple concept to wrap your head around and pretty easy to implement an algorithm in whichever language you are using, in your case, C#.
I found an example in C# for you here.
Also, here is an example of a spelling corrector from Peter Norvig of Google. It was said on the SO podcast a few episodes ago that Jon Skeet attempted a rewrite of this same algorithm in C#. Not sure if he completed it and/or made it publicly available though.
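The distance itself is a small dynamic program. A sketch in Python of both the metric and how you might rank near matches (the business names here are made-up examples; the same table-filling logic translates directly to C#):

```python
def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance (insert/delete/substitute)."""
    # prev[j] holds the distance between a[:i-1] and b[:j] from the previous row.
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        curr = [i]
        for j, cb in enumerate(b, start=1):
            cost = 0 if ca == cb else 1
            curr.append(min(prev[j] + 1,          # deletion
                            curr[j - 1] + 1,      # insertion
                            prev[j - 1] + cost))  # substitution
        prev = curr
    return prev[len(b)]

# Rank candidates for a "did you mean" suggestion:
names = ["ABC Business Name", "ABD Business Name", "XYZ Corp"]
query = "ABC Busines Name"
best = min(names, key=lambda n: levenshtein(query.lower(), n.lower()))
# best is "ABC Business Name" (distance 1: one missing 's')
```

In practice you'd cap the acceptable distance (or normalize by length) so wildly different names aren't suggested at all.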
Q:
Servers configuration to improve performance and offer redundancy for ASP.NET site with SQL Server
I'm administering a fairly large website (currently about 300 thousand page views a day), which is expected to grow fast. Currently both IIS and SQL Server are running on a quad-core server with RAID 10 SAS hard drives and 32 GB of RAM. A less powerful server is configured as a cold backup. Databases are synchronized daily, and the site files are also copied to the backup server daily. In case the primary server goes down, the site can be up again in a few hours, but that's not ideal. I'm looking for a solution that will offer:
improved performance. In the future it will be necessary to create a web farm to handle the requests, so I need to plan for that.
redundancy. If one server goes down, the site should not go down.
backup. The data are critical, so the SQL Server configuration should be such that we don't lose data older than one day (it's no big issue if the last day's data are lost)
Also, the solution should include disaster recovery. If the data center goes up in flames, we'll need a solution to be back online in less than one day (we're thinking of keeping a copy of the data and site on our local servers, but we'll need a way to make the process as automatic as possible; the primary server is hosted in a data center in Germany).
The database is 50GB+ while the web application is rather small.
A:
This all sounds pretty standard. I'm going to assume SQL Server 2008 R2 or SQL Server 2012 here for the database part.
The first thing you need to do is get IIS off of the SQL Server and put it onto it's own machine. You'll also need to get some sort of load balancer to put in front of the web farm. I'd recommend something like an F5 or Cisco, though you could go with a Linux based load balancer if you have a Linux person in house. Once you've got the load balancer in place as you need to grow the web farm out doing so is pretty easy. You just buy another server, configure it like normal and add it to the farm in the load balancer.
As for SQL HA, you'll probably want to look at SQL Server Database Mirroring. This will give you two servers in the local data center (though you could put them in different data centers) with automatic fail over if you have the Enterprise Edition of SQL Server.
Setting up the backups to copy from the data center to your office isn't all that hard. Just setup a site to site VPN and copy the files over the network. Bandwidth and latency become the only problem at that point.
Your DR requirement is going to be the hardest part. Having a requirement that you be back up and running in less than a day means that you need to have a contract with another data center, and that you need to have servers already at that data center. Without having this equipment already in place you will never hit your goal of getting the site back up and running within a day as just getting new servers can take weeks (or longer depending on how big the disaster is as you won't be the only people trying to buy new servers).
On the web server side, DR is easy. Simply point the DNS servers to the public IP at the DR site.
For the SQL Server side of things you'll probably want to look at transaction log shipping from the primary site to the DR site. If you want an easier config look at SQL Server 2012's AlwaysOn Availability Groups. They'll do automatic failover, sync and async data replication, etc. AlwaysOn Availability Groups do require an Active Directory domain, so you'll need to look into getting that setup first.
If you haven't noticed yet DR isn't cheap or easy.