_id | partition | text | language | title |
---|---|---|---|---|
d17001 | val | Currently I would suggest using the grep command in the terminal (Xcode cannot search .xib files):
grep -i -r --include=*.xib "TextToFindHere" /PathToSearchHere
Another way is to right-click your .xib file, choose Open As Source Code, and use Command+F to search. But this method can only search one file at a time. | unknown | |
d17002 | val | To completely customize your ActionBar tabs, try something like the following, for a fictional tab called "Home". This layout contains an image and label.
(1) create the tab as normal, but specify a custom layout via ActionBar.Tab.setCustomView()
// Home tab
LayoutInflater inflater = (LayoutInflater) this.getSystemService(Context.LAYOUT_INFLATER_SERVICE);
HomeFragment homeFragment = new HomeFragment();
RelativeLayout layoutView = (RelativeLayout)inflater.inflate(R.layout.layout_tab_home, null);
TextView title = (TextView)layoutView.findViewById(R.id.title);
ImageView img = (ImageView)layoutView.findViewById(R.id.icon);
ActionBar.Tab tabHome = mActionBar.newTab();
tabHome.setTabListener(homeFragment);
title.setText(this.getString(R.string.tabTitleHome));
img.setImageResource(R.drawable.tab_home);
tabHome.setCustomView(layoutView);
mActionBar.addTab(tabHome);
(2) create a layout for your tab (layout_tab_home.xml)
<?xml version="1.0" encoding="utf-8"?>
<RelativeLayout xmlns:android="http://schemas.android.com/apk/res/android" android:layout_width="wrap_content" android:layout_height="56dip" android:layout_weight="0" android:layout_marginLeft="0dip" android:layout_marginRight="0dip">
<ImageView android:id="@+id/icon" android:layout_width="wrap_content" android:layout_height="wrap_content" android:layout_centerHorizontal="true" android:paddingBottom="2dip" />
<TextView android:id="@+id/title" android:layout_width="wrap_content" android:layout_height="wrap_content" android:layout_below="@id/icon" android:paddingLeft="0dip" android:paddingRight="0dip" style="@style/tabText" />
</RelativeLayout>
(3) set a drawable for your image
<?xml version="1.0" encoding="utf-8"?>
<selector xmlns:android="http://schemas.android.com/apk/res/android">
<item android:state_selected="true" android:drawable="@drawable/home_sel" />
<item android:state_selected="false" android:drawable="@drawable/home_unsel" />
</selector>
In this example, I just have a PNG graphic with different colors for the selected vs. unselected states.
You seem to be most interested in the BACKGROUND drawable, so the same thing would apply, but set a background drawable on your RelativeLayout, and use a selector similar to the above.
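For example, as a rough sketch (tab_bg_selected and tab_bg_unselected are placeholder drawables you would supply yourself), add android:background="@drawable/tab_background" to the RelativeLayout above and define res/drawable/tab_background.xml as:
<?xml version="1.0" encoding="utf-8"?>
<selector xmlns:android="http://schemas.android.com/apk/res/android">
    <item android:state_selected="true" android:drawable="@drawable/tab_bg_selected" />
    <item android:drawable="@drawable/tab_bg_unselected" />
</selector>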
A: You can use inset drawables to create the line strip at the bottom. This XML will create a white rectangle with a blue line at the bottom like the one you posted. Then you can use state list drawables for its states.
<?xml version="1.0" encoding="utf-8"?>
<inset xmlns:android="http://schemas.android.com/apk/res/android"
android:insetBottom="0dp"
android:insetLeft="-5dp"
android:insetRight="-5dp"
android:insetTop="-5dp" >
<shape>
<solid android:color="#FFF" />
<stroke
android:width="3dp"
android:color="#00F" />
</shape>
</inset>
For more info you can have a look at this post: http://blog.stylingandroid.com/archives/1329 | unknown | |
d17003 | val | If you have numeric ratings, you could use diff to check if you consistently have 0 difference between each rater:
f <- function(cols, data) {
sum(colSums(diff(t(data[cols]))==0)==(length(cols)-1)) / nrow(data)
}
Results are as expected when applying the function to example groups:
f(c("a","b","d"), df)
#[1] 0.6
f(c("a","d"), df)
#[1] 1
A: There are two tasks here: firstly, making a list of all the relevant combinations, and secondly, evaluating and aggregating rowwise similarity. combn can start the first task, but it takes a little massaging to arrange the results into a neat list. The second task could be handled with prop.table, but here it's simpler to calculate directly.
Here I've used tidyverse grammar (primarily purrr, which is helpful for handling lists), but convert into base if you like.
library(tidyverse)
map(2:length(df), ~combn(names(df), .x, simplify = FALSE)) %>% # get combinations
flatten() %>% # eliminate nesting
set_names(map_chr(., paste0, collapse = '')) %>% # add useful names
# subset df with combination, see if each row has only one unique value
map(~apply(df[.x], 1, function(x){n_distinct(x) == 1})) %>%
map_dbl(~sum(.x) / length(.x)) # calculate TRUE proportion
## ab ac ad bc bd cd abc abd acd bcd abcd
## 0.6 0.2 1.0 0.2 0.6 0.2 0.0 0.6 0.2 0.0 0.0
A: With base R functions you could do:
groupVec = c("a","b","d")
transDF = t(as.matrix(DF))
subDF = transDF[rownames(transDF) %in% groupVec,]
subDF
# [,1] [,2] [,3] [,4] [,5]
# a 1 2 1 2 1
# b 1 2 2 1 1
# d 1 2 1 2 1
#if length of unique values is 1, it implies match across all objects, count unique values/total columns = match pct
match_pct = sum(sapply(as.data.frame(subDF), function(x) sum(length(unique(x))==1) ))/ncol(subDF)
match_pct
# [1] 0.6
Wrapping it in a custom function:
fn_matchPercent = function(groupVec = c("a","d") ) {
transDF = t(as.matrix(DF))
subDF = transDF[rownames(transDF) %in% groupVec,]
match_pct = sum(sapply(as.data.frame(subDF), function(x) sum(length(unique(x))==1) ))/ncol(subDF)
outputDF = data.frame(groups = paste0(groupVec,collapse=",") ,match_pct = match_pct)
return(outputDF)
}
fn_matchPercent(c("a","d"))
# groups match_pct
# 1 a,d 1
fn_matchPercent(c("a","b","d"))
# groups match_pct
# 1 a,b,d 0.6
A: Try this:
find.unanimous.percentage <- function(df, at.a.time) {
cols <- as.data.frame(t(combn(names(df), at.a.time)))
names(cols) <- paste('O', 1:at.a.time, sep='')
cols$percent.unanimous <- 100*colMeans(apply(cols, 1, function(x) apply(df[x], 1, function(y) length(unique(y)) == 1)))
return(cols)
}
find.unanimous.percentage(df, 2) # take 2 at a time
O1 O2 percent.unanimous
1 a b 60
2 a c 20
3 a d 100
4 b c 20
5 b d 60
6 c d 20
find.unanimous.percentage(df, 3) # take 3 at a time
O1 O2 O3 percent.unanimous
1 a b c 0
2 a b d 60
3 a c d 20
4 b c d 0
find.unanimous.percentage(df, 4)
O1 O2 O3 O4 percent.unanimous
1 a b c d 0
A: Clustering similarity metrics
It seems that you might want to calculate a substantially different (better?) metric than the one you propose now, if your actual problem requires evaluating various ways of clustering the same data.
This http://cs.utsa.edu/~qitian/seminar/Spring11/03_11_11/IR2009.pdf is a good overview of the problem, but the BCubed precision/recall metrics are commonly used for similar problems in NLP (e.g. http://alias-i.com/lingpipe/docs/api/com/aliasi/cluster/ClusterScore.html).
A: Try this code. It works for your example and should hold for the extended case.
df <- data.frame(a = c(1,2,1,2,1), b=c(1,2,2,1,1), c= c(2,1,2,2,2), d=c(1,2,1,2,1))
# Find all unique combinations of the column names
group_pairs <- data.frame(t(combn(colnames(df), 2)))
# For each combination calculate the similarity
group_pairs$similarities <- apply(group_pairs, 1, function(x) {
sum(df[x["X1"]] == df[x["X2"]])/nrow(df)
}) | unknown | |
d17004 | val | Since the secondary volume has the same UUID and Amazon Linux uses UUID-based identification for the root filesystem, there is a chance that the secondary volume was picked up as the root volume. That mix-up in choosing the root volume may be why the initial attempt to find test.txt failed.
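As a rough sketch of how to check for and clear the clash (the device name /dev/xvdf1 is only a placeholder for however the secondary volume shows up; tune2fs assumes an ext2/3/4 filesystem, and xfs_admin is the XFS equivalent):
# list the UUIDs of all attached block devices; a duplicate confirms the clash
sudo blkid
# give the secondary (non-root) volume a new random UUID so it can no longer be mistaken for the root filesystem
sudo tune2fs -U random /dev/xvdf1
# for XFS instead: sudo xfs_admin -U generate /dev/xvdf1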
The reboot might have caused the volumes to be picked up in a different order, which is why you were able to find it afterwards. | unknown | |
d17005 | val | You have the 'auth' middleware on your route. You should investigate it a bit and you'll understand why it doesn't work.
The point is that the 'auth' middleware requires cookies to work correctly, and those are empty when you make an Ajax request. Why does it give you 302 and not 401? You should look into your authentication handler for any custom logic on that.
What to do? Implement JWT auth or stateless authentication!
A: OK problem solved...
My problem was in routes/web.php: I had declared that route inside a Route group, which has two other customized middlewares.
One of those middlewares was causing the wrong redirection.
d17006 | val | At least in my case, the answer is to run the jobs as the same user as the pg_cron background thread. I've posted more details at the end of the original question. | unknown | |
d17007 | val | You need to consider using the Anchor and Dock properties; this is how you position your controls on the form and control their positions at various scales.
You can find a very useful article here about using anchoring and docking.
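For illustration, a minimal C# sketch (textBox1 and statusStrip1 are placeholders for whatever control names the designer generated; the same values can also be set in the Properties window):
using System.Windows.Forms;

public partial class MainForm : Form
{
    public MainForm()
    {
        InitializeComponent();
        // Stretch the text box horizontally as the form is resized, pinned to the top edge
        textBox1.Anchor = AnchorStyles.Top | AnchorStyles.Left | AnchorStyles.Right;
        // Keep the status strip glued to the bottom edge at any window size
        statusStrip1.Dock = DockStyle.Bottom;
    }
}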
A: By making use of anchors and docks you should be able to create a WinForm which scales to any monitor size.
It would be helpful if you could edit your question and include the designer code so we can see what's happening.
A: In order to make the form resize as you want, you can use table layout panels to set your layout, and then use the Anchor property of the controls to set where they should move when the form is resized.
The Anchor property simply anchors the control to a location. For example, if you anchor a text box to, say, the left, then on resize it will stay at the left. Or if you anchor it to both left and right, it will expand in both directions. Just explore them and it should work fine for you. | unknown | |
d17008 | val | Specifying popover-placement fixed the problem for me.
Example:
<input type="number"
popover-placement="top"
popover="This is some text that explains something"
popover-trigger="focus">
A: There seems to be a problem with the placement/position of tooltips and popovers. It has something to do with the changes to angular.isDefined, which works differently in AngularJS 1.2 and 1.3.
Here are a few directives to patch the issues by setting defaults
// Bootstrap UI fixes after upgrading to Angular 1.3
.directive('tooltip', function() {
return {
restrict: 'EA',
link: function(scope, element, attrs) {
attrs.tooltipPlacement = attrs.tooltipPlacement || 'top';
attrs.tooltipAnimation = attrs.tooltipAnimation || true;
attrs.tooltipPopupDelay = attrs.tooltipPopupDelay || 0;
attrs.tooltipTrigger = attrs.tooltipTrigger || 'mouseenter';
attrs.tooltipAppendToBody = attrs.tooltipAppendToBody || false;
}
}
})
.directive('popover', function() {
return {
restrict: 'EA',
link: function(scope, element, attrs) {
attrs.popoverPlacement = attrs.popoverPlacement || 'top';
attrs.popoverAnimation = attrs.popoverAnimation || true;
attrs.popoverPopupDelay = attrs.popoverPopupDelay || 0;
attrs.popoverTrigger = attrs.popoverTrigger || 'mouseenter';
attrs.popoverAppendToBody = attrs.popoverAppendToBody || false;
}
}
})
A: Perhaps you are using Angular 1.3.1, which breaks the popover; Angular 1.3.0 works. | unknown | |
d17009 | val | This code actually works fine. The only problem was that I was looking at the _i value of the moment object to check its value (this is the value used as the initial input when creating the object, not necessarily the current value).
Changing the console.log line to the following yields the expected / correct result:
console.log(event.offsetX, valueX.format('YYYY-MM-DD HH:mm:ss'), null, event.offsetY, valueY);
A: If you want to get the nearest x axis value you could do it this way:
onClick: function (event) {
const activeElements = this.getElementsAtXAxis(event);
const xval = this.scales['x-axis-0']._timestamps.data[activeElements[0]._index];
console.log(xval)
} | unknown | |
d17010 | val | Yes it is. See http://www.meteorpedia.com/read/Deploying_to_a_PaaS
In most cases this is as simple as using "meteor bundle",
demeteorizer, and then uploading the resulting files with your PaaS
provider's CLI deploy tool.
Demeteorizer wraps and extends Meteor’s bundle command by creating
something that more closely resembles a standard looking Node.js
application, complete with a package.json file for dependency
management.
$ cd /my/meteor/app
$ demeteorizer -o /my/node/app
$ cd /my/node/app
$ npm install
$ export MONGO_URL='mongodb://user:password@host:port/databasename?autoReconnect=true&connectTimeout=60000'
$ export PORT=8080
$ forever start main.js
Forever keeps your app running after a disconnect or crash, but not a reboot unless you manually add a boot entry.
The whole deploy is much easier using Meteor Up instead. Or maybe mups, though that doesn't even have updated docs.
To run a Meteor app in an Azure web app:
Azure Web App
Python 2.7
Websockets ON (optional)
WEBSITE_NODE_DEFAULT_VERSION 0.10.32 (default)
ROOT_URL http://webapp.azurewebsites.net
MONGO_URL mongodb://username:[email protected]:36648/dbname (For advanced apps. Request log should say if you need it.)
Dev Machine
Install Visual Studio Community 2015
Install Node 0.12.6
Install Meteor MSI
app> demeteorizer -o ..\app-dem
app-dem\programs\server\packages\webapp.js change .PORT line to "var localPort = process.env.PORT"
app-dem\package.json change "node": "0.10.36" to "node": "0.12.6"
app-dem> npm install
app-dem> git init
app-dem> git add -A .
app-dem> git commit -m "version 1.0 demeteorized Meteor + tweaks"
app-dem> git remote add azure https://[email protected]:443/webapp.git
app-dem> git config http.postBuffer 52428800
app-dem> git push azure master
Instead of demeteorizer -o, perhaps you could use meteor build and create a package.json in the output root:
{
"name": "App name",
"version": "0.0.1",
"main": "main.js",
"scripts": {
"start": "node main.js"
},
"engines": {
"node": "0.12.6"
}
}
If bcrypt doesn't compile, make sure to use a more recent version:
"dependencies": {
"bcrypt": "https://registry.npmjs.org/bcrypt/-/bcrypt-0.8.4.tgz"
}
A: Before starting, make sure you have installed a 32-bit version of Node.js and have run "npm -g install fibers" on your Windows build machine. The default Node.js on Azure runs 32-bit only!
Note: this will not work if you're using, for example, the spiderable package, which relies on PhantomJS. PhantomJS apparently cannot be executed in a web app on Azure.
*In your project, run "meteor build ..\buildOut" and extract the .tar.gz file located in "..\buildOut".
*Place/create in "..\buildOut\bundle" a "package.json" containing:
{
"name": "AppName",
"version": "0.0.1",
"main": "main.js",
"scripts": {
"start": "node main.js"
},
"engines": {
"node": "0.12.6"
}
}
Note: Make sure "name" doesn't contain spaces, or the deploy on Azure will fail.
*In your favorite shell, go to "..\buildOut\bundle\programs\server" and run "npm install". This will pre-download all the requirements and build them.
*Now open the file "..\buildOut\bundle\programs\server\packages\webapp.js" and search for "process.env.PORT".
it looks like this:
var localPort = parseInt(process.env.PORT) || 0;
alter this line into:
var localPort = process.env.PORT || 0;
This is needed so your Meteor project can accept a named socket as soon as it runs in Node. The function "parseInt" will not let a string go through; the named socket is a string located in your web app's environment. This may be done for a reason, so a warning here! Now save this change and we are almost done...
*Solve the bcrypt issue: Download this file and extract it somewhere: https://registry.npmjs.org/bcrypt/-/bcrypt-0.8.4.tgz
Extract it.
Now replace the files located: "..\buildOut\bundle\programs\server\npm\npm-bcrypt\node_modules\bcrypt*"
with the directories and files from where you extracted it: ".\bcrypt-0.8.4\package*"
Now, in the shell, go to the directory "..\buildOut\bundle\programs\server\npm\npm-bcrypt\node_modules\bcrypt\" and make sure you remove the "node_modules" directory. If the node_modules directory is not removed, npm will not build the package for some reason.
Run "npm install" in the shell.
Make sure you set the environment variables "MONGO_URL" and "ROOT_URL" in the portal for your web app.
If everything worked without an error, you can deploy your app to the git repository on the deployment slot for your web app. Go to "..\buildOut\bundle" and commit the files there to the deployment slot's repository. This will cause the deploy on the deployment slot and create the needed IIS configuration file(s).
Now wait a little and your app should fire up after some time... Your app should be running and you can access it at *.azurewebsites.net
Thanks to all that made this possible. | unknown | |
d17011 | val | Your problem is that you are referring to sold_quantity here :
select(tp.package_rate * sold_quantity )
The alias is not recognized at this point. You will have to replace it with sum(sell). You will also have to group by tp.package_rate.
Your query should ideally be like :
select tp.package_rate, sum(sell) as sold_quantity ,
(tp.package_rate * sum(sell) ) as sell_amount from tbl_ticket_package tp
where tp.event_id=1001 group by tp.package_rate;
I am guessing that tp.package_rate is unique for a given event_id, from the latter part of your question. If that's not the case, the SQL you have written makes no sense. | unknown | |
d17012 | val | I think that one of the best solutions is the use of Collections.sort
Collections.sort(posts, new Comparator<WPPost>() {
@Override
public int compare(WPPost o1, WPPost o2) {
return o2.getRating() - o1.getRating();
}
});
In some implementations Collections.sort uses a merge sort algorithm, which will give you O(n log n) complexity. | unknown | |
d17013 | val | You are just declaring the functions; you haven't made a call to any of them at all. Use the sample below and change your program accordingly.
Example:
#include <stdio.h>

void function(int i_array[]); /* declare before use */

int main()
{
    int i_array[] = {0,1,2,3,4,5};
    function(i_array); // Call a function
    printf("%d", i_array[0]); // will print 100, not 0
    return 0;
}

void function(int i_array[])
{
    i_array[0] = 100;
} | unknown | |
d17014 | val | Use your query as a subselect:
SELECT *
FROM (SELECT a.name ...) dummy
WHERE distance < 500.0; | unknown | |
d17015 | val | Your suggested method is good practice: you should try to flatten your data structure as much as possible.
I'd suggest using the user's ID for the membership of each address so it's easy to identify though. This way you can obtain a list of the members of "Sherman Street" from /addresses/Sherman Street and then match the keys listed within there to the users at /users/ with ease.
{
"users": {
"oXrJPVZsnMP3VKp9palSBfdnntk1": { ... },
"xQDx3ntavhV3c02KFRPS8lxYfC62": { ... }
},
"addresses": {
"Sherman Street": {
"members": {
"xQDx3ntavhV3c02KFRPS8lxYfC62": true
}
},
"Wallaby Way": {
"members": {
"oXrJPVZsnMP3VKp9palSBfdnntk1": true
}
}
}
}
You can also add backward linking by adding an address field to the user which matches the address name:
{
"users": {
"oXrJPVZsnMP3VKp9palSBfdnntk1": { ... },
"xQDx3ntavhV3c02KFRPS8lxYfC62": {
"username": "Jannie",
"address": "Sherman Street"
}
},
"addresses": {
"Sherman Street": { ... }
}
}
Using both together makes it easy to identify what users have selected which addresses, irrespective of which object you are currently handling within the app.
See the Firebase documentation on database structure for further details on structuring your data like this. | unknown | |
d17016 | val | I am afraid that there is currently no such feature in Azure DevOps to group the approval requests of service endpoints.
Actually, the resource owner does not need to click multiple times to approve each individual approval request for a pipeline run. He can simply click Approval All to approve them at once.
In my test YAML pipeline, I included several service endpoints. When I run the pipeline, the resource owner receives the approval requests. He can click the Approval All button to approve all the requests in a single click, or he can choose to approve each specific approval request.
If the above Approval All feature in Azure DevOps cannot achieve what you expect for grouping the approval requests of service endpoints, you can suggest a feature to the Microsoft development team. Hopefully they will consider implementing this feature in a future sprint. | unknown | |
d17017 | val | Don't use two separate ways of attaching handlers when you only need one. Inline event handlers are essentially eval inside HTML markup - they're bad practice and result in poorly factored, hard-to-manage code. Seriously consider attaching your events with JavaScript, instead.
The problem is that when assigning the handler via onclick, the this in changeUrl is undefined, because the calling context is global. Feel free to avoid using this when it can cause confusion.
Just use addEventListener alone. Also, you'll have to use getAttribute('href') instead of .href because divs are not supposed to have href properties.
const Pokemon_ID = '5';
document.getElementById('right-btn').addEventListener('click', function(e) {
// location.href = e.target.getAttribute('href') + '?id=' + Pokemon_ID;
console.log('changing URL to ' + e.target.getAttribute('href') + '?id=' + Pokemon_ID);
});
<div id="right-btn" href="pokedex.php">text</div>
A: Try this instead:
location.href += '?id=' + Pokemon_ID;
A: Because you call changeUrl() within the onclick method you lose the context of this. this in changeUrl is not your div. You may have to pass this into the method with changeUrl(this), or just pass the href with changeUrl(this.href).
Then use:
function changeUrl(target){
location.href=target.href+'?id='+Pokemon_ID;
}
A: As mentioned by CertainPerformance above, you are not passing the right arguments for your function to work correctly. Using your code as a reference, you can pass the original event to your changeUrl() function, then use e.target to get to your 'right-btn' element.
Javascript:
var Pokemon_ID = 1;
function changeUrl(e) {
var href = e.target.getAttribute('href');
console.log(href +'?id=' + Pokemon_ID);
return false;
}
document.getElementById( 'right-btn' ).onclick = function(e) {
changeUrl(e);
};
HTML:
<div id="right-btn" href="pokedex.php">Click Me 4</div>
However, if you really want to use this in your function to refer to the 'right-btn' element, then you can change the code to:
Javascript:
var Pokemon_ID = 1;
function changeUrl() {
var href = this.getAttribute('href');
console.log(href +'?id=' + Pokemon_ID);
return false;
}
document.getElementById( 'right-btn' ).onclick = function(e) {
changeUrl.call(e.target);
};
The changes being the call in the event handler:
changeUrl.call(e.target);, which calls your function in the 'context' of e.target, making the this in your changeUrl() function refer to the element. Then you can use this as in var href = this.getAttribute('href'); | unknown | |
d17018 | val | You can use to_char to format the number as a string.
So, for the format you have specified
to_char(money, '9,99,999.99' );
Would return 8,80,856.00
example
It should be noted that this is simply the length of the string provided in the first answer, and a longer second argument can be provided to properly format the number as desired.
So:
to_char(11111111111111.11, '9,99,999,99,9,99,999.99')
Would return 1,11,111,11,1,11,111.11
extra length example
This will also automatically adjust for the number of digits provided in the first argument, for instance:
to_char(5856.00, '9,99,999.99');
Would return 5,856.00
example 2
From the Oracle docs:
https://www.oradev.com/oracle_number_format.jsp
Here's an old stackoverflow post on the topic: oracle sql query to format number by comma | unknown | |
d17019 | val | One solution is to use Batch as the x values and Yield as the y values. A line is added with stat_summary() and the argument fun.y=mean to get the mean value of Yield. Then coord_flip() is used to put Batch on the y axis. To change the order of the Batch values you can use the reorder() function inside the aes() of ggplot().
ggplot (Dyestuff, aes (reorder(Batch,Yield), Yield)) + geom_jitter(aes(colour=Batch))+
stat_summary(fun.y=mean,geom="line",aes(group=1))+
coord_flip() | unknown | |
d17020 | val | You always scan the first value of text, because you forgot to move the input for strtoul right after the end of the previous scan. That's what the **end parameter of strtoul is good for: it points to the character right after the last digit of a successful scan. Note: if nothing could be read, the end pointer is equal to the input pointer, and this indicates a wrongly formatted number or the end of the string. See the following program illustrating this. Hope it helps.
#include <stdio.h>
#include <stdlib.h>

int main() {
const char* input = "80 9c 95 95 96 11 bc 96 b9 95 9d 10";
const char *current = input;
char *end = NULL;
while (1) {
unsigned long val = strtoul(current, &end, 16);
if (current == end) // invalid input or end of string reached
break;
printf("val: %lX\n", val);
current = end;
}
}
A: This is also a possible solution, using strchr and strtoul(tmp,NULL,16);
In your solution remember that size_t arr_len = (len/3) + 1; since the last token is only 2 bytes long.
Input text tokens are converted to bytes and stored in char array:
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
int main(void)
{
char text[] = "80 9c 95 95 96 11 bc 96 b9 95 9d 10";
size_t len = strlen(text);
size_t arr_len = (len/3) + 1;
printf("len= %zu arr_len= %zu\n", len, arr_len);
printf("Text:\n%s\n", text);
char array[arr_len];
const char *p1 = text;
char tmp[3];
tmp[2] = 0;
printf("Parsing:\n");
for(size_t i=0; i< arr_len; i++)
{
p1 = strchr(p1,' ');
if(p1)
{
tmp[0] = *(p1-2);
tmp[1] = *(p1-1);
array[i]= (char)strtoul(tmp,NULL,16);
printf("%2x ", (unsigned char) array[i]);
p1++;
if(strlen(p1) == 2 ) // the last char
{
i++;
tmp[0] = *(p1);
tmp[1] = *(p1+1);
array[i]= (char)strtoul(tmp,NULL,16);
printf("%2x", (unsigned char) array[i]);
}
}
}
printf("\nArray content:\n");
for(size_t i=0; i< arr_len; i++)
{
printf("%2x ", (unsigned char) array[i]);
}
return 0;
}
Test:
len= 35 arr_len= 12
Text:
80 9c 95 95 96 11 bc 96 b9 95 9d 10
Parsing:
80 9c 95 95 96 11 bc 96 b9 95 9d 10
Array content:
80 9c 95 95 96 11 bc 96 b9 95 9d 10 | unknown | |
d17021 | val | I realized we don't need a CTE to do this, you can simply do:
SELECT TOP(1) month, COUNT(*) FROM newspaper
CROSS JOIN months
WHERE (start_month<=month) & (end_month>=month)
GROUP BY month
ORDER BY 2 DESC
;
This will grab the top row, and it will be ordered by the highest count. I am unsure of the language used by CodeAcademy, but every language I know of can grab the top row in some fashion.
Edit:
I see someone posted that CodeAcademy uses SQLite, which uses LIMIT to get X rows. So you can use:
SELECT month, COUNT(*) FROM newspaper
CROSS JOIN months
WHERE (start_month<=month) & (end_month>=month)
GROUP BY month
ORDER BY 2 DESC
LIMIT 1
; | unknown | |
d17022 | val | You can do this in many ways.
Via CodeIgniter 3.x
First load the form validation library in your controller function and set the rules, and add placeholders for showing errors in the view file. See for reference:
https://codeigniter.com/userguide3/libraries/form_validation.html#the-controller
https://codeigniter.com/userguide3/libraries/form_validation.html#showing-errors-individually
Via HTML
You can simply do this by adding the required attribute to all HTML elements which are required; the browser will show a warning automatically after clicking the submit button for required fields. See the example for an input element:
<input type="text" name="nama_pelapor" required class="form-control">
And for a dropdown:
<select class="form-control" required name="jenis_laporan">
<option>Penemuan hewan</option>
<option>Kehilangan Hewan</option>
</select>
Via jQuery
If you want to customize error messages, you can do this via your own code in jQuery or the jQuery validation library. See this documentation for jQuery validation:
https://jqueryvalidation.org/validate/
A: The comparison operator should be double == not single =. Edit your line of code to this:
if ($gambar =='') {} else{ | unknown | |
d17023 | val | My guess is that it's your StreamWriter that is chunking your data. Try setting AutoFlush = true.
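A minimal C# sketch of what that looks like (stream and payload are placeholders for whatever you are already writing through):
using System.IO;

static class ImmediateWriter
{
    public static void WriteImmediately(Stream stream, string payload)
    {
        using (var writer = new StreamWriter(stream))
        {
            // Without AutoFlush, StreamWriter buffers internally and may send the data in chunks
            writer.AutoFlush = true;
            writer.Write(payload);
        }
    }
} | unknown | |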
d17024 | val | If you're just iterating over that series to build a list of floats, you could instead use astype(float).
It seems like you have some values in that column, though, that cannot be converted to float. For the sake of troubleshooting, maybe just try
for alpha in zip(df['age_in_years']):
try:
X_parameter.append([float(alpha)])
except:
print alpha
You should be able to replace that whole function using
X = pd.read_csv(file_name).drop_duplicates()['age_in_years'].astype(float) | unknown | |
d17025 | val | It's actually quite simple. You have points A (A.x, A.y) and B (B.x, B.y) and need the update for your character position.
Start by calculating the direction vector dir = B - A (subtract component-wise, such that dir.x = B.x - A.x; dir.y = B.y - A.y).
If you add this entire vector to your character's position, you will move it by sqrt(dir.x^2 + dir.y^2) (Pythagorean theorem). Hence, the speed will be: speed = sqrt(dir.x^2 + dir.y^2) / frameTime.
So if you want a constant speed, you have to find a multiple of the direction vector. This will be:
update = dir * speed / sqrt(dir.x^2 + dir.y^2) * frameTime
characterPosition = characterPosition + update
Don't bother with angle calculations. Vector arithmetic is usually way more robust and expressive.
A: Since you didn't provide code, I can only guess what exactly you are trying to do.
So I'll start with some basics.
I guess you are not just trying to make a program, but the basic movement of a 2D-based game engine.
However, constant movement is based on an update method
that reads the player input about 30 times per second while rendering.
Example from my Projects:
public void run() {
boolean isRunning = true;
long lastNanoTime = System.nanoTime(),nowNanoTime = System.nanoTime();
final double secondAsNano = 1000000000.0;
final double sixtiethOfSecondAsNano = secondAsNano / 30.0;
int delta = 1;
while(isRunning){
nowNanoTime = System.nanoTime();
if(delta * sixtiethOfSecondAsNano + lastNanoTime < nowNanoTime){
update();
delta++;
if(delta == 30){
delta = 1;
lastNanoTime = nowNanoTime;
}
}
render();
}
}
public update(){
if(Input.moveUP) setPlayerY(getPlayerY() - 0.2);
if(Input.moveDOWN) setPlayerY(getPlayerY() + 0.2);
if(Input.moveRIGHT) setPlayerX(getPlayerX() + 0.2);
if(Input.moveLEFT) setPlayerX(getPlayerX() - 0.2);
}
I just cut it down to be easily readable, so it might not work correctly, but it should show you how it's basically done.
A: Mathematically (I won't provide code, sorry), assuming you can move in any direction on a two-dimensional plane, off the top of my head it could be something like this (taken from old-school geometry):
*Having a speed, let's say 20 pixels per second (it could also be any other unit you choose, including in-game distance)
*Having a polling system, where you have 2 main variables: the last known coords for the char (point A: Ax, Ay), the time of the last known update.
*Having the time of the current update.
*Having the coords for the destination (point B: Bx, By).
*Figure out the current position of your character, which could be done like this (without converting from cartesian to polar coord system):
*Figure out angle of movement: find the deltas for X and Y (Dx=Bx-Ax and Dy=By-Ay respectively), and use tangent to find the angle where Angle = tan-1(Dy/Dx).
*Figure out the travelled distance (TD) from last poll to current poll where TD = speed * elapsed time
*Figure out the coords for the new position (Cx and Cy) using sine and cosine, where travelled X is Tx=TD*cos(Angle) and travelled Y is Ty=TD*sin(Angle).
*Now add the travelled distances to your original coords, and you get the current position. Where Cx=Ax+Tx and Cy=Ay+Ty.
How "smooth" movement is depends highly on the quality of your polling and somehow also on rounding for small distances.
A: I ended up figuring this out almost an hour after posting this. Sorry that my question was so vague; what I was looking for was the math behind what I was trying to do, not the code itself, but thanks to the people who answered. Here is the solution I came up with in code:
if (q.get(0)[0][0] > q.get(0)[1][0]) {
if(q.get(0)[0][0] == q.get(0)[1][0]) {
currentLocation[0] -= 5 * Math.cos((Math.atan(((double) q.get(0)[0][1] - (double) q.get(0)[1][1]) / ((double) q.get(0)[0][0] - (double) q.get(0)[1][0]))));
currentLocation[1] += 5 * Math.sin((Math.atan(((double) q.get(0)[0][1] - (double) q.get(0)[1][1]) / ((double) q.get(0)[0][0] - (double) q.get(0)[1][0]))));
}
else{
currentLocation[0] -= 5 * Math.cos((Math.atan(((double) q.get(0)[0][1] - (double) q.get(0)[1][1]) / ((double) q.get(0)[0][0] - (double) q.get(0)[1][0]))));
currentLocation[1] -= 5 * Math.sin((Math.atan(((double) q.get(0)[0][1] - (double) q.get(0)[1][1]) / ((double) q.get(0)[0][0] - (double) q.get(0)[1][0]))));
}
} else {
if(q.get(0)[0][0] == q.get(0)[1][0]) {
currentLocation[0] += 5 * Math.cos((Math.atan(((double) q.get(0)[0][1] - (double) q.get(0)[1][1]) / ((double) q.get(0)[0][0] - (double) q.get(0)[1][0]))));
currentLocation[1] -= 5 * Math.sin((Math.atan(((double) q.get(0)[0][1] - (double) q.get(0)[1][1]) / ((double) q.get(0)[0][0] - (double) q.get(0)[1][0]))));
}
else{
currentLocation[0] += 5 * Math.cos((Math.atan(((double) q.get(0)[0][1] - (double) q.get(0)[1][1]) / ((double) q.get(0)[0][0] - (double) q.get(0)[1][0]))));
currentLocation[1] += 5 * Math.sin((Math.atan(((double) q.get(0)[0][1] - (double) q.get(0)[1][1]) / ((double) q.get(0)[0][0] - (double) q.get(0)[1][0]))));
}
}
I figured out a way to get the result I wanted, though I probably overcomplicated it. q is an ArrayList that holds 2D arrays that are 2x2 [a/b][x/y], and currentLocation is a 2-index array that's just [x/y]. The result is the effect I wanted, where it draws a line in (X units) a pass from point a to b in any direction at the same speed. This question was poorly worded and I'm sorry for wasting people's time. | unknown | |
d17026 | val | Before I share some numbers, I'd highly recommend not performing such premature optimizations. Consider the following code:
private func getAttributedString() -> NSMutableAttributedString{
let attributedString = NSMutableAttributedString(string: "Something ")
attributedString.append(NSAttributedString(string: "Enabled",
attributes: [NSForegroundColorAttributeName: UIColor(rgb: 0xCD0408)]))
return attributedString
}
//overwrites attributed text 100000 times
@IBAction func overwriteAttributedText(_ sender: Any) {
let timeBeforeAction = Date.init()
print ("Time taken to overwrite attributed text is ")
for _ in 1 ... 100000{
label.attributedText = getAttributedString()
}
let timeAfterAction = Date.init()
let timeTaken = timeAfterAction.timeIntervalSince(timeBeforeAction)
print(timeTaken)
}
//overwrites attributed text 100 times
@IBAction func cacheAttributedText(_ sender: Any) {
let timeBeforeAction = Date.init()
print ("Time taken to selectively overwrite attributed text is ")
for i in 1 ... 100000{
if i % 1000 == 0 {
label.attributedText = getAttributedString()
}
}
let timeAfterAction = Date.init()
let timeTaken = timeAfterAction.timeIntervalSince(timeBeforeAction)
print(timeTaken)
}
//overwrites text 100000 times
@IBAction func overWriteText(_ sender: Any) {
let defaultText = "Hello World"
let timeBeforeAction = Date.init()
print ("Time taken to overwrite text is ")
for _ in 1 ... 100000{
label.text = defaultText
}
let timeAfterAction = Date.init()
let timeTaken = timeAfterAction.timeIntervalSince(timeBeforeAction)
print(timeTaken)
}
Here are the results:
Time taken to overwrite attributed text is 0.597925961017609
Time taken to selectively overwrite attributed text is 0.004891037940979
Time taken to overwrite text is 0.0462920069694519
The results speak for themselves, but I leave it you if such optimizations are even needed.
A: While 7to4 is correct in regards to premature optimizations, the demo code he's using is misleading. The .attributedText setter itself is actually faster than setting .text; creating the attributed string is the bottleneck. Here's a modified version of his code that includes a method where the attributed string is pre-cached:
private func getAttributedString() -> NSMutableAttributedString{
let attributedString = NSMutableAttributedString(string: "Something ")
attributedString.append(NSAttributedString(string: "Enabled",
attributes: [NSAttributedStringKey.foregroundColor: UIColor.red]))
return attributedString
}
//overwrites attributed text 100000 times
@IBAction func overwriteAttributedText(_ sender: Any) {
let timeBeforeAction = Date.init()
print ("Time taken to overwrite attributed text is ")
for _ in 1 ... 100000{
label.attributedText = getAttributedString()
}
let timeAfterAction = Date.init()
let timeTaken = timeAfterAction.timeIntervalSince(timeBeforeAction)
print(timeTaken)
}
//overwrites attributed text 100000 times with a cached string
@IBAction func overwriteAttributedTextWithCachedString(_ sender: Any) {
let timeBeforeAction = Date.init()
let attributedString = getAttributedString()
print ("Time taken to overwrite attributed text with cached string is ")
for _ in 1 ... 100000{
label.attributedText = attributedString
}
let timeAfterAction = Date.init()
let timeTaken = timeAfterAction.timeIntervalSince(timeBeforeAction)
print(timeTaken)
}
//overwrites text 100000 times
@IBAction func overWriteText(_ sender: Any) {
let defaultText = "Hello World"
let timeBeforeAction = Date.init()
print ("Time taken to overwrite text is ")
for _ in 1 ... 100000{
label.text = defaultText
}
let timeAfterAction = Date.init()
let timeTaken = timeAfterAction.timeIntervalSince(timeBeforeAction)
print(timeTaken)
}
Results:
Time taken to overwrite attributed text is
0.509455919265747
Time taken to overwrite attributed text with cached string is
0.0451710224151611
Time taken to overwrite text is
0.0634149312973022
On average, the .attributedText setter itself is about 30~40% faster than the .text setter. That said, take note that it probably takes a lot of labels before this actually becomes a bottleneck. Also remember that if (by some crazy circumstance) this is your bottleneck, this optimization is only effective if the attributed string is pre-allocated ahead of time. | unknown | |
d17027 | val | Change this line
'edit' => site_url('admin/users_group_controller_update') .'/'. $this->getId($controller)
to
'edit' => site_url('admin/users_group_controller_update' .'/'. $this->getId($controller))
A: Within my files variable I had to use the db function to get it to work. No errors show now; all fixed.
<?php
class Users_group_update extends Admin_Controller {
public function __construct() {
parent::__construct();
$this->load->model('admin/user/model_user_group');
}
public function index() {
$data['title'] = "Users Group Update";
$controller_files = $this->getInstalled($this->uri->segment(3));
$data['controller_files'] = array();
$files = glob(FCPATH . 'application/modules/admin/controllers/*/*.php');
if ($files) {
foreach ($files as $file) {
$controller = basename(strtolower($file), '.php');
$do_not_list = array(
'customer_total',
'dashboard',
'footer',
'header',
'login',
'logout',
'menu',
'online',
'permission',
'register',
'user_total'
);
if (!in_array($controller, $do_not_list)) {
$this->db->where('name', $this->uri->segment(3));
$this->db->where('controller', $controller);
$query = $this->db->get($this->db->dbprefix . 'user_group');
if ($query->num_rows()) {
$row = $query->row();
$data['controller_files'][] = array(
'name' => $controller,
'installed' => in_array($controller, $controller_files),
'edit' => site_url('admin/users_group_controller_update' .'/'. $row->user_group_id)
);
}
}
}
}
$this->load->view('template/user/users_group_form_update', $data);
}
public function getInstalled($name) {
$controller_data = array();
$this->db->select();
$this->db->from($this->db->dbprefix . 'user_group');
$this->db->where('name', $name);
$query = $this->db->get();
foreach ($query->result_array() as $result) {
$controller_data[] = $result['controller'];
}
return $controller_data;
}
} | unknown | |
d17028 | val | If the +0000 part is always the same and doesn't matter, you can use:
DATE(STR_TO_DATE(my_field, '%b %d %H:%i:%s +0000 %Y'))
The used specifiers here are:
Specifier | Description
----------|------------
%b | Abbreviated month name (Jan..Dec)
%d | Day of the month, numeric (00..31)
%H | Hour (00..23)
%i | Minutes, numeric (00..59)
%s | Seconds (00..59)
%Y | Year, numeric, four digits
See the full list of specifiers in the documentation under DATE_FORMAT()
A: SELECT DATE_FORMAT(STR_TO_DATE('Jan 11 17:18:53 +0000 2011 ', '%b %d %H:%i:%s +0000 %Y'), '%Y-%m-%d');
A: I don't think that +0000 is a fixed number.
so I would use:
Select STR_TO_DATE('Jan 11 17:18:53 +0000 2011','%b %e %H:%i:%s +%f %Y')
A: You can use the following query to get the required result:
select TO_DATE('Jan 11 17:18:53 +0000 2011', 'Mon DD HH24:MI:SS +0000 YYYY')
For more formatters you can refer to the following link:
https://www.postgresql.org/docs/9.0/functions-formatting.html | unknown | |
d17029 | val | I assume that you're using class-based components. You can render the image which is captured from camera by setting the response photo to a local state and conditionally rendering it.
import React from "react";
import { Image } from "react-native";
import { Camera } from "expo-camera";
import Constants from "expo-constants";
import * as ImagePicker from "expo-image-picker";
import * as Permissions from "expo-permissions";
class Cameras extends React.Component {
state = {
captures: "",
};
takePicture = async () => {
if (this.camera) {
let photo = await this.camera.takePictureAsync({
base64: true,
});
this.props.cameraToggle(false);
this.setState({ captures: photo.uri });
}
};
render() {
const { captures } = this.state;
return (
<View flex={1}>
{this.state.captures ? (
<Image source={{ uri: captures }} style={{width:50,height:50}} />
) : (
<Camera {...pass in props} />
)}
</View>
);
}
}
export default Cameras;
Note:
A cleaner approach would be to pass the state captures as a route param by navigating to a different screen and rendering the image on that screen.
A: I am assuming you are storing the value of photo in cameraImage.
I am sharing a working solution with you using the Expo image picker; give it a try.
Your component
import * as ImagePicker from 'expo-image-picker';
import * as Permissions from 'expo-permissions';
<TouchableHighlight onPress={this._openCamera}>
<Text>Open Camera</Text>
</TouchableHighlight>
<Image source={this.state.mainImageUrl} />
Your function
_openCamera = async () => {
await Permissions.askAsync(Permissions.CAMERA);
try {
let result: any = await ImagePicker.launchCameraAsync({
base64: true,
allowsEditing: true,
aspect: [4, 3],
quality: 1,
});
if (!result.cancelled) {
this.setState({
mainImageUrl: { uri: `data:image/jpg;base64,${result.base64}` }
});
}
} catch (E) {
console.warn(E);
}
}
A: Your code:
let photo = await this.camera.takePictureAsync({
base64: true,
});
this.props.image(photo)
The takePictureAsync call returns an object, so your base64 data is in the field photo.base64.
Solution 2
You can store the URI into the state without base64.
const [photoUri, setPhotoUri] = React.useState(null);
const cameraRef = React.useRef(null);
const takePhoto = async () => {
setPhotoUri(null);
const camera = cameraRef.current;
if(!camera) return;
try {
const data = await camera.takePictureAsync();
setPhotoUri(data.uri);
} catch (error) {
console.log('error: ', error);
}
}
Then use it.
return (
<Image source={{uri: photoUri}} />
);
Hope it helps. | unknown | |
d17030 | val | To connect your SparkR session to Elasticsearch you need to make the connector jar and your ES configuration available to your SparkR session.
1: specify the jar (look up which version you need in the Elasticsearch documentation; the version below is for Spark 2.x, Scala 2.11 and ES 6.8.0)
sparkPackages <- "org.elasticsearch:elasticsearch-spark-20_2.11:6.8.0"
2: specify your cluster config in your SparkConfig. You can add other Elasticsearch config here, too (and, of course, additional spark configs)
sparkConfig <- list(es.nodes = "your_comma-separated_es_nodes",
es.port = "9200")
3: initiate a SparkR session
sparkR.session(master="your_spark_master",
sparkPackages=sparkPackages,
sparkConfig=sparkConfig)
4: do some magic that results in a SparkDataFrame you want to save to ES
*write your dataframe to ES:
write.df(yourSparkDF, source="org.elasticsearch.spark.sql",
path= "your_ES_index_path"
) | unknown | |
d17031 | val | Compare the speed of what you're doing now against querying the keys one-at-a-time using a keys-only query. If there isn't a clear winner, take the keys-only query, since it costs less. | unknown | |
d17032 | val | Explanation:
*Your code is indeed inefficient because you are calling setValue()
and setBackground() 9 times each when you can simply use
setValues() and setBackgrounds() once instead. It will get even more inefficient for a larger number of iterations.
*There is a little trick you need to do and that is to convert the data (nrows,ncolumns) returned by getRange(activeRow,22,1,9).getValues() with a shape of (1,9) to a shape of (9,1) and do the same for the background colors as well:
var rowValues=registerSheet.getRange(activeRow,22,1,9).getValues().flat().map(r=>[r]); // V:AD
var rowBackcolors = registerSheet.getRange(activeRow,22,1,9).getBackgrounds().flat().map(r=>[r]); // V:AD
*After that you can simply set the data directly to the destination sheet:
var range = form.getRange('C54:C62'); // C54:C63
range.setValues(rowValues);
range.setBackgrounds(rowBackcolors);
Solution:
Replace the for loop with this:
var rowValues=registerSheet.getRange(activeRow,22,1,9).getValues().flat().map(r=>[r]); // V:AD
var rowBackcolors = registerSheet.getRange(activeRow,22,1,9).getBackgrounds().flat().map(r=>[r]); // V:AD
var range = form.getRange('C54:C62'); // C54:C63
range.setValues(rowValues);
range.setBackgrounds(rowBackcolors);
Related article:
What does the range method getValues() return and setValues() accept? | unknown | |
d17033 | val | I really haven't done much with Bayesian posterior distributions (and not for a while), but I'll try to help with what you've given. First,
k!(N-k)! / (N+1)! = 1 / (B(N,k) * (N + 1))
and you can calculate the binomial coefficients in Matlab with nchoosek() though it does say in the docs that there can be accuracy problems for large coefficients. How big are N and k?
Second, according to Mathematica,
integralFromZeroToOne( q^k * (1-q)^(N-k) ) = pi * csc((k-N)*pi) * Gamma(1+k)/(Gamma(k-N) * Gamma(2+N))
where csc() is the cosecant function and Gamma() is the gamma function. However, Gamma(x) = (x-1)! which we'll use in a moment. The problem is that we have a function Gamma(k-N) on the bottom and k-N will be negative. However, the reflection formula will help us with that so that we end up with:
= (N-k)! * k! / (N+1)!
Apparently, your notes were correct.
A: Let q be the probability of mis-classification. Then the probability that you would observe k mis-classifications in N runs is given by:
P(k|N,q) = B(N,k) q^k (1-q)^(N-k)
You need to then assume a suitable prior for q which is bounded between 0 and 1. A conjugate prior for the above is the beta distribution. If q ~ Beta(a,b) then the posterior is also a Beta distribution. For your info the posterior is:
f(q|-) ~ Beta(a+k,b+N-k)
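For completeness, the conjugacy step written out (this is just Bayes' rule combined with the Beta(a,b) prior density, with a and b the prior parameters above):
f(q \mid k, N) \propto q^{k}(1-q)^{N-k} \cdot q^{a-1}(1-q)^{b-1} = q^{a+k-1}(1-q)^{b+N-k-1}
\Rightarrow \; q \mid k, N \sim \mathrm{Beta}(a+k,\; b+N-k)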
Hope that helps. | unknown | |
d17034 | val | For the first part, assuming that you have a function solve[m] and a range of values for m={1,2,3,...}, you can use:
Map[solve, m]
I'm not sure what you mean by "fixing it", but this will give you an array, which you can investigate further. | unknown | |
d17035 | val | From my point of view your calculated member will be something like:
SET [Used Keys] AS
NONEMPTY([Key].[Key].[Key], [Measures].[Count])
MEMBER [AVG Keys] AS
AVG(
[Date].[Month].&[2017-01].Children,
DistinctCount([Used Keys])
) | unknown | |
d17036 | val | (edit: as far as directly answering your question about R values, see below)
One way to approach this would be to use cross-correlation. Bear in mind that you have to normalize amplitudes and correct for delays: if you have signal S1, and signal S2 is identical in shape, but half the amplitude and delayed by 3 samples, they're still perfectly correlated.
For example:
>> t = 0:0.001:1;
>> y = @(t) sin(10*t).*exp(-10*t).*(t > 0);
>> S1 = y(t);
>> S2 = 0.4*y(t-0.1);
>> plot(t,S1,t,S2);
These should have a perfect correlation coefficient. A way to compute this is to use maximum cross-correlation:
>> f = @(S1,S2) max(xcorr(S1,S2));
f =
@(S1,S2) max(xcorr(S1,S2))
>> disp(f(S1,S1)); disp(f(S2,S2)); disp(f(S1,S2));
12.5000
2.0000
5.0000
The maximum value of xcorr() takes care of the time-delay between signals. As far as correcting for amplitude goes, you can normalize the signals so that their self-cross-correlation is 1.0, or you can fold that equivalent step into the following:
ρ2 = f(S1,S2)2 / (f(S1,S1)*f(S2,S2);
In this case ρ2 = 5 * 5 / (12.5 * 2) = 1.0
You can solve for ρ itself, i.e. ρ = f(S1,S2)/sqrt(f(S1,S1)*f(S2,S2)), just bear in mind that both 1.0 and -1.0 are perfectly correlated (-1.0 has opposite sign)
Try it on your signals!
with respect to what threshold to use for acceptance/rejection, that really depends on what kind of signals you have. 0.9 and above is fairly good but can be misleading. I would consider looking at the residual signal you get after you subtract out the correlated version. You could do this by looking at the time index of the maximum value of xcorr():
>> t = 0:0.001:1;
>> y = @(a,t) sin(a*t).*exp(-a*t).*(t > 0);
>> S1=y(10,t);
>> S2=0.4*y(9,t-0.1);
>> f(S1,S2)/sqrt(f(S1,S1)*f(S2,S2))
ans =
0.9959
This looks pretty darn good for a correlation. But let's try fitting S2 with a scaled/shifted multiple of S1:
>> [A,i]=max(xcorr(S1,S2)); tshift = i-length(S1);
>> S2fit = zeros(size(S2)); S2fit(1-tshift:end) = A/f(S1,S1)*S1(1:end+tshift);
>> plot(t,[S2; S2fit]); % fit S2 using S1 as a basis
>> plot(t,[S2-S2fit]); % residual
Residual has some energy in it; to get a feel for how much, you can use this:
>> S2res=S2-S2fit;
>> dot(S2res,S2res)/dot(S2,S2)
ans =
0.0081
>> sqrt(dot(S2res,S2res)/dot(S2,S2))
ans =
0.0900
This says that the residual has about 0.81% of the energy (9% of the root-mean-square amplitude) of the original signal S2. (the dot product of a 1D signal with itself will always be equal to the maximum value of cross-correlation of that signal with itself.)
I don't think there's a silver bullet for answering how similar two signals are with each other, but hopefully I've given you some ideas that might be applicable to your circumstances.
A: A good starting point is to get a sense of what a perfect match will look like by calculating the auto-correlations for each signal (i.e. do the "cross-correlation" of each signal with itself).
A: THIS IS A COMPLETE GUESS - but I'm guessing max(abs(xcorr(S(1,:),X(1,:)))) > 0.8 implies success. Just out of curiosity, what kind of values do you get for max(abs(xcorr(S(1,:),X(2,:))))?
Another approach to validate your algorithm might be to compare A and W. If W is calculated correctly, it should be A^-1, so can you calculate a measure like |A*W - I|? Maybe you have to normalize by the trace of A*W.
Getting back to your original question, I come from a DSP background, so I get to deal with fairly noise-free signals. I understand that's not a luxury you get in biology :) so my 0.8 guess might be very optimistic. Perhaps looking at some literature in your field, even if they aren't using cross-correlation exactly, might be useful.
A: Usually in such cases people talk about "false acceptance rate" and "false rejection rate".
The first one describes how many times the algorithm says "similar" for non-similar signals; the second one is the opposite.
Selecting a threshold thus becomes a trade-off between these criteria. To make FAR=0, threshold should be 1, to make FRR=0 threshold should be -1.
So probably, you will need to decide which trade-off between FAR and FRR is acceptable in your situation and this will give the right value for threshold.
Mathematically this can be expressed in different ways. Just a couple of examples:
1. fix one of the rates at an acceptable value and minimize the other one
2. minimize max(FRR,FAR)
3. minimize aFRR+bFAR
A: Since they should be equal, the correlation coefficient should be high, between .99 and 1. I would take the max and abs functions out of your calculation, too.
EDIT:
I spoke too soon. I confused cross-correlation with correlation coefficient, which is completely different. My answer might not be worth much.
A: I would agree that the result would be subjective. Something that would involve the sum of the squares of the differences, element by element, would have some value. Two identical arrays would give a value of 0 in that form. You would have to decide what value then becomes "bad". Make up 2 different vectors that "aren't too bad" and find their cross-correlation coefficient to be used as a guide.
(parenthetically: if you were doing a correlation coefficient where 1 or -1 would be great and 0 would be awful, I've been told by bio-statisticians that a real-life value of 0.7 is extremely good. I understand that this is not exactly what you are doing but the comment on correlation coefficient came up earlier.) | unknown | |
d17037 | val | You can use realloc:
#include <stdio.h>
#include <string.h>
#include <stdlib.h>
#include <dirent.h>
extern char *strdup(const char *src);
int scandir(char ***list, char dirname[], char const *ext)
/* Scans a directory and retrieves all files of given extension */
{
DIR *d = NULL;
struct dirent *dir = NULL;
size_t n = 0;
d = opendir(dirname);
if (d)
{
while ((dir = readdir(d)) != NULL)
{
if (has_extension(dir->d_name, ext))
{
*list = realloc(*list, sizeof(**list) * (n + 1));
(*list)[n++] = strdup(dir->d_name);
}
}
closedir(d);
}
return n;
}
int main(void)
{
char **list = NULL;
size_t i, n = scandir(&list, "/your/path", "jpg");
for (i = 0; i < n; i++) {
printf("%s\n", list[i]);
free(list[i]);
}
free(list);
return 0;
}
Note that strdup() is not a standard function but it's available on many implementations.
As an alternative to realloc you can use a singly linked list of strings.
EDIT: As pointed out by @2501, it is better to return an allocated array of strings from scandir and pass elems as a parameter:
#include <stdio.h>
#include <string.h>
#include <stdlib.h>
#include <dirent.h>
extern char *strdup(const char *src);
char **scandir(char dirname[], char const *ext, size_t *elems)
/* Scans a directory and retrieves all files of given extension */
{
DIR *d = NULL;
struct dirent *dir = NULL;
char **list = NULL;
d = opendir(dirname);
if (d)
{
while ((dir = readdir(d)) != NULL)
{
if (has_extension(dir->d_name, ext))
{
list = realloc(list, sizeof(*list) * (*elems + 1));
list[(*elems)++] = strdup(dir->d_name);
}
}
closedir(d);
}
return list;
}
int main(void)
{
size_t i, n = 0;
char **list = scandir("/your/path", "jpg", &n);
for (i = 0; i < n; i++) {
printf("%s\n", list[i]);
free(list[i]);
}
free(list);
return 0;
}
Finally, do you really need an array? Consider using a call-back function:
#include <stdio.h>
#include <string.h>
#include <dirent.h>
void cb_scandir(const char *src)
{
/* do whatever you want with the passed dir */
printf("%s\n", src);
}
int scandir(char dirname[], char const *ext, void (*callback)(const char *))
/* Scans a directory and retrieves all files of given extension */
{
DIR *d = NULL;
struct dirent *dir = NULL;
size_t n = 0;
d = opendir(dirname);
if (d)
{
while ((dir = readdir(d)) != NULL)
{
if (has_extension(dir->d_name, ext))
{
callback(dir->d_name);
n++;
}
}
closedir(d);
}
return n;
}
int main(void)
{
scandir("/your/path", "jpg", cb_scandir);
return 0;
} | unknown | |
d17038 | val | Looks like rbenv's installed under root. It should probably be installed under your (or your app user's) home directory, in this case for the user named 'deploy.'
This Passenger configuration line from nginx.conf shows where it's expected to live:
/home/deploy/.rbenv/shims/ruby
So you should probably (re)install rbenv as/under 'deploy'.
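A rough sketch of the standard git-based reinstall, run as the deploy user (the ruby version at the end is a placeholder; install whatever your app actually needs):
git clone https://github.com/rbenv/rbenv.git ~/.rbenv
git clone https://github.com/rbenv/ruby-build.git ~/.rbenv/plugins/ruby-build
echo 'export PATH="$HOME/.rbenv/bin:$PATH"' >> ~/.bashrc
echo 'eval "$(rbenv init -)"' >> ~/.bashrc
exec $SHELL
rbenv install 2.1.2 && rbenv global 2.1.2   # placeholder version
which ruby   # should now print /home/deploy/.rbenv/shims/ruby | unknown | |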
d17039 | val | It looks like the issue here is due to a known path length limitation. Azure has a limitation on paths in the package being longer than 255 chars, and in this case bringing in socket.io WITH all of its dependencies is hitting that limit.
There are several possible work arounds here.
A. - Zip up node modules and extract on the server.
Basically you zip up your modules and publish the module zip within the package. Then you can use an Azure startup task (in your cscfg) on the server to unzip the files.
Publish-AzureServicePackage will grab anything in the project, so in this case you just have a little script that you run before publishing which creates the node_modules archive and deletes node_modules.
I am planning to do a blog post on this, but it is actually relatively easy to do.
B. - Download node modules dynamically
You can download modules in the cloud. This can also be done with a startup task as is shown here.
If you look in that post you will see how you can author a startup task if you decide to do the archive route.
Feel free to ping me with any questions. | unknown | |
d17040 | val | Do you want to zip_longest the first num with the first text, the second num with the second text, etc.?
Then start by combining the sublists of the two inputs with zip:
[list(zip_longest(a, b, fillvalue='Description'))
for a, b in zip(claim_num, claim_text)]
Output:
[[('1', '1. A method'),
('2', '2. The method'),
('Description', '3. Description')],
[('1', '1. A method'),
('2', '2. The method'),
('3', '3. The method'),
('Description', '5. The method')]] | unknown | |
d17041 | val | Here is my answer and it works perfectly: it moves forward, rotates, and collides with other objects (those having a RigidBody and a Box/Capsule Collider). This is based on Burkhard's answer.
But before anything, do this: create an empty Object, set it as a child of your Player, and drag your camera Object into that empty Object.
NB: You can place the camera behind your player to get a better view.
using System.Collections;
using System.Collections.Generic;
using UnityEngine;
public class CubeControl : MonoBehaviour {
public float speed = 10.0f;
Rigidbody rb;
GameObject playerEmpty;
// Use this for initialization
void Start () {
rb = GetComponent<Rigidbody> ();
}
// Update is called once per frame
void Update () {
float Haxis = Input.GetAxis ("Horizontal");
float Vaxis = Input.GetAxis ("Vertical");
//Go Forward
if(Input.GetKeyDown(KeyCode.UpArrow))
{
//Works perfectly but does no longer collide
//rb.transform.position += transform.forward * Time.deltaTime * 10.0f;
//Not moving in the right direction
//rb.velocity += Vector3.forward * 5.0f;
rb.velocity += Camera.main.transform.forward * speed;
//rb.rotation() += new Vector3 (0.0f, headingAngle, 0.0f) * 5.0f;
//rb.velocity += gameObject.transform.localEulerAngles * Time.deltaTime * 10.0f ;
}
if(Input.GetKeyDown(KeyCode.DownArrow))
{
rb.velocity -= Camera.main.transform.forward * speed;
}
Vector3 rotationAmount = Vector3.Lerp(Vector3.zero, new Vector3(0f, 10.0f * (Haxis < 0f ? -1f : 1f), 0f), Mathf.Abs(Haxis));
Quaternion deltaRotation = Quaternion.Euler(rotationAmount * Time.deltaTime);
this.transform.rotation = (this.transform.rotation * deltaRotation);
}
}
A: To move an object along a local axis you need to use that object's transform, like this
dir = Camera.mainCamera.transform.forward;
If you also want to add to your current speed you need to use it like this
newVelocity = Camera.mainCamera.transform.forward * (baseSpeed + currectVelocity.magnitude);
This will set the direction to your camera's forward and add baseSpeed to your current speed.
If you want to set the distance instead of the speed you can calculate it like this
speed = distance / time;
That is the distance you want to travel and how long the dash lasts. If you do this, I wouldn't add it to the current speed since that would change the distance traveled. | unknown | |
d17042 | val | If you want to map "C:" to "\Device\SomeHardDisk1" you can use QueryDosDevice (pass the drive letter without a trailing backslash).
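A minimal, untested sketch of such a lookup (using the ANSI variant and only minimal error handling) might look like this:
#include <windows.h>
#include <stdio.h>

int main(void)
{
    char target[512];
    /* QueryDosDevice expects the drive as "C:" with no trailing backslash.
       The buffer receives a list of null-terminated strings; the first one
       is the device name, e.g. something like \Device\HarddiskVolume1. */
    if (QueryDosDeviceA("C:", target, (DWORD)sizeof(target)) != 0)
        printf("C: -> %s\n", target);
    else
        printf("QueryDosDevice failed, error %lu\n", GetLastError());
    return 0;
}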
(GetLogicalDriveStrings will list all the drive letters, so you can query each one.) | unknown | |
d17043 | val | You see the label as fos_user_registration_form_name because FOSUserBundle uses translation files to translate all texts in it.
You have to add your translations to a file named like Resources/translations/FOSUserBundle.nb.yml (example for Norwegian), or you can modify the translation file that comes with the bundle (copying it to Acme\UserBundle is the better way). | unknown | |
d17044 | val | Using
<xsl:param name="start-tag"><![CDATA[<h2>]]></xsl:param>
<xsl:param name="end-tag"><![CDATA[</h2>]]></xsl:param>
and then a substring-after(substring-before combination
<xsl:template match="Data">
<xsl:value-of select="substring-before(substring-after(., $start-tag), $end-tag)"/>
</xsl:template>
should do. http://xsltransform.net/naZXpY6. | unknown | |
d17045 | val | If you want to use a color from a ResourceDictionary, you can look it up first and pass the resulting color as the second parameter of the NavigationPage.SetIconColor method.
Please refer to the following code:
Color color = (Color)Application.Current.Resources["defaultBackgroundColor"];
NavigationPage.SetIconColor(this, color);
The defaultBackgroundColor is a color in Application.Resources:
<Application.Resources>
<ResourceDictionary>
<!-- Colors -->
<Color x:Key="defaultBackgroundColor">Red</Color>
<Color x:Key="Yellow">#ffd966</Color>
</ResourceDictionary>
</Application.Resources> | unknown | |
d17046 | val | Turns out that the problem was the server trying to send messages to channels that have expired. The error rate has gone down considerably when I made sure that doesn't happen anymore. | unknown | |
d17047 | val | You can't Define a Name in a UDF
you must use a sub
the following will fail:
Public Function qwerty(r As Range) As Variant
qwerty = 1
Range("B9").Name = "whatever"
End Function | unknown | |
d17048 | val | Set<Integer>[] varargs = new HashSet[2];
varargs[0] = new HashSet<Integer>() ;
A: I believe array of Set should be defined like this:
Set<Integer>[] varargs = new Set[2];
varargs[0] = new HashSet<Integer>();
varargs[1] = new HashSet<Integer>(); | unknown | |
d17049 | val | In the settings.json file, add this line:
"typescript.preferences.importModuleSpecifier": "non-relative"
If this property is removed, then the ugly relative auto-import is the default option. Simply change 'typescript' to 'javascript' if you're currently using JS. To learn more about this setting, just hover over it in the settings editor.
(Bonus tip) Prefix @app/ to all import paths with the following compiler options in tsconfig.json:
{
"compilerOptions": {
"baseUrl": ".",
"paths": {
"@app/*": ["./*"]
}
},
}
A: Seems I had to restart VSCode.
Javascript (javascript,javascriptreact file types in VSCode)
An example of jsconfig.json file for reference:
{
"compilerOptions": {
"baseUrl": "./src",
"jsx": "react",
"paths": {
"@styles": ["styles/index"],
"@fonts": ["fonts/index"],
"@components": ["components/index"],
"@atoms": ["components/atoms/index"],
"@molecules": ["components/molecules/index"],
"@organisms": ["components/organisms/index"],
"@templates": ["components/templates/index"],
"@icons": ["components/atoms/Icons/index"],
"@config": ["config/index"],
"@utils": ["utils/index"],
"@hooks": ["hooks/index"],
"@constants": ["constants/index"],
"@queries": ["queries/index"],
"@reducers": ["state/store/reducers"],
"@actions": ["state/store/actions"],
"@slices": ["state/slices/"],
"@storybookHelpers": ["../.storybook/helpers"]
}
}
}
An example of how styles/index looks like:
export * from './colors';
export * from './GlobalStyle.styles';
export * from './mixins.styles';
// Or
export { COLORS } from './colors';
export { default as GlobalStyle } from './GlobalStyle.styles';
export { default as mixins } from './mixins.styles';
Will allow import (with IntelliSense):
import { COLORS, mixins, GlobalStyle } from '@styles';
As a bonus, here is aliases.js, a helper I use to define aliases in webpack config files. It helps you avoid repeating yourself, for example when using the same aliases in Storybook and in the application itself.
// Remember to update `jsconfig.json`
const aliases = (prefix = `src`) => ({
'@actions': `${prefix}/state/store/actions`,
'@atoms': `${prefix}/components/atoms`,
'@molecules': `${prefix}/components/molecules`,
'@organisms': `${prefix}/components/organisms`,
'@templates': `${prefix}/components/templates`,
'@components': `${prefix}/components`,
'@config': `${prefix}/config`,
'@constants': `${prefix}/constants`,
'@hooks': `${prefix}/hooks`,
'@icons': `${prefix}/components/atoms/Icons`,
'@queries': `${prefix}/queries`,
'@reducers': `${prefix}/state/store/reducers`,
'@slices': `${prefix}/state/slices`,
'@styles': `${prefix}/styles`,
'@utils': `${prefix}/utils`,
'@storybookHelpers': `../.storybook/helpers`,
});
module.exports = aliases;
// usage example at .storybook/webpack.config.js file
const path = require("path");
const alias = require(`../src/config/aliases`);
const SRC = "../src";
const aliases = alias(SRC);
const resolvedAliases = Object.fromEntries(
Object.entries(aliases).map(([key, value]) => [
key,
path.resolve(__dirname, value),
])
);
module.exports = ({ config }) => {
config.resolve.modules.push(path.resolve(__dirname, SRC));
config.resolve.alias = resolvedAliases;
return config;
};
Typescript (typescript,typescriptreact files)
At tsconfig.json use the compilerOptions.paths option, notice that the paths are relative to baseUrl:
{
"compilerOptions": {
"baseUrl": "./",
"paths": {
"@components/*": ["components/*"],
"@config": ["config"],
"@constants": ["constants"],
"@hooks": ["hooks"],
"@styles": ["styles"],
"$types/*": ["types/*"],
"@utils": ["utils"]
}
}
This allows aliases (with IntelliSense), for example:
// Example of hooks/index.ts file
export * from './useLogin';
export * from './useLocalStorage';
export * from './useAuth';
// Usage examples
import {ROUTES} from '@constants';
import {Text} from '@components/atoms';
import {mixins} from '@styles';
import {useLocalStorage} from '@hooks';
A: I had the right configuration as described by the other answers. In VS code I restarted the TypeScript server using Ctrl + Shift + P -> TypeScript: Restart TS server command and it fixed the editor highlighting the import path error.
Just for completeness here is what my tsconfig.json looks like:
{
"compilerOptions": {
...
"baseUrl": ".",
"paths": {
"@/*": ["./src/*"]
}
},
"include": ["src/**/*"]
}
A: As a side note, make sure the include in your jsconfig/tsconfig is pointing to the correct paths.
A: For anyone like me who the other answers aren't working for, these are the tsconfig bits that worked for me, in addition to the settings addition in the accepted answer and ensuring you're setting includes/excludes properly:
"compilerOptions": {
"baseUrl": ".",
"paths": {
"@/*": ["./src/*"],
}
}
Full credit to this gist: https://gist.github.com/EmilyRosina/eef3aa0d66568754a98382121fefa154 | unknown | |
d17050 | val | I had to make a lot of guesses here because you didn't include much code or explanation for what you are trying to achieve but I think I have managed to create the overall appearance of what you want.
I have created two ways of doing this: one that should work in the general case, and another that also works but is tailored specifically to your case.
Number one:
#menu-mainmenu li {
position: relative;
display: inline-block;
float: left;
}
#menu-mainmenu li:not(:first-child):after {
content: " | ";
position: relative;
display: inline-block;
float: left;
margin: 0 15px;
}
<ul id="menu-mainmenu">
<li>HOME</li>
<li>ABOUT</li>
<li>...</li>
<li>...</li>
</ul>
Number two:
#menu-mainmenu li {
position: relative;
display: inline-block;
float: left;
}
#menu-mainmenu li.line {
margin: 0 15px;
}
<ul id="menu-mainmenu">
<li>HOME</li>
<li class="line">|</li>
<li>ABOUT</li>
<li class="line">|</li>
<li>...</li>
<li class="line">|</li>
<li>...</li>
</ul>
A: I would use :after instead of :before
#menu-mainmenu li {
display: inline-block;
}
#menu-mainmenu > li:after {
content: " | ";
padding: 0 15px;
}
#menu-mainmenu > li:last-child:after {
content: "";
}
<ul id="menu-mainmenu">
<li>Link 1</li>
<li>Link 2</li>
<li>Link 3</li>
<li>Link 4</li>
</ul>
A: Using your CSS example, you'll need the following:
#menu-mainmenu li{
display:inline-block;
}
Run this snippet:
#menu-mainmenu li {
display: inline-block;
}
#menu-mainmenu li + li:before {
content: " | ";
padding: 0 15px;
}
<ul id="menu-mainmenu">
<li>Item 1</li>
<li>Item 2</li>
<li>Item 3</li>
<li>Item 4</li>
</ul>
https://jsfiddle.net/6936pon8/1/
A: Since you were unable to share the code, I had to simulate the error. My guess is that the <li> has a fixed width, so the first example reproduces the possible error; please verify whether that's your case. If it is, the second example will give you the answer.
Posible Error
#menu-mainmenu li {
display: inline-block;
width: 40px;
}
#menu-mainmenu li + li:before {
content: " | ";
padding: 0 15px;
}
<ul id="menu-mainmenu">
<li>hola</li>
<li>...</li>
<li>cómo</li>
<li>está</li>
</ul>
Fix
#menu-mainmenu li {
display: inline-block;
width: 50px;
position: relative;
}
#menu-mainmenu li + li:before {
content: " | ";
position: absolute;
left: -10px;
}
<ul id="menu-mainmenu">
<li>hola</li>
<li>.........</li>
<li>cómo</li>
<li>está</li>
</ul>
You can play around with absolutely positioned elements as long as their parent is relatively positioned.
One more time, please submit as much code as you can so we can help you out, otherwise if nobody finds an answer it will be a waste of time.
A: #menu-mainmenu li:after {
content: " | ";
padding: 0 15px;
}
#menu-mainmenu li:last-child:after {
content:"";
}
A:
Hey! check this out :)
* {
margin: 0;
padding: 0;
box-sizing: border-box;
-webkit-box-sizing : border-box;
list-style-type : none;
}
#menu-mainmenu li {
display: inline-block;
padding: 5px 10px;
}
#menu-mainmenu li:after {
content : " | ";
display: inline-block;
width : 10px;
height : 30px;
padding-left : 5px;
margin-left : 10px;
background-color: yellow;
font-size : 25px;
}
<ul id="menu-mainmenu">
<li>first</li>
<li>second</li>
<li>third</li>
<li>forth</li>
</ul> | unknown | |
d17051 | val | I do not think it has anything to do with the component being lazy-loaded.
LazyLoadedComponent is not part of the AppModule – it is part of the LazyModule. According to the docs, a component can only be part of one module. If you try adding LazyLoadedComponent to AppModule also, you would get an error to that effect. So LazyLoadedComponent is not even seeing MdIconModule at all. You can confirm this by looking at the template output in the debugger – it is unchanged.
<md-icon svgIcon="play"></md-icon>
The solution appears to be adding the MdIconModule to the LazyModule, and while this alone does not fix the problem, it does add an error to the output.
Error retrieving icon: Error: Unable to find icon with the name ":play"
And the template output now looks like this, so we know it is loading.
<md-icon role="img" svgicon="play" ng-reflect-svg-icon="play" aria-label="play"></md-icon>
I added the call to addSvgIconSet from LazyLoadedComponent, and that got it working… so this proves there is an instance of the MdIconRegistry service per component – not what you want, but may point you in the right direction.
Here’s the new plunk - http://plnkr.co/edit/YDyJYu?p=preview
After further review, I found this in the docs:
Why is a service provided in a lazy loaded module visible only to that module?
Unlike providers of the modules loaded at launch, providers of lazy loaded modules are module-scoped.
Final Update! Here is the answer. MdIconModule is not properly setup for lazy loaded components... but we can easily create our own module that IS properly set up and use that instead.
import { NgModule, ModuleWithProviders } from '@angular/core';
import { HttpModule } from '@angular/http';
import { MdIcon } from '@angular2-material/icon';
import { MdIconRegistry } from '@angular2-material/icon';
@NgModule({
imports: [HttpModule],
exports: [MdIcon],
declarations: [MdIcon]
})
export class MdIconModuleWithProviders {
static forRoot(): ModuleWithProviders {
return {
ngModule: MdIconModuleWithProviders,
providers: [ MdIconRegistry ]
};
}
}
Plunk updated and fully working. (sorry, updated the same one) -> http://plnkr.co/edit/YDyJYu?p=preview
One might submit a pull request such that Angular Material exports modules of both styles.
A: New to Angular 6 there is a new way to register a provider as a singleton. Inside the @Injectable() decorator for a service, use the providedIn attribute. Set its value to 'root'. Then you won't need to add it to the providers list of the root module, or in this case you could also set it to your MdIconModuleWithProviders module like this:
@Injectable({
providedIn: MdIconModuleWithProviders // or 'root' for singleton
})
export class MdIconRegistry {
... | unknown | |
d17052 | val | One approach would be to use an <xsl:key> in the following way.
Keys let you index nodes by a certain property, for example you could index all nodes by some attribute value. But you could also index them by a calculated value.
In your case you have many <w> nodes like this:
<doc> <!-- Sentence # -->
<w><forme>le</forme><lemme>le</lemme><categorie>DETDFS</categorie></w> <!-- 0 -->
<w><forme>grand</forme><lemme>grand</lemme><categorie>ADJFS</categorie></w> <!-- 0 -->
<w><forme>test.</forme><lemme>test.</lemme><categorie>NCMS</categorie></w> <!-- 0 -->
<w><forme>la</forme><lemme>le</lemme><categorie>DETDFS</categorie></w> <!-- 1 -->
<w><forme>grande</forme><lemme>grand</lemme><categorie>ADJFS</categorie></w> <!-- 1 -->
<w><forme>douleur</forme><lemme>douleur</lemme><categorie>NCFS</categorie></w> <!-- 1 -->
<w><forme>du</forme><lemme>du</lemme><categorie>DETDMS</categorie></w> <!-- 1 -->
<w><forme>père</forme><lemme>père</lemme><categorie>NCMS</categorie></w> <!-- 1 -->
<w><forme>duchesne</forme><lemme>duchesne</lemme><categorie>NCMS</categorie></w> <!-- 1 -->
<w><forme>exemple.</forme><lemme>exemple.</lemme><categorie>NCMS</categorie></w> <!-- 1 -->
<w><forme>phrase</forme><lemme>phrase</lemme><categorie>NCMS</categorie></w> <!-- 2 -->
<w><forme>suivante</forme><lemme>suivante</lemme><categorie>NCMS</categorie></w> <!-- 2 -->
</doc>
The ones with the full stop demarcate a sentence end. The preceding siblings up to the previous full stop are words of the same sentence.
We could say: "All words belong to the same sentence that are preceded by the same number of full stops."
The numbers in the comments above represent exactly that number. We can calculate it with count(preceding-sibling::w[substring(forme, string-length(forme), 1) = '.']). And we can create an <xsl:key> that indexes <w> nodes by this number.
And then it's a matter of going over each node that marks a sentence end and using key() to retrieve all the <w> nodes that belong to the same sentence:
<xsl:stylesheet xmlns:xsl="http://www.w3.org/1999/XSL/Transform" version="1.0">
<xsl:output method="xml" indent="yes" />
<xsl:key
name="kSentence"
match="w"
use="count(preceding-sibling::w[substring(forme, string-length(forme), 1) = '.'])"
/>
<xsl:template match="/*">
<sentences>
<xsl:for-each select="w[substring(forme, string-length(forme), 1) = '.']">
<xsl:variable
name="sentenceNum"
select="count(preceding-sibling::w[substring(forme, string-length(forme), 1) = '.'])"
/>
<sentence>
<xsl:copy-of select="key('kSentence', $sentenceNum)/lemme" />
</sentence>
</xsl:for-each>
</sentences>
</xsl:template>
</xsl:stylesheet>
gives this result:
<sentences>
<sentence>
<lemme>le</lemme>
<lemme>grand</lemme>
<lemme>test.</lemme>
</sentence>
<sentence>
<lemme>le</lemme>
<lemme>grand</lemme>
<lemme>douleur</lemme>
<lemme>du</lemme>
<lemme>père</lemme>
<lemme>duchesne</lemme>
<lemme>exemple.</lemme>
</sentence>
</sentences>
The "phrase suivante" is not part of the result because it did not end with a full stop in my example.
A: Either use classical sibling recursion: in the template matching the parent/container node of the w siblings, start with <xsl:apply-templates select="w[1]"/>, then process <xsl:apply-templates select="following-sibling::w[1]"><xsl:with-param name="previous-ws" select="$previous-ws | ."/></xsl:apply-templates>, and have two template matches, e.g. <xsl:template match="w[not(substring(forme, string-length(forme)) = '.')]"> and <xsl:template match="w[substring(forme, string-length(forme)) = '.']">. Or use a key <xsl:key name="siblings" match="w[not(substring(forme, string-length(forme)) = '.')]" use="generate-id(following-sibling::w[substring(forme, string-length(forme)) = '.'][1])"/>, then process all <xsl:apply-templates select="w[substring(forme, string-length(forme)) = '.']"/>, and in the template matching w you can collect the preceding siblings simply with key('siblings', generate-id()).
A: This is very simple to do in XSLT 2.0 using xsl:for-each-group with group-ending-with.
There are several methods to accomplish the same thing in XSLT 1.0 - among them "sibling recursion" or using a key to link each word to its nearest following sibling that ends with a period (both of these have been mentioned by Martin Honnen in his answer above).
The method you have attempted, using a recursive named template, is also an option. Here is a simplified example:
XML
<root>
<word>Joe</word>
<word>waited</word>
<word>for</word>
<word>train.</word>
<word>The</word>
<word>train</word>
<word>was</word>
<word>late.</word>
<word>Mary</word>
<word>and</word>
<word>Samantha</word>
<word>took</word>
<word>the</word>
<word>bus.</word>
<word>Orphan</word>
</root>
XSLT 1.0
<xsl:stylesheet version="1.0"
xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
<xsl:output method="xml" version="1.0" encoding="UTF-8" indent="yes"/>
<xsl:template match="/root">
<output>
<xsl:call-template name="combine-words">
<xsl:with-param name="words" select="word"/>
</xsl:call-template>
</output>
</xsl:template>
<xsl:template name="combine-words">
<xsl:param name="words" />
<xsl:param name="accumulated" select="/.."/>
<xsl:if test="$words">
<xsl:variable name="word" select="$words[1]" />
<xsl:variable name="isLast" select="substring($word, string-length($word), 1)='.'" />
<xsl:if test="$isLast">
<sentence>
<xsl:for-each select="$accumulated | $word">
<xsl:value-of select="." />
<xsl:if test="position()!=last()">
<xsl:text> </xsl:text>
</xsl:if>
</xsl:for-each>
</sentence>
</xsl:if>
<xsl:call-template name="combine-words">
<xsl:with-param name="words" select="$words[position() > 1]"/>
<xsl:with-param name="accumulated" select="$accumulated[not($isLast)] | $word[not($isLast)]"/>
</xsl:call-template>
</xsl:if>
</xsl:template>
</xsl:stylesheet>
Result
<?xml version="1.0" encoding="UTF-8"?>
<output>
<sentence>Joe waited for train.</sentence>
<sentence>The train was late.</sentence>
<sentence>Mary and Samantha took the bus.</sentence>
</output> | unknown | |
d17053 | val | A float is not limited to two decimal places. When you print one, Serial.print() will only show 2 decimal places unless you specify, with the second parameter, how many to print.
For example:
float x = 67.1234
Serial.print(x); // will print 67.12
Serial.print(x,3); // will print 67.123
Serial.print(x, 4); // will print 67.1234
Serial.print(x,6); // will print 67.123400
So what you are seeing is just how many places you told it to print. But the variable always has the whole number with an infinite number of decimal places.
Now, with floats on an Arduino only the first 6 or 7 significant digits will be accurate. But you can still ask print() to show as many digits after the decimal point as you want.
Also note that the second parameter is only doing something in the print statement. Lots of people will try to do this:
float x = 67.1234 , 2;
thinking that this will somehow save a number with only 2 digits of precision, but this isn't the case: it would just save the number 2, as you saw in your experiment. That is how the comma operator works. It's only in the print function that you get to specify the number of decimal places; the stored value itself is unaffected. | unknown | |
d17054 | val | as CevaComic said you are setting the initial value as an empty array.
useEffect only runs after the component has rendered, so when you console.log the data stored in result you will get the initial value.
Only after the component renders a second time, because of the change made inside setResult, will the data from the API be logged. | unknown | |
d17055 | val | The code shown in your question is missing some critical parts to fully understand your problem, but it sounds like you're creating a new bitmap for every frame. Since Android only allows for about 16MB of allocations for each Java VM, your app will get killed after about 52 frames. You can create a bitmap once and re-use it many times. To be more precise, you are creating a bitmap (Bitmap.CreateBitmap), but not destroying it (Bitmap.recycle). That would solve your memory leak, but still would not be the best way to handle it. Since the bitmap size doesn't change, create it once when your activity starts and re-use it throughout the life of your activity. | unknown | |
d17056 | val | Ember uses Broccoli.js for its build pipeline. Broccoli is built around the concept of trees. Please have a look at its documentation for details.
You can exclude files from the tree using a plugin called broccoli-funnel. As its first argument it expects an input node, which can be either a directory name as a string or an existing Broccoli tree. A configuration object should be provided as the second argument. The files or folders to exclude can be specified via the exclude option on that object.
A broccoli tree is created as part of the build process in ember-cli-build.js. The function exported from that file should return a tree. By default it returns the tree created by app.toTree() directly. But you could customize that tree using broccoli-funnel before.
This diff shows how the default ember-cli-build.js, as provided by the blueprint of Ember CLI 3.16.0, could be customized to exclude a specific file:
diff --git a/ember-cli-build.js b/ember-cli-build.js
index d690a25..9d072b4 100644
--- a/ember-cli-build.js
+++ b/ember-cli-build.js
@@ -1,6 +1,7 @@
'use strict';
const EmberApp = require('ember-cli/lib/broccoli/ember-app');
+const Funnel = require('broccoli-funnel');
module.exports = function(defaults) {
let app = new EmberApp(defaults, {
@@ -20,5 +21,7 @@ module.exports = function(defaults) {
// please specify an object with the list of modules as keys
// along with the exports of each module as its value.
- return app.toTree();
+ return new Funnel(app.toTree(), {
+ exclude: ['file-to-exclude'],
+ });
};
You should explicitly add broccoli-funnel to your dependencies even though it's available as an indirect dependency:
// if using npm
npm install -D broccoli-funnel
// if using yarn
yarn add -D broccoli-funnel
Broccoli-funnel supports not only exact file names but also regular expressions, glob strings, or functions to define the files to exclude. Please have a look at its documentation for details. | unknown | |
d17057 | val | Avoid the GAC unless you have full administration rights on the host server.
What you could do is create a project containing the source for your shared DLL. You can then add this project into each of your web site solutions, and add a reference in your site solutions to the project. This has the added advantage of enabling you to step into your shared source during debugging.
A: First add a class library project to your solution. Add this class into this library. And deploy this class to Global Assembly Cache (GAC). You can follow these steps to install your assembly to GAC. | unknown | |
d17058 | val | Use this in the delegate for your first UIWebView:
- (BOOL)webView:(UIWebView *)webView shouldStartLoadWithRequest:(NSURLRequest *)request navigationType:(UIWebViewNavigationType)navigationType {
if (navigationType == UIWebViewNavigationTypeLinkClicked) {
[otherWebView loadRequest:request];
return NO;
}
return YES;
} | unknown | |
d17059 | val | I am using the latest version of PushSharp (version 3.0) in a project of mine to send toast notifications to Windows Phone devices, and it is working fine for me. I notice from the code you have above that you are using an older version of the PushSharp package; there is a new 3.0 version available from NuGet.
You could use that latest package to send toast notification to windows phone devices. The latest version of PushSharp uses the WNS as opposed to the old MPNS.
If you go to that nuget get link i supplied above and download the solution you can see some examples on how to implement the push notifcations for windows phone using WNS. Look under the PushSharp.Test project (look for the WNSRealTest.cs file).
Below is an example of how you can send a toast notification to windows phone device:
var config = new WnsConfiguration(
"Your-WnsPackageNameProperty",
"Your-WnsPackageSid",
"Your-WnsClientSecret"
);
var broker = new WnsServiceBroker(config);
broker.OnNotificationFailed += (notification, exception) =>
{
//you could do something here
};
broker.OnNotificationSucceeded += (notification) =>
{
//you could do something here
};
broker.Start();
broker.QueueNotification(new WnsToastNotification
{
ChannelUri = "Your device Channel URI",
Payload = XElement.Parse(string.Format(@"
<toast>
<visual>
<binding template=""ToastText02"">
<text id=""1"">{0}</text>
<text id=""2"">{1}</text>
</binding>
</visual>
</toast>
","Your Header","Your Toast Message"))
});
broker.Stop();
As you may notice above, the WnsConfiguration constructor requires a Package Name, Package SID, and a Client Secret. To get these values your app must be registered with the Store Dashboard. This will provide you with credentials for your app that your cloud service will use when authenticating with WNS. You can check steps 1-3 on the following MSDN page for details on how to get this done. (Note: the page states that you have to edit your appManifest.xml with the identity of your app. I did not do this step; just make sure your Windows Phone app is set up correctly to receive toast notifications. This blog post will help with that.)
Hope this helps. | unknown | |
d17060 | val | The solution is easy: just stop your app and run it again.
I hope this is useful. | unknown | |
d17061 | val | The Find() function returns a Range object if successful or Nothing if not. So you need to test the return value to ensure it's a Range object before you try accessing its properties and methods.
For Each c In objworksheet.Range("A2:A3").Cells
val = c.value
Set objRange = objWorksheet2.UsedRange
Set found = objRange.Find(val)
' Test for success!
If Not found Is Nothing Then
Wscript.Echo found.AddressLocal(False,False)
End If
Next | unknown | |
d17062 | val | Yes, you will need to remember the last selection yourself, wxWidgets doesn't do it for you (and neither does the native control). | unknown | |
d17063 | val | For this situation you may be better off using a windowing function like row_number()
select id, fruit, color, createddate
from
(
select id, fruit, color, createddate,
row_number() over(partition by fruit order by createddate desc) seq
from tblFruit
) d
where seq = 1;
See Demo
Using this allows you to partition the data by the fruit and order the rows within each fruit by the createddate. By placing your row_number() inside of a subquery, you will return the first row of each fruit - these are the items with a seq=1. If you are looking for items that are only Apple, then you can easily add a WHERE clause.
You could also get the result by using a subquery to select the max(createddate) for each fruit:
select f.id,
f.fruit,
f.color,
f.createddate
from tblFruit f
inner join
(
select fruit, max(createddate) CreatedDate
from tblfruit
group by fruit
) d
on f.fruit = d.fruit
and f.createddate = d.createddate;
See Demo. You get the same result and you could still apply a WHERE filter to this.
A: Based on your comment, you can use a CTE to build a list of max date for each fruit. Then you can join that back to your original table to get the full row that matches that max date.
SQL Fiddle
with MaxDates as
(select
fruit,
max(createddate) as maxdate
from
table1
group by
fruit)
select
t1.*
from
table1 t1
inner join maxdates md
on t1.fruit = md.fruit
and t1.createddate = md.maxdate
BTW, you really don't want to try and push this kind of functionality to your application. Doing this kind of stuff is infinitely better in SQL. If nothing else, think about if you have millions of rows in your table. You certainly don't want to push those millions of rows from your db to your application to sum it up to a single row, etc.
A: How about using TOP with an ORDER BY
SELECT TOP(1) *
FROM [CCM].[dbo].[tblFruit]
WHERE Fruit = 'Apple'
ORDER BY [CreatedDate] DESC | unknown | |
d17064 | val | You have a typo in line 325 (the line with that comment): it's IDInput you want to compare with, not inputID which is the name of the function it's in. | unknown | |
d17065 | val | 1 and 2:
To adjust the distance between the options, target the span and reduce its margin to bring them closer together. You can add the class to your spans like this, then use padding to move the lines farther from the text. Play with the values.
Would look something like:
<span class="spanMenu">
.spanMenu {
padding: 5px;
margin-bottom: -20px;
}
3.
To keep the phone number from breaking onto two lines, use an @media query.
I'm currently inspecting and fiddling to get the correct CSS, so bear with me.
To resize, you can add class="makeSmall" to the p tag like this:
<p style="text-align: center;" class="makeSmall">
Then add this @media to your CSS
@media only screen and (max-width: 600px) {
.makeSmall {
font-size:12px;
}
}
This will allow you to adjust just the text size in the p tag. This is a quick and dirty way of doing it; I don't have the time to do the hierarchy of your CSS to just target them with CSS. I hope this helps though. :)
A: Can the issue be that your phone is not cleaning out its cache correctly?
Can you try borrowing someones else's phone, who has never visited the site, and take a look at it with their phone?
If this solves your issue, you should figure out how to clean the cache from your phone to continue development. | unknown | |
d17066 | val | To list updated rows, you conceptually need either of the two things:
*
*The updating statement's effect on the table.
*A previous version of the table to compare with.
How you get them and in what form is completely up to you.
The 1st option allows you to list updates with statement granularity while the 2nd is more suitable for time-based granularity.
Some options from the top of my head:
*
*Write to a temporary table
*Add a field with transaction id/timestamp
*Make clones of the table regularly
AFAICS, Oracle doesn't have built-in facilities to get the affected rows, only their count.
A: Not a lot of details in the question so not sure how much of this will be of use ...
*
*'Sybase' is mentioned but nothing is said about which Sybase RDBMS product (ASE? SQLAnywhere? IQ? Advantage?)
*by 'replicated master database transaction' I'm assuming this means the primary database is being replicated (as opposed to the database called 'master' in a Sybase ASE instance)
*no mention is made of what products/tools are being used to 'replicate' the transactions to the 'new database' named 'TRN'
So, assuming part of your environment includes Sybase(SAP) ASE ...
*
*MDA tables can be used to capture counters of DML operations (eg, insert/update/delete) over a given time period
*MDA tables can capture some SQL text, though the volume/quality could be in doubt if a) MDA is not configured properly and/or b) the DML operations are wrapped up in prepared statements, stored procs and triggers
*auditing could be enabled to capture some commands but again, volume/quality could be in doubt based on how the DML commands are executed
*also keep in mind that there's a performance hit for using MDA tables and/or auditing, with the level of performance degradation based on individual config settings and the volume of DML activity
Assuming you're using the Sybase(SAP) Replication Server product, those replicated transactions sent through repserver likely have all the info you need to know which tables/rows are being affected; so you have a couple options:
*
*route a copy of the transactions to another database where you can capture the transactions in whatever format you need [you'll need to design the database and/or any customized repserver function strings]
*consider using the Sybase(SAP) Real Time Data Streaming product (yeah, additional li$ence is required) which is specifically designed for scenarios like yours, ie, pull transactions off the repserver queues and format for use in downstream systems (eg, tibco/mqs, custom apps)
I'm not aware of any 'generic' products that work, out of the box, as per your (limited) requirements. You're likely looking at some different solutions and/or customized code to cover your particular situation. | unknown | |
d17067 | val | It did, but I'm guessing the Saver instance you created had the default max_to_keep value of 5, so it overwrote the older checkpoints as the last 5 were created. To keep 10, change your saver creation line to
saver = tf.train.Saver(max_to_keep=10)
You might also want to play with the keep_checkpoint_every_n_hours argument if you don't want to save -every- one. | unknown | |
d17068 | val | The blurb in the documentation refers to the value in the manifest.json file. Dependencies in the manifest are defined by an alias mapped to a string in the format of <name>@<version>. The exact meaning of that string is not currently enforced so it just serves as documentation for the app.
If you mount an app that has dependencies, you need to set up the dependencies (e.g. using the web frontend). The web frontend's dependencies dialog lets you enter mount paths of apps you want to use to meet the dependencies.
The code of the app itself will then be able to refer to the exports of the apps mounted at those paths by the aliases defined in the manifest.
For example:
*
*You create an app called example with the following dependencies:
"dependencies": {"mySessions": "sessions@^1.0.0"}
*You install a sessions app (e.g. the sessions app from the Foxx app store) and mount it at /my-sessions.
*You install your example app and mount it somewhere else.
*You open the app details of your example app in the web frontend and open the dependencies dialog (boxes icon in the top right).
*The dialog should show a single input field titled MySessions with a help popup saying sessions@^1.0.0.
*Enter /my-sessions into the input field and save.
*Your example app should now be able to access the exports of the app at applicationContext.dependencies.mySessions. | unknown | |
d17069 | val | Structs in ColdFusion are unordered HashMaps, so there is no order at all. You can keep insertion order by using structNew("Ordered") (introduced with ColdFusion 2016). Unfortunately you can no longer use the literal syntax anymore, but I assume you are generating the data dynamically anyway.
<cfset data = structNew("Ordered")>
<cfset data["Booking"] = structNew("Ordered")>
<cfset data["Booking"]["ActionCode"] = "DI">
<cfset data["Booking"]["AgencyNumber"] = "TVR">
<cfset data["Booking"]["BookingNumber"] = "323">
<cfset data["Payment"] = structNew("Ordered")>
<cfset data["Payment"]["__type"] = "paymenttype">
<cfset data["Payment"]["PaymentProfile"] = structNew("Ordered")>
<cfset data["Payment"]["PaymentProfile"]["Value"] = 4>
<cfset data["Payment"]["PaymentProfile"]["Manual"] = false>
etc.
If you are stuck on an older ColdFusion version, you will have to use Java's LinkedHashMap.
<cfset data = createObject("java", "java.util.LinkedHashMap")>
<cfset data["Booking"] = createObject("java", "java.util.LinkedHashMap")>
<cfset data["Booking"]["ActionCode"] = "DI">
<cfset data["Booking"]["AgencyNumber"] = "TVR">
<cfset data["Booking"]["BookingNumber"] = "323">
<cfset data["Payment"] = createObject("java", "java.util.LinkedHashMap")>
<cfset data["Payment"]["__type"] = "paymenttype">
<cfset data["Payment"]["PaymentProfile"] = createObject("java", "java.util.LinkedHashMap")>
<cfset data["Payment"]["PaymentProfile"]["Value"] = 4>
<cfset data["Payment"]["PaymentProfile"]["Manual"] = false>
etc.
But be aware: LinkedHashMap is case-sensitive (and also type-sensitive: in case your keys are numbers, it does matter!).
<cfset data = createObject("java", "java.util.LinkedHashMap")>
<cfset data["Test"] = "">
<!---
accessing data["Test"] = works
accessing data["test"] = doesn't work
accessing data.Test = doesn't work
--->
Another issue you might encounter: Due to ColdFusion's internal type casting, serializeJSON() might stringify numbers and booleans in an unintended way. Something like:
<cfset data = structNew("Ordered")>
<cfset data["myBoolean"] = true>
<cfset data["myInteger"] = 123>
could easily end up like:
{
"myBoolean": "YES",
"myInteger": 123.0
}
(Note: The above literal syntax would work perfectly fine, but if you are passing the values around as variables/arguments, casting eventually happens.)
The easiest workaround is explicitly casting the value before serializing:
<cfset data = structNew("Ordered")>
<cfset data["myBoolean"] = javaCast("boolean", true)>
<cfset data["myInteger"] = javaCast("int", 123)> | unknown | |
d17070 | val | As the page says:
Use the KeyChain API when you want system-wide credentials
The purpose of the KeyChain API is to use username/password credentials to get a 'token' that this app (and potential other apps) can use. The KeyChain is also used a lot for syncing private information (like mails) in the background.
But in your case it sounds like you just want to store a private RSA key. In this case the Keystore is a better solution. Note that the Keystore only works on devices running 4.3+; this is because it uses hardware-backed secure storage (like ARM TrustZone and other secure execution environments) to store its private keys (if a hardware module is present). This feature was added in Android 4.3.
To use the Keystore you need to generate a certificate with the public key and save it together with the private key in the Keystore. The Keystore uses the user's lock-screen pattern/PIN/etc. (if present) to encrypt the private key. Note that this sometimes leads to the irritating situation where the user cannot disable the lock screen anymore as long as they don't wipe the certificate store or remove the app (under Android <= 5), so make sure to delete the certificate when the key is no longer needed. The Keystore will save the private key in the hardware module (if present), so it's much harder to extract the private key, or for other apps to encrypt data using the private key. Still, it's not 100% fool-proof, since new exploits are found and root can circumvent (to some extent) this extra layer of protection. But it's still better than saving your private key in a file.
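For illustration only, here is a rough, untested sketch using the KeyPairGeneratorSpec API that shipped with 4.3 (later deprecated in favour of KeyGenParameterSpec); the alias "my_key" and the one-year validity window are placeholder choices:
import java.math.BigInteger;
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import java.security.KeyStore;
import java.util.Calendar;
import javax.security.auth.x500.X500Principal;
import android.content.Context;
import android.security.KeyPairGeneratorSpec;

public class KeystoreHelper {
    /* Generates an RSA key pair whose private key lives in the AndroidKeyStore. */
    public static KeyPair generateKey(Context context) throws Exception {
        Calendar start = Calendar.getInstance();
        Calendar end = Calendar.getInstance();
        end.add(Calendar.YEAR, 1);
        KeyPairGeneratorSpec spec = new KeyPairGeneratorSpec.Builder(context)
                .setAlias("my_key")
                .setSubject(new X500Principal("CN=my_key"))
                .setSerialNumber(BigInteger.ONE)
                .setStartDate(start.getTime())
                .setEndDate(end.getTime())
                .build();
        KeyPairGenerator generator = KeyPairGenerator.getInstance("RSA", "AndroidKeyStore");
        generator.initialize(spec);
        return generator.generateKeyPair();
    }

    /* Deletes the entry again when the key is no longer needed. */
    public static void deleteKey() throws Exception {
        KeyStore keyStore = KeyStore.getInstance("AndroidKeyStore");
        keyStore.load(null);
        keyStore.deleteEntry("my_key");
    }
}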
Also, if you want a deeper understanding of how this all works 'under the hood', I would recommend reading this paper, which goes into great detail about the Keystore and other security mechanisms in Android. | unknown | |
d17071 | val | I think you can just use this.text.width. This has historically had some bugs associated with it, but it should be working right in the latest version. | unknown | |
d17072 | val | You should use a GridView for this purpose. It is backed by an adapter. See an example here: http://developer.android.com/guide/topics/ui/layout/gridview.html
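A rough sketch of the idea (my own example, not taken from that page; GridActivity, R.layout.activity_grid, R.id.gridview and the items array are placeholders you would replace with your own resources):
import android.app.Activity;
import android.os.Bundle;
import android.widget.ArrayAdapter;
import android.widget.GridView;

public class GridActivity extends Activity {
    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        // Assumes a layout file containing a GridView with the id "gridview".
        setContentView(R.layout.activity_grid);
        GridView gridView = (GridView) findViewById(R.id.gridview);
        // For images you would typically extend BaseAdapter and return an ImageView from getView().
        String[] items = {"one", "two", "three", "four"};
        gridView.setAdapter(new ArrayAdapter<String>(this, android.R.layout.simple_list_item_1, items));
    }
}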
For dealing with images, I would also take a look at https://github.com/nostra13/Android-Universal-Image-Loader; it just makes my life so easy. | unknown | |
d17073 | val | Could there be instances such that it could print the following i.e. some thread numbers are lost and some numbers are doubled?
st is a method-local variable, and it doesn't escape the method's scope, so it is thread-safe. Multithreading will have no effect on st. The messages can be printed out of order depending on which thread runs the method at what time.
A: As you don't share any field between your threads, the order of printing can differ, but any problem concerning thread safety (race conditions) shouldn't appear.
A: No, there is no state that's being shared between different threads so the situation you described can not happen.
If instead st was a member variable of that class, instead of being passed as an argument, and was incremented - that's a different story.
How it works now is that st will be put on the execution stack, each thread has it's own execution stack and they don't share stuff from there. Therefore each thread has it's own value of st. When it's a member variable of a class it's in memory (single value) and all threads would try to use it (the same one).
@Edit: well I guess it is possible if you call the method several times with the same value :-))
A: The Java Language Specification states
The result of string concatenation is a reference to a String object
that is the concatenation of the two operand strings. The characters
of the left-hand operand precede the characters of the right-hand
operand in the newly created string.
So, although a compiler is free to optimize how the concatenation happens, it must do so while following that rule, "a" + "b" becomes "ab". In an unthread-safe, shared StringBuilder, implementation that would potentially not be the case. That implementation would therefore not be correct and could not be considered Java.
A: Each thread will always have individual StringBuilder instances.
Thread-safety is no issue when threads don't share instances.
So, the following simple method ...
public class MyThreadSafeClass
{
public String myMethod(String field1, String field2, String field3)
{
return field1 + field2 + field3;
}
}
... will be compiled to use a local StringBuilder.
public class MyThreadSafeClass
{
public String myMethod(String field1, String field2, String field3)
{
return new StringBuilder(field1).append(field2).append(field3).toString();
}
}
Each time the method is entered, a new StringBuilder instance is created.
This instance is only used withing the scope of this thread.
You are correct however that StringBuilders are not always thread-safe. (see below)
If multiple threads start calling the saveEvent method, they may be using the builder simultaneously.
public class History
{
// thread-safety issues !!!!
// In fact, here you should use a StringBuffer or some locking.
private StringBuilder historyBuilder = new StringBuilder();
public void saveEvent(String event)
{
historyBuilder.append(event).append('\n');
}
public String getHistoryString()
{
return historyBuilder.toString();
}
}
But compiler optimizations will not create these kind of constructions. The StringBuilder is always created and used only within one and the same thread.
We could try to make things more complex (static fields, multiple classloaders, ...) but always again, each StringBuilder instance is created and used by only 1 thread.
EDIT:
Perhaps useful to know: This optimization happens during the generation of the byte-code. There are other optimizations later on during JIT compilation, but this optimization is not one of them. However the JIT compiler does have an important impact in the final performance.
A: I have to partially disagree. This sentence is incomplete / missleading:
"If instead st was a member variable of that class, instead of being passed as an argument, and was incremented - that's a different story."
What matters in the original example is that the expression on the right is a rvalue. If it were not, the outcome would have been different. I will explain a bit.
So yes, Strings are immutable, and beginmt() receives a final reference to a String and this means a final reference to an immutable heap memory area. JVM will make a copy of this final reference and whatever you do inside the beginmt(), it is done on this copy, and this copy, immediately after the string is modified (st = ...), will point to another memory area. Now the point is that this final heap memory area has no pointer to it, because it is created inside the method and it seems that nothing points to it. Well, almost! The JVM may intern the string and if another thread points to the same value as the interned value it could be possible that they would actually share the same heap address. Now having a race condition exactly here is very hard to detect, so I will make a synthetic example to illustrate what could happen in case the expression on the right is a lvalue (induced by JVM's String intern-al):
public class AnExample {
private static final int N = 20000;
private static class Foo {
static HashMap<String, Foo> foos = new
HashMap<String, Foo>(N);
static synchronized Foo createInstance(String i) {
if (foos.containsKey(i))
return foos.get(i);
foos.put(i, new Foo(i));
return foos.get(i);
}
String i;
private Foo(String i) {
this.i = i;
}
Foo inc() {
synchronized(Foo.class){
i += "1";
return createInstance(i);
}
}
@Override public String toString() {
return i;
}
}
private static class Bar {
public void bar(Foo st) {
st = st.inc();
System.out.println(st);
}
}
public static void main(String... args) {
final Bar cucu = new Bar();
for (int i = 0; i < N; i++) {
final Foo st = Foo.createInstance(i + "");
new Thread(new Runnable(){
@Override public void run() {
cucu.bar(st);
}
}).start();
}
try {
Thread.sleep(10000);
} catch (InterruptedException e) {
e.printStackTrace();
}
}
}
This will produce about 40% duplicates (I have less than 10100 unique values). Note that in the bar() method st = some_expression(st)
In my example I intentionally produce a lvalue just to show what might happen in case the JVM will intern the expression and it happens to have a reference to that (which is sent back in another thread to the same method).
The conclusion is that your code is safe not because "st is not a member variable of that class" and "st becomes a local, copied reference", etc., but because the expression on the right is an rvalue. | unknown | |
d17074 | val | I found the silly solution: the browser was zoomed to 120% without my realizing it, so I adjusted it back to 100%. | unknown | |
d17075 | val | Take a look at JarInputStream, JarOutputStream, ZipInputStream, and ZipOutputStream. They are a part of the JDK, so external libraries are not required. Pay attention that unlike other streams these require you to care about current entry. You can find a lot of examples of how to use the java zip API. | unknown | |
d17076 | val | Looks like they only allow this feature for the 'pull' option.
A: More information about Cloud Pub/Sub Exactly Once Delivery and Push Subscriptions: https://cloud.google.com/pubsub/docs/exactly-once-delivery#exactly-once_delivery_and_push_subscriptions. | unknown | |
d17077 | val | try with the zipjs.bat:
call zipjs.bat list -source "C:\myZip.zip" -flat yes|find /i "filename" && (
echo file does exists in the zip
color
)|| (
echo file does NOT exists in the zip
) | unknown | |
d17078 | val | Vecs can contain Vecs too:
let mut vecs: Vec<Vec<i32>> = vec![]; // or Vec::with_capacity(2)
for _ in 0..2 {
vecs.push(Vec::with_capacity(100));
} | unknown | |
d17079 | val | You could use a join instead, putting the values in a derived table:
select p.*
from passages p join
(values (413, 2), (414, 3), (415, 4), (416, 5)
) v(id, category_id)
on p.id = v.id and p.category_id = v.category_id;
A: If you can store that dictionary container in a table it will be easier to use it in an (IN) clause like this:
SELECT * FROM PASSAGES AS psg
JOIN Dictionary AS D ON psg.ID = D.ID
WHERE psg.ID IN (D.ID) AND psg.CATEGORY_ID IN (D.CATEG)
and as the Dictionary table's records increase, the query stays the same, without needing multiple ORs. | unknown | |
d17080 | val | You cannot directly plot a dictionary in matplotlib. It needs x values and y values.
You can see that type(df) will be a <class 'dict'> which contains the value something like this:
{'TSLA': {'2017-02-09': {'open': 266.25, 'high': 271.18, 'low': 266.15, 'close': 269.2, 'volume': 7820222}}}
So, if you want to graph it, you need to convert it into a pandas DataFrame:
your code has to change like this:
from iexfinance import Stock
import matplotlib.pyplot as plt
tsla = Stock('TSLA')
tsla.get_close()
tsla.get_price()
from iexfinance import get_historical_data
from datetime import datetime
import pandas as pd
pd.set_option('display.max_rows', 1000)
start = datetime(2017, 2, 9)
end = datetime(2017, 5, 24)
df = get_historical_data("TSLA", start=start, end=end, output_format='json')
df = pd.DataFrame(df["TSLA"]).transpose() #transpose to get dates as x
df = df.drop(columns = "volume") #deleted volume column, else it make other graphs smaller
df.index = pd.to_datetime(df.index) #change string to date format
df.plot()
plt.show();
You will get a graph like this:
Graph | unknown | |
d17081 | val | You will have to use an ng-grid plugin, the Flexible Height Plugin. Add this plugin to the plugins property of the grid options.
$scope.gridOptions = {
data: 'nagruzkaData',
enableColumnResize: true,
showGroupPanel: true,
plugins: [new ngGridFlexibleHeightPlugin()]
};
A: Eventually, as I needed to solve that problem as quickly as I could, I ended up just triggering an empty 'resize' event via jQuery, which made ng-grid rebuild and show all the data received from the server.
A: Please use this when not all rows are rendered in ng-grid.
It will work 100%:
$scope.gridOptions.excessRows = 100; | unknown | |
d17082 | val | If you want to map Incident to IncidentDTO while retaining and mapping the Agency object in the agency property (to an AgencyDTO) of an Incident instance I'd suggest renaming the agencyDTO property to agency in your IncidentDTO and then use a tweak to the CloneInjection sample from the Value Injector documentation as described here: omu.valueinjecter deep clone unlike types | unknown | |
d17083 | val | To keep an independent copy of the data, you'll want to perform a deep copy of the object using something like klona. Using Object.assign is a shallow copy and doesn't protect against reference value changes. | unknown | |
d17084 | val | Iterate through the array starting at index 1. Then append that item after its preceding element. That's it.
http://jsfiddle.net/jT8Tt/
var order = [2, 3, 1, 5, 4];
for (var i = 1; i < order.length; i++) {
$('li[data-number="' + order[i] + '"]').insertAfter($('li[data-number="' + order[i - 1] + '"]'));
}
Or in slightly more readable format
for (var i = 1; i < order.length; i++) {
var curElem = $('li[data-number="' + order[i] + '"]');
var prevElem = $('li[data-number="' + order[i - 1] + '"]');
curElem.insertAfter(prevElem);
}
A: This is going to be a bit rough since I'm not sure of your overall process but I'll give you the gist and you can implement it.
If the key is predefined...
var sort_order = ['2', '3', '1', '5', '4']
$.each(sort_order function(key,value){
// build you list here you can use the value from your sort_order
// to match the data-number and print them out in the order of your choosing.
// this will work if you dynamically create the sort_order as well.
});
The $.each function will work through the sort_order array allowing you to define an order.
A: A little late, but I'd already written this so it seems a shame to waste it:
var newOrder = ['2', '3', '1', '5', '4'],
$ul = $("ul"),
$lis = $ul.children().remove();
$.each(newOrder, function(key,val){
$ul.append($lis.filter('[data-number="' + val+ '"]'));
});
Demo: http://jsfiddle.net/zgD3n/ | unknown | |
d17085 | val | lambda x: f + g
This is a function that takes in x and returns the sum of two values that do not depend on x. Whatever values f and g were before they stay that value.
lambda x: x + x + 1
This is a function that returns the input value x as x+x+1. This function will depend on the input.
In python, unlike mathematics, when you evaluate the series of commands
a = 1
b = a
a = 2
the value of b is 1. | unknown | |
d17086 | val | Your second line works and gives the "expected" result. The first fails because the result of length(date) is a vector of length 2 rather than a single value. Since you want a result for each row of your data.frame, you should use transform rather than summarise:
ddply(x, .(date), transform, freq=length(date), calc=(value1 * value2) / sum(value1))
y date value1 value2 freq calc
1 a 2013-01-01 8.0886946 -4.498656 2 -2.376917
2 b 2013-01-01 7.2203152 1.222322 2 0.576494
3 a 2013-01-02 7.9971361 -5.675020 2 -1.757606
4 b 2013-01-02 17.8242945 26.489059 2 18.285152
5 a 2013-01-03 3.0401349 10.495623 2 1.283746
6 b 2013-01-03 21.8153403 14.648083 2 12.856439
7 a 2013-01-04 14.4831518 -2.812941 2 -2.685447
8 b 2013-01-04 0.6875999 27.397730 2 1.241776
9 a 2013-01-05 6.2625381 19.979980 2 8.386698
10 b 2013-01-05 8.6569681 11.385124 2 6.606161
A: Using data.table
library(data.table)
x<-data.table(x)
x[,list(freq=length(date),cal=(value1*value2)/sum(value1)),keyby="date"]
date freq cal
1: 2013-01-01 1 -3.94483543
2: 2013-01-01 1 10.83779796
3: 2013-01-02 1 2.33439622
4: 2013-01-02 1 10.62941740
5: 2013-01-03 1 2.97776304
6: 2013-01-03 1 0.06035661
7: 2013-01-04 1 1.59372587
8: 2013-01-04 1 7.17029644
9: 2013-01-05 1 -0.64156778
10: 2013-01-05 1 -1.23650898 | unknown | |
d17087 | val | Change this
for(int i=0; i < index; i++)
{
cout << array[index] << endl;
}
To
for(int i=0; i < index; i++)
{
cout << array[i] << endl;
}
You used index in the second loop, causing your program to print array[index] on every iteration instead of each element the user entered.
Also, if -1 is your condition you should change it to
} while(input>=0);
^^
Otherwise 0 will also stop the loop, which is not what you are asking for. | unknown | |
d17088 | val | You can use the Ext.Date class to get a 12-hour format.
Here are some of the relevant format codes:
*g 12-hour format of an hour without leading zeros 1 to 12
*i Minutes, with leading zeros 00 to 59
*a Lowercase Ante meridiem and Post meridiem am or pm
*A Uppercase Ante meridiem and Post meridiem AM or PM
I have created a Sencha Fiddle demo that shows how it works. Hope this will help you solve your problem.
Ext.create('Ext.form.Panel', {
title: 'Time Card',
width: 300,
bodyPadding: 10,
renderTo: Ext.getBody(),
defaults: {
xtype: 'timefield',
minValue: '6:00 AM',
maxValue: '8:00 PM',
increment: 30,
anchor: '100%',
listeners: {
change: function (timefield, newValue) {
Ext.Msg.alert('Succes', 'Good you have selected <b>' + Ext.Date.format(newValue, 'g:i A') + '</b>');
}
}
},
items: [{
name: 'in',
fieldLabel: 'Time In'
}, {
name: 'out',
fieldLabel: 'Time Out'
}, {
xtype: 'button',
text: 'Get current GMT+14 Time',
handler: function () {
var date = new Date(),
dateGmtPlus14 = new Date(date.valueOf() + date.getTimezoneOffset() * 60000); //Multiply this by 60,000 (milliseconds in a minute) to get the milliseconds and subtract from the date to create a new date
Ext.Msg.alert('Succes', 'Current time with GMT+14 is <br> 12 hours format : <b>' + Ext.Date.format(dateGmtPlus14, 'g:i:s A') +
'</b><br>24 hours format: <b> ' + Ext.Date.format(dateGmtPlus14, 'G:i:s') + '</b>');
}
}]
});
You can use Date.getTimezoneOffset() (documented on MDN). This returns the time difference between your local time and GMT in minutes.
Multiply this by 60,000 (milliseconds in a minute) to get the offset in milliseconds and apply it to the date to create a new date.
new Date(date.valueOf() + date.getTimezoneOffset() * 60000) | unknown | |
d17089 | val | After searching through Google and StackOverflow for hours, I finally came up with a solution to the problem on my own.
Running the type command within terminal against node, I got this returned:
:~ myusername$ type node
node is /Users/myusername/.nvm/v0.10.48/bin/node
Subsequently, after deleting that folder, Node appears to be completely removed from my system.
I have since made sure that I have deleted all node and node_module folders that I could find within /usr/ to make sure - and I would suggest that anyone attempting this also do the same.
A: Try to run the following command
brew uninstall node
After the above command, you need to scan manually for any remaining node_modules directories. Try the following.
grep -irl "node_modules/node"
sudo rm -rf result_from_above_command
rm -rf ~/.npm
I hope this will remove Node and all of its components; it worked for me when I did this once before.
Thank you. | unknown | |
d17090 | val | I am not absolutely sure, but I think the problem lies in the AddWithValue.
While convenient, this method doesn't let you specify the exact datatype to pass to the database engine, nor the size of the parameter. It always passes an nvarchar parameter for a .NET Unicode string.
I think you should try with the standard syntax used to build a parameter
Dim p = new SqlParameter("@code", SqlDbType.Char, 20)
p.Value = thisstring.PadLeft(20, " "c)
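Then attach the parameter to the command as usual (the table, column and connection names below are only placeholders):
Using cmd As New SqlCommand("SELECT * FROM MyTable WHERE Code = @code", connection)
    cmd.Parameters.Add(p)
    ' execute the command, e.g. cmd.ExecuteReader()
End Using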
See this very interesting article on MSDN | unknown | |
d17091 | val | The browser tab should close automatically when the auth succeeds and Azure AD B2C calls back to the app. It's possible that you have misconfigured the app or there is a bug in the specific browser you're using (we've seen this before on smaller browsers, so the data could help).
With respect to Azure AD B2C, I'd highly discourage using WebViews as Google and other identity providers explicitly disable WebView support.
I'd recommend enabling logging and sharing the logs with me, and filing an issue on the library if needed (https://github.com/AzureAD/microsoft-authentication-library-for-android/wiki). | unknown |
d17092 | val | I was able to update the parent window's URL in the address bar with history.pushState by sending the new URL from the child iframe to the parent using postMessage, and listening for that event in the parent window.
When the parent receives the child iframe's postMessage event, it updates the URL with pushState using the URL passed in that message.
Child Iframe
<script>
// Detect if this page is loaded inside an Iframe window
function inIframe() {
try {
return window.self !== window.top;
} catch (e) {
return true;
}
}
// Detect if the CTRL key is pressed to be used when CTRL+Clicking a link
$(document).keydown(function(event){
if(event.which=="17")
cntrlIsPressed = true;
});
$(document).keyup(function(){
cntrlIsPressed = false;
});
var cntrlIsPressed = false;
// check if page is loaded inside an Iframe?
if(inIframe()){
// is the CTRL key pressed?
if(cntrlIsPressed){
// CTRL key is pressed, so link will open in a new tab/window so no need to append the URL of the link
}else{
// click even on links that are clicked without the CTRL key pressed
$('a').on('click', function() {
// is this link local on the same domain as this page is?
if( window.location.hostname === this.hostname ) {
// new URL with ?sidebar=no appended to the URL of local links that are clicked on inside of an iframe
var linkUrl = $(this).attr('href');
var noSidebarUrl = $(this).attr('href')+'?sidebar=no';
// send URL to parent window
parent.window.postMessage('message-for-parent=' +linkUrl , '*');
alert('load URL with no sidebar: '+noSidebarUrl+' and update URL in parent window to: '+linkUrl);
// load Iframe with clicked on URL content
//document.location.href = url;
//return false;
}
});
}
}
</script>
Parent window
<script>
// parent_on_message(e) will handle the reception of postMessages (a.k.a. cross-document messaging or XDM).
function parent_on_message(e) {
// You really should check origin for security reasons
// https://developer.mozilla.org/en-US/docs/DOM/window.postMessage#Security_concerns
//if (e.origin.search(/^http[s]?:\/\/.*\.localhost/) != -1
// && !($.browser.msie && $.browser.version <= 7)) {
var returned_pair = e.data.split('=');
if (returned_pair.length != 2){
return;
}
if (returned_pair[0] === 'message-for-parent') {
alert(returned_pair[1]);
window.history.pushState('obj', 'newtitle', returned_pair[1]);
}else{
console.log("Parent received invalid message");
}
//}
}
jQuery(document).ready(function($) {
// Setup XDM listener (except for IE < 8)
if (!($.browser && $.browser.msie && $.browser.version <= 7)) {
// Connect the parent_on_message(e) handler function to the receive postMessage event
if (window.addEventListener){
window.addEventListener("message", parent_on_message, false);
}else{
window.attachEvent("onmessage", parent_on_message);
}
}
});
</script>
A: Another solution using Window.postMessage().
Iframe:
<a href="/test">/test</a>
<a href="/test2">/test2</a>
<script>
Array.from(document.querySelectorAll('a')).forEach(el => {
el.addEventListener('click', event => {
event.preventDefault();
window.parent.postMessage(el.href, '*');
});
});
</script>
Main page:
Current URL: <div id="current-url"></div>
<iframe src="iframe-url"></iframe>
<script>
const $currentUrl = document.querySelector('#current-url');
$currentUrl.textContent = location.href;
window.addEventListener('message', event => {
history.pushState(null, null, event.data);
$currentUrl.textContent = event.data;
});
</script>
See demo on JS Fiddle. | unknown | |
d17093 | val | When you add an object to an array, the array just keeps a reference to that object (a pointer). It doesn't create a copy of that object.
So, in the code example above, you're always dealing with the same instance of Tutorial:
*First, you create a new Tutorial with alloc and init, and store a reference to it with the tutorial pointer.
*Then you add it to your array. Your array retains the object, which means it keeps a reference to it.
*Then you set the title and url properties of your existing object.
*Then you grab another reference to the same object, and call it objNew. You get this reference by asking the array for a pointer to its first object.
*You then print the properties of the object.
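A quick way to convince yourself of this (using the variable names from the question) is to print both pointers; they show the same address:
NSLog(@"tutorial: %p, objNew: %p", tutorial, objNew); // prints the same address twice: one object, two names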
A: When the following line of code is executed,
[newTutorials addObject:tutorial];
What it does is add the address (reference) of the tutorial object you created in the previous line to the array newTutorials.
Your confusion is this: "You haven't set the values for the tutorial object inside the array, so why does it display mytitle and myurl when you NSLog the properties of tutorial?" The answer is simple: "You have not stored the tutorial object itself inside the array; you have stored a reference to the tutorial object."
Since you've stored that reference and then done the following to the tutorial object:
tutorial.title = @"mytitle";
tutorial.url = @"myurl";
When you try to print the properties of the reference stored in the array, it prints mytitle and myurl because that is what you've assigned to the actual object's properties. | unknown | |
d17094 | val | First solution: you can add one more option tag for the default value
Html Code
<div ng-app="MyApp">
<div ng-controller="MyCtrl">
<select ng-options="opt as opt for opt in testOpt"
data-ng-model="resultOpt"
data-ng-change="checkResultOpt(resultOpt)">
<option value=''>Choose Category </option>
</select>
</div>
</div>
Controller Code
var MyApp = angular.module('MyApp', [])
.controller('MyCtrl', ['$scope', function ($scope) {
$scope.testOpt = [
'ID',
'Name',
'Email',
'Address'
];
$scope.resultOpt = '';
}]);
Working code
Second solution: just add one more item to your array after you get it from the http call
Html Code
<div ng-app="MyApp">
<div ng-controller="MyCtrl">
<select ng-options="opt as opt for opt in testOpt"
data-ng-model="resultOpt"
data-ng-change="checkResultOpt(resultOpt)">
</select>
</div>
</div>
Controller Code :
var MyApp = angular.module('MyApp', [])
.controller('MyCtrl', ['$scope', function ($scope) {
$scope.testOpt = [
'ID',
'Name',
'Email',
'Address'
];
$scope.testOpt.splice(0, 0, 'Choose Category');
$scope.resultOpt = 'Choose Category';
}]);
Working Code
hope this will help you | unknown | |
d17095 | val | gzip is basically a header + deflate + a checksum.
Gatling will retain the original Content-Encoding response header, so you can check whether the payload was gzipped, and then trust the gzip codec to do the checksum verification and throw an error if the payload was malformed.
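For example, a minimal response check could look like this (a sketch against the Gatling Scala DSL, placed inside your scenario; the request name and path are placeholders):
exec(
  http("get resource")
    .get("/some/resource")
    .check(header("Content-Encoding").is("gzip"))
)
If the compressed body itself were corrupted in transit, decompression would already fail the request before any checks run. | unknown |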
d17096 | val | In the location manager's didUpdateLocations, set manager.delegate to nil after you call manager.stopUpdatingLocation(). Let me know if you want me to show you how it looks in my code.
func findLocation() {
locationManager = CLLocationManager()
locationManager.delegate = self
locationManager.desiredAccuracy = kCLLocationAccuracyBest
locationManager.requestAlwaysAuthorization()
if CLLocationManager.locationServicesEnabled() {
locationManager.startUpdatingLocation()
}
}
func locationManager(_ manager: CLLocationManager, didUpdateLocations locations: [CLLocation]) {
let userLocation = locations[0] as CLLocation
manager.stopUpdatingLocation()
manager.delegate = nil
let loc = CLLocation(latitude: userLocation.coordinate.latitude, longitude: userLocation.coordinate.longitude)
getLocation(location: loc)
} | unknown | |
d17097 | val | I suppose that cable has the same functionality as a connector for a projector or second display, right?
If that is the case, then the answer is: IT IS POSSIBLE.
But everything you want to show on the second display has to be drawn explicitly by you. There is no mirroring system or anything like that.
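For example, a rough sketch of putting your own content on the external screen (shown in Swift; MyExternalViewController and the externalWindow property are placeholders for your own code):
if UIScreen.screens.count > 1 {
    let externalScreen = UIScreen.screens[1]
    let window = UIWindow(frame: externalScreen.bounds)
    window.screen = externalScreen                          // bind the window to the external display
    window.rootViewController = MyExternalViewController()  // your own content, not a mirror
    window.isHidden = false
    self.externalWindow = window                            // keep a strong reference or it is deallocated
}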
Read here, there is a sample app also :)
http://mattgemmell.com/2010/06/01/ipad-vga-output | unknown | |
d17098 | val | Starting with .NET Core 3, EF Core will throw an exception if a LINQ query cannot be translated to SQL and would result in client-side evaluation. In earlier versions you would just receive a warning. You will need to rework your LINQ query so that it can be translated and run on the server.
Refer to this link for details,
https://learn.microsoft.com/en-us/ef/core/querying/client-eval
You are using reflection to build the query; why not use the property directly? If your boolean can be NULL, declare the type as bool? IsActive. That way you can check for null in the where,
this._context.Set<TModel>().Where(r => (r.IsActive==true))
In case you are trying to create a repository, try declaring an interface that exposes IsActive as a property.
interface ISoftDeleteTarget
{
bool IsActive { get; }
}
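A sketch of how that could plug into the generic repository (class and method names are only illustrative, and it assumes your entity classes implement the interface):
public class Repository<TModel> where TModel : class, ISoftDeleteTarget
{
    private readonly DbContext _context;

    public Repository(DbContext context) => _context = context;

    public IQueryable<TModel> GetActive() =>
        _context.Set<TModel>().Where(r => r.IsActive); // stays translatable to SQL
}
If the provider still cannot translate the interface member, EF.Property<bool>(r, "IsActive") is another way to reference the mapped column. | unknown |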
d17099 | val | First, creating these types of SQL objects should use begin..end blocks. Second, you can leave out the else statement entirely.
CREATE TRIGGER invalidScore ON dbo.dbo_score
AFTER INSERT
AS
BEGIN
DECLARE @score DECIMAL;
SET @score = (SELECT s.score FROM Inserted s);
IF(@score > 10)
BEGIN
ROLLBACK TRAN;
THROW 50001, 'score must be less than 10', 1;
END
END
A: There are 3 things you need to change for this trigger to work:
*Remove the else section - it's optional.
*Handle the fact that Inserted may have multiple rows.
*Throw the error rather than using the return statement so you can handle it in the client. And throw it after rolling back the transaction in progress.
Corrected trigger follows:
create trigger invalidScore on dbo.dbo_score
after insert
as
begin
if exists (select 1 from Inserted S where S.Score > 10) begin
rollback tran;
throw 51000, 'score must be less than 10', 1;
end
end
A: 'Else' is an optional section, so you can simply remove it. But I would suggest you consider using a check constraint for scenarios like this rather than adding a trigger check on the score column,
e.g.
CREATE TABLE dbo.dbo_score(
Score int CHECK (score < 10)
);
A CHECK constraint is faster, simpler, more portable, needs less code and is less error prone | unknown | |
d17100 | val | In MainMenuScreen you have to draw things in the render() method, not in the show() method: show() runs only once when the screen becomes current, while render() is called every frame. Like this:
@Override
public void render(float delta) {
    stage.draw();
    // ... possibly more drawing code
}
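A slightly fuller sketch of a typical Screen render method (assuming the screen owns a Stage field called stage):
@Override
public void render(float delta) {
    Gdx.gl.glClearColor(0, 0, 0, 1);          // clear to black each frame
    Gdx.gl.glClear(GL20.GL_COLOR_BUFFER_BIT); // wipe the previous frame
    stage.act(delta);                         // update actors and actions
    stage.draw();                             // then draw the stage
}
show() is still the right place to build the stage and call Gdx.input.setInputProcessor(stage); render() just updates and redraws it every frame. | unknown |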