_id | partition | text | language | title
---|---|---|---|---|
d6001 | train | One way to do this is to use the tapply() function:
df <- data.frame(City=c( "city1","city2","city3","city1","city2","city5"),
County = c("a", "b","a","b","a","b"))
df$City[which(tapply(df$County, df$City, length) > 1, useNames=FALSE)]
This will create the following output:
> df$City[which(tapply(df$County, df$City, length) > 1, useNames=FALSE)]
[1] city1 city2
Levels: city1 city2 city3 city5
As you can see from the above, it will list which cities appear in more than one county.
A: We can use tidyverse
library(dplyr)
df %>%
group_by(City) %>%
filter(n_distinct(County) > 1) %>%
distinct(City) %>%
pull(City)
#[1] city1 city2
#Levels: city1 city2 city3 city5 | unknown | |
d6002 | train | Solved. The problem was caused by an old version of net-ldap (0.1.1). I updated this gem to the latest version (0.3.1) and now it works like a charm. | unknown | |
d6003 | train | Solution:
My machine had two Flutter SDKs installed. I uninstalled one, and everything worked again. | unknown | |
d6004 | train | Is there another process binding to TCP/11211?
Perhaps you tried to start the memcached service as a non-privileged user and it failed with:
$ service memcached start
Starting memcached: [ OK ]
touch: cannot touch ‘/var/lock/subsys/memcached’: Permission denied
After that, service memcached status will falsely report that memcached is not running:
$ service memcached status
memcached dead but subsys locked
But it is, and it is binding to port 11211, in order to check for this you can use:
$ fuser -n tcp 11211
11211/tcp: 4439
Or:
$ pgrep -l memcached
4439 memcached
Memcached will fail to start because it cannot bind to 11211, as the running instance is already bound to it. Unfortunately, on some systems (I'm looking at you, CentOS) it may not leave any useful hint in /var/log/messages or /var/log/syslog. That is why many of the previous answers to this question, which fiddle with the binding address, will look like they solved the problem.
How do you fix it?
Since service memcached stop will not work, you have to kill it:
$ pkill memcached
Or this (where 4439 is the pid you found in the previous step):
$ kill 4439
Then you can do it right, using sudo:
$ sudo service memcached start
Starting memcached: [ OK ]
$ service memcached status
memcached (pid 6643) is running...
A: Solved this problem by typing the following commands in terminal:
1) su (becoming root).
2) killall -9 memcached (killing memcached).
3) /etc/init.d/memcached start (starting memcached by hand).
Alternatively: 3) service memcached start.
A: Check /etc/sysconfig/memcached
and make sure the OPTIONS="-l 127.0.0.1" line is correct.
A: Remove -l from OPTIONS.
e.g., instead of
OPTIONS="-l 2.2.2.2"
try using
OPTIONS="2.2.2.2"
This worked for me.
A: To resolve this problem, run the following script as root
rm /var/run/memcached/memcached.pid
rm /var/lock/subsys/memcached
service memcached start
A: Removing and reinstalling memcached is what worked for me:
[acool@acool super-confidential-dir]$ sudo yum remove memcached
...
[acool@acool super-confidential-dir]$ sudo yum install memcached
After the above commands and starting it I got:
[acool@acool super-confidential-dir]$ sudo service memcached status
memcached dead but pid file exists
At that point I killed it and removed the pid file:
[acool@acool super-confidential-dir]$ sudo killall -s 9 memcached
...
[acool@acool super-confidential-dir]$ sudo rm /var/run/memcached/memcached.pid
And finally started it and checked its status:
[acool@acool super-confidential-dir]$ sudo service memcached start
...
[acool@acool super-confidential-dir]$ sudo service memcached status
memcached (pid 13804) is running...
And then I was happy again.
Good luck.
A: In my case I wanted to use memcache through the socket with
OPTIONS="-t 8 -s /run/memcached/memcached.sock -a 0777 -U 0"
copied from another OS, and get the same problem.
Then I realized that I had just forgotten that in my OS /run/ doesn't exist. That's it. Just check your path, hah | unknown | |
d6005 | train | Yes you are correct. If you use any private Apple API in your app, it will be rejected.
Note that a third party code library is not the same thing as a private API (such as libraries you download from github, sourceForge, etc.) As long as the library(s) you use follow the rules, that is fine.
I'm not familiar with CTMessageCenter, so I don't know which it is. | unknown | |
d6006 | train | Ok - for this question I got the "Tumbleweed Badge" - LOL
I admit it's a very special one and makes obvious how much of a beginner I am here. So I asked another question based on the same problem and spent some effort to make the code easier to understand and to reproduce. This paid off! Look here for the solution if you ever stumble into that kind of problem: Is it a bug in Kivy? DropDown + ScreenManager not working as expected | unknown | |
d6007 | train | Your first error probably comes from a typo somewhere.
firebase.auth(...).signInWithLoginAndPassword is not a function
Notice it says signInWithLoginAndPassword, the function is called signInWithEmailAndPassword. In the posted code it's used correctly, so it's probably somewhere else.
firebase.auth(...).GoogleAuthProviders is not a constructor
You have not posted the code where you use this, but I assume this error happens when you create your provider variable, that you use in firebase.auth().signInWithPopup(provider)
That line should be var provider = new firebase.auth.GoogleAuthProvider();
Based on the error message, I think you might be doing new firebase.auth().GoogleAuthProvider(); Omit the brackets after auth, if that's the case.
A: There is no way to sign your node.js app into firebase with email+password or one of the social providers.
Server-side processes instead sign into Firebase using so-called service accounts. The crucial difference is in the way you initialize the app:
var admin = require('firebase-admin');
admin.initializeApp({
serviceAccount: "path/to/serviceAccountCredentials.json",
databaseURL: "https://databaseName.firebaseio.com"
});
See this page of the Firebase documentation for details on setting up a server-side process.
A: Do not call GoogleAuthProvider via an Auth() function.
According to the documentation you have to create an instance of GoogleAuthProvider.
let provider = new firebase.auth.GoogleAuthProvider()
Please check the following link https://firebase.google.com/docs/auth/web/google-signin | unknown | |
d6008 | train | It is possible. You do not even need more than one step. Map-Reduce can be implemented in a single step. You can create a step with an ItemReader and ItemWriter associated with it. Think of the ItemReader/ItemWriter pair as Map/Reduce. You can achieve the necessary effect by using a custom reader and writer with proper line aggregation. It might be a good idea for your reader/writer to implement the Stream interface to guarantee the intermediate StepContext save operation by Spring Batch.
I tried it just for fun, but I think it is pointless, since your working capacity is limited by a single JVM. In other words: you could not reach the production performance of a Hadoop cluster (or another real map-reduce implementation), and it will be really hard to scale as your data size grows.
Nice observation, but IMO currently useless for real-world tasks.
A: I feel that a batch processing framework should separate programming/configuration from run-time concerns. It would be nice if Spring Batch provided a generic solution over all major batch processing run-times like the JVM, Hadoop cluster (which also uses the JVM), etc.
-> Write batch programs using Spring batch programming/Configuration model that integrates other programming models like map-reduce ,traditional java etc.
-> Select the run-times based on your need (single JVM or Hadoop Cluster or NoSQL).
Spring Data attempts to solve a part of it, providing a unified configuration model and API usage for various types of data sources. | unknown | |
d6009 | train | Assuming you're asking whether the memory managed by a valarray is guaranteed to be contiguous, then the answer is yes, at least if the object isn't const (C++03, §26.3.2.3/3 or C++11, §26.6.2.4/2):
The expression &a[i+j] == &a[i] + j evaluates as true for all size_t i and size_t j such
that i+j is less than the length of the non-constant array a. | unknown | |
d6010 | train | DLLs and code using DLLs which are linked against different versions of the runtime library (and possibly other libraries) are in danger of breaking, if one or more of the following happens:
*
*Interface to DLL uses classes/structures, where the size might differ depending on version of runtime library. (One unfortunate member might be enough).
*Heap objects are not being allocated/freed at the same side of the interface.
*Sometimes people play with compiler settings regarding alignment. Maybe different versions of compilers also change their "policy".
*Look out for #pragma pack(...), __declspec(align(...)) etc.
*In older Versions of VS there were "single threaded" and "multi-threaded" versions of run time libraries. Mixing DLLs and Applications not linked against the same kind of library could cause trouble.
*#include <seeminglyharmlessheader.h> in header files, seen by both application and DLL can hold some nasty surprises in stock.
*Maybe an (unnoticed) change in other compiler/linker settings, e.g. regarding exception handling.
I could not fully follow the 32-bit/64-bit things you said in the question. I don't think mixing both works, but I also think you are not trying that. | unknown | |
d6011 | train | Some of the naming conventions used in your example are a little weird. If all you want to do is sort, I would consider changing the "name" parameter to something more descriptive. Keeping with your example, the following might help.
class V1::DataController < V1::ApplicationController
...
def index
#ASC
if data_params[:order].eql?('asc')
@data = Data.asc(data_params[:name])
else
#DESC
@data = Data.desc(data_params[:name])
end
render 'v1/data/index'
end
def data_params
params.permit(:name, :order)
end
end | unknown | |
d6012 | train | As you tagged your question with "perl", you can use Perl. Try LWP, a popular module that is much more convenient than curl. For more complex tasks, try WWW::Mechanize.
A: You might want to look at the HTTP Extension. It looks like a pretty complete abstraction implemented over curl.
A: If you want something more sophisticated, there is the Zend_Http_Client ZF Component.
A: You can use Streams to POST also. I generally prefer that over an extension or even cURL since it's core PHP and more likely to be there.
Though, I build open source software that gets installed on machines where the user has low privileges and likely zero control. Your users' options may be different. | unknown | |
d6013 | train | npm install react-native-safe-area-context
Try this and build.
If not,
Please check your computer's environment variables and set JAVA_HOME if it has not already been set up. | unknown | |
d6014 | train | Keep in mind, DynamoDB has a 400KB limit on each item.
I would recommend using S3 for images and PDF documents. It also allows you to set up a CDN much more easily than something like DynamoDB.
You can always link your S3 link to an item in DynamoDB if you need to store data related to the file.
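To make the size cutoff concrete, here is a minimal Python sketch of routing payloads based on DynamoDB's 400 KB item limit. The helper name and the 1 KB overhead margin are assumptions, and the boto3 calls are left as comments so the snippet stays self-contained:

```python
# Hypothetical helper: route large payloads to S3, small ones inline to DynamoDB.
# 400 KB is DynamoDB's per-item size limit (attribute names + values).

DYNAMODB_ITEM_LIMIT_BYTES = 400 * 1024

def should_store_in_s3(payload: bytes, overhead_bytes: int = 1024) -> bool:
    """Return True if the payload (plus a safety margin for attribute
    names and other item attributes) would exceed the DynamoDB item limit."""
    return len(payload) + overhead_bytes > DYNAMODB_ITEM_LIMIT_BYTES

# Sketch of the write path (boto3 calls shown only as comments):
#
# if should_store_in_s3(data):
#     s3.put_object(Bucket=bucket, Key=key, Body=data)          # blob -> S3
#     table.put_item(Item={"id": item_id, "s3_key": key})       # link -> DynamoDB
# else:
#     table.put_item(Item={"id": item_id, "blob": data})        # small blob inline
```

Since the item size counts attribute names as well as values, leaving some headroom below the hard limit is a sensible default.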
A: AWS DynamoDB has a limit on the row size to be max of 400KB. So, it is not advisable to store the binary content of image/PDF document in a column directly. Instead, you should store the image/PDF in S3 and have the link stored in a column in DynamoDB.
If you were using Java, you could leverage the S3Link abstraction, which takes care of storing the content in S3 and maintaining the link in a DynamoDB column. | unknown | |
d6015 | train | Triggering a click outside the element should do the trick.
$(document).click();
There is no official way to 'close' the Bootstrap dropdown; however, the above is a workaround. It basically triggers a fake click outside the dropdown, therefore closing it.
If you want it to close every time user starts scrolling, then:
$(window).on('scroll',function(){
$(document).click();
});
A:
var prev = 0;
var $window = $(window);
var nav = $('.nav');
$window.on('scroll', function(){
var scrollTop = $window.scrollTop();
nav.toggleClass('hidden', scrollTop > prev);
prev = scrollTop;
});
@import bourbon
body
font-family: helvetica neue, helvetica, arial, sans-serif
height: 8000px
.nav
background: #111
text-align: center
color: #fff
+padding(20px null)
+position(fixed, null 0px 0px 0px)
+transition(transform 1s $ease-in-out-quint)
.nav.hidden
+transform(translateY(100%))
p
margin: 0
+padding(20px)
<div class="nav">Scroll to show/hide this bar!</div>
<p>They see me scrolling...</p>
A: <!DOCTYPE html>
<html>
<head>
<link href="https://netdna.bootstrapcdn.com/bootstrap/3.3.2/css/bootstrap.min.css" rel="stylesheet"/>
<script src="https://ajax.googleapis.com/ajax/libs/jquery/2.1.3/jquery.min.js"></script>
<script src="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.2/js/bootstrap.min.js"></script>
<script src="https://cdnjs.cloudflare.com/ajax/libs/bootstrap-multiselect/0.9.15/js/bootstrap-multiselect.min.js"></script>
<link href="https://cdnjs.cloudflare.com/ajax/libs/bootstrap-multiselect/0.9.15/css/bootstrap-multiselect.css"
rel="stylesheet"/>
</head>
<body>
<div class="container">
<div class="example">
<script type="text/javascript">
$(document).ready(function () {
$('#multi-select-demo').multiselect();
});
window.addEventListener("wheel", myScript);
function myScript() {
$("#multi-select-demo").dropdown('toggle');
}
</script>
<select id="multi-select-demo" multiple="multiple">
<option value="jQuery">jQuery tutorial</option>
<option value="Bootstrap">Bootstrap Tips</option>
<option value="HTML">HTML</option>
<option value="CSS">CSS tricks</option>
<option value="angular">Angular JS</option>
</select>
</div>
</div>
</body>
</html> | unknown | |
d6016 | train | include("../includes/db.php");
$result = $link->query("SELECT * FROM users");
echo $result->num_rows;
My bad for the previous answer. It's been a while since I've used PHP | unknown | |
d6017 | train | Not directly, but you can do the following:
In your install4j project add a launcher (with arbitrary configuration) and open the project file in a text editor. Locate the launcher element and swap it out with the contents of your exe4j file. | unknown | |
d6018 | train | Is there any chance you meant to bind the field name to some file variable? I believe you expected [saveField]="file" to set the field name to the string 'file', but instead it searches for some this.file variable, which is undefined, so the field name was set to the default 'files' value.
A: I followed @GProst's suggestion and analysed a bit; the fix below worked, though I don't fully understand the cause yet.
As per the Kendo UI angular documentation,
Sets the FormData key which contains the files submitted to saveUrl.
The default value is files.
So, I just changed the parameter name from file to files and it worked.
const upload = multer({ storage });
router.post('/upload', upload.single('files'), fileUpload); | unknown | |
d6019 | train | Getting the gradient from a random image is going to be tough for any image processor. You would probably have better luck by getting the pixel data for the image, finding the top and bottom pixel, pulling the color data out of them, then using these values to create your gradient.
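As a rough illustration of that idea, here is a hypothetical Python sketch (pure Python; a real version would read the pixels with an imaging library such as Pillow, and the helper names are made up) that samples the top and bottom pixel rows and emits a CSS gradient string:

```python
# Hypothetical sketch: given image pixel data as rows of (r, g, b) tuples,
# sample the top and bottom rows and build a CSS linear-gradient string.

def average_color(row):
    """Average a row of (r, g, b) tuples into one (r, g, b) color."""
    n = len(row)
    return tuple(sum(px[i] for px in row) // n for i in range(3))

def gradient_from_pixels(rows):
    """Build a two-stop CSS gradient from the first and last pixel rows."""
    top = average_color(rows[0])
    bottom = average_color(rows[-1])
    return "linear-gradient(rgb(%d,%d,%d), rgb(%d,%d,%d))" % (top + bottom)
```

Averaging whole rows (rather than picking single pixels) smooths out noise in the source image.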
A: Get started with this tool
http://gradients.glrzad.com/ and then read about radial gradients here to make it radial http://hacks.mozilla.org/2009/11/css-gradients-firefox-36/
You can get this far with that tool:
-webkit-gradient(
linear,
left bottom,
left top,
color-stop(0, rgb(68,214,254)),
color-stop(0.49, rgb(53,150,230)),
color-stop(0.51, rgb(123,180,236)),
color-stop(1, rgb(255,255,255))
)
-moz-linear-gradient(
center bottom,
rgb(68,214,254) 0%,
rgb(53,150,230) 49%,
rgb(123,180,236) 51%,
rgb(255,255,255) 100%
)
Then make it radial, mozilla only here
-moz-radial-gradient(right bottom,
ellipse,
#44D6FE 0%,
#3596E6 49%,
#7BB4EC 51%,
#FFFFFF 100%) repeat scroll 0 0 transparent
and here is an image of what it looks like
A: Your example is an overlap of two radial gradients: a white-to-mid-blue radial gradient at mid-opacity, centered above the button, on top of a simple blue-to-white radial gradient centered at the bottom right. | unknown | |
d6020 | train | I don't really understand what you want, but isn't diff -ur enough for you? It will work even on directories without any kind of version control.
A: git diff does exactly that, but it only works for git projects.
hg diff, svn diff: pretty much every version control system can diff directory trees.
A: From git diff manpage:
git diff [--options] [--] [<path>...]
[...]
If exactly two paths are given, and at least one is untracked, compare the two files / directories. This behavior can be forced by --no-index.
If you want to compare two versions (e.g. two tagged releases, or two branches) in two different repositories, you can use trick described in GitTips page on Git Wiki, under "How to compare two local repositories".
Assuming that you are inside one repository, and the second repository is in /path/to/repo (so its GIT_DIR is /path/to/repo/.git if it is a non-bare repository), you can use something like the following:
$ GIT_ALTERNATE_OBJECT_DIRECTORIES=/path/to/repo/.git/objects \
git diff $(git --git-dir=/path/to/repo/.git rev-parse --verify A) B
where A and B are revisions you want to compare. Of course you can also specify path limiter in above expression.
Explanation: the GIT_ALTERNATE_OBJECT_DIRECTORIES variable can be used to make git commands concatenate the object databases of the two repositories. git --git-dir=... rev-parse ... is used to turn a name (extended SHA-1 expression) in the repository given as a parameter to the --git-dir option into a unique SHA-1 identifier. The $( ... ) construct puts the result of calling the given command in the command line. git diff is used to compare two revisions (where one is from the alternate object repository).
An alternate solution would be to simply import the other repository into the given repository, using git remote add (and git fetch). Then you would have everything locally, and would be able to do the comparison inside a single repository.
A:
Is there some sort of utility for hg/git where I can do a tree diff... [s]o that say, there's a mark between newly added files, deleted files... [emphasis added]
Yes. We can git diff the current directory against another directory and...
...mark the added, deleted, and modified files:
git diff --name-status --no-index ./ path/to/other/dir
...show only added files:
git diff --diff-filter=A --name-status --no-index ./ path/to/other/dir
... show only deleted files:
git diff --diff-filter=D --name-status --no-index ./ path/to/other/dir
...show only modified files:
git diff --diff-filter=M --name-status --no-index ./ path/to/other/dir
See also: https://git-scm.com/docs/git-diff
A: To simply create a diff patch in git's diff format from two arbitrary files or directories, without any fancy repository stuff or version control:
git diff --no-index some/path other/path > some_filename
Jakub Narębski's comment on knittl's answer hinted at the answer... For simplicity's sake, that's the full command.
The > part creates a file and redirects the output to it. If you don't want a file and just want the output printed in your console so you can copy it, just remove the > some_filename part.
For convenient copying and pasting, if you've already cded to a directory containing the original directory/file named a and the modified directory b, it'll be:
git diff --no-index a b > patch
A: It's not git-specific, but as a general diff utility for Windows, I really like WinMerge.
A: There are the following other ways of diffing two entire directories/projects.
*
*In Git there is the syntax:
Syntax: git diff [<options>] [--] [<path>…]
This form is to view the changes you made relative to the index (staging area for the next commit). In other words, the differences are what you could tell Git to further add to the index but you still haven't.
Here is the URL: https://git-scm.com/docs/git-diff
git diff --no-index <dir1 path> <dir2 path> >> <output file>
*Using Linux Command diff --brief --recursive dir1path/ dir2Path/
*If you are using windows there is an application WinMerge.
A: Creating the patch from dir1 to dir2 (--binary only needed for binaries):
git diff --no-prefix --no-index --binary dir1 dir2 > dir.diff
Applying the patch to contents of working directory, which has the same contents as dir1:
cd dir1_copy
git apply ../dir.diff
git apply has default -p1 which strips the leading directories in diff commands. | unknown | |
d6021 | train | s <- c(1,2,3)
result = matrix(0, nrow = max(s), ncol = length(s))
for (i in seq_along(s)) result[1:s[i], i] = 1
result
# [,1] [,2] [,3]
# [1,] 1 1 1
# [2,] 0 1 1
# [3,] 0 0 1
Keeping rowsums as 1
s <- c(1,2,3)
result = matrix(0, nrow = sum(s), ncol = length(s))
result[cbind(1:sum(s), rep(seq_along(s), times = s))] = 1
result
# [,1] [,2] [,3]
# [1,] 1 0 0
# [2,] 0 1 0
# [3,] 0 1 0
# [4,] 0 0 1
# [5,] 0 0 1
# [6,] 0 0 1
A: set.seed(523)
s <- c(1, 2, 3)
n <- 6
sapply(s, function(i) sample(c(rep(1, i), rep(0, n - i))))
# [,1] [,2] [,3]
# [1,] 0 1 1
# [2,] 1 0 0
# [3,] 0 1 0
# [4,] 0 0 1
# [5,] 0 0 0
# [6,] 0 0 1
A: # Input:
s <- c(1,2,3)
# ...
set.seed(1) # For reproducibility
nr <- sum(s)
nc <- length(s)
mat <- matrix(0L, nrow = nr, ncol = nc)
mat[cbind(seq_len(nr), sample(rep(seq_len(nc), s)))] <- 1L
# Output:
[,1] [,2] [,3]
[1,] 1 0 0
[2,] 0 0 1
[3,] 0 1 0
[4,] 0 0 1
[5,] 0 1 0
[6,] 0 0 1 | unknown | |
d6022 | train | Only Standard, Enterprise and Datacenter editions support clustering:
SQL 2008 - Compare Edition Features | unknown | |
d6023 | train | 1. Prerequisites:
Download and install the following modules.
First install the Microsoft Online Services Sign-In Assistant for IT Professionals RTW from the Microsoft Download Center.
Then install the Azure Active Directory Module for Windows PowerShell (64-bit version), and click Run to run the installer package.
*Open Windows PowerShell
*Run the following command
Get-ExecutionPolicy
If the output is anything other than "unrestricted", run the following command.
Set-ExecutionPolicy Unrestricted -Scope CurrentUser
Confirm the change of settings.
*Connect to MsolService via PowerShell
Running the command below will bring up a popup would require you to enter your Office 365 Administrator Credentials.
$UserCredential = Get-Credential
Import-Module MSOnline
Connect-MsolService -Credential $UserCredential
Enable
$St = New-Object -TypeName Microsoft.Online.Administration.StrongAuthenticationRequirement
$St.RelyingParty = "*"
$Sta = @($St)
Set-MsolUser -UserPrincipalName [email protected] -StrongAuthenticationRequirements $Sta
Disable
$Sta = @()
Set-MsolUser -UserPrincipalName [email protected] -StrongAuthenticationRequirements $Sta | unknown | |
d6024 | train | Just because you don't have joins implemented by the DBMS doesn't mean you can't have multiple tables. In App Engine, these are called 'entity types', and you can have as many of them as you want.
Generally, you need to denormalize your data in order to avoid the need for frequent joins. In the few situations where they're inevitable, you can use other techniques, like doing the join in user code.
A: Combining it into one big table is always an option, but it results in unnecessarily large and redundant tables most of the time, and thus will make your app slow and hard to maintain.
You can also emulate a join, by iterating through the results of a query, and running a second query for each result found for the first query. If you have the SQL query
SELECT a.x FROM b INNER JOIN a ON a.y=b.y;
you can emulate this with something like this:
for b in db.GqlQuery("SELECT * FROM b"):
    for a in db.GqlQuery("SELECT * FROM a WHERE y=:1", b.y):
        print a.x
A: If you are looking for ways to design the data model, I'd recommend you do a bit of research before you start the work. There are pretty magical properties in Google App Engine like:
*
*Self-merge
*Multi-valued list property
That would be very helpful in your design. I've shared my experience here.
To learn about scalability there is an exclusive free course on Udacity, here, just on the topic. It's taught by the founder of reddit.com and he clearly explains all the scaling work happening at reddit, one of the sites with the highest number of visitors. He shows the entire course demo implementation in GAE (and that was a jackpot for me!). They offer the entire course videos free to download here. I had been toiling hard with App Engine before I found these resources, so I thought sharing this might help others who are dipping their toes in the water.
A: Switching from a relational database to the App Engine Datastore requires a paradigm shift for developers when modeling their data. Take a look here to get a better idea. This will require you to think more up front about how to fit your problem into the constraints the datastore imposes, but if you can then you are guaranteed that it will run quickly and scale. | unknown | |
d6025 | train | Yes, you can mark conversion operators explicit since C++11.
explicit operator double() { /* ... */ }
This will prevent copy-initialization, e.g.,
double y = x;
return x; // function has double return type
f(x); // function expects double argument
while allowing explicit conversions such as
double y(x);
double y = static_cast<double>(x);
A: Use explicit:
struct X {
explicit operator double() const { return 3.14; }
};
double y = static_cast<double>(X{}); // ok;
double z = X{}; // error | unknown | |
d6026 | train | I rewrote the sample from Apple; you can see I commented out self.tableView.tableHeaderView = self.searchController.searchBar; and then set the search bar as the navigation bar's title view, just like you. You can see where the search bar ends up in the snapshot.
- (void)viewDidLoad {
[super viewDidLoad];
APLResultsTableController *qresultsTableController = [[APLResultsTableController alloc] init];
self.resultsTableController = qresultsTableController;
_searchController = [[UISearchController alloc] initWithSearchResultsController:qresultsTableController];
self.searchController.searchResultsUpdater = self;
[self.searchController.searchBar sizeToFit];
// self.tableView.tableHeaderView = self.searchController.searchBar;
self.navigationItem.titleView = self.searchController.searchBar;
self.searchController.hidesNavigationBarDuringPresentation = NO;
// we want to be the delegate for our filtered table so didSelectRowAtIndexPath is called for both tables
self.resultsTableController.tableView.delegate = self;
self.searchController.delegate = self;
self.searchController.dimsBackgroundDuringPresentation = NO; // default is YES
self.searchController.searchBar.delegate = self; // so we can monitor text changes + others
// Search is now just presenting a view controller. As such, normal view controller
// presentation semantics apply. Namely that presentation will walk up the view controller
// hierarchy until it finds the root view controller or one that defines a presentation context.
//
self.definesPresentationContext = YES; // know where you want UISearchController to be displayed
}
snapshot is here,
UISearchController has a property named hidesNavigationBarDuringPresentation, whose default is YES; set it to NO if you want to use the search bar as the navigation item's title view:
self.searchController.hidesNavigationBarDuringPresentation = NO;
I have tested it and it works. Hope it's helpful. | unknown | |
d6027 | train | Your command is not working because you don't have underscores separating the columns; further, you wanted the data ascending but you told it to sort in reverse (descending) order. Use:
grep "abc" *.txt | sort -n -k 2
Or:
grep "abc" *.txt | sort -k 2n
Note that if there are multiple files, your grep output will be prefixed with a file name. You will have to decide whether that matters. It only screws things up if there are spaces in any of the file names. The -h option to grep suppresses the file names.
A: I suggest deleting the -t_ parameter, because as I see it you use spaces as the separator, not underscores. After that it works for me:
$ cat t | sort -n -k2
abc 0.01 8.0
abc 0.02 8.7
abc 0.06 9.4
abc 0.2 9.0
Updated: and yes, as @jonathan-leffler said, you should also omit the -r option for sorting in ascending order.
A: You can replace your entire script, including the call to grep, with one call to awk
awk '/abc/{a[$2,i++]=$0}END{l=asorti(a,b);for(i=1;i<=l;i++)print a[b[i]]}' *.txt
Example
$ ls *.txt
four.txt one.txt three.txt two.txt
$ cat *.txt
abc 0.02 8.3
foo
abc 0.06 9.4
bar
blah blah
abc 0.2 9.0
blah
abc 0.01 8.0
blah
abc 0.02 8.7
blah blah
$ awk '/abc/{a[$2,i++]=$0}END{l=asorti(a,b);for(i=1;i<=l;i++)print a[b[i]]}' *.txt
abc 0.01 8.0
abc 0.02 8.3
abc 0.02 8.7
abc 0.06 9.4
abc 0.2 9.0
A: First problem: you've used -t_ to specify the field separator, but the output doesn't contain _ characters.
Next problem: the -k option has two arguments, start field and end field. By omitting the end field, you get end_of_line by default.
I'd suggest writing it like this:
grep "abc" *.txt | sort -nr -k2,2 | unknown | |
d6028 | train | If you want the move and animate actions to run in parallel you can use:
Option 1: use CCSpawn instead of a CCSequence. The CCSequence is still needed because you would like to call a function after completion.
id action = [CCSpawn actions:
[CCMoveTo actionWithDuration:moveDuration position:touchLocation],
[CCAnimate actionWithAnimation:self.walkingAnim],
nil];
id seq = [CCSequence actions:
action,
[CCCallFunc actionWithTarget:self selector:@selector(objectMoveEnded)],
nil];
[self runAction:seq];
Option 2: just run multiple actions; they will run in parallel. Because of the function call, a CCSequence is again needed:
id action = [CCSequence actions:
[CCMoveTo actionWithDuration:moveDuration position:touchLocation],
[CCCallFunc actionWithTarget:self selector:@selector(objectMoveEnded)],
nil];
[self runAction:action];
[self runAction:[CCAnimate actionWithAnimation:self.walkingAnim]];
A: What this sequence does is:
*
*move self to the destination
*once arrived at destination, play the walking animation
*when the walk animation is finished, run the selector
I bet you meant to run the move and animate actions separately and at the same time (each with their own call to runAction) and not within a sequence. | unknown | |
d6029 | train | There is a fast split optimization, depending on the circumstances. However, from your description, I would simply do the INSERT /*+ APPEND */.
You may want to employ some parallelism too, if you have the resources and you are looking to speed up the inserts. | unknown | |
d6030 | train | The way to do this quickly is to use a bulk insert in a stored procedure. You end up running a query to get back all the IDs, but it reduces the main bottleneck: the number of round trips to the database. | unknown | |
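The round-trip point above can be illustrated with Python's standard-library sqlite3 module (an in-process database, so this only demonstrates the batching pattern, not real network savings):

```python
import sqlite3

# Illustrative sketch: batch many rows into one executemany call instead of
# issuing one INSERT statement per row. With a real client/server database
# the same pattern (a batched insert, or a stored procedure taking a set of
# rows) is what removes the per-row network round trip.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE items (id INTEGER PRIMARY KEY, name TEXT)")

rows = [(None, "item-%d" % i) for i in range(1000)]

# One batched call rather than 1000 separate statements.
conn.executemany("INSERT INTO items (id, name) VALUES (?, ?)", rows)
conn.commit()

# Fetch back all generated IDs in a single query, as the answer suggests.
ids = [r[0] for r in conn.execute("SELECT id FROM items ORDER BY id")]
```

With a client/server database, a stored procedure that accepts the full row set and returns the generated IDs achieves the same effect in one trip.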
d6031 | train | Use next command:
docker run -it your_image
The root cause is you missed -i, see this which make the container can't receive your input:
--interactive , -i Keep STDIN open even if not attached
And if you use docker-compose, remember to add next to compose file:
stdin_open: true
tty: true
stdin_open corresponds to -i in docker run, and tty to -t. | unknown | |
d6032 | train | I posted my similar question in the gradle forums and was able to solve the issue:
https://discuss.gradle.org/t/unit-test-plugins-afterevaulate/37437/3
Apparently afterEvaluate is not the best/right place to perform the task creation. If you have a DomainObjectCollection in your extension and want to create a task for each element in the collection, task creation should be done in the all callback of the collection:
final MyExtension extension = project.getExtensions().create("extension", MyExtension.class);
extension.configurations.all((c) -> {
// register task here
});
If you have simple properties in the extension that are fed to the task as input, you should use lazy configuration:
public class MyExtension {
public final Property<String> property;
public final NamedDomainObjectContainer<Configuration> configurations;
@Inject
public MyExtension(final ObjectFactory objectFactory) {
property = objectFactory.property(String.class).convention("value");
configurations = objectFactory.domainObjectContainer(Configuration.class);
}
}
public abstract class MyTask extends DefaultTask {
@Input
private final Property<String> property = getProject().getObjects().property(String.class);
public Property<String> getProperty() {
return property;
}
}
And the apply method:
public class MyPlugin implements Plugin<Project> {
@Override
public void apply(final Project aProject) {
final MyExtension extension = aProject.getExtensions().create("extension", MyExtension.class);
aProject.getTasks().register("myTask", MyTask.class).configure((t) -> {
t.getProperty().set(extension.property);
});
}
}
A: You should call project.evaluationDependsOn(":"):
@Before fun setup() {
project = ProjectBuilder.builder().build()
project.pluginManager.apply("my.plugin.name")
project.evaluationDependsOn(":") // <<--
...
}
It executes your afterEvaluate callback. | unknown | |
d6033 | train | The way to achieve this is simply by using List as return value. So for example for a repository defined like this:
interface CustomerRepository extends Repository<Customer, Long> {
List<Customer> findByLastname(String lastname, Pageable pageable);
}
The query execution engine would apply the offset and pagesize as handed in by the Pageable but not trigger the additional count query as we don't need to construct a Page instance. This also documented in the relevant sections of the reference documentation.
Update: If you want the next page / previous page of Page but still skip the count query you may use Slice as the return value.
A: I was able to avoid the count performance degradation in a dynamic query (using Spring Data Specifications) with the base repository solution indicated in several posts.
public class ExtendedRepositoryImpl<T, ID extends Serializable> extends SimpleJpaRepository<T, ID> implements ExtendedRepository<T, ID> {
private EntityManager entityManager;
public ExtendedRepositoryImpl(JpaEntityInformation<T, ?> entityInformation, EntityManager entityManager) {
super(entityInformation, entityManager);
this.entityManager = entityManager;
}
@Override
public List<T> find(Specification<T> specification, int offset, int limit, Sort sort) {
TypedQuery<T> query = getQuery(specification, sort);
query.setFirstResult(offset);
query.setMaxResults(limit);
return query.getResultList();
}
}
A query to retrieve 20-record slices from a 6M-record dataset takes milliseconds with this approach; only a bit more than the same filtered queries run directly in SQL.
A similar implementation using Slice<T> find(Specification<T> specification, Pageable pageable) takes over 10 seconds.
And similar implementation returning Page<T> find(Specification<T> specification, Pageable pageable) takes around 15 seconds.
A: I recently had such a requirement, and the latest spring-boot-starter-data-jpa library provides an out-of-the-box solution. Pagination without the count feature can be achieved using the org.springframework.data.domain.Slice interface.
An excerpt from a blog:
Depending on the database you are using in your application, it might become expensive as the number of items increased. To avoid this costly count query, you should instead return a Slice. Unlike a Page, a Slice only knows about whether the next slice is available or not. This information is sufficient to walk through a larger result set.
Both Slice and Page are part of Spring Data JPA, where Page is just a sub-interface of Slice with a couple of additional methods. You should use Slice if you don't need the total number of items and pages.
@Repository
public interface UserRepository extends CrudRepository<Employee, String> {
Slice<Employee> getByEmployeeId(String employeeId, Pageable pageable);
}
Sample code snippet to navigate through larger result sets using Slice#hasNext. As long as the hasNext method returns true, there may be more data available for the requested query criteria.
int page = 0;
int limit = 25;
boolean hasNext;
do {
PageRequest pageRequest = PageRequest.of(page, limit);
Slice<Employee> employeeSlice = employeeRepository.getByEmployeeId(sourceId, pageRequest);
++page;
hasNext = employeeSlice.hasNext();
} while (hasNext); | unknown | |
d6034 | train | Is it possible to run background tasks in Cloud Run? I thought it operates only on request, and therefore after a request is handled it stops working until the next request comes in.
A: The trick is to pass a custom container command in Cloud Run with the value /usr/bin/supervisord to actually start the supervisor/Horizon.
But there's another trick you need, as Cloud Run expects the service to respond to a health check request. You should not only start Horizon from supervisor (which by default would override the container command and not start Apache), but also start Apache, so the service can be considered healthy.
You can also pick a more lightweight solution that returns HTTP 200 on a request to the root of your service, as Apache feels a bit overkill for this purpose.
Disclosure: I believe I happen to be currently working for ilovecookies | unknown | |
d6035 | train | Forget about apache poi HWPF. It is in scratchpad and has seen no progress in decades. And there are no usable methods to insert or create new paragraphs. All Range.insertBefore and Range.insertAfter methods which take more than only text are private, deprecated, and haven't worked properly in decades either. The reason for that may be that the binary file format of Microsoft Word HWPF is of course the most horrible file format of all the other horrible file formats like HSSF and HSLF. So who wants to bother with this?
But to answer your question:
In word processing, text is structured in paragraphs containing text runs. Each paragraph takes a new line by default. But "Text\nText" or "Text\rText" or "Text\r\nText" stored in a text run would only mark a line break within that text run, not a new paragraph. Would, that is, because of course Microsoft Word has its own rules: there, \u000B marks that line break within the text run.
So what you could do is the following:
import java.io.FileInputStream;
import java.io.FileOutputStream;
import org.apache.poi.hwpf.*;
import org.apache.poi.hwpf.usermodel.*;
public class ReadAndWriteDOCTable {
public static void main(String[] args) throws Exception {
HWPFDocument document = new HWPFDocument(new FileInputStream("TemplateDOC.doc"));
Range bodyRange = document.getRange();
System.out.println(bodyRange);
TableIterator tableIterator = new TableIterator(bodyRange);
while (tableIterator.hasNext()) {
Table table = tableIterator.next();
System.out.println(table);
TableCell cell = table.getRow(0).getCell(0); // first cell in table
System.out.println(cell);
Paragraph paragraph = cell.getParagraph(0); // first paragraph in cell
System.out.println(paragraph);
CharacterRun run = paragraph.insertBefore("Test\u000BTest");
System.out.println(run);
}
FileOutputStream out = new FileOutputStream("ResultDOC.doc");
document.write(out);
out.close();
document.close();
}
}
That places the text run "Test\u000BTest" before the first paragraph in the first cell of each table in the document. And the \u000B marks a line feed within that text run.
Maybe that is what you wanted to achieve? But, as said, forget about apache poi HWPF. The next unsolvable problem is only a step away.
d6036 | train | You can use timthumb.php if you would like:
<img src="path/to/timthumb.php?src=path&h=100&w=100&q=500" />
d6037 | train | I extended the WC_Form_Handler class in my functions.php file, copied a method I needed and edited it, and gave it a higher hook priority in my extended class than it has in the original class:
class WC_Form_Handler_Ext extends WC_Form_Handler {
/**
* Hook in method.
*/
public static function init() {
add_action( 'wp_loaded', array( __CLASS__, 'update_cart_action' ), 30 );
}
/**
* Remove from cart/update.
*/
public static function update_cart_action() {
// method content edited
}
}
WC_Form_Handler_Ext::init();
UPDATE:
To make the price change after a variation value in the cart is updated, this function needs to be added to functions.php:
function find_matching_product_variation_id($product_id, $attributes)
{
return (new WC_Product_Data_Store_CPT())->find_matching_product_variation(
new WC_Product($product_id),
$attributes
);
}
And it should be called from the loop in functions.php I mentioned in my question this way:
$attributes = $cart_content[$key]['variation'];
$variation_id = find_matching_product_variation_id($product_id, $attributes);
$price = get_post_meta($variation_id, '_price', true);
$item['data']->set_price($price); | unknown | |
d6038 | train | What you are trying to call is an instance method. Call it this way:
if(isset($_POST["Method"]))
{
$function = $_POST["Method"];
$method = new ReflectionMethod('methods', $function);
$method->invoke($this);
}
A: Try forcing a content-type header before sending the output,
header("Content-type: application/json");
A: There's simply a typo in your class constructor method, it should be
function __construct()
One other thing to keep in mind is that I'm not sure you should be setting the contentType to json, as that variable is for what you're sending... not what you're receiving.
So if you end up having a situation where the post variables are being stripped, try removing the contentType from your ajax call.
A: Try removing the echo in the test method. You are calling
call_user_func($function);
and your $function is not returning but echoing, i.e.
function test() {
$json = array(
"kyle" => "broflowksi",
"eric" => "cartman",
"stan" => "marsh"
);
echo json_encode($json); // This line should be returning
}
I had dealt with a similar issue earlier with a PHP function call (not AJAX in particular).
d6039 | train | char (*string)[100];
OK, string represents a pointer to 100 char, but it's not initialized. Therefore, it represents an arbitrary address. (Even worse, it doesn't necessarily even represent the same arbitrary address when accessed on subsequent occasions.)
gets(string[i]);
Hmm, this reads data from standard input into a block of memory starting at the random address, plus i*100, until a newline is read.
Not very robust. Not guaranteed to produce an error, either. If you didn't get one, the random address must have been mapped to writable bytes.
As you see, I haven't allocated any memory to individual strings i.e, s[0],s[1] but surprisingly my compiler didn't give me any warnings or errors for it and it works perfectly.
Time to reduce your expectations, or increase the diagnostic level of the compiler. Try -Wall -Wextra, if it takes options in that style. It should be warning you that string is used without being initialized first. A warning that gets has been deprecated would also be a good idea, although my compiler doesn't mention it.
C and C++ allow you to manage memory yourself. You are responsible for validating pointers. In C, this is a major source of problems.
In C++, the solution is to use alternatives to pointers, such as references (which cannot be uninitialized) and smart pointers (which manage memory for you). Although the question is tagged C++ and compiles as a C++ program, it's not written in a style that would be called C++. | unknown | |
d6040 | train | You need to declare selectedNote as optional like this:
var selectedNote: Note?
And later check if value exist before using it.
if let note = selectedNote {
// Set up the detail view controller to show.
let detailViewController = DetailViewController()
detailViewController.detailDescriptionLabel = note.valueForKey("noteBody") as! UILabel
// Note: Should not be necessary but current iOS 8.0 bug requires it.
tableView.deselectRowAtIndexPath(tableView.indexPathForSelectedRow()!, animated: false)
// original code
navigationController?.pushViewController(detailViewController, animated: true)
}
Update:
The problem is that you are trying to create DetailViewController
let detailViewController = DetailViewController()
But what you need instead is to have a reference to the DetailViewController in order to pass information to it.
So you can create a segue from the Master to the Detail controller in Interface Builder. Then remove the logic from didSelectRowAtIndexPath:
override func tableView(tableView: UITableView, didSelectRowAtIndexPath indexPath: NSIndexPath) {
// Note: Should not be necessary but current iOS 8.0 bug requires it.
tableView.deselectRowAtIndexPath(tableView.indexPathForSelectedRow()!, animated: false)
}
And implement it in prepareForSegue method:
override func prepareForSegue(segue: UIStoryboardSegue, sender: AnyObject?) {
if segue.identifier == "showDetail" {
if let cell = sender as? UITableViewCell {
let indexPath = tableView.indexPathForCell(cell)!
var selectedNote: Note?
if filteredObjects?.count > 0 {
selectedNote = filteredObjects![indexPath.row]
}else {
selectedNote = self.fetchedResultsController.objectAtIndexPath(indexPath) as? Note // <--this is "everything"
}
if let note = selectedNote {
let controller = (segue.destinationViewController as! UINavigationController).topViewController as! DetailViewController
controller.detailItem = note
}
}
}
}
showDetail - segue identifier which you need to setup in IB.
var detailItem: AnyObject? - you need to declare it in DetailViewController. | unknown | |
d6041 | train | Thanks to @jfriend00 for pointing me in the direction of websockets and this article for general guidance
I decided to use ws for the server:
const WebSocket = require('ws');
const wss = new WebSocket.Server({ port: 3005 });
wss.on('connection', function connection(ws) {
ws.on('message', function incoming(message) {
console.log('received: %s', message);
});
setInterval(() => {
ws.send(`${new Date()}`);
}, 1000);
});
And on the Arduino the ATmega branch of the WebSockets library
#define MONITOR_SPEED 921600
#include <SPI.h>
#include <Ethernet.h>
#include <WebSocketsClient.h>
WebSocketsClient webSocket;
//MAC address for arduino- these can be manually assigned
//but cannot conflict with any other address on the same network
byte mac[] = {0xAE, 0xAD, 0xBE, 0xEF, 0xFE, 0xED};
// Set the static IP address to use if the DHCP fails to assign
static IPAddress ip(192, 168, 0, 177);
static IPAddress myDns(192, 168, 0, 1);
static EthernetClient client;
void webSocketEvent(WStype_t type, uint8_t * payload, size_t length) {
switch(type) {
case WStype_DISCONNECTED:
Serial.println("[WSc] Disconnected!\n");
break;
case WStype_CONNECTED:
{
Serial.print("[WSc] Connected to url: ");
Serial.println((char *)payload);
// send message to server when Connected
webSocket.sendTXT("Connected");
}
break;
case WStype_TEXT:
Serial.print("[WSc] get text: ");
Serial.println((char *)payload);
// send message to server
// webSocket.sendTXT("message here");
break;
case WStype_BIN:
Serial.print("[WSc] get binary length: ");
Serial.println(length);
// hexdump(payload, length);
// send data to server
// webSocket.sendBIN(payload, length);
break;
}
}
void setup()
{
// Open serial communications and wait for port to open:
Serial.begin(MONITOR_SPEED); //921600
while (!Serial)
{
; // wait for serial port to connect. Needed for native USB port only
}
// start the Ethernet connection:
Serial.println("Initialize Ethernet with DHCP:");
if (Ethernet.begin(mac) == 0)
{
Serial.println("Failed to configure Ethernet using DHCP");
// Check for Ethernet hardware present
if (Ethernet.hardwareStatus() == EthernetNoHardware)
{
Serial.println("Ethernet shield was not found.");
Serial.println("Program will continue to run without shield but will not report to web server");
}
if (Ethernet.linkStatus() == LinkOFF)
{
Serial.println("Ethernet cable is not connected.");
}
// try to configure using IP address instead of DHCP:
Ethernet.begin(mac, ip, myDns);
}
byte macBuffer[6]; // create a buffer to hold the MAC address
Ethernet.MACAddress(macBuffer); // fill the buffer
Serial.print(" MAC address : ");
for (byte octet = 0; octet < 6; octet++)
{
Serial.print(macBuffer[octet], HEX);
if (octet < 5)
{
Serial.print(':');
}
}
Serial.println("");
Serial.print(" DHCP assigned IP : ");
Serial.println(Ethernet.localIP());
webSocket.begin("192.168.0.189", 3005);
webSocket.onEvent(webSocketEvent);
}
void loop()
{
webSocket.loop();
Ethernet.maintain();
if (!client.connected()) {
client.stop();
}
}
This should print the current date and time on your serial monitor:
[WSc] get text: Fri Mar 19 2021 10:13:12 GMT-0400 (Atlantic Standard Time)
[WSc] get text: Fri Mar 19 2021 10:13:13 GMT-0400 (Atlantic Standard Time)
[WSc] get text: Fri Mar 19 2021 10:13:14 GMT-0400 (Atlantic Standard Time)
[WSc] get text: Fri Mar 19 2021 10:13:15 GMT-0400 (Atlantic Standard Time)
[WSc] get text: Fri Mar 19 2021 10:13:16 GMT-0400 (Atlantic Standard Time)
And on your server:
received: Connected | unknown | |
d6042 | train | It's because the value of this inside any Javascript function depends on how that function was called.
In your non-working example, this.hookNav is inside of the renderItem method. So it will adopt the this of that method - which, as I just said, depends on how it is called. Further inspecting your code shows that the renderItem method isn't called directly by your component but passed as a prop (called renderItem) of the FlatList component. Now I don't know how that component is implemented (I've never used React Native), but almost certainly it calls that function at some point - and when it does, it can't be in the context of your component. This is because FlatList can't possibly know where that function prop is coming from, and when it calls it, it will be treated as an "ordinary" function, not in the context of any particular object. So its this context won't be your component instance, as intended, but will simply be the global object - which doesn't have any hookNav method.
This problem doesn't happen in your second example because you simply access the function as part of this.state - there is no passing of a method, like the renderItem in the previous example, which depends on an internal this reference. Note that this won't be the case if your hookNav refers to this inside it.
Such problems with this context are very common in React, but fortunately there are two general solutions which will always fix this:
*
*Bind methods in your component's constructor. If the constructor, in the first example, includes the statement this.renderItem = this.renderItem.bind(this);, then it would work fine. (Note that this is recommended in the official React docs.) It's the "best" solution technically (performance is slightly better than in the other option) - but does involve quite a bit of boilerplate if you have lots of methods which need such binding.
*Use arrow functions to define the methods instead. Instead of renderItem({ item, index }) {...}, write it as renderItem = ({ item, index }) => {...}. This works because arrow functions adopt the this of their lexical scope, which will be the class itself - so in other words this will always refer to the component instance, as you almost always want.
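Both fixes can be demonstrated outside React with plain classes; here is a minimal sketch (not React code, and the class names are made up for illustration):

```javascript
// How `this` is lost when a method is passed around, plus the two fixes.
class Broken {
  constructor() { this.name = 'broken'; }
  whoAmI() { return this && this.name; } // `this` depends on the call site
}

class Bound {
  constructor() {
    this.name = 'bound';
    this.whoAmI = this.whoAmI.bind(this); // fix 1: bind in the constructor
  }
  whoAmI() { return this.name; }
}

class Arrow {
  constructor() { this.name = 'arrow'; }
  whoAmI = () => this.name; // fix 2: arrow function adopts the lexical `this`
}

const detached = new Broken().whoAmI; // passed around like a prop, context lost
console.log(detached());              // undefined
console.log((new Bound().whoAmI)());  // bound
console.log((new Arrow().whoAmI)());  // arrow
```

The detached call loses its receiver exactly the way `renderItem` does when FlatList invokes it, while the bound and arrow versions keep working.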
Don't worry if this is confusing - the behaviour of the this keyword in JS is a common stumbling block for beginners to JS, and often enough for those more experienced too. There are a number of good explanations online which demystify it, of which I can especially recommend this one.
A: You have to bind the method to the this of your class. One way to do so is like below:
constructor(props){
super(props);
this.renderItem = this.renderItem.bind(this);
} | unknown | |
d6043 | train | Basically, I want to console.log() the name of the variable rather than root in the sum function.
You can't. When your sum function is called, it is passed a value. That value is a pointer to an object and there is no connection at all to the variable that the pointer came from. If you did this:
let tree = new TreeNode(1);
let x = y = tree;
sum(x);
sum(y);
there would be no difference at all in the two calls to sum(). They were each passed the exact same value (a pointer to a TreeNode object) and there is no reference at all to x or y or tree in the sum() function.
If you want extra info (like the name of a variable) for debugging reasons and/or logging, then you may have to pass that extra name into the function so you can log it.
A: You can change the sum function for debugging purposes:
function sum(root, path) {
if(!path) {
path = 'root';
}
console.log(path);
if(root === null) return 0;
return root.val + sum(root.left, path+'.left') + sum(root.right, path+'.right');
} | unknown | |
d6044 | train | First solution
Make sure your nodejs version is not newer than the latest stable one. For that you can use the n package from npm:
npm install -g n
n stable
# if one of the commands does not pass, you may need to use sudo
sudo npm install -g n
sudo n stable
Then you should use the sass package instead of node-sass, as node-sass is deprecated. For that, run these commands:
npm uninstall node-sass --save
npm install sass --save
Second solution
If you need or want node-sass for some reason, you should downgrade your nodejs version to v14, for example. For that you can use the n package from npm:
npm install -g n
n 14
# if one of the commands does not pass, you may need to use sudo
sudo npm install -g n
sudo n 14
A: I got rid of everything, reinstalled Xcode, Node and node-sass, and the issue was resolved.
d6045 | train | There are two common ways to solve it.
1. add to your android/build.gradle file, under the allprojects/repositories section:
maven {
url 'https://maven.google.com'
}
so that it should looks like:
allprojects {
repositories {
mavenLocal()
maven {
url 'https://maven.google.com'
}
maven {
// All of React Native (JS, Obj-C sources, Android binaries) is installed from npm
url "$rootDir/../node_modules/react-native/android"
}
maven { url "https://jitpack.io" }
jcenter()
}
}
*Another way (works for me): for the react-native-google-fit lib, change the dependencies in the library's android/build.gradle to
compile 'com.google.android.gms:play-services-auth:+'
compile 'com.google.android.gms:play-services-fitness:+'
And use your fork in your project | unknown | |
d6046 | train | The position of a node does not change when it rotates. Maybe you could look at connecting the nodes with a physics mechanism such as SCNPhysicsHingeJoint? I found this example which could be useful for your scenario:
http://lepetit-prince.net/ios/?p=3540
Another example:
http://appleengine.hatenablog.com/entry/2017/08/17/154852 | unknown | |
d6047 | train | Try using NSDecimalNumber. There's a good tutorial here:
http://www.cimgf.com/2008/04/23/cocoa-tutorial-dont-be-lazy-with-nsdecimalnumber-like-me/
A: If I'm not wrong, and if I remember the appropriate course at college well, I think it's a matter of conversion from reality (where you have infinitely many values) to the virtual world (where you have finitely many values, even if you have a lot): so you can't actually represent ALL the numbers. You also have to be careful with the operations you're doing: multiplications and divisions generate a lot of these troubles, because you'll end up with a lot of decimal digits, and, in my opinion, C-based languages are not the best around for managing this kind of matter.
Hoping this will point you to a better source of information :). | unknown | |
d6048 | train | You can use nth-last-child(2) or nth-last-of-type(2); this will select the 2nd-last item.
li {
display: inline-block
}
li:nth-last-child(2) {
color: red
}
<ul>
<li>test</li>
<li>test</li>
<li>test</li>
<li>test</li>
<li>test</li>
</ul>
<hr />
<ul>
<li>test</li>
<li>test</li>
<li>test</li>
<li>test</li>
<li>test</li>
<li>test</li>
<li>test</li>
</ul> | unknown | |
d6049 | train | You need to .encode() your Unicode strings before print:
print text.encode('utf-8')
print author.encode('utf-8')
print Tags.encode('utf-8') | unknown | |
d6050 | train | You can use a bool which allows you to specify the queries that must match to constitute a hit. Your query will look something like this
{
"query": {
"bool" : {
"must" : [
{ "match": { "user" : "dude" } },
{ "match": { "path" : "/" } }
]
}
},
"fields": ["name"]
}
What this does is check that there is an exact match for the term "dude" and the path "/".
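If you are assembling the query from code, it is just nested objects; here is a small JavaScript sketch (the helper name is made up, and the field names mirror the example above):

```javascript
// Build a bool/must query body from a map of field -> value.
function boolMustQuery(filters, fields) {
  return {
    query: {
      bool: {
        // one match clause per filter field
        must: Object.entries(filters).map(([field, value]) => ({
          match: { [field]: value },
        })),
      },
    },
    fields,
  };
}

const body = boolMustQuery({ user: 'dude', path: '/' }, ['name']);
console.log(JSON.stringify(body, null, 2));
```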
d6051 | train | An application written in .NET C# should always be compatible with any version of Windows that has the .NET Framework installed. There's only a small difference between the .NET Framework Client and Full versions (it has to do with certain features that the Client version doesn't have; see it as a lightweight version of the .NET Framework, and since 4.5 it is discontinued => http://msdn.microsoft.com/en-us/library/cc656912.aspx)
d6052 | train | include_directories() populates a directory property called INCLUDE_DIRECTORIES:
http://www.cmake.org/cmake/help/v2.8.12/cmake.html#prop_dir:INCLUDE_DIRECTORIES
Note that CMake 2.8.11 learned the target_include_directories command, which populates the INCLUDE_DIRECTORIES target property.
http://www.cmake.org/cmake/help/v2.8.12/cmake.html#command:target_include_directories
Note also that you can encode the fact that 'to compile against the headers of the lib target, the include directory lib/inc is needed' into the lib target itself by using target_include_directories with the PUBLIC keyword.
add_library(lib STATIC ${lib_hdrs} ${lib_srcs}) # Why do you list the headers?
target_include_directories(lib PUBLIC "${ROOT_SOURCE_DIR}/lib/inc")
Note also I am assuming you don't install the lib library for others to use. In that case you would need to specify different header directories for the build location and for the installed location.
target_include_directories(lib
PUBLIC
# Headers used from source/build location:
"$<BUILD_INTERFACE:${ROOT_SOURCE_DIR}/lib/inc>"
# Headers used from installed location:
"$<INSTALL_INTERFACE:include>"
)
Anyway, that's only important if you are installing lib for others to use.
After the target_include_directories(lib ...) above you don't need the other include_directories() call. The lib target 'tells' proj1 the include directories it needs to use.
See also target_compile_definitions() and target_compile_options().
d6053 | train | Based on another of your questions, it looks like the parameter item_in is a struct with several char * fields. There is a serious problem because the array temp only exists for the duration of this function. You are assigning the address of a temporary array to pointers in item_in. When the function returns, the array goes out of scope and its memory is no longer yours.
You could fix this by allocating memory to the pointers and copying the data but the best solution is to use std::string from the C++ standard library. It handles resource management and operations like assignment work as you would expect. | unknown | |
d6054 | train | Description
This regex will find td tags and return them in groups of two.
<td\b[^>]*>([^<]*)<\/td>[^<]*<td\b[^>]*>([^<]*)<\/td>
Summary
*
*<td\b[^>]*> find the first td tag and consume any attributes
*([^<]*) capture the first inner text, this can be greedy but we assume the cell has no nested tags
*<\/td> find the close tag
*[^<]* move past all the rest of the text until the next opening tag; this assumes there are no additional tags between the first and second td tag
*<td\b[^>]*> find the second td tage and consume any attributes
*([^<]*) capture the second inner text, this can be greedy but we assume the cell has no nested tags
*<\/td> find the close tag
Groups
Group 0 will get the entire string
*
*will have the first td group
*will have the second td group
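The pattern can be sanity-checked quickly in JavaScript as well (the sample row below is made up, mirroring the source table):

```javascript
// Exercise the td-pair regex against one sample label/value row.
const re = /<td\b[^>]*>([^<]*)<\/td>[^<]*<td\b[^>]*>([^<]*)<\/td>/;

const html = '<td class="head">Energy:</td>\n<td>200</td>';
const m = html.match(re);
console.log(m[1]); // Energy:
console.log(m[2]); // 200
```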
VB.NET Code Example:
Imports System.Text.RegularExpressions
Module Module1
Sub Main()
Dim sourcestring as String = "replace with your source string"
Dim re As Regex = New Regex("<td\b[^>]*>([^<]*)<\/td>[^<]*<td\b[^>]*>([^<]*)<\/td>",RegexOptions.IgnoreCase OR RegexOptions.Singleline)
Dim mc as MatchCollection = re.Matches(sourcestring)
Dim mIdx as Integer = 0
For each m as Match in mc
For groupIdx As Integer = 0 To m.Groups.Count - 1
Console.WriteLine("[{0}][{1}] = {2}", mIdx, re.GetGroupNames(groupIdx), m.Groups(groupIdx).Value)
Next
mIdx=mIdx+1
Next
End Sub
End Module
$matches Array:
(
[0] => Array
(
[0] => <td class="head">Health Points:</td>
<td>445 (+85 / per level)</td>
[1] => <td class="head">Health Regen:</td>
<td>7.25</td>
[2] => <td class="head">Energy:</td>
<td>200</td>
[3] => <td class="head">Energy Regen:</td>
<td>50</td>
[4] => <td class="head">Damage:</td>
<td>53 (+3.2 / per level)</td>
[5] => <td class="head">Attack Speed:</td>
<td>0.694 (+3.1 / per level)</td>
[6] => <td class="head">Attack Range:</td>
<td>125</td>
[7] => <td class="head">Movement Speed:</td>
<td>325</td>
[8] => <td class="head">Armor:</td>
<td>16.5 (+3.5 / per level)</td>
[9] => <td class="head">Magic Resistance:</td>
<td>30 (+1.25 / per level)</td>
[10] => <td class="head">Influence Points (IP):</td>
<td>3150</td>
[11] => <td class="head">Riot Points (RP):</td>
<td>975</td>
)
[1] => Array
(
[0] => Health Points:
[1] => Health Regen:
[2] => Energy:
[3] => Energy Regen:
[4] => Damage:
[5] => Attack Speed:
[6] => Attack Range:
[7] => Movement Speed:
[8] => Armor:
[9] => Magic Resistance:
[10] => Influence Points (IP):
[11] => Riot Points (RP):
)
[2] => Array
(
[0] => 445 (+85 / per level)
[1] => 7.25
[2] => 200
[3] => 50
[4] => 53 (+3.2 / per level)
[5] => 0.694 (+3.1 / per level)
[6] => 125
[7] => 325
[8] => 16.5 (+3.5 / per level)
[9] => 30 (+1.25 / per level)
[10] => 3150
[11] => 975
)
)
Disclaimer
Parsing html with a regex is really not the best solution, as there are a ton of edge cases we can't predict. However, in this case, if the input string is always this basic and you're willing to accept the risk of the regex not working 100% of the time, then this solution would probably work for you.
d6055 | train | To get the above code to compile, the vector element type has to be declared as an unsafe pointer.
TComponent* Comp = new TComponent(this);
std::vector<__unsafe TComponent*> Comps;
Comps.push_back(Comp);
I opened a support case for another problem I had. The Embarcadero support gave me the following information, which I applied to this problem, and it seems to work:
__unsafe tells the compiler that object lifetimes will be handled and no ARC code is generated for the objects
More about this topic:
http://docwiki.embarcadero.com/RADStudio/Berlin/en/Automatic_Reference_Counting_in_C%2B%2B#weak_and_unsafe_pointers | unknown | |
d6056 | train | Just initialize aantalslagen to 0 in JavaScript and don't mess with any <form> or POST request or anything like that, because you don't need to. Since it doesn't seem like you're saving the value in some cookie, the value won't stay if the user refreshes the page.
//The input element:
var input = document.getElementById("aantalslagen-input");
//The td element:
var td = document.getElementById("aantalslagen-td");
//The button element:
var button = document.getElementById("aantalslagen-button");
//The variable:
var aantalslagen = 0;
//When the button is clicked...
button.addEventListener("click", function() {
//Increment aantalslagen:
aantalslagen += 1;
//Update it in the input and td:
input.value = aantalslagen;
td.textContent = aantalslagen;
});
<br><br><br>
<table width="422" height="179" border="0">
<tr>
<td>totaal aantal slagen: </td>
<td> </td>
</tr>
<tr>
<td>totaal aantal slagen php:</td>
<td><input type="text" value="0" name="aantalslagen_plus" id="aantalslagen-input" size="10"></td>
<td id="aantalslagen-td">0</td>
</tr>
</table>
<br>
<button id="aantalslagen-button"> + </button>
A: Your <button> is inside of a <form> element and the default behaviour is for the form to submit to the server.
Remove the <form> element since it no longer seems to serve any purpose. | unknown | |
d6057 | train | The comment from @tripleee inspired me to this approach, which I wouldn't call a full solution because it doesn't fit all the needs in my opening question. But it seems like a good compromise to me.
I combined a recursive call with a try..except block. The recursion is there because I didn't want to separate the functionality into a second function.
#!/usr/bin/env python3
import io
import pathlib
import typing
def foobar(file_like_obj: typing.Union[pathlib.Path, typing.IO]):
print('foobar(): called')
try:
with file_like_obj.open('w') as handle:
print('re-call myself')
foobar(handle)
except AttributeError:
file_like_obj.write('foobar')
if __name__ == '__main__':
p = pathlib.Path.home() / 'my.txt'
foobar(p)
sio = io.StringIO()
foobar(sio)
The output
foobar(): called
re-call myself
foobar(): called
foobar(): called | unknown | |
d6058 | train | vim is waiting a short time to allow for the possibility that the esc key might begin a special key (such as cursor-left or F1).
You can alter this behavior altering these settings: ttimeout, timeoutlen
and ttimeoutlen.
The timeoutlen option is set by default to 1 second (1000 milliseconds). If you set it to a shorter time (0.1 seconds is fast), it will help.
Some suggest (as in vim's documentation) reducing the timeout, e.g.,
set ttimeout
set ttimeoutlen=100
Related discussion:
*
*Vim Command Line Escape Timeout
*Eliminating delays on ESC in vim and zsh | unknown | |
d6059 | train | I would unpivot and aggregate:
select sum(Barcodes)
from t cross apply
(values (color1), (color2), (color3), (color4)) v(color)
where color = 'Red';
If you want this for each color:
select color, sum(Barcodes)
from t cross apply
(values (color1), (color2), (color3), (color4)) v(color)
group by color; | unknown | |
d6060 | train | Unless all your apps are going to be exactly the same, use a new gruntfile for each app. I'm guessing each of your apps will be different, as you've duplicated the src folder within each in anticipation.
I don't recommend a global dependency setup. As time goes on, each of your apps will diverge and the number of your apps will increase. When it comes time to upgrade jQuery or any other dependency in your root/lib folder... you will be forced to upgrade every single app.
The small convenience now is totally not worth it later.
Instead, just install everything locally and then you can upgrade each app as needed without breaking the other apps. If you come across duplicate code in your apps, consider moving it into a module. Create your module just like another app, then type npm link to make it installable. Then in each of your apps do npm link mymodule to install your module to the node_modules/mymodule folder. Here is an example file tree:
my file tree
|- root
|-- devmodules
|---- mymodule
|------ index.js
|------ package.json
|-- webapps
|---- App-1
|------- src(app src)
|---------- js
|---------- css
|---------- images
|------- node_modules (jQuery & other)
|------- Gruntfile.js
|------- package.json(use to require other modules)
|---- App-2
|------- src(app src)
|---------- js
|---------- css
|---------- images
|------- node_modules (jQuery & other)
|------- Gruntfile.js
|------- package.json(use to require other modules)
... ...
|---- App-n | unknown | |
d6061 | train | The url you give as an example, "http://www.sonect.co.uk/Requests.php?accessToken=01XJSK", returns html with a frame that has "http://lvps92-60-123-84.vps.webfusion.co.uk/Requests.php?accessToken=01XJSK" as its source:
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01//EN" "http://www.w3.org/TR/html4/strict.dtd">
<html>
<head>
<title>www.sonect.co.uk</title>
</head>
<frameset rows="100%,*" border="0">
<frame src="http://lvps92-60-123-84.vps.webfusion.co.uk/Requests.php?accessToken=01XJSK" frameborder="0" />
<frame frameborder="0" noresize />
</frameset>
<!-- pageok -->
<!-- 02 -->
<!-- -->
</html>
That source will return 1 as response.
A: You are being returned an iframe whose source is
http://lvps92-60-123-84.vps.webfusion.co.uk/Requests.php?accessToken=01XJSK
You have to call this URL directly and you will get the response.
In the browser this runs perfectly because the iframe is executed. | unknown | |
d6062 | train | Github issue
When using either a MANY_TO_MANY or ONE_TO_MANY join in
your query, you cannot iterate over it, because it is potentially
possible that the same entity could be in multiple rows.
If you add a distinct to your query then all will work as it will
guarantee each record is unique.
$qb = $this->createQueryBuilder('o');
$qb->distinct()->join('o.manyRelationship');
$i = $qb->iterator;
echo 'Profit!'; | unknown | |
d6063 | train | Dictionaries have a.values() method, and you can use it like so:
for myList in myDict.values():
print(myList) # Do stuff
Keep in mind camelCase isn't a convention in Python.
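For completeness, if you need the key alongside each list, dictionaries also have an .items() method:

```python
my_dict = {1: [4, 2, 1, 3], 2: [4, 3, 1, 2], 3: [4, 3, 1, 2]}

# .items() yields (key, value) pairs; .values() yields just the lists.
for key, my_list in my_dict.items():
    print(key, my_list)
```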
A: A solution to your problem:
a={1: [4, 2, 1, 3], 2: [4, 3, 1, 2], 3: [4, 3, 1, 2]}
mylist=a[1]
print(mylist) | unknown | |
d6064 | train | Look at using the Buffering Event and the BufferingProgress Property. According to the MSDN Link:
Use this event to determine when buffering or downloading starts or stops. You can use the same event block for both cases and test IWMPNetwork.bufferingProgress and IWMPNetwork.downloadProgress to determine whether Windows Media Player is currently buffering or downloading content. | unknown | |
d6065 | train | You're actually setting the variable correctly, but the info statement is not being evaluated in the context of the recipe. You can check this by looking at the very lengthy output of make -d rule1. Notice that the info statement is evaluated before any rules.
You can have it print out in the context of the rule by doing something like
rule1: test_var=/path/to/folder/
rule2: test_var=/path/to/another/folder/
rule%:
$(info test_var is $(test_var)) | unknown | |
d6066 | train | Use list
sftp.list(remotePath);
It will (asynchronously) trigger an error if the file doesn't exist | unknown | |
d6067 | train | There are many "easy" ways, depending on your skills.
Maybe "write triggers that send the NOTIFY on insert/update" is the hint you need? | unknown | |
d6068 | train | I think this is not possible, but you can provide a better UI by showing a progress dialog until the web site opens. You can use an AsyncTask for this. This code works for me:
public class MainActivity extends AppCompatActivity {
ProgressDialog progressDialog;
@Override
protected void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
setContentView(R.layout.activity_main);
new OpenSite().execute();
}
private class OpenSite extends AsyncTask<Void,Void,Void>{
@Override
protected void onPreExecute() {
super.onPreExecute();
progressDialog=new ProgressDialog(MainActivity.this);
progressDialog.setMessage("Please Wait...");
progressDialog.setIndeterminate(false);
progressDialog.setCancelable(true);
progressDialog.show();
}
@Override
protected Void doInBackground(Void... params) {
try {
Thread.sleep(2000);
} catch (InterruptedException e) {
e.printStackTrace();
}
return null;
}
@Override
protected void onPostExecute(Void aVoid) {
super.onPostExecute(aVoid);
Uri uri = Uri.parse("http://www.google.com");
Intent intent = new Intent(Intent.ACTION_VIEW, uri);
startActivity(intent);
}
}
}
You can delete Thread.sleep(2000) if you don't want to wait. | unknown | |
d6069 | train | I have updated the plunkr: plunkr link
Change dynamic-pipe.ts like this:
let dynamicPipe = "";
// a simple approach: suppose your dynamic pipes are
this.dynamicPipe = ['number', 'uppercase', 'customPipe']; // pipe, pipe1 ... pipeN
// now build a string like " | number | uppercase | customPipe"
// (declared with let above, because a const cannot be reassigned)
for (let i = 0; i < this.dynamicPipe.length; i++) {
dynamicPipe = dynamicPipe + " | " + this.dynamicPipe[i];
}
@Component({
selector: 'dynamic-comp',
template: '{{ data ' + dynamicPipe + '}}'
}) | unknown | |
d6070 | train | The error is being caused by this line:
public static final Parcelable.Creator CREATOR = new Parcelable.Creator() {
Type parameters need to be added to both the return type and the object being created. The change to add type parameters is this:
public static final Parcelable.Creator<Response> CREATOR =
new Parcelable.Creator<Response>() {
A: try to add
public Response()
{}
above to the below mentioned code.
public Response(Parcel in) { .....
....
}
so it should look like
public Response(){}
public Response(Parcel in) { .....
....
} | unknown | |
d6071 | train | React will only re-render if a value that is a prop or part of the component's state changes. In your case, the deleteIt variable is not a state variable, so even if you change it with confirmationDeleteUser, your component won't re-render to trigger the popup.
Try to define your variable with useState instead, like this:
const [deleteIt, setDeleteIt] = useState(false);
And change your code in confirmationDeleteUser to change deleteIt like this:
setDeleteIt(true);
That should get things heading in the right direction.
A: Right now you are using a stateless component, so you will never see a change in the UI with this type of component.
You should use the stateful component instead, and set the values to change in the state property, and change them using the setState() function.
see State in react | unknown | |
d6072 | train | Bluetooth 4.0 is fully backwards compatible with its older versions.
BLE is a form of connection that uses low-energy technology.
BLE = Bluetooth Low Energy.
They are different technologies with different purposes. BLE tends to be used in heart rate monitors, bike computers, medical applications and so on: wherever the power supply is limited.
BLE is not intended for headsets and similar devices. That's why phone specifications say Bluetooth 4.0 + BLE (or LE). Bluetooth is a technology; BLE is a 'protocol of communication'.
A: Bluetooth Low Energy is part of the Bluetooth 4.0 specification. Bluetooth 4.0 includes Classic Bluetooth, Bluetooth Low Energy and Bluetooth High Speed.
Bluetooth Low Energy (BLE) uses a different radio protocol with fewer, wider channels and a lower transmission rate and power than Bluetooth Classic (although it uses the same frequencies) and most importantly it implements a different set of profiles.
Classic Bluetooth has profiles such as Serial Port Profile (SPP) and Handsfree Profile (HFP) while the most commonly used profile in BLE is the Generic Attribute profile (GATT). This profile allows for the transfer of small amounts of data at relatively low speeds and is not suitable high-bandwidth time-critical applications such as audio streaming.
Dual-mode Bluetooth chipsets that support Classic Bluetooth and BLE are available although often they can only operate in one mode at a time. Many BLE chipsets are BLE only, however as it reduces cost and complexity.
The short answer is that BLE can't support the classic Bluetooth functions you described. | unknown | |
d6073 | train | The problem could be that the ID you want is not loading before you do your next step. Try waiting for the element to load with the following "wait_for_load" function:
from selenium.webdriver.support import expected_conditions as EC
from selenium import webdriver
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.chrome.options import Options
from selenium.webdriver.common.by import By
class AppleMusicUploader:
def __init__(self, user):
self.user = user
self.driver = webdriver.Chrome(executable_path=CHROMEDRIVER)
def connect_to_user(self):
self.driver.get(APPLEMUSIC_LOGIN_PAGE)
self.wait_for_load('account_name_text_field')
apple_id = self.driver.find_element_by_id('account_name_text_field')
password = self.driver.find_element_by_id('password_text_field')
apple_id.send_keys(self.user['email'])
self.driver.implicitly_wait(10)
password.send_keys(self.user['password'])
self.driver.implicitly_wait(10)
self.driver.find_element_by_class_name(
"si-button btn fed-ui moved fed-ui-animation-show remember-me ").click()
def wait_for_load(self, element_id):
timeout = 5 # 5 second time out on loading
element_present = EC.presence_of_element_located((By.ID, element_id))
WebDriverWait(self.driver, timeout).until(element_present) | unknown | |
d6074 | train | TensorFlow tensors are read-only. In order to modify things you need to use variables and .assign (the = operator cannot be overridden in Python).
tensor = tf.Variable(tf.ones((3,3)))
sess.run(tf.initialize_all_variables())
sess.run(tensor[1:, 1:].assign(2*tensor[1:,1:]))
print(tensor.eval())
Output
[[ 1. 1. 1.]
[ 1. 2. 2.]
[ 1. 2. 2.]]
A: It took a lot of searching, but the closest functions to theano.tensor.set_subtensor are gather, gather_nd, scatter, and scatter_nd. If you are trying to perform sparse updates to a variable, the other answers could work. But if you're trying to dynamically create a tensor from indexing another tensor, those are the functions to use.
The point of these functions is to be able to dynamically create a tensor (not a variable) from something else. My use-case is I am generating a flat tensor and I am trying to reshape it into various triangular matrices.
gather is what you use if you're trying to create a smaller matrix from a large sparse matrix. scatter is what you use if you're trying to embed a smaller matrix into a large zero matrix.
Some combination of gather, scatter and addition and multiplication can recreate theano.tensor.set_subtensor.
https://www.tensorflow.org/api_docs/python/tf/scatter_nd
https://www.tensorflow.org/api_guides/python/array_ops#Slicing_and_Joining
You could also use a very complicated set of slicing and concatenation, but gather and scatter would be preferred.
Cheers | unknown | |
d6075 | train | love.keyboard.wasPressed is not a standard Love2D API. It should be implemented somewhere else in your code; check this first.
Then put some print("condition xxx is verified") calls inside the conditions where your code is supposed to be executed; that way you'll find which condition is not being met.
Example:
if love.keyboard.wasPressed('up') then
print("up is pressed")
... | unknown | |
d6076 | train | I hope this helps. I wrote this code for myself in a new way. I have used recursion to keep the guessing going, and simply used a while loop that stops once the 3 attempts are used up.
import random
elements = ["hydrogen", "magnesium", "cobalt", "mercury", "aluminium", "uranium", "antimony"]
nice_phrases = ["Nice job", "Marvellous", "Wonderful", "Bingo", "Dynamite"]
# I went ahead and created a dictionary of lists for storing the Hints
hints = {
'hydrogen':
[
"Tip 1: It is the most flammable of all the known substances.",
"Tip 2: It reacts with oxides and chlorides of many metals, "
"like copper, lead, mercury, to produce free metals.",
"Tip 3: It reacts with oxygen to form water."
],
'magnesium':
[
"Tip 1: It has the atomic number of 12.",
"Tip 2: It's oxide can be extracted into free metal through electrolysis.",
"Tip 3: It is a type of metal."
],
}
score = 0
def guess_again():
global score
# random.choice would raise an IndexError on an empty list,
# so stop once every element has been used
if not elements:
    print(f"Game over! Your final score is {score}")
    return
random_element = random.choice(elements)
max_attempts = 3
# remove the element so it cannot be picked again
elements.remove(random_element)
while max_attempts > 0:
user_guess = input("Take a Guess").lower()
if user_guess == random_element:
print(f"{random.choice(nice_phrases)}, you got it!")
score += 1
# If the answer is right calling the function again will continue the game
guess_again()
else:
max_attempts -= 1
print("That was a wrong guess. Here is a Hint")
if random_element in hints:
print(hints[random_element])
else:
print("Sorry no hints available at the moment")
if max_attempts == 0:
print("Sorry, you're out of guesses")
print(f"{random_element} was the element")
guess_again()
If the answer is right, it will select the next element and the game continues until all the elements in the list are finished. I have coded it to give 3 attempts for each element. If you want a maximum of 3 attempts for the entire duration of the game, just declare max_attempts outside the function and give it global scope, as done for score. | unknown | |
d6077 | train | For your example it works with bootstrap:
confint(model, method = "boot")
# 2.5 % 97.5 %
# .sig01 12.02914066 44.71708844
# .sigma 0.03356588 0.07344978
# (Intercept) -5.26207985 1.28669024
# prop1 1.01574201 6.99804555
Take into consideration that under your proposed model, although your fitted values will always be between 0 and 1, the model itself expects to observe values lower than 0 and greater than 1.
A: You've identified a bug with the current version of lme4, partially fixed on Github now (it works with method="boot"). (devtools::install_github("lme4/lme4") should work to install it, if you have development tools installed.)
library(lme4)
fit_glmer <- glmer(prop2 ~ prop1 + (1|site),
data=df2, family=gaussian(link="logit"))
Profiling/profile confidence intervals still don't work, but with a more meaningful error:
try(profile(fit_glmer))
## Error in profile.merMod(fit_glmer) :
## can't (yet) profile GLMMs with non-fixed scale parameters
Bootstrapping does work. There are lot of warnings, and a lot of refitting attempts failed, but I'm hoping that's because of the small size of the data set provided.
bci <- suppressWarnings(confint(fit_glmer,method="boot",nsim=200))
I want to suggest a couple of other options. You can use glmmADMB or glmmTMB, and these platforms also allow you to use a Beta distribution to model proportions. I would consider modeling the difference between proportion types by "melting" the data (so that there is a prop column and a prop_type column) and including prop_type as a predictor (possibly with an individual-level random effect identifying which proportions were measured on the same individual) ...
library(glmmADMB)
fit_ADMB <- glmmadmb(prop2 ~ prop1 + (1|site),
data=df2, family="gaussian",link="logit")
## warning message about 'sd.est' not defined -- only a problem
## for computing standard errors of predictions, I think?
library(glmmTMB)
fit_TMB <- glmmTMB(prop2 ~ prop1 + (1|site),
data=df2, family=list(family="gaussian",link="logit"))
It sounds like your data might be more appropriate for a Beta model?
fit_ADMB_beta <- glmmadmb(prop2 ~ prop1 + (1|site),
data=df2, family="beta",link="logit")
fit_TMB_beta <- glmmTMB(prop2 ~ prop1 + (1|site),
data=df2,
family=list(family="beta",link="logit"))
Compare results (fancier than it needs to be)
library(broom)
library(plyr)
library(dplyr)
library(dwplot)
tab <- ldply(lme4:::namedList(fit_TMB_beta,
fit_ADMB_beta,
fit_ADMB,
fit_TMB,
fit_glmer),
tidy,.id="model") %>%
filter(term != "(Intercept)")
dwplot(tab) +
facet_wrap(~term,scale="free",
ncol=1)+theme_set(theme_bw()) | unknown | |
d6078 | train | Denormalization is the magic password for your situation.
There are several ways to do this:
For example, store the ids of the last 10 users in the event and group.
Or create a new model NewsFeedItem (belongs_to :parent, :polymorphic => true). When a user attends an event, create a NewsFeedItem with denormalized information like that user's name, their profile pic, etc. This saves you from secondary queries to user_events and users.
A: You should be able to do this with only one query per Event / Group loop. What you'll want to do is: inside your for loop add user ids to a Set, then after the for loop, retrieve all the User records with those ids. Rinse and Repeat. Here is an example:
def newsfeed
user_ids = Set.new
# find out which users i need
... add ids to user_ids
# retrieve those users via DB
users = User.find(user_ids.to_a)
# find out which events happened for these users
# you might want to add a condition
# that limits then events returned to only recent ones
events = Event.find_by_user_id(user_ids.to_a)
user_ids = Set.new
events.each do |event|
user_ids << discover_user_ids_for_event(event)
# retrieve new set of users
users = User.find(user_ids.to_a)
# ... and so on
end
I'm not sure what your method is supposed to return, but you can likely figure out how to use the idea of grouping finds together by working with collections of IDs to minimize DB queries.
A: Do you want to show all the details at once? (I mean, when the page is loading, do you really want to load all of that information?) If not, what you can do is load them on demand, as follows:
def newsfeed
*
*find out which users i need
*retrieve those users via DB
*find out which events happened for these users
Once you show the events, give the user a button or something to drill down into further details (on demand), then load them using AJAX (so that the page will not refresh).
Use this technique repeatedly when users want to go deeper into the details.
By doing this, you will save lots of processing power and fetch only the details the user needs.
I don't know if this is applicable to your situation.
If not, then you have to find a more optimized way of loading the details.
cheers,
sameera
A: I understand that you are trying to perform some kind of algorithm on the basis of your data to do some kind of recommendation or similar sort of thing.
I have two suggestions:
1) You reevaluate your algorithm / design on the basis of what you actually want to achieve. For instance, in cases where an application has users who can potentially have lots of posts and the app wants to perform some algorithm on the basis of the number of posts then it will be quite expensive to count their posts every time. To optimise this, a post_count column can be added on the user model and increase that count whenever a user successfully does a post. Similarly, if you can establish some kind of relation like this between your user, events, groups etc, then think of something on those lines.
2) If first solution is not feasible, then for anything like this you must avoid doing multiple queries and then using ruby for crunching data which would obviously be very expensive and is never advisable if you have large data set. So what you need here is to make one sql query using join and get all data in just one go. Also pick only those field names from the database that you need. It really helps in case of large data sets. For instance, if you need user id and event_id from user and events table and nothing else then do something like so
User.find(:all,
:select => 'users.id, users.event_id',
:joins => 'join events on users.id = events.user_id',
:conditions => ['users.id in (your user ids)'])
I hope this will point you in the right direction. | unknown | |
d6079 | train | There are only minor differences these days between DOCTYPE declarations in HTML email. Although they are minor, it is still recommended to test your emails via Email on Acid, Litmus, or any other testing software prior to a send, to ensure cross-client compatibility and to find any unforeseen quirks.
The larger issue comes when you do not declare a doctype or body tag: it can really screw up different parts of the email in certain clients.
The most popular DOCTYPE nowadays is the HTML5 doctype (<!DOCTYPE HTML>), which is used with very few hiccups. The most popular/safest doctype used to be <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">. This can still be used, but it may limit different capabilities of your email, as it references an older version of HTML.
See this forum post in litmus for more in-depth information on this: https://litmus.com/community/discussions/39-explanation-of-doctype-html-attributes-and-meta-tags-in-email-head
A: This is what I use.
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN" "http://www.w3.org/TR/html4/loose.dtd">
I code emails daily and this came as the header that ExactTarget (Salesforce) provided us. Our company has sister brands and this is what they all use as well, so I'm assuming this is standard.
A: More primitive email readers cannot handle DOCTYPE at all, and instead either strip it completely out of the email, or just remove the string "!DOCTYPE" and leave the rest of the HTML intact. You wind up seeing weird things like this at the beginning of the emails:
< HTML PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN"
"http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
...because the word !DOCTYPE was removed, but not the rest, and since it did not remove the space, it displayed the broken code to the reader.
The rest of the email will usually display just fine.
While this obviously is a programming error by the writer of the script, I have seen a lot of scripts with this same exact error. It's usually because they are simply stripping out HTML tags and not allowing !DOCTYPE.
Email clients seem to be able to process your email without it unless you are using a different character set than the user, and sometimes stumble when you declare it. | unknown | |
d6080 | train | Simply put, let the DB API do that formatting:
c.execute("INSERT INTO Data_Output6 VALUES (?, ?)", (xdates[i], Averages_norm[-1]))
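If you want to see the whole pattern end to end, here is a self-contained sketch; the table schema and sample rows are made up for illustration, not taken from the question:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
c = conn.cursor()
c.execute("CREATE TABLE Data_Output6 (xdate TEXT, average REAL)")

rows = [("2014-01-01", 1.5), ("2014-01-02", 2.25)]
# Placeholders (?) let the driver quote and escape each value correctly,
# instead of building the SQL string by hand.
c.executemany("INSERT INTO Data_Output6 VALUES (?, ?)", rows)
conn.commit()

c.execute("SELECT COUNT(*) FROM Data_Output6")
print(c.fetchone()[0])  # 2
```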
And refer to the documentation https://docs.python.org/2/library/sqlite3.html where it is mentioned:
Instead, use the DB-API’s parameter substitution. | unknown | |
d6081 | train | You should be able to invoke bcp.exe directly rather than trying to use cmd.exe /c. This should be all you need...
import subprocess
import pyodbc
server = r".\SQLEXPRESS01"
database = "test"
connection = f"driver={{SQL Server Native Client 11.0}};server={server};database={database};trusted_connection=yes;"
dbconn = pyodbc.connect(connection, autocommit=True)
cursor = dbconn.cursor()
cursor.execute("SELECT name FROM sys.tables")
for row in cursor:
table_name = row[0]
out_path = f"D:\\Projects\\ReferenceModel\\DataFiles\\IN_Download\\Format_Files\\{table_name}.fmt"
args = [ "bcp.exe", f"{database}.dbo.{table_name}", "format", "nul", "-c", "-t,", f"-f{out_path}", f"-S{server}", "-T"]
subprocess.call(args) | unknown | |
d6082 | train | I think you got the allow and deny the wrong way around:
order allow,deny
allow from 192.168.1.7
deny from all
which first processes all the allow statements and next the deny statements.
A: I just checked, your above example works fine for me on my Apache 2.
Make sure your IP really is 192.168.1.7. Note that if it's the local machine you're trying to access, your IP will be 127.0.0.1. | unknown | |
d6083 | train | It should be:
public static void add(String title) {
//add the item
return render("anitemtemplate.html", item);
}
or
public static void add(String title) {
//add the item
renderArgs.put("item", newitem);
return render("anitemtemplate.html");
}
And if the anitemtemplate.html is in /app/views/tags/ , like the tag usage suppose to in your main.html, you should use render("tags/anitemtemplate.html") - the first argument of render is the relative path of the template from the /app/views/ directory.
And BTW,
this
#{list items, as:'item'}
<li>#{anitemtemplate item}</li>
#{/list}
should be
#{list items, as:'item'}
<li>#{anitemtemplate item /}</li>
#{/list} | unknown | |
d6084 | train | Puppeteer's Chromium does a proxy check by default, which is a waste of time if you do not use a proxy.
You can disable it with:
const browser = await puppeteer.launch({
headless: false,
args: ["--proxy-server='direct://'", '--proxy-bypass-list=*'],
});
Then it will be as fast as normal Chrome. | unknown | |
d6085 | train | See the API for DMatch data type here:
http://docs.opencv.org/modules/features2d/doc/common_interfaces_of_descriptor_matchers.html#dmatch | unknown | |
d6086 | train | On Unix, when you launch your process you can pipe it into tail first:
import subprocess

p=subprocess.Popen("your_process.sh | tail --lines=3", stdout=subprocess.PIPE, shell=True)
r=p.communicate()
print r[0]
Usage of shell=True is the key here. | unknown | |
d6087 | train | Finding and showing the maxima/minima for grouped data with a single formula in only one cell can be done with the following formula:
=UNIQUE(FILTER(MyArray,MMULT(((ValueRange>TRANSPOSE(ValueRange))+(ValueRange=TRANSPOSE(ValueRange))-(GroupRange=TRANSPOSE(GroupRange)))*(GroupRange=TRANSPOSE(GroupRange)),SEQUENCE(ROWS(GroupRange),1,1,0))=0),FALSE,FALSE)
For an example, see this screenshot: Link
The output automatically adjusts for any number of groups in the data array.
With the operator > used in the formula, it will return the maximum. By using < it will return the minimum.
Note that the UNIQUE() function will only show distinct rows for each group maximum (see group 'Alpha' in screenshot).
If there is more than one maximum in a group and more than just the group and value column, the UNIQUE() function will show all distinct rows taking into account all columns (as can be seen for group 'Alpha' and 'Gamma' here: Link).
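If you ever need the same grouped-maximum logic outside the spreadsheet, it is only a few lines of Python. This sketch (with made-up sample rows and an illustrative column layout) also keeps every row that ties for the group maximum, mirroring the behaviour described above:

```python
# Each row: (ID, Category, Animal, Version, Value) -- made-up sample data.
rows = [
    ("id1", "catA", "cat", 3, 10),
    ("id1", "catA", "dog", 3, 12),  # ties with the row above on Version
    ("id1", "catA", "cat", 1, 7),
    ("id2", "catB", "fox", 2, 5),
]

# First pass: highest Version seen per ID.
max_version = {}
for r in rows:
    max_version[r[0]] = max(max_version.get(r[0], r[3]), r[3])

# Second pass: keep every row matching its group's maximum.
latest = [r for r in rows if r[3] == max_version[r[0]]]
print(latest)
```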
A: For a list without duplicates, you can put this in cell G2 (ARRAY FORMULA: CTRL + SHIFT + ENTER):
=IFERROR(INDEX(A:A,MATCH(1,(COUNTIF(G$1:G2,A$1:A$99)=0)*(A$1:A$99<>""),0)),"")
This gives you a list with unique ID's. Now you can use the max formula to get the max version number of each ID. ARRAY-FORMULA: CTRL + SHIFT + ENTER
=MAX(IF($A$2:$A$2000=G3,$D$2:$D$2000,0))
The rest can be done with INDEX/MATCH formulas.
A: You can use the Advanced Filter with a formula criteria:
=D9=AGGREGATE(14,6,1/(A9=Table1[ID])*Table1[Version],1)
where D9 is the location of the first entry in the Value Column
Before applying Filter
After applying Filter
A:
Suppose your data is in range A1:E8,
In cell A11, put in the following formula to find unique ID, drag it down until there is a #N/A error:
=INDEX($A$2:$A$8,MATCH(0,INDEX(COUNTIF($A$10:A10,$A$2:$A$8),0),0))
In cell B11, put in the following formula and drag it down to find the latest Version:
=AGGREGATE(14,6,$D$2:$D$8/($A$2:$A$8=A11),1)
In cell C11, D11 and E11, put in the following formulas respectively and drag them down to find the corresponding Category, Animal and Value:
=INDEX($B$2:$B$8,MATCH(1,INDEX(($A$2:$A$8=A11)/($D$2:$D$8=B11),0),0))
=INDEX($C$2:$C$8,MATCH(1,INDEX(($A$2:$A$8=A11)/($D$2:$D$8=B11),0),0))
=INDEX($E$2:$E$8,MATCH(1,INDEX(($A$2:$A$8=A11)/($D$2:$D$8=B11),0),0))
Let me know if there is any question. Cheers :) | unknown | |
d6088 | train | My guess is that it has something to do with what you put into @style/TextLabel.
When you have an error with password or email, you request focus programmatically. That is fine; however, when that happens, something in your style looks for a color resource that doesn't exist. That's what's causing the error.
A: The error seems to come from the getColor() method of the TextView; your primary color has some problem, please check that. | unknown | |
d6089 | train | Let's say $key is 'x'. You could then use getElementById('x'), because echoing $key is the same as putting id="x".
A: Oh, I see. You have a series of rows with different qty keys.
Then try this.
in PHP:
<input type="number" min="0" max="500" value="" name="qty<?php echo $key ?>" id="<?php echo $key ?>" onChange="findTotal('qty<?php echo $key?>')" />
and in JS:
function findTotal(key) {
var arr = document.getElementsByName(key);
....
}
A: You can pass this as the function argument; then you can read its name or id inside the function when the onchange event fires.
//assign this
<input type="number" min="0" max="500" value="" name="qty<?php echo $key ?>" id="<?php echo $key ?>" onChange="findTotal(this)" />
function findTotal(current) {
var arr = current.name; // current.id for id
alert(arr);
}
A: Since your name is dynamic, you can not get the element by the name unless your JS has some way of knowing that dynamic name (such as if you have a list of them and are running them in a loop).
In your case, you may want to try using data attributes as outlined here:
Mozzila - Using data attributes
You may also be able to take advantage of this as outlined in this answer:
Javascript Get Element Value - Answer By yorick
Let me know if this helps. I may be able to refine this answer if you can give some more information about your specific implementation.
Cheers! | unknown | |
d6090 | train | You can use this code:
var place = $('#foo');
var delay = 3 * 1000; // 3 seconds
var url = 'http://www.somewhere.com/temp/busy.html';
(function recur() {
$.ajax({url: url, success: function(page) {
place.html(page);
setTimeout(function() {
recur();
}, delay);
}, error: function() {
place.hide();
}});
})();
A: Here are the steps you need.
*
*Read about jQuery's post or get methods and look at a simple example.
*Inside document ready, make a post or get request (get will do for your scenario) with the url pointing to busy.html.
*In the response you will get the data, i.e. the html or text written inside busy.html.
*You can then use the last three characters of that text to extract the percentage and show a chart, gauge or simple text as you wish.
It's not complicated. | unknown | 
d6091 | train | When I tried running this, I also got the error "TargetName property cannot be set on a Style Setter", which indicates that you can't set a property of the Border control inside a style setter for the TextBox control (which honestly doesn't surprise me).
What you can do instead is set it in the style of the border control itself, using a DataTrigger to bind to the IsFocused property of the textbox:
<Border>
<Border.Style>
<Style TargetType="Border">
<Style.Triggers>
<DataTrigger Binding="{Binding Path=IsFocused, ElementName=textBox}" Value="true">
<Setter Property="Background" Value="Red" />
</DataTrigger>
</Style.Triggers>
</Style>
</Border.Style>
<TextBox Name="textBox" Text="something"/>
</Border> | unknown | |
d6092 | train | The .filter() function will always return an array of the same type that was given as the argument. That is why reviewsWithRating is still a Review[], even after you filter it.
To change this, you can add a type guard to the callback:
const reviewsWithRating = reviews.filter(
(review): review is { rating: Required<ComplexRating> } =>
typeof review.rating === 'object' &&
typeof review.rating['ratingAttribute1'] === 'number'
);
Now TypeScript will know that reviewsWithRating is of type { rating: Required<ComplexRating> }[]. | unknown | |
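A quick self-contained sketch of that narrowing in action (the exact shapes of ComplexRating and Review below are assumptions, since the original type definitions aren't shown):

```typescript
type ComplexRating = { ratingAttribute1?: number; ratingAttribute2?: number };
type Review = { rating?: number | ComplexRating };

const reviews: Review[] = [
  { rating: 4 },                                           // simple numeric rating
  { rating: { ratingAttribute1: 5, ratingAttribute2: 3 } } // complex rating object
];

// The type predicate tells the compiler what each kept element looks like.
const reviewsWithRating = reviews.filter(
  (review): review is { rating: Required<ComplexRating> } =>
    typeof review.rating === 'object' &&
    typeof review.rating['ratingAttribute1'] === 'number'
);

// No cast needed: rating is known to be a Required<ComplexRating> here.
const first = reviewsWithRating[0].rating.ratingAttribute1;
console.log(reviewsWithRating.length, first);
```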
d6093 | train | Strings in JavaScript carry no inherent formatting; the <sup> markup only takes effect when the string is output to HTML. So basically you must write it like this:
var str = "Not less than 30 net ft<sup>2</sup> (2.8 net m<sup>2</sup>)";
document.write(str);
You can do a find-and-replace on every string containing ft2 and m2 and turn them into ft<sup>2</sup> and m<sup>2</sup>:
str = str.replace(/ft2/g, "ft<sup>2</sup>"); // note that replace() returns a new string, and a bare /ft2/ pattern is not safe... | unknown | 
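As a sketch of a slightly safer replacement (assuming the units always appear as standalone tokens like ft2 and m2), a word-boundary regex with a capture group handles both units at once without touching unrelated text, and keeps in mind that replace() returns a new string rather than modifying the original:

```javascript
var str = "Not less than 30 net ft2 (2.8 net m2)";

// \b(ft|m)2\b only matches the standalone tokens "ft2" and "m2";
// "$1" re-inserts the captured unit in front of the <sup> tag.
var formatted = str.replace(/\b(ft|m)2\b/g, "$1<sup>2</sup>");

console.log(formatted);
// Not less than 30 net ft<sup>2</sup> (2.8 net m<sup>2</sup>)
```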
d6094 | train | Your repository is just a directory/file structure. Go to your local repo, find the path (the group id is the path), and delete from the place where you start to see version numbers. When you rebuild, the artifact should be downloaded/replaced from your server/repo. | unknown | |
d6095 | train | Yes, the CLR is not really smart enough to ignore it but the difference should be negligible in most cases.
A method call is not a big deal and is unlikely to have a meaningful impact on the performance of your application.
A: If your application calls ChangeStatus thousand times per second, maybe it would be a problem. But only profiler can prove this. | unknown | |
d6096 | train | Since you're calling ToDictionary(), your method returns a Dictionary<>, not an IQueryable<>:
public Dictionary<discussion_category, List<discussion_board>>
GetDiscussion_categoriesWithBoards()
{
// ...
}
If you absolutely want to return an IQueryable<> you can write something like:
public IQueryable<Dictionary<discussion_category, List<discussion_board>>>
GetDiscussion_categoriesWithBoards()
{
return new[] {
GetDiscussion_categories().Select(c => new {
Category = c,
Boards = GetDiscussion_boardsByCategory(c.ID).ToList()
}).ToDictionary(i => i.Category, i => i.Boards.ToList())
}.AsQueryable();
}
A: The return type will be:
System.Collections.Generic.Dictionary<discussion_category, List<discussion_board>>
as the return type of ToDictionary is Dictionary object and not a Iqueryable of Dictionary
A: Seems that this function will return:
Dictionary<Category, List<Board>>
A: I found the answer :)
public IQueryable<System.Collections.Generic.Dictionary<discussion_category, List<discussion_board>>> GetDiscussion_categoriesWithBoards()
{
return (GetDiscussion_categories().Select(c =>
new
{
Category = c,
Boards = GetDiscussion_boardsByCategory(c.ID).ToList()
        }).ToDictionary(i => i.Category, i => i.Boards.ToList()))
        .AsQueryable() as IQueryable<System.Collections.Generic.Dictionary<discussion_category, List<discussion_board>>>;
}
this is the solution :)
thank you for all your help; your questions and answers led me to solve this :)
A: Are you asking what is the return type of the lambda? It looks to be a Dictionary<Category,List<Board>> | unknown | |
d6097 | train | In Swing, there are two different components. JTextArea and JTextPane. The JTextArea is easy to use, but doesn't allow formatting. If you do not plan on changing the formatting of different words, that is the one to use. The JTextArea is more robust but harder to use.
Check out the Java tutorial for more information.
http://docs.oracle.com/javase/tutorial/uiswing/components/text.html
A: I hope this helps...
TextGenerator.java
package textgenerator;
import java.awt.GridLayout;
import java.awt.event.ActionEvent;
import java.awt.event.ActionListener;
import javax.swing.JButton;
import javax.swing.JFrame;
import javax.swing.JLabel;
import javax.swing.JPanel;
import javax.swing.JTextField;
import javax.swing.JTextPane;
import javax.swing.SwingUtilities;
public class TextGenerator {
JFrame frame;
JPanel panel;
JTextPane textPane;
JLabel namel;
JLabel agel;
JTextField namef;
JTextField agef;
JButton button;
public TextGenerator() {
frame = new JFrame("My Frame");//Construct the frame
frame.setBounds(200, 100, 1000, 500);//set the size and position
frame.setLayout(new GridLayout(1, 2));//set layout with 1 row and 2 columns
frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
        frame.setResizable(false);//restrict the resizing of the frame
panel = new JPanel();//create a panel
panel.setLayout(null);
textPane = new JTextPane();//create a textpane
textPane.setEditable(false);
frame.add(panel);//add panel to the frame
frame.add(textPane);//add textpane to the frame
//create labels and textfields and add them to the panel
namel = new JLabel("Name : ");
namel.setBounds(20, 200, 150, 20);
agel = new JLabel("Age : ");
agel.setBounds(20, 250, 150, 20);
namef = new JTextField();
namef.setBounds(220, 200, 150, 20);
agef = new JTextField();
agef.setBounds(220, 250, 150, 20);
panel.add(namel);
panel.add(agel);
panel.add(namef);
panel.add(agef);
//create button and add it to the panel
button = new JButton("Done !");
button.setBounds(350, 400, 100, 20);
panel.add(button);
//set the required text to the textfield on button click
button.addActionListener(new ActionListener() {
@Override
public void actionPerformed(ActionEvent e) {
textPane.setText("Hello !\n"
+ "My name is " + namef.getText() + ", and I'm " + agef.getText() + ".\n"
+ "How are you ?");
}
});
frame.setVisible(true);//make the frame visible
}
public static void main(String[] args) {
SwingUtilities.invokeLater(new Runnable() {
@Override
public void run() {
new TextGenerator();
}
});
}
}
Output looks like this | unknown | |
d6098 | train | If you mean visually, then the way is to put endl or "\n" in the outer loop and remove endl from the inner loop. I do not know anything about your Holder object, but if you have the [] operator defined there, that is the answer.
vector<Holder> obj(N);
void savedata(string filename, vector<Holder> obj, int M, int N) {
ofstream out(filename);
for(int i = 0; i < M; i++) {
for(int j = 0; j < N; j++) {
out << obj[i][j] << "\t";
}
        out << "\n";
}
}
A: Your method is OK; however, I made a minor change so that you have M lines, where line i represents obj[i] for i = 0..M-1, and each column (index j) is printed tab-separated within its line:
vector<Holder> obj(N);
void savedata(string filename, vector<Holder> obj, int M, int N) {
ofstream out(filename);
for(int i = 0; i < M; i++) {
for(int j = 0; j < N; j++) {
out << obj[i][j] << "\t";
}
out << endl;
}
} | unknown | |
d6099 | train | how about adding a new layer?
yourView.clipsToBounds = YES;
CALayer *topBorder = [CALayer layer];
topBorder.borderColor = [UIColor redColor].CGColor;
topBorder.borderWidth = 1;
topBorder.frame = CGRectMake(0, 0, CGRectGetWidth(yourView.frame), 2);
[yourView.layer addSublayer:topBorder];
replace yourView with whatever view is contained in your view controller that you want to add the border to.
Also, this related answer might help you out a bit. | unknown | |
d6100 | train | This is more efficient:
required.replicates <- function (delta, sigma, z.alpha, z.beta) {
oo <- 1 / outer(delta, sigma, "/")
ceiling(oo ^ 2 * 2 * (z.alpha + z.beta) ^ 2)
}
practice1 <- required.replicates(delta.vec, sigma.vec, 1.959964, 0.8416212)
Fix to your original code
required.replicates <- function(delta, sigma, z.alpha = 1.959964, z.beta=0.8416212) {
oo <- matrix(0, nrow=length(delta), ncol=length(sigma))
for(i in 1:length(delta))
for(j in 1:length(sigma))
oo[i,j] <- ceiling((2*(z.alpha + z.beta)^2)* (sigma[j]/delta[i])^2)
return(oo)
}
practice1 <- required.replicates(delta.vec, sigma.vec, 1.959964, 0.8416212)
Thanks! One more question: if I want any value in the matrix less than 3 to become 3, and any value more than 1000 to be returned as NA, what additions should I make?
practice1[practice1 < 3] <- 3
practice1[practice1 > 1000] <- NA
practice1 | unknown |