_id | partition | text | language | title |
---|---|---|---|---|
d3201 | train | I am not sure which library you are using now.
In my experience, 405 errors are almost always caused by a bad request path.
So please check whether the API URL is correct.
PLEASE PAY ATTENTION TO THE LAST SLASH '/' OF THE API URL. | unknown | |
d3202 | train | You have to allow inbound traffic for your instance.
Security groups enable you to control traffic to your instance, including the kind of traffic that can reach your instance. For example, you can allow computers from only your home network to access your instance using SSH. If your instance is a web server, you can allow all IP addresses to access your instance via HTTP, so that external users can browse the content on your web server.
To enable network access to your instance, you must allow inbound traffic to your instance. To open a port for inbound traffic, add a rule to a security group that you associated with your instance when you launched it.
Source: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/authorizing-access-to-an-instance.html | unknown | |
d3203 | train | The accepted answer is correct and straightforward.
But also, the marker string should not contain line breaks like \n; otherwise, Ansible will keep adding the block. Sounds like a bug to me.
A: You should specify {mark} keyword in the marker parameter:
marker: "## {mark} added by ansible (configuration elasticsearch)"
This will cause Ansible to insert a line at the beginning and at the end of the block replacing {mark} accordingly with BEGIN and END:
## BEGIN added by ansible (configuration elasticsearch)
network.host: 0.0.0.0
path.data: /var/lib
path.logs: /var/log/elasticsearch
path.repo: /home/chris/elastic-backups
## END added by ansible (configuration elasticsearch)
Otherwise Ansible has no clue where the block starts and where it ends, so on every run it considers the block not present and inserts a new one. | unknown | |
d3204 | train | To start with you'll need these operations to convert to and from isometric coordinates:
isoX = carX + carY;
isoY = carY - carX / 2.0;
carX = (isoX - isoY) / 1.5;
carY = isoX / 3.0 + isoY / 1.5;
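A quick numeric check (sketched in Python for illustration, not the asker's language) confirms the two transforms above are each other's inverse:

```python
# Round-trip check of the cartesian <-> isometric transforms above.
def cart_to_iso(car_x, car_y):
    return car_x + car_y, car_y - car_x / 2.0

def iso_to_cart(iso_x, iso_y):
    return (iso_x - iso_y) / 1.5, iso_x / 3.0 + iso_y / 1.5

iso_x, iso_y = cart_to_iso(3.0, 5.0)       # (8.0, 3.5)
car_x, car_y = iso_to_cart(iso_x, iso_y)   # back to (3.0, 5.0)
print((iso_x, iso_y), (car_x, car_y))
```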
right-angled corners in the top-left and bottom-right become 120 degrees, the other two corners become 60 degrees. the bottom-right corner becomes the bottom corner, the top-left corner becomes the top. this also assumes that y increases going up, and x increases going right (if your system is different, flip the signs accordingly). you can verify via substitution that these operations are each other's inverse.
for a rectangle you need 4 points transformed - the corners - as they will not be 'rectangular' for the purposes of SDL (it will be a parallelogram). this is easier to see numerically.
first, assign the corners names of some sort. i prefer clockwise starting with the bottom-left - this coordinate shall be known as C1, and has an associated X1 and Y1, the others will be C2-4.
C2 - C3
| |
C1 - C4
then compute their cartesian coordinates...
X1 = RECT.X;
Y1 = RECT.Y;
X2 = X1; // moving vertically
Y2 = RECT.Y + RECT.HEIGHT;
X3 = RECT.X + RECT.WIDTH;
Y3 = Y2; // moving horizontally
X4 = X3; // moving vertically
Y4 = RECT.Y;
and lastly apply the transform individually to each coordinate, to get I1, I2, I3, I4 coordinates...
iX1 = X1 + Y1;
iY1 = Y1 - X1 / 2.0;
// etc
and what you end up with is on-screen coordinates I1-4, that take this shape:
I2
/ \
I1 I3
\ /
I4
But unlike this shoddy depiction, the angles for I4 and I2 will be ~127 deg, and for I1 and I3 it should be ~53 deg. (this could be fine-tuned to be exactly 60/120, and depends on the 2.0 factor for carX when computing isoY - it should be sqrt(3) rather than 2.0 but meh, close enough)
if you use the inverse transform, you can turn back the I1-4 coordinates into C1-4, or locate a world coordinate from a screen coordinate etc.
implementing a camera / viewport gets a little tricky if only at first but it's beyond what was asked so i won't go there (without further prodding)...
(Edit) Regarding SDL...
SDL does not appear to "play nice" with generalized transforms. I haven't used it but its interface is remarkably similar to GDI (windows) which I've played around with for a game engine before and ran into this exact issue (rotating + scaling textures).
There is one (looks to be non-standard) SDL function that does both scaling and rotating of textures, but it does it in the wrong order so it always maintains the perspective of the image, and that's not what is needed here.
Basic geometry will be fine, as you've seen, because it's fills and lines which don't need to be scaled, only positioned. But for textures... You're going to have to either write code to render the texture one pixel at a time, or use a combination of transforms (scale after rotate), or layering (drawing an alpha-masked isometric square and rendering a pre-computed texture) and so on...
Or of course, if it's an option for you, use something suited for raw geometry and texture data like OpenGL / Direct3D. Personally I'd go with OpenGL / SFML for something like this.
A: Unfortunately I cannot comment to ask for clarification so I must answer with a question: can you not convert all four points then from those points calculate the width and height from the transformed points?
X=20, Y=10, Width=16, Height=16
as you've said
isometricX = cartX - cartY;
isometricY = (cartX + cartY) / 2;
so
isometricX1 = cartX1 - cartY1;
isometricY1 = (cartX1 + cartY1) / 2;
and
isometricWidth = std::abs(isometricY - isometricY1)
There's probably a more efficient route, but as I do not know Cartesian geometry I can't find that solution.
EDIT: isometricWidth found the distance between the two points, not the width and height.
another note is you'd need opposite corners (yes I realize the other guy probably has a much better answer :P) | unknown | |
d3205 | train | ClassCastException - if narrowFrom cannot be cast to narrowTo.
It seems that the CORBA object and ITestEJBRemoteInterface are not related by inheritance.
A: The relevant lines from the dumpNameSpace are:
168 (top)/nodes/CLEVDICM-143Node01/servers/server1/ejb3.test.ITestEJBRemoteInterface
168 ejb3.test.ITestEJBRemoteInterface
192 (top)/nodes/CLEVDICM-143Node01/servers/server1/ejb/TestEJB(1)_jar/TestEJB(1).jar/TestEJB#ejb3.test.ITestEJBRemoteInterface
192 ejb3.test.ITestEJBRemoteInterface
The root of the server context is (top)/nodes/CLEVDICM-143Node01/servers/server1/, which means you should use one of these strings:
*
*ejb3.test.ITestEJBRemoteInterface
*ejb/TestEJB(1)_jar/TestEJB(1).jar/TestEJB#ejb3.test.ITestEJBRemoteInterface | unknown | |
d3206 | train | Personally I would ignore CVS for a new product. My feeling would be that the enormous extra effort to coerce it into looking like SVN would be better spent on other stuff. I don't know your market, so I might be wrong, but that's got to be worth thinking about.
A: The MSSCCI API does something very similar:
http://alinconstantin.homeip.net/webdocs/scc/msscci.htm
The MSSCCI tries to make all source controls look the same from the perspective of the IDE.
A: viewvc lets you browse SVN and CVS repositories. Maybe there is an existing product which will already do what you want? | unknown | |
d3207 | train | Ok, I solved this problem.
So basically here are the option fields that need to be set, and we need to place the script below before the </head> tag.
Script:
<script>
$(function() {
$("#mobile-number").intlTelInput({
allowExtensions: true,
autoFormat: false,
autoHideDialCode: false,
autoPlaceholder: false,
defaultCountry: "auto",
ipinfoToken: "yolo",
nationalMode: false,
numberType: "MOBILE",
//onlyCountries: ['us', 'gb', 'ch', 'ca', 'do'],
//preferredCountries: ['cn', 'jp'],
preventInvalidNumbers: true,
utilsScript: "lib/libphonenumber/build/utils.js"
});
});
</script>
Place it before closing head tag and remember to call $("#mobile-number").intlTelInput(); as it is important.
A: You just forgot to put your jQuery code inside the document.ready.
See below:
$(document).ready(function(){
$("#mobile-number").intlTelInput({
//allowExtensions: true,
//autoFormat: false,
//autoHideDialCode: false,
//autoPlaceholder: false,
//defaultCountry: "auto",
//ipinfoToken: "yolo",
//nationalMode: false,
//numberType: "MOBILE",
//onlyCountries: ['us', 'gb', 'ch', 'ca', 'do'],
//preferredCountries: ['cn', 'jp'],
//preventInvalidNumbers: true,
utilsScript: "lib/libphonenumber/build/utils.js"
});
});
A: Country code will always be in input field with this code
$("#mobile-number").on("blur keyup change", function() {
if($(this).val() == '') {
var getCode = $("#mobile-number").intlTelInput('getSelectedCountryData').dialCode;
$(this).val('+'+getCode);
}});
$(document).on("click",".country",function(){
if($("#phone").val() == '') {
var getCode = $("#mobile-number").intlTelInput('getSelectedCountryData').dialCode;
$("#mobile-number").val('+'+getCode);
}});
A:
$(function() {
$("#mobile-number").intlTelInput({
autoHideDialCode: false,
autoPlaceholder: false,
nationalMode: false
});
});
A: Why country flag and country code are not stable on page post back.
Here is what I have tried so far:
<script>
$(document).ready(function () {
$("#Phone").intlTelInput({
allowDropdown: true,
// autoHideDialCode: false,
// autoPlaceholder: "off",
// dropdownContainer: "body",
// excludeCountries: ["us"],
defaultCountry: "auto",
// formatOnDisplay: false,
//geoIpLookup: function (callback) {
// $.get("http://ipinfo.io", function () { }, "jsonp").always(function (resp) {
// var countryCode = (resp && resp.country) ? resp.country : "";
// callback(countryCode);
// });
//},
//initialCountry: "auto",
// nationalMode: false,
// onlyCountries: ['us', 'gb', 'ch', 'ca', 'do'],
// placeholderNumberType: "MOBILE",
// preferredCountries: ['in','pk', 'np','bd', 'us','bt','sg','lk','ny','jp','hk','cn'],
// separateDialCode: true,
utilsScript: "build/js/utils.js"
});
$("#Phone").on("countrychange", function (e, countryData) {
$("#hdnPhone").val(countryData.dialCode);
});
});
</script>
<script>
$(document).ready(function () {
$("#Phone").val('');
var HdnVal = $("#hdnPhone").val();
if (HdnVal != '') {
var countryData = $("#Phone").intlTelInput("getSelectedCountryData");
$("#hdnPhone").val(countryData.dialCode);
}
});
</script>
A: Try this code:
$(function() {
$("#mobile-number").intlTelInput({
// allowExtensions: true,
//autoFormat: false,
autoHideDialCode: false,
// autoPlaceholder: false,
// defaultCountry: "auto",
//ipinfoToken: "yolo",
nationalMode: false,
// numberType: "MOBILE",
// onlyCountries: ['us', 'gb', 'ch', 'ca', 'do'],
//preferredCountries: ['cn', 'jp'],
//preventInvalidNumbers: true,
utilsScript: "lib/libphonenumber/build/utils.js"
});
});
A: // if you want to get the country code value in the input box then set these to false
nationalMode: false,
separateDialCode: false,
Please set these two lines in your function properties and it will work | unknown | |
d3208 | train | PowerShell sends empty JSON payload
This error states that the command expects a JSON payload, but the input provided is empty.
I have reproduced your requirement in my environment, and I have faced a similar issue.
So, when I displayed the body I got the below as output:
And if I send this as the JSON in the invoke command, I get a similar error to yours.
Then I used ConvertTo-Json and I got the expected results, as below:
What I observed is that after converting, \n characters are added and it worked as expected.
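The fix has a direct analogue in other languages; as an illustration in Python (not the original PowerShell, and with hypothetical field names), serializing the body object first is what produces a non-empty JSON payload:

```python
# Sketch: a request body must be serialized to a JSON string before sending;
# passing the raw object/hashtable is what leads to an "empty payload" error.
import json

body = {"name": "example-item", "description": "uploaded via script"}  # hypothetical fields
payload = json.dumps(body)   # the ConvertTo-Json step

print(payload)               # a proper JSON string, ready for the request body
```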
References:
*
*"Empty Payload. JSON content expected" when uploading an item to OneDrive from a URL · Issue #745 · OneDrive/onedrive-api-docs · GitHub | unknown | |
d3209 | train | Can you use EGit? It is the Git provider for Eclipse.
A: As stated in the user manual for EGit, the question mark next to a file denotes it is untracked by the Git repository, and will not be version controlled until explicitly added. | unknown | |
d3210 | train | Order only matters if there is a clash, with the last import winning (redefining), e.g. tkinter.Image redefines PIL.Image because it comes after. You can avoid this by keeping the tkinter import in a namespace, e.g. import tkinter as tk and then tk.XXX for any call in that module. It is generally best to avoid * all imports.
The answer is provided by the comment here. | unknown | |
d3211 | train | I don't think that sorting by random can be "optimised out" in any way, as sorting is an N*log(N) operation. Sorting is avoided by the query analyzer by using indexes.
The ORDER BY RAND() operation actually re-queries each row of your table, assigns a random number ID and then delivers the results. This takes a large amount of processing time for tables of more than 500 rows. And since your table contains approx 25000 rows, it will definitely take a good amount of time.
A: so to get something like this I would use a subquery.. that way you are only putting the RAND() on the outer query which will be much less taxing.
From what I understood from your question you want 200 males from the table with the highest score... so that would be something like this:
SELECT *
FROM table_name
WHERE sex = 'male'
ORDER BY score DESC
LIMIT 200
now to randomize 5 results it would be something like this.
SELECT id, score, name, age, sex
FROM
( SELECT *
FROM table_name
WHERE sex = 'male'
ORDER BY score DESC
LIMIT 200
) t -- could also be written `AS t` or anything else you would call it
ORDER BY RAND()
LIMIT 5 | unknown | |
d3212 | train | nglview is part of the NinevehGL framework.
NinevehGL is a 3D engine forged with pure Obj-C.
Here is the community forum for beginners.
You can find the basic lessons here. | unknown | |
d3213 | train | you can install each component of Material-UI via bit.dev collection:
https://bit.dev/mui-org/material-ui
Here is the Button component for example:
https://bit.dev/mui-org/material-ui/button
I exported the project to bit.dev and I'm trying to keep it up to date as much as possible.
A: You can install and use the isolated material components here:
bit.dev
A: Unfortunately, you can't install the separate components from the Material UI. The only way is to install the @material-ui/core directly. | unknown | |
d3214 | train | There are security vulnerabilities in the way you are creating that query. But to specifically respond to your issue, get rid of the ' around 'book_name'.
A: You shouldn't have a ' character in the list of column names — column names are not string literals, they're column names. If they absolutely have to be quoted (e.g. if you have a column name that is a MySQL reserved word), then you use backticks (`), not quotes (').
$query = "insert into pages (";
$query .= " subject_id, book_name, position, visible";
$query .= " ) values ( ";
$query .= "$subject_id, '$book_name', $position, $visible ";
$query .= ")";
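For the last point, here is what bind variables look like; this sketch uses Python's sqlite3 rather than the asker's PHP/MySQL stack, but the placeholder idea carries over directly to mysqli/PDO:

```python
# Parameterized insert: the driver handles quoting, so values with
# embedded quotes (or injection attempts) cannot break the SQL.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE pages (subject_id INT, book_name TEXT, position INT, visible INT)")
conn.execute(
    "INSERT INTO pages (subject_id, book_name, position, visible) VALUES (?, ?, ?, ?)",
    (1, "O'Reilly's Guide", 2, 1),   # the apostrophe is handled safely
)
print(conn.execute("SELECT book_name FROM pages").fetchone()[0])
```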
Now please learn about prepared statements and bind variables | unknown | |
d3215 | train | You can get the user_id from a Request object, you just need to inject it in the index method:
public function index(Request $request)
{
$user_id = $request->get('user_id') ?: Auth::id();
$events = Event::where('events.user_id', '=', $user_id)->get();
$users = User::all();
return view('events.index')->with(['events' => $events])->with(['users' => $users]);
} | unknown | |
d3216 | train | Try this
SELECT
id
, SUM(AMOUNT) AS AMOUNT
FROM
Payment
GROUP BY
id;
This might help if you want other columns.
WITH cte AS (
SELECT
id
, ROW_NUMBER() OVER (PARTITION BY ID ORDER BY AMOUNT DESC) AS RowNum
-- other columns
FROM
Payment
)
SELECT *
FROM
cte
WHERE
RowNum = 1;
A: It sounds like you need to partition your results per customer.
SELECT TOP 1 WITH TIES
ID,
DATEDUE,
AMOUNT
FROM Payment
WHERE DATEDUE BETWEEN '2016-11-01' AND '2016-11-30'
ORDER BY ROW_NUMBER() OVER (PARTITION BY ID ORDER BY AMOUNT DESC)
PS: The BETWEEN operator is frowned upon by some people. For clarity it might be better to avoid it:
*
*What do BETWEEN and the devil have in common?
A: To calculate the rate, you can use explicit division:
select 1 - count(distinct case when amount > 0 then id end) / count(*)
from payment
where . . .;
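To make the two formulations concrete, here is the same "share of ids with no positive amount" computed both ways in Python (illustrative data, not the asker's table):

```python
# Both formulations agree: 1 - (ids with a positive amount / all ids)
# equals the average of a per-id "never positive" flag.
payments = [(1, 50), (1, 0), (2, 0), (3, 10), (4, 0)]   # (id, amount)

ids = {pid for pid, _ in payments}
paid = {pid for pid, amt in payments if amt > 0}
rate = 1 - len(paid) / len(ids)

flags = [0 if pid in paid else 1 for pid in ids]
rate_avg = sum(flags) / len(flags)

print(rate, rate_avg)   # 0.5 0.5
```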
Or, in a way that is perhaps easier to follow:
select avg(flag * 1.0)
from (select id, (case when max(amount) > 0 then 0 else 1 end) as flag
from payment
where . . .
group by id
) i | unknown | |
d3217 | train | compilerOptions.types allows you to restrict which typings are available in a scope (folder).
You can try the following:
Create a top-level tsconfig.json with:
"compilerOptions": { "types": [] }
Inside the test folder, create a tsconfig.json and choose the jest typings:
"compilerOptions": { "types": ["jest"] }
Similarly, inside the integration folder, create a tsconfig.json and choose the mocha typings:
"compilerOptions": { "types": ["mocha"] } | unknown | |
d3218 | train | The above code works fine if you change the following things.
Replace
kern<<<1, 64>>>(..., ...)
with
dim3 blockPerGrid(1, 1);
dim3 threadPerBlock(8, 8);
kern<<<blockPerGrid, threadPerBlock>>>(....)
Here, in place of Xdim, use pitch:
o[j*pitch + i] = A[threadIdx.x][threadIdx.y];
And change cudaFilterModeLinear to cudaFilterModePoint.
For the compilation you need to specify the compute capability; supposing your compute capability is 3.0, it would be
nvcc -arch=sm_30 file.cu
A: If your code contained error checking, you would realise that your kernel launch is failing with an invalid filter mode. It isn't legal in CUDA to use a cudaFilterModeLinear with non-float types, so nothing is actually running. If you change the filter mode to cudaFilterModePoint, you might find things start working. | unknown | |
d3219 | train | Your ProgramViewModel contains fields. Change them to properties.
public class ProgramViewModel
{
public int Id { get; set; }
public string SystemId { get; set; }
}
The DefaultModelBinder uses reflection and binds only the properties and not fields.
A: If you have a List of an object, you should perform a foreach to render all of them:
<% foreach(var x in values) { %>
<div>hello <%= x.name %></div>
<% } %> | unknown | |
d3220 | train | Somehow I managed to work it out... If anyone has a similar problem:
The Autofac assembly I added to the references in my project was somehow impossible for Visual Studio to find, despite the fact that the file existed in my project (I'll be grateful if someone can explain why this happened). The solution was adding this assembly to the GAC via the Developer Command Prompt with this command:
gacutil /i <path to the assembly> /f | unknown | |
d3221 | train | No, you can only specify one path for the site; that is the path for loading the site's default document and configuration file (web.config) etc. But you can add multiple virtual directories for the website. | unknown | |
d3222 | train | Your endpoint is going to be the root of your deployed web application instance, plus the route that your bot is listening on.
For example, one of my bots is deployed to the free version of Azure Web Sites. The URL for a site such as this is https://APPLICATION_NAME.azurewebsites.net and the route that the bot listens on is the default /api/messages. This makes the endpoint https://APPLICATION_NAME.azurewebsites.net/api/messages.
If you connect directly to your app's endpoint, you should at least get a JSON dump with an error message. To make sure your site is getting deployed, drop an HTML file into the root of EC2 and see if you can access this. | unknown | |
d3223 | train | You're not the only one who has hit compatibility issues with tooltips between these DLLs.
I too have had nothing but trouble with the new tooltips in the themable common controls. We had already been monkeying with mouse messages and activating/deactivating the tips before adding the manifest and theming our application - so it sounds like what you're doing isn't too crazy.
We're still living with problems with TTN_NEEDTEXT messages being sent constantly as the mouse moves (not just when hovering), positioning problems with large tips (maybe not something new), and odd Unicode messages being sent instead of the ANSI versions (which I plan to post as a question at some point).
A: I don't know, but this sounds like a really "hard" problem (in the sense that all real-world problems are really hard). I bet the underlying problem is something to do with the setting of the focus. Windows that manually do that are evil and generally suffer from all manner of bugs. | unknown | |
d3224 | train | You can define a function to traverse the tree structure whilst accumulating the path along the way.
function getLevels(list) {
const levels = [];
const searchForLevels = ({ name, children }, path) => {
levels.push([...path, name]);
path.push(name);
children.forEach(child => searchForLevels(child, path));
path.pop();
};
list.forEach(child => searchForLevels(child, []));
return levels;
}
let list = [
{
name: "Level 1",
children: [
{
name: "Level 2",
children: [
{
name: "Level 3A",
children: [
{
name: "Level 4A",
children: []
}
],
},
{
name: "Level 3B",
children: [],
},
{
name: "Level 3C",
children: [],
},
],
},
],
},
];
console.log(getLevels(list));
A: You could do it by writing a recursive function.
Though on SO, usually, people would suggest you try to solve it yourself first. But I myself find that writing recursive functions is not an easy task (in terms of understanding how it works). So I'm happy to give you a hand.
const recursion = ({ name, children }, accumulator = []) => {
if (name) accumulator.push(name);
res.push(accumulator);
children.forEach((element) => recursion(element, [...accumulator]));
// have to store accumulator in new reference,
// so it would avoid override accumulators of other recursive calls
};
let list = [
{
name: "Level 1",
children: [
{
name: "Level 2",
children: [
{
name: "Level 3A",
children: [
{
name: "Level 4A",
children: [],
},
],
},
{
name: "Level 3B",
children: [],
},
{
name: "Level 3C",
children: [],
},
],
},
],
},
];
const res = [];
const recursion = ({ name, children }, accumulator = []) => {
if (name) accumulator.push(name);
res.push(accumulator);
children.forEach((element) => recursion(element, [...accumulator]));
};
list.forEach((element) => recursion(element));
console.log(res);
A: This can be done by traversing the tree and printing the path taken at each step
let list = [
{
name: "Level 1",
children: [
{
name: "Level 2",
children: [
{
name: "Level 3A",
children: [
{
name: "Level 4A",
children: []
}
],
},
{
name: "Level 3B",
children: [],
},
{
name: "Level 3C",
children: [],
},
],
},
],
},
];
let root = list[0]
function Traverse (root, path){
console.log(path + " " + root.name)
root.children.forEach(child => Traverse (child, path + " " + root.name));
}
Traverse(root, "");
A: You could take flatMap with a recursive callback.
const
getPathes = ({ name, children }) => children.length
? children.flatMap(getPathes).map(a => [name, ...a])
: [[name]],
list = [{ name: "Level 1", children: [{ name: "Level 2", children: [{ name: "Level 3A", children: [{ name: "Level 4A", children: [] }] }, { name: "Level 3B", children: [] }, { name: "Level 3C", children: [] }] }] }],
pathes = list.flatMap(getPathes);
console.log(pathes);
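For comparison, the same leaf-path collection can be sketched outside JavaScript; here is a Python version (hypothetical dict-based tree) mirroring the flatMap approach:

```python
# Collect root-to-leaf name paths from a nested {name, children} tree.
def get_paths(node):
    name, children = node["name"], node["children"]
    if not children:
        return [[name]]
    return [[name] + path for child in children for path in get_paths(child)]

tree = {"name": "Level 1", "children": [
    {"name": "Level 2", "children": [
        {"name": "Level 3A", "children": [{"name": "Level 4A", "children": []}]},
        {"name": "Level 3B", "children": []},
    ]},
]}
print(get_paths(tree))
# [['Level 1', 'Level 2', 'Level 3A', 'Level 4A'], ['Level 1', 'Level 2', 'Level 3B']]
```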
| unknown | |
d3225 | train | This is unsafe and incorrect. It deallocates the memory as the function ends which means the pointer that you return is immediately invalid.
You need to tie the lifetime of the memory to the lifetime of a Python object (as in the example you linked to, the memory is freed in the destructor). The simplest and recommended way of doing this is to use a numpy array or a standard library array array (or other library of your choice).
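The lifetime property being recommended can be seen from plain Python (an illustrative aside, not Cython): an array.array owns its buffer, and views of it stay valid exactly as long as the object lives — which is the property a raw malloc'd pointer lacks:

```python
# A contiguous block of doubles whose lifetime is tied to a Python object,
# unlike a raw malloc'd pointer.
from array import array

n, m = 3, 4
buf = array("d", [0.0] * (n * m))   # freed automatically with the object
buf[1 * m + 2] = 7.5                # row-major index (1, 2)

mv = memoryview(buf)                # stays valid while `buf` is alive
print(mv[1 * m + 2])                # 7.5
```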
If you can't avoid using malloc one option is to use the cython.view.array class. You can assign it a callback function of your choice to use on destruction and it can be happily assigned to memoryviews:
cdef double[:,:] mview
cdef double* mPtr = <double *>PyMem_Malloc(N * M * sizeof(double))
a = cython.view.array(shape=(N, M), itemsize=sizeof(double), format="d",
mode="C", allocate_buffer=False)
a.data = <char *> mPtr
a.callback_free_data = PyMem_Free
mview = a
You can safely return either a or mview (if you return a then there's no need to bother with mview) and the memory will get freed correctly at the right time. | unknown | |
d3226 | train | You can't. Or should not.
You can't do it, because an iframe is a window context; in theory it should not know about its parent (even if in practice it does).
*
*What should happen if you open the contents of the iframe in an independent window?
If it's a simple action like: "I don't need this workflow", then don't use an iframe, you don't need it.
*Are parent and iframe in the same domain? If the answer is no, you can't do it.
If you need it in any case, and they are both in the same domain, and you can't implement it without using an iframe...
*
*Write the code of your dialog in the parent window window.showDialog = function (){}
*Call this code with top.showDialog() or parent.showDialog()
top in most browsers refers to the topmost parent window...
top will be broken if your parent is itself shown in an iframe... It's better to use parent instead... | unknown | |
d3227 | train | Hi, I will pass to you a function that works for me with 3 I2C SHT21 sensors with the same address:
#include <Wire.h>
#include "SHT2x.h"
uint32_t start;
uint32_t stop;
SHT2x sht;
float tempN1;
float humN1;
float dwn1;
float tempN2;
float humN2;
float dwn2;
float tempN3;
float humN3;
float dwn3;
int flip = 0;
void sht21read(){
if (flip == 0)
{
Wire.begin(21, 22); // 2
delay(100);
// myHTU21D.begin();
sht.begin(21, 22);
start = micros();
sht.read();
stop = micros();
delay(250);
tempN1 = sht.getTemperature();
humN1 = sht.getHumidity();
// dwn1 = SHT2x.GetDewPoint();
delay(250);
Wire.end();
flip = 1;
}
else if (flip == 1)
{
Wire.begin(32, 22); // 4
delay(100);
// myHTU21D.begin();
sht.begin(32, 22);
start = micros();
sht.read();
stop = micros();
delay(250);
tempN2 = sht.getTemperature();
humN2 = sht.getHumidity();
// dwn2 = SHT2x.GetDewPoint();
delay(250);
Wire.end();
flip = 2;
}
else if (flip == 2)
{
Wire.begin(27, 22); // 13
delay(100);
// myHTU21D.begin();
sht.begin(27, 22);
start = micros();
sht.read();
stop = micros();
delay(250);
tempN3 = sht.getTemperature();
humN3 = sht.getHumidity();
// dwn3 = SHT2x.GetDewPoint();
delay(250);
Wire.end();
flip = 3;
}
else if (flip == 3)
{
flip = 0;
Serial.print("TEMPERATURA N1= ");
Serial.print(tempN1);
Serial.print("");
Serial.print("HUMEDAD N1= ");
Serial.print(humN1);
Serial.print("");
Serial.print("||||");
Serial.print("TEMPERATURA N2= ");
Serial.print(tempN2);
Serial.print("");
Serial.print("HUMEDAD N2= ");
Serial.print(humN2);
Serial.print("");
Serial.print("||||");
Serial.print("TEMPERATURA N3= ");
Serial.print(tempN3);
Serial.print("");
Serial.print("HUMEDAD N3= ");
Serial.print(humN3);
Serial.println("");
delay(5000);
}
}
Then you run the function sht21read(); (or with your own name) in void loop and voila, all working | unknown | |
d3228 | train | As of right now, in the latest version of discord.py, there is no client.me
Here's something you can do though (using discord.ext's commands):
member = ctx.guild.get_member(client.user.id)
top_role = member.top_role
top_role will return discord.Role, so you can do top_role.name, top_role.id, etc.
You can check out the documentation here: https://discordpy.readthedocs.io/en/latest/api.html#discord.Member.top_role
You can also join the discordpy discord for more help: https://discord.com/invite/r3sSKJJ | unknown | |
d3229 | train | Solved, for those who having similar issues:
I Was using:
*
*gcloud app deploy --project=MY_PROJECT
But it works if you specify the version flag (which is optional according to Google's documentation)
*
*gcloud app deploy --project=MY_PROJECT --version=1 | unknown | |
d3230 | train | I've spent a lot of time on this issue, and the best method for me was to remove everything.
1 - Create a .prettierrc.json file at the root of your project.
2 - Run yarn remove eslint-plugin-promise eslint-plugin-node eslint-plugin-import eslint-config-standard eslint-config-prettier
3 - Change your ESLint config to the one below:
module.exports = {
env: {
browser: false,
es2021: true,
mocha: true,
node: true
},
plugins: ['@typescript-eslint'],
extends: ['plugin:prettier/recommended'],
parser: '@typescript-eslint/parser',
parserOptions: {
ecmaVersion: 12
},
rules: {
'node/no-unsupported-features/es-syntax': [
'error',
{ ignores: ['modules'] }
]
}
}
Keep in mind this is for a fresh config, if you've already changed your config, just remove any mentions of the package we removed on step 2. | unknown | |
d3231 | train | If you want to be able to copy and paste an icon from Font Awesome after installing the font you need to do it from this page.
Font Awesome Cheat Sheet | unknown | |
d3232 | train | Last part should be
WHERE winner IS NOT NULL group by winner order by total DESC LIMIT 5
Because you just missed the ORDER BY | unknown | |
d3233 | train | Your explain plan that you gave:
id , select_type , table , type , possible_keys , key , key_len , ref , rows , Extra
1 , SIMPLE , a , ref , systemId,idx_time , systemId , 14 , const , 735310 , Using where
1 , SIMPLE , b , ref , PRIMARY , PRIMARY , 66 , gwreports2.a.msgId , 2270405 ,
1 , SIMPLE , c , ref , PRIMARY , PRIMARY , 66 , gwreports2.a.msgId , 2238701 ,
shows that you are hitting: 735310 * 2270405 * 2238701 ≈ 3.7 quintillion (3.7×10^18) rows!!!!!!
Effectively your not using your indexes to their fullest potential.
How to interpret your 'explain plan':
For every row in table 'a' (735310 ), you hit table 'b' 2270405 times.
For every row you hit in table 'b', you hit table 'c' 2238701 times.
As you can see, this is an exponential problem.
Yes, the 8MB of InnoDb Buffer space is small, but getting your explain plan down to xxxx * 1 * 1 will result in incredible speeds, even for 8MB of Buffer Space.
Given your Query:
SELECT a.msgId,a.senderId,a.destination,a.inTime,a.status as InStatus,b.status as SubStatus,c.deliverTime,substr(c.receipt,82,7) as DlvStatus
FROM inserted_history a
LEFT JOIN submitted_history b ON b.msgId = a.msgId -- USES 1 column of PK
LEFT JOIN delivered_history c ON a.msgId = c.msgId -- USES 1 column of PK
WHERE a.inTime BETWEEN '2010-08-10 00:00:00' AND '2010-08-010 23:59:59' -- NO key
AND a.systemId='ND_arber' -- Uses non-unique PK
Here are the problems I see:
A) Your _history tables are partitioned on the columns with 'Timestamp' datatype, YET you are NOT those columns in your JOIN/WHERE criteria. The engine must hit EVERY partition without that information.
B) Access to submitted_history and delivered_history is using only 1 column of a 2-column PK. You are only getting partial benefit of the PK. Can you get more columns to be part of the JOIN? You must get the # of rows found for this table as close to '1' as possible.
C) msgID = varchar(64) and this is the 1st column of the PK for each table. Your Keys on each table are ** HUGE **!!
- Try to reduce the size of columns for the PK, or use different columns.
Your data patterns of the other keys shows that you have LOTS of disk/ram space tied up in non-PK keys.
Question 1) What does "Show Indexes FROM " (Link) for each of the tables report?? The column 'Cardinality' will show you how effective each of your keys really are. The smaller the cardinality is, the WORST/Less effective that index is. You want cardinality as close to "total rows" as possible for ideal performance.
Question 2) Can you re-factor the SQL such that the JOIN'd columns of each table are those with the highest cardinality for that table?
Question 3) Is the columns of 'timestamp' datatype really the best column for the partitioning? If your access patterns always use 'msgId', and msgId is the 1st column of the PK, then .
Question 4) Is msgId unique? My guess is yes, and the 2nd column of the PK is not really necessary.
Read up on Optimizing SQL (Link) and have the index cardinality reports of your tables. This is the path to figure out how to optimize an query. You want the 'rows' of the explain plan to be N * 1 * 1.
SIDE NOTE: InnoDB & MyISAM engines do NOT automatically update table cardinality for non-unique columns; the DBA needs to manually run 'Analyze Table' periodically to ensure its accuracy.
Good Luck.
A: Would it be possible to alter the index of inserted_history,
systemId (systemId)
to be
systemId (systemId, inTime). Or add an additional index
My logic being that this should help to speed up the selection of the inserted_history (a) rows which forms the basis of the join.
The where clause "where a.inTime between '2010-08-10 00:00:00' and '2010-08-10 23:59:59' and a.systemId='ND_arber'" would all be selectable by index. At present, rows are selectable by systemId but then all those rows need to be scanned for the time.
Just as a matter of interest, how many records would there be (on average) for each systemId? Also, as msgId is not unique on its own, how many records (on average) in the other tables will have the same msgId?
A: Main Idea
Are you using InnoDB? It looks like your buffer pool is only 8MB. That could easily be the problem, you're dealing with a lot of data and InnoDB doesn't have much memory. Can you bump the innodb_buffer_pool_size up? You'll have to restart MySQL, but I'm betting that would make a HUGE difference, even if you only give it 256 or 512MB.
Update: I see your storage engine and table format seem to default to MyISAM, so unless you specified otherwise this wouldn't apply. I wonder if the myisam_sort_buffer_size would help? We don't use MyISAM so I'm not familiar with tuning it.
Random Thought
I wonder if having the primary key be alphanumeric (especially VARCHAR) has anything to do with it. I remember we had problems with performance on non-numeric primary keys, but that database dated from 4.0 or 4.1, so that may not apply (or ever have been true).
Secondary Idea
After the memory thing above, my best guess would be to give MySQL more hints. When I have a query that's running slow, I often find giving it more information helps it out. You have messageId/time indexes on each table. Maybe something more like this would work better:
select a.msgId,a.senderId,a.destination,a.inTime,a.status as InStatus,
b.status as SubStatus,c.deliverTime,substr(c.receipt,82,7) as DlvStatus
from inserted_history a left join submitted_history b on b.msgId = a.msgId
left join delivered_history c on a.msgId = c.msgId
where a.inTime between '2010-08-10 00:00:00' and '2010-08-10 23:59:59'
and a.systemId='ND_arber' and b.inTime >= a.inTime
and c.inTime >= b.inTime
I'm guessing things get inserted into A, then B, then C. If you have better limits (say, when something goes in A, it's always sent out and submitted within one day), adding that information could help.
I wonder about this both because I've seen it help my query performance in some situations, but also because you have the data partitioned on the datetime. That may help the optimizer.
My other suggestion would be to run your query for a short amount of time, say 10 minutes instead of a full day, and make sure the results are right. Then try 30. Increase it and see when it falls off into "come back tomorrow" territory. That may tell you something. | unknown | |
d3234 | train | You have an array of uninitialized pointers (or of null pointers, if the array is declared at file scope)
char *urls[MAX_WORD + 1];
So this call
strcpy(urls[index], url);
invokes undefined behavior.
It seems what you need is to declare a two-dimensional array like for example
char urls[MAX_WORD + 1][MAX_WORD + 1];
Or, with the original array, dynamically allocate memory for each stored string. Something like
urls[index] = malloc( strlen( url ) + 1 );
if ( urls[index] != NULL ) strcpy(urls[index], url);
else /* some error processing */; | unknown | |
d3235 | train | The best thing for a non .Net Application is to use the Dynamics 365 WebApi
It supports all common types of CRM Instances (On-Premise, Online) and authentication Methods:
*
*OAuth (2)
*Office365
*AD
*etc...
You can then look for existing projects on the web. (Like this guide for example) | unknown | |
d3236 | train | You can try the lync: protocol to activate the app:
Windows.System.Launcher.LaunchUriAsync(new Uri("lync:<sip:[email protected]>")); | unknown | |
d3237 | train | You can use Reduce with accumulate = TRUE argument as follows,
sapply(Reduce(c, 1:(ncol(df)-1), accumulate = TRUE)[-1], function(i) rowMeans(df[i]))
Or to get the exact output,
setNames(data.frame(df[1],sapply(Reduce(c, 1:(ncol(df)-1),accumulate = TRUE)[-1], function(i)
rowMeans(df[i]))), paste0('dia', seq(from = 10, to = ncol(df[-1])*10, by = 10)))
Or as @A5C1D2H2I1M1N2O1R2T1 suggests in comments,
do.call(cbind, setNames(lapply(1:6, function(x) rowMeans(df[1:x])),
paste0("dia", seq(10, 60, 10))))
Both giving,
dia10 dia20 dia30 dia40 dia50 dia60
1 0.1221060 0.1221060 0.1221060 0.1221060 0.1221060 0.1221060
2 0.4084525 0.4056268 0.3909976 0.3581247 0.3123880 0.2806583
3 0.4087809 0.4065162 0.3947134 0.3740339 0.3440639 0.3082257
4 0.4088547 0.4067164 0.3955460 0.3771151 0.3539278 0.3236136
5 0.4088770 0.4067829 0.3958531 0.3782787 0.3574178 0.3325359
6 0.4088953 0.4068301 0.3960262 0.3788645 0.3590561 0.3371009
Or to add it to the original data frame, then,
cbind(df, setNames(lapply(1:6, function(x) rowMeans(df[1:x])),
paste0("dia", seq(10, 60, 10))))
A: Here is an alternative method with apply and cumsum. Using rowMeans is almost surely preferable, but this method runs through the calculation in one pass.
setNames(data.frame(t(apply(dat[1:6], 1, cumsum) / 1:6)),
paste0("dia", seq(10, 60, 10)))
dia10 dia20 dia30 dia40 dia50 dia60
1 0.1221060 0.1221060 0.1221060 0.1221060 0.1221060 0.1221060
2 0.4084525 0.4056268 0.3909976 0.3581247 0.3123880 0.2806583
3 0.4087809 0.4065162 0.3947134 0.3740339 0.3440639 0.3082257
4 0.4088547 0.4067164 0.3955460 0.3771151 0.3539278 0.3236136
5 0.4088770 0.4067829 0.3958531 0.3782787 0.3574178 0.3325359
6 0.4088953 0.4068301 0.3960262 0.3788645 0.3590561 0.3371009
Using the smarter Reduce("+" with accumulate suggested by @alexis-laz, we could do
mapply("/", Reduce("+", dat[1:6], accumulate = TRUE), 1:6)
or to get a data.frame with the desired names
setNames(data.frame(mapply("/", Reduce("+", dat[1:6], accumulate = TRUE), 1:6)),
paste0("dia", seq(10, 60, 10)))
The uglier code below follows the same idea, without mapply
setNames(data.frame(Reduce("+", dat[1:6], accumulate = TRUE)) /
rep(1:6, each=nrow(dat)), paste0("dia", seq(10, 60, 10))) | unknown | |
d3238 | train | Don't turn off noImplicitAny. You are right, you shouldn't!
What you should do is declare the type of the parameter, which is ActionsObservable<T>, where T is the type of the action.
Example:
export enum SettingsActionTypes {
FETCH = "settings/fetch",
FETCH_SUCCESS = "settings/fetchSuccess"
}
export function fetch(): IFetchAction {
return {
type: SettingsActionTypes.FETCH
};
}
export interface IFetchAction {
type: SettingsActionTypes.FETCH;
}
export interface IFetchSuccessAction {
type: SettingsActionTypes.FETCH_SUCCESS;
}
export type SettingsAction = IFetchAction | IFetchSuccessAction;
Then in your epics, you can write something like this:
import {
ActionsObservable,
StateObservable
} from 'redux-observable';
export const fetchSettingsEpic = (action$: ActionsObservable<SettingsAction>) =>
action$.ofType(SettingsActionTypes.FETCH).mergeMap(...do your stuff here...)
Also, if you need to access state in your epics, you might have to use the second parameter state$ whose type is StateObservable<T>, where T is the interface that defines the structure of your entire redux state. | unknown | |
d3239 | train | Thanks to this website I learned that my problem was the scope of my AVAudioPlayer object.
Here is the working code:
class GameScene: SKScene {
var songPlayer:AVAudioPlayer?
override func didMove(to view: SKView) {
if let path = Bundle.main().pathForResource("Test Song", ofType: "wav") {
let filePath = NSURL(fileURLWithPath:path)
songPlayer = try! AVAudioPlayer.init(contentsOf: filePath as URL)
songPlayer?.numberOfLoops = 0 //This line is not required if you want continuous looping music
songPlayer?.prepareToPlay()
songPlayer?.play()
}
}
} | unknown | |
d3240 | train | Remove the backslash (\) line continuations from the curl command; we use them in the API Reference to better display the curl commands, but they don't work in Windows.
curl -u username:password -X POST
--header "Content-Type: audio/flac"
--header "Transfer-Encoding: chunked"
--data-binary @/tmp/0001.flac
"https://stream.watsonplatform.net/speech-to-text/api/v1/recognize?continuous=true"
The command (with the continuations) will work in Unix-based systems like Ubuntu or Mac. | unknown | |
d3241 | train | You are overriding the list you have. You need to append new data into your list.
Your logic should be like this:
// Declare a list
List<Herbslist> herbslist = [];
// Update the list
herbslist.add(Herbslist.fromJson(json.decode(response.body)));
// Return the updated list
return herbslist;
Without further information about your Herbslist class, I can't guarantee that this would work. You probably need to flatten your List of Lists.
Update
Your data structure seems inconvenient for this situation. There are better ways to structure your data and represent it on the UI. You wrapped iterable data in a class, and whenever you get a new instance of that class you end up with two different data stores. Since you pass the newest instance to the UI, only the latest results show up in the list.
You should have a single data store (e.g. a list or map for your API results) and append new data into it. You only have model classes now. They should declare how your data is structured. They shouldn't store any real data. | unknown | |
d3242 | train | That is very old - and quite unreliable - syntax for a ternary if. In modern Python it should be:
query = '?' + url.query if url.query else ''
and in Java:
query = url.query == '' ? '' : '?' + url.query | unknown | |
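A runnable sketch of the Python version; build_query and the sample values are hypothetical stand-ins for the url object in the answer:

```python
def build_query(query):
    # PEP 308 conditional expression: value_if_true if condition else value_if_false
    return '?' + query if query else ''

print(build_query('a=1'))  # ?a=1
print(build_query(''))     # empty string: no query, no '?'
```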
d3243 | train | Better approach would be to create a laravel provider and register the provider in app providers.
For Example:
In your case
php artisan make:provider EPaymentProvider
It will create a provider file EPaymentProvider.php in providers directory.
Now modify your Libraries/EPayment.php file like this
<?php
class EPayment {
private static $_instance = 'null';
public $credentials = [
'PAYPAL_USERNAME'=>'xxx',
'PAYPAL_PASSWORD'=>'xxx',
'PAYPAL_SIGNATURE'=>'xxxx',
'PAYPAL_CONNECTIONTIMEOUT'=>'3333',
'PAYPAL_RETRY'=>'true',
'PAYPAL_OGENABLED'=>'true',
'PAYPAL_FILENAME'=>'foo/bar',
'PAYPAL_LOGLEVEL'=>'5',
];
/**
* @param array $array
*/
public function setPayPalCredential(array $array){
$this->credentials = $array;
}
/**
* @return EPayment|string
*/
public static function PayPal(){
if(self::$_instance === 'null')
self::$_instance = new self;
return self::$_instance;
}
/**
* @param $key
* @return mixed
*/
public function getPayPalCredential($key){
return $this->credentials[$key];
}
}
and in register method of EPaymentProvider.php add Libraries/EPayment.php
<?php
namespace App\Providers;
use Illuminate\Support\ServiceProvider;
class EPaymentProvider extends ServiceProvider
{
/**
* Bootstrap the application services.
*
* @return void
*/
public function boot()
{
//
}
/**
* Register the application services.
*
* @return void
*/
public function register()
{
require base_path().'/app/Libraries/EPayment.php';
}
}
Now add EPaymentProvider in config/app.php Provider array
Now you can use
Epayment::PayPal()->setPayPalCredential(['PAYPAL_USERNAME' => 'New Username']);
and
Epayment::PayPal()->getPayPalCredential('PAYPAL_USERNAME')
let me know if it worked.
A: First, the file is outside the config folder, so it won't be possible to set or get values using the Config facade. To still use the providers file, move it to the config directory and everything will work for you.
A: To retrieve config values using dot notation, you can do the following in providers.php:
$paypalArray = ['paypal' =>
[
'PAYPAL_USERNAME'=>'xxx',
'PAYPAL_PASSWORD'=>'xxx',
'PAYPAL_SIGNATURE'=>'xxxx',
'PAYPAL_CONNECTIONTIMEOUT'=>'3333',
'PAYPAL_RETRY'=>'true',
'PAYPAL_OGENABLED'=>'true',
'PAYPAL_FILENAME'=>'foo/bar',
'PAYPAL_LOGLEVEL'=>'5']
];
config($paypalArray);
Now you can retrieve values like config('paypal.PAYPAL_USERNAME'). | unknown | |
d3244 | train | Backend problem
You are outputting invalid JSON.
PHP provides json_encode to save you having to manually create json:
$response = array();
$response['success'] = true;
$response['result'] = array('message' => 'Welcome, ' . $username . '!');
$msg = json_encode($response);
If you really don't want to use this you should add double quotes to your keys, and change to double quotes for your string properties too:
$msg = '{"success":true, "result":{"message":"Welcome, '.$username.'!"}}';
Front end problem
You are using success and failure methods, but I can't see anything in your back end code to send status headers.
The failure method will only get called when a response returns with a non-200 status code. So you may need to add this to your back-end code, and/or decode the response inside your success method to make sure you have sent success:true as part of your JSON before redirecting.
To send the header in PHP 5.4 or newer:
http_response_code(401);
In 5.3 or older you have to use the header method instead - but if you are running that version you should upgrade immediately, so I won't include an example. | unknown | |
d3245 | train | The RecordID (RECID) of the _file table is stored in a field in the _field table.
FOR EACH _file NO-LOCK, EACH _field NO-LOCK WHERE _field._file-recid = RECID(_file):
DISPLAY _file._file-name _field._field-name.
END.
Or utilize the primary index in the query using the "OF" operator:
FOR EACH _file NO-LOCK, EACH _field NO-LOCK OF _file:
DISPLAY _file._file-name _field._field-name.
END.
A: It is linked to the _File table through the _File-recid field. | unknown | |
d3246 | train | The Python OrderedDict collection will help you here:
"dict subclass that remembers the order entries were added" | unknown | |
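A minimal sketch of that behaviour using only the standard library:

```python
from collections import OrderedDict

d = OrderedDict()
d['first'] = 1
d['third'] = 3
d['second'] = 2

# Iteration follows insertion order, not alphabetical key order
print(list(d.keys()))  # ['first', 'third', 'second']
```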
d3247 | train | Since you're using Rich Text, to fetch the content your GraphQL query should look as follows:
{
blogCollection {
items {
title
slug
cover {
title
description
url
}
content {
json
}
}
}
}
json in content will return the Rich Text as a JSON object. You can then use the Rich Text renderer to render the content on your site.
Here's the Next.js and Contentful starter guide that might be useful: https://www.contentful.com/nextjs-starter-guide/ | unknown | |
d3248 | train | You may use
^@(\w+):(\w+)(?:.*?\|b=(\d+))?(?:.*?\|d=(\d+))?
See the regex demo
Details
*
*^ - start of string
*@ - a @ char
*(\w+) - Group 1: one or more word chars
*: - a colon
*(\w+) - Group 2: one or more word chars
*(?:.*?\|b=(\d+))? - an optional non-capturing group matching any 0+ chars other than line break chars, as few as possible, then |b= and then capturing 1+ digits into Group 3
*(?:.*?\|d=(\d+))? - an optional non-capturing group matching any 0+ chars other than line break chars, as few as possible, then |d= and then capturing 1+ digits into Group 4 | unknown | |
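The breakdown can be checked in Python; the sample strings below are invented for illustration, since the question's input format isn't shown here:

```python
import re

pattern = r'^@(\w+):(\w+)(?:.*?\|b=(\d+))?(?:.*?\|d=(\d+))?'

# Both optional parts present: all four groups are captured
m = re.match(pattern, '@user:cmd foo|b=12 bar|d=34')
print(m.groups())  # ('user', 'cmd', '12', '34')

# Optional parts absent: Groups 3 and 4 stay None
m = re.match(pattern, '@user:cmd plain text')
print(m.groups())  # ('user', 'cmd', None, None)
```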
d3249 | train | You would create your tables based on your Object structure and relationships.
It seems what you have is an Authors(table) that has many Series(table). Series have many Books(table). Correct me if I'm wrong, I didn't understand your last sentence that well.
If I am correct then you would need foreign keys as follows:
an author_id on your series table, and a series_id on your books table to link back to the parent table for queries. Let me know if it helps! | unknown | |
d3250 | train | Just change the order of adding items and use flex like this:
.rotated {
display: flex;
height: 300px;
flex-direction: column-reverse;
}
<div class="rotated">
<span>1000</span>
<span>2000</span>
<span>3000</span>
<span>5000</span>
</div>
A: You can use flexbox (display: flex) with align-items: flex-start and justify-content: flex-end.
.rotated {
display: flex;
align-items: flex-start;
justify-content: flex-end;
flex-direction: column;
/* For demo */
border: 1px solid black;
height: 200px;
width: 200px;
}
<div class="rotated">
<span>5000</span><br>
<span>3000</span><br>
<span>2000</span><br>
<span>1000</span>
</div> | unknown | |
d3251 | train | You need to add
xmlns:app="http://schemas.android.com/apk/res-auto"
to your main xml element
A: I am assuming you are using a drawer layout as your root layout. If that is the case, then add the line below to your drawer layout
xmlns:app="http://schemas.android.com/apk/res-auto" | unknown | |
d3252 | train | As it appears, it was a silly mistake.
while (new_socket = accept(s, (struct sockaddr*)&client, &c) != INVALID_SOCKET)
Since I didn't put another pair of parentheses around new_socket = accept(s, (struct sockaddr*)&client, &c), the inequality was applied to the accept() return value first, and its boolean result was assigned to new_socket.
The correct syntax would be
while ((new_socket = accept(s, (struct sockaddr*)&client, &c)) != INVALID_SOCKET) | unknown | |
d3253 | train | I figured it out: all I had to do was add v-model to v-dialog. I thought it was unnecessary because I already had a v-if that wrapped the component containing the v-dialog. I assumed that with this requirement fulfilled, it should render the child component, but it didn't because I didn't have v-model in v-dialog. | unknown | |
d3254 | train | You have two problems:
*
*You forgot to add the command for execution
*You're exiting too early, because execFile is an asynchronous function.
Try:
casper.start('http://www.google.com', function() {
this.echo('Home page opened');
this.echo(this.getTitle());
childProc.execFile('C:\\Google Drive\\nodejs\\push.js', [], null, function (err, stdout, stderr){
utils.dump(arguments);
casper.echo("Exiting...");
casper.exit();
});
}).run(function(){/* This prevents CasperJS from exiting */}); | unknown | |
d3255 | train | After getting the hint from this post "http://marc.info/?l=tomcat-user&m=137183130517812&w=2
Christopher Schultz wrote:
"I would expect this kind of thing if you used a current BCEL against a
newer .class file generated for example by Java 8, which BCEL might
not yet support (or at least the version Tomcat uses)."
I checked the POM file and also project properties in Eclipse.
I noticed that even though I was using JDK 1.7, Eclipse was compiling the code for 1.5 because I had forgotten to set the correct compiler settings at
Project properties -> Java Compiler-> JDK Compliance
I changed it from 1.5 to 1.7 and built the jar file, everything worked fine. :-) | unknown | |
d3256 | train | Just install the Android SDK Platform package through the SDK Manager in Android Studio, matching your Compile SDK version. It will prompt you to install the package as well as to accept the license. After that just sync the Gradle files; it will resolve the issue.
A: The gradle file code above seems to be fine. It's probably nothing to do with build.gradle (Module: app). Just open the other build.gradle (Project: Android) file in the Project window and verify that the Android Gradle plugin version there matches your Android Studio version.
I replaced from:
dependencies {
classpath 'com.android.tools.build:gradle:3.2.1'
// NOTE: Do not place your application dependencies here; they belong
// in the individual module build.gradle files
}
to my Android Studio v3.0.1 in my case:
dependencies {
classpath 'com.android.tools.build:gradle:3.0.1'
// NOTE: Do not place your application dependencies here; they belong
// in the individual module build.gradle files
}
Press "Try Again" to Sync gradle file. This resolved my problem with a successful build.
A: For me, this issue appeared when I updated Android Studio to version 3.3.
Disabling experimental feature "Only sync the active variant" fixes it:
A: Try using the command line to run gradlew tasks to see why the build fails.
In my case:
FAILURE: Build failed with an exception.
* What went wrong:
A problem occurred configuring project ':app'.
> Failed to install the following Android SDK packages as some licences have not been accepted.
platforms;android-27 Android SDK Platform 27
build-tools;27.0.3 Android SDK Build-Tools 27.0.3
So I just went to Android\sdk\tools\bin and ran sdkmanager --licenses to accept the licenses, and then the build passed on the command line.
I think Android Studio is complaining about the wrong issue, so you may want to check the real output on the command line to find out what is happening.
A: You must update your dependencies in build.gradle in the buildscript block:
classpath 'com.android.tools.build:gradle:3.1.0'
to:
classpath 'com.android.tools.build:gradle:3.4.2'
A: If this error occurs for a module other than 'app'
Remove the gradle variable from that module's build.gradle, like
implementation "org.jetbrains.kotlin:kotlin-stdlib-jdk7:$kotlin_version"
and replace it with the actual value
implementation "org.jetbrains.kotlin:kotlin-stdlib-jdk7:1.2.71"
Clean Project -> Rebuild Project
A: I had this issue in my library project. I solved it by adding
buildscript {
dependencies {
classpath "org.jetbrains.kotlin:kotlin-gradle-plugin:$kotlin_version"
}
}
To the beginning of build.gradle | unknown | |
d3257 | train | This is a rather simple but comprehensive example. After analysing it you should be able to implement your solution.
import javafx.application.Application;
import javafx.scene.Scene;
import javafx.scene.control.Label;
import javafx.scene.control.TreeCell;
import javafx.scene.control.TreeItem;
import javafx.scene.control.TreeView;
import javafx.scene.layout.HBox;
import javafx.scene.layout.VBox;
import javafx.stage.Stage;
public class TreeViewCellApp extends Application {
public static void main(String[] args) {
launch(args);
}
@Override
public void start(Stage stage) throws Exception {
TreeItem<Employee> leaf1Item = new TreeItem<Employee>(new Employee("Anne Burnes", "Employee"));
TreeItem<Employee> leaf2Item = new TreeItem<Employee>(new Employee("Ronan Jackson", "Employee"));
TreeItem<Employee> rootItem = new TreeItem<Employee>(new Employee("Jack Shields", "Head"));
rootItem.getChildren().add(leaf1Item);
rootItem.getChildren().add(leaf2Item);
Label label = new Label();
TreeView<Employee> treeView = new TreeView<>(rootItem);
treeView.setCellFactory(param -> new TreeCell<Employee>() {
@Override
protected void updateItem(Employee employee, boolean empty) {
super.updateItem(employee, empty);
if (employee == null || empty) {
setGraphic(null);
} else {
EmployeeControl employeeControl = new EmployeeControl(employee);
employeeControl.setOnMouseClicked(mouseEvent -> label.setText(employee.getName()));
setGraphic(employeeControl);
}
}
});
VBox vBox = new VBox(label, treeView);
stage.setScene(new Scene(vBox));
stage.show();
}
}
class Employee {
private final String name;
private final String capacity;
public Employee(String name, String capacity) {
this.name = name;
this.capacity = capacity;
}
public String getName() {
return name;
}
public String getCapacity() {
return capacity;
}
}
class EmployeeControl extends HBox {
private final Label nameLabel = new Label();
private final Label capacityLabel = new Label();
{
getChildren().addAll(nameLabel, capacityLabel);
}
public EmployeeControl(Employee employee) {
nameLabel.setText(employee.getName());
capacityLabel.setText(employee.getCapacity());
}
} | unknown | |
d3258 | train | I am sure there are hundreds of ways to do this, but since the data is only around 100MB, a simple for loop is very capable and very flexible to modify and extend in this case, so here it is (done in seconds):
raw_data = read.csv("201511-citibike-tripdata.csv")
bikeid <-22075
onebike <- raw_data[ which(raw_data$bikeid== bikeid), ]
output <- data.frame("bikeid"= integer(0), "end.station.id"= integer(0), "start.station.id" = integer(0), "diff.time" = numeric(0))
for(i in 2:nrow(onebike)) {
if(onebike[i-1,"end.station.id"] != onebike[i,"start.station.id"]){
diff_time <- as.double(difftime(strptime(onebike[i-1,"stoptime"], "%m/%d/%Y %H:%M:%S"),
strptime(onebike[i,"starttime"], "%m/%d/%Y %H:%M:%S"),units = "mins"))
new_row <- c(bikeid, onebike[i-1,"end.station.id"], onebike[i,"start.station.id"], diff_time)
output[nrow(output) + 1,] = new_row
}
}
output
bikeid end.station.id start.station.id diff.time
1 22075 514 520 181.5667
2 22075 356 502 628.8833
Edit: This is to further answer the question in the comments.
It is an easy extension to include all the bikeids:
raw_data = read.csv("201511-citibike-tripdata.csv")
unique_id = unique(raw_data$bikeid)
#bikeid <-22075
output <- data.frame("bikeid"= integer(0), "end.station.id"= integer(0), "start.station.id" = integer(0), "diff.time" = numeric(0), "stoptime" = character(),"starttime" = character(), stringsAsFactors=FALSE)
for (bikeid in unique_id)
{
onebike <- raw_data[ which(raw_data$bikeid== bikeid), ]
if(nrow(onebike) >=2 ){
for(i in 2:nrow(onebike )) {
if(is.integer(onebike[i-1,"end.station.id"]) & is.integer(onebike[i,"start.station.id"]) &
onebike[i-1,"end.station.id"] != onebike[i,"start.station.id"]){
diff_time <- as.double(difftime(strptime(onebike[i,"starttime"], "%m/%d/%Y %H:%M:%S"),
strptime(onebike[i-1,"stoptime"], "%m/%d/%Y %H:%M:%S")
,units = "mins"))
new_row <- c(bikeid, onebike[i-1,"end.station.id"], onebike[i,"start.station.id"], diff_time, as.character(onebike[i-1,"stoptime"]), as.character(onebike[i,"starttime"]))
output[nrow(output) + 1,] = new_row
}
}
}
}
dim(output)
[1] 32589 6
head(output)
bikeid end.station.id start.station.id diff.time stoptime starttime
1 22545 520 529 24.8166666666667 11/2/2015 08:38:22 11/2/2015 09:03:11
2 22545 520 517 537.483333333333 11/2/2015 09:39:19 11/2/2015 18:36:48
3 22545 2004 3230 563.066666666667 11/2/2015 22:06:27 11/3/2015 07:29:31
4 22545 296 3236 471.783333333333 11/4/2015 23:40:29 11/5/2015 07:32:16
5 22545 520 449 43.4166666666667 11/9/2015 08:24:06 11/9/2015 09:07:31
6 22545 359 519 30.7166666666667 11/9/2015 09:14:46 11/9/2015 09:45:29 | unknown | |
d3259 | train | Here is a recursive solution. To test it, save it in a file and run node yourfile.js /the/path/to/traverse.
const fs = require('fs');
const path = require('path');
const util = require('util');
const traverse = function(dir, result = []) {
// list files in directory and loop through
fs.readdirSync(dir).forEach((file) => {
// builds full path of file
const fPath = path.resolve(dir, file);
// prepare stats obj
const fileStats = { file, path: fPath };
// is the file a directory ?
// if yes, traverse it also, if no just add it to the result
if (fs.statSync(fPath).isDirectory()) {
fileStats.type = 'dir';
fileStats.files = [];
result.push(fileStats);
return traverse(fPath, fileStats.files)
}
fileStats.type = 'file';
result.push(fileStats);
});
return result;
};
console.log(util.inspect(traverse(process.argv[2]), false, null));
Output looks like this :
[
{
file: 'index.js',
path: '/stackoverflow/test-class/index.js',
type: 'file'
},
{
file: 'message.js',
path: '/stackoverflow/test-class/message.js',
type: 'file'
},
{
file: 'somefolder',
path: '/stackoverflow/test-class/somefolder',
type: 'dir',
files: [{
file: 'somefile.js',
path: '/stackoverflow/test-class/somefolder/somefile.js',
type: 'file'
}]
},
{
file: 'test',
path: '/stackoverflow/test-class/test',
type: 'file'
},
{
file: 'test.c',
path: '/stackoverflow/test-class/test.c',
type: 'file'
}
] | unknown | |
d3260 | train | PHAsset contains only metadata about the image. In order to fetch the image data you need to use PHImageManager.
func requestImageData(for asset: PHAsset,
options: PHImageRequestOptions?,
resultHandler: @escaping (Data?, String?, UIImageOrientation, [AnyHashable : Any]?) -> Void) -> PHImageRequestID
You can use CFReadStreamCreateWithBytesNoCopy to create a CFReadStreamRef with the data. | unknown | |
d3261 | train | It's quite simple really. Let's suppose:
*
*Your DocumentRoot is /var/www
*You have defined Options Indexes or +Indexes for /var/www
*Your DocumentRoot has this file list: a,b,c,d,d1,d2,f,g
*You want to list files starting with d.
In this case all you have to do is request this:
http://example.com/?P=d*
The pattern syntax is similar to the one used since DOS: ? matches a single character, * matches any number of characters. So if you wanted to match files whose third character is an "n" you would use the pattern ??n*
and it will list only files matching that pattern. Try it out. | unknown | |
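The same wildcard rules can be tried out with Python's fnmatch module (an illustration of the pattern semantics, not of Apache itself), using the file list from the example above:

```python
from fnmatch import fnmatch

files = ['a', 'b', 'c', 'd', 'd1', 'd2', 'f', 'g']

# The ?P=d* request keeps only names matching the pattern d*
print([f for f in files if fnmatch(f, 'd*')])  # ['d', 'd1', 'd2']

# ??n* matches names whose third character is an "n"
print(fnmatch('band', '??n*'))   # True
print(fnmatch('abcd', '??n*'))   # False
```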
d3262 | train | It is probably a problem with CurrentUserService itself. You are instantiating the user in its constructor, at which point the user may not be authenticated yet.
I would try to change CurrentUserService like this:
public class CurrentUserService : ICurrentUserService
{
private IHttpContextAccessor httpContextAccessor;
public CurrentUserService(IHttpContextAccessor httpContextAccessor)
{
this.httpContextAccessor = httpContextAccessor;
}
private ClaimsPrincipal User => httpContextAccessor.HttpContext?.User;
public int UserId => User != null && User.Identity.IsAuthenticated ? int.Parse(User.FindFirstValue(ClaimTypes.PrimarySid)) : 0;
public bool IsAuthenticated => User != null && User.Identity.IsAuthenticated;
} | unknown | |
d3263 | train | Zookeeper Server is considered a MASTER component in Ambari terminology. Kafka has the requirement that Zookeeper Server be installed on at least one node in the cluster. Thus the only requirement you have is to install Zookeeper server on one of the nodes in your cluster for Kafka to function. Kafka does not require Zookeeper clients on each Kafka node.
You can determine all this information by looking at the Service configurations for KAFKA and ZOOKEEPER. The configuration is specified in the metainfo.xml file for each component under the stack definition. The location of the definitions will differ based on the version of Ambari you have installed.
On newer versions of Ambari this location is:
/var/lib/ambari-server/resources/common-services/<service name>/<service version>
On older version of Ambari this location is:
/var/lib/ambari-server/resources/stacks/HDP/<stack version>/services/<service name> | unknown | |
d3264 | train | I've never really drilled down that rabbit hole (ie why this is), but there is a persistent rumour around here that NSTimer and cocos2d do not mix well. Instead, I use cocos' own methods
[self schedule:@selector(CountTimeBonus:) interval:.01];
// and to invalidate this
[self unschedule:@selector(CountTimeBonus:)];
the CountTimeBonus signature will be :
-(void) CountTimeBonus:(ccTime) dt {
}
A: Thanks YvesLeBorg that worked for me.
There are a few more things to consider,
1) the cocos2d code:
[self schedule:@selector(myTimer:) delay:.01];
has a syntax problem, so I used this instead:
[self schedule:@selector(myTimer:) interval:.01];
2) I got the NSTimer working again, but when I place the NSTimer call in a do-while loop it won't work.
Thanks again and best of luck
Johan | unknown | |
d3265 | train | I guess I'm an idiot, because the solution was the opposite of what I thought: adding a newline to the input: stdout_data = p.communicate(input="2+2\n") makes the script print ('4\n', '') as it should, rather than give an error. | unknown | |
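A runnable reconstruction of the fix; the child process here is a stand-in one-liner, not the original script:

```python
import subprocess
import sys

# Child reads one line from stdin and prints the evaluated result
child = [sys.executable, '-c', 'print(eval(input()))']
p = subprocess.Popen(child, stdin=subprocess.PIPE, stdout=subprocess.PIPE,
                     stderr=subprocess.PIPE, universal_newlines=True)

# The trailing "\n" lets the child's input() see a complete line
stdout_data = p.communicate(input='2+2\n')
print(stdout_data)  # ('4\n', '')
```

communicate returns a (stdout, stderr) tuple, which matches the ('4\n', '') value described above.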
d3266 | train | Incoming requests may use headers or parameters to indicate to Rails what format, called "MIME type", the response should have. For instance, a typical GET request from entering a URL into your browser will ask for an HTML (or default) response. Other types of common responses return JSON or XML.
In your case your "about" action does not have any explicit responders, and because of that Rails can't match the requested format (which is what the error message is trying to convey). You will probably just want to add an HTML template app/views/help/about.html.erb with your content. Rails should identify the HTML template and handle things from there.
More info
In Rails you need to respond with a specific format, and it is easy to setup your controller actions to handle a variety of formats.
Here is a snippet you might find in a controller which can respond in 3 different ways.
respond_to do |format|
format.html { render "foo" } # renders foo.html.erb
format.json { render json: @foo }
format.xml { render xml: @foo }
end
You can see more examples and deeper explanations in the documentation here.
ActiveRecord helps because it comes with serializers out of the box which can create JSON and XML representations of your objects. | unknown | |
d3267 | train | Don't try and design a new language just for your application, instead embed another, well-established language in there. Take a look at the mess it has caused for other applications trying to implement their own scripting language (mIRC is a good example). It will mean users will have to learn another language just to script your application. Also, if you design your own language it will probably end up not as useful as other languages. Don't try to reinvent the wheel.
You might want to look at Lua as it is light-weight, popular, well-established, and is designed to be used by games (users of it include EA, Blizzard, Garry's Mod, etc.), and has a very minimal core library (it is designed to be a modular language).
A: So everybody's urging you not to reinvent the wheel, and that's a great idea. I have a soft spot for Python (which would allow your scripting users to also install and use plenty of other useful math libs &c), but Lua's no doubt even easier to integrate (and other scripting languages such as Ruby would also no doubt be just fine). Not writing your own ad-hoc scripting language is excellent advice.
But reading your question suggests to me that your issue is more about -- what attributes of the objects of my engine (and what objects -- particles, sure, but what else besides) should I expose to whatever scripting language I embed (or, say via MS COM or .NET, to whatever scripting or non-scripting language my users prefer)?
Specific properties of each particle such as those you list are no doubt worthwhile. Do you have anything else in your engine besides point-like particles, such as, say, surfaces and other non-pointlike entities, off which particles might bounce? What about "forces" of attraction or repulsion? Might your particles have angular momentum / spin?
A great idea would be to make your particles' properties "expando", to use a popular term (other object models express the same idea differently) -- depending on the app using your engine, other programmers may decide to add to particles whatever properties they need... maybe mass, say -- maybe electric charge -- maybe cost in eurocents, for all you know or care;-). This makes life easier for your users compared to just offering a "particle ID" (which you should anyway of course;-) for them to use in hash tables or the like to keep track of the specific attributes they care about! Allowing them to add "methods" and "triggers" (methods that you call automatically if and when certain conditions hold, e.g. two particles get closer than a certain distance) would be awesome, but maybe a bit harder.
Don't forget, BTW, to allow a good method to "snapshot" the current state of the particle system (INCLUDING user-added expando properties) to a named stream or file and restore from such a snapshot -- that's absolutely crucial in many uses.
Going beyond specific particles (and possibly other objects such as surfaces if you have them) you should probably have a "global environment" with its own properties (including expando ones) and ideally methods and triggers too. E.g., a force field acting on all particles depending on their position (and maybe their charge, mass, etc...!-)...
Hope some of these ideas strike you as interesting -- hard for me to tell, with little idea of your intended field of application!-)
A: Just embed Lua. It's a great language design, excellent performance, widely used by game developers, and small enough that you can master it in a few days.
Then you can get on with your game design. | unknown | |
d3268 | train | You are never calling the function SelectSeat().
In order to run the function, you have to 'active' it somehow, for instance
when the page is loaded: window.onload=function(){SelectSeat()};
or when you click on something: <div onclick="SelectSeat()">click me</div>
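To make the point concrete, here is a tiny sketch (the selectSeat body and its return value are illustrative, not from the original page): defining a function does nothing by itself; it only runs once something invokes it.

```javascript
// Defining the function does not run it.
function selectSeat() {
  return "seat selected";
}

// It only runs when explicitly invoked -- e.g. from window.onload or an
// onclick handler in the browser. Here we simply call it directly.
const result = selectSeat();
console.log(result);
```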
d3269 | train | Please check the code below:
foreach(var gvItem in GridView1.Items)
{
CheckBox chkItem = (CheckBox) gvItem.FindControl("Poslano");
if (chkItem.Checked)
{
//Do stuff
}
} | unknown | |
d3270 | train | So long as your "contactPanel" is not larger than the body (or viewport) then the body won't scroll. But you can set overflow:hidden just to make sure of it. I'm guessing you actually only want to scroll the contactPanel vertically as well, and not on both axis? Use overflow-y:scroll;
I'd also recommend moving text styling rules onto the text objects themselves... but that's just me.
body {
overflow:hidden;
}
#contactPanel {
overflow-y:scroll;
width:100%;
height:100%;
position: absolute;
top:-100%;
left:0;
z-index: 6;
background: #000000;
}
#contactPanel p {
color:#fff;
} | unknown | |
d3271 | train | Answer from Mohfooj can be found in Tableau forum here: https://community.tableau.com/message/900181#900181 | unknown | |
d3272 | train | You have to use notIn rather than contains; then it should work:
Official Docs: https://sequelize.org/master/manual/model-querying-basics.html
where: {
arr1: {
[Op.notIn]: someValueArray
},
arr2: {
[Op.notIn]: someValueArray
}
},
A: Apparently the second option is the correct one; what was incorrect was Sequelize's types, and @ts-ignore fixes the problem.
d3273 | train | In the data source setting, can you remove the existing SQL server source connection and try again?
You can set permission when creating the data source. | unknown | |
d3274 | train | Since you're dealing with a small number of values, and since the performance benefits of symbols are evident from your testing, just go with symbols.
BTW, you can use map(&:to_sym) instead of map {|x| x.to_sym}. | unknown | |
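A small sketch of that approach (the key names here are made up for illustration): convert the string keys to symbols once up front, then use the symbols for all subsequent lookups.

```ruby
# Convert a fixed, small set of string keys to symbols once.
keys = %w[name email city]
sym_keys = keys.map(&:to_sym)  # same as keys.map { |x| x.to_sym }

# Symbols are interned, so repeated hash lookups compare by identity,
# not by content -- which is where the performance benefit comes from.
record = { name: "Ada", email: "ada@example.com", city: "London" }
values = sym_keys.map { |k| record[k] }
```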
d3275 | train | The above addMethod by Lod Lawson is not completely correct. It's $.validator and not $.validate and the validator method name cb_selectone requires quotes. Here is a corrected version that I tested:
$.validator.addMethod('cb_selectone', function(value,element){
if(element.length>0){
for(var i=0;i<element.length;i++){
if ($(element[i]).is(':checked')) return true;
}
return false;
}
return false;
}, 'Please select at least one option');
A: Here is the a quick solution for multiple checkbox validation using jquery validation plugin:
jQuery.validator.addMethod('atLeastOneChecked', function(value, element) {
return ($('.cbgroup input:checked').length > 0);
});
$('#subscribeForm').validate({
rules: {
list0: { atLeastOneChecked: true }
},
messages: {
list0: { 'Please check at least one option' }
}
});
$('.cbgroup input').click(function() {
$('#list0').valid();
});
A: $('#subscribeForm').validate( {
rules: {
list: {
required: true,
minlength: 1
}
}
});
I think this will make sure at least one is checked.
A: This script below should put you on the right track perhaps?
You can keep this html the same (though I changed the method to POST):
<form method="POST" id="subscribeForm">
<fieldset id="cbgroup">
<div><input name="list" id="list0" type="checkbox" value="newsletter0" >zero</div>
<div><input name="list" id="list1" type="checkbox" value="newsletter1" >one</div>
<div><input name="list" id="list2" type="checkbox" value="newsletter2" >two</div>
</fieldset>
<input name="submit" type="submit" value="submit">
</form>
and this javascript validates
function onSubmit()
{
var fields = $("input[name='list']").serializeArray();
if (fields.length === 0)
{
alert('nothing selected');
// cancel submit
return false;
}
else
{
alert(fields.length + " items selected");
}
}
// register event on form, not submit button
$('#subscribeForm').submit(onSubmit)
and you can find a working example of it here
UPDATE (Oct 2012)
Additionally it should be noted that the checkboxes must have a "name" property, or else they will not be added to the array. Only having "id" will not work.
UPDATE (May 2013)
Moved the submit registration to javascript and registered the submit onto the form (as it should have been originally)
UPDATE (June 2016)
Changes == to ===
A: if (
document.forms["form"]["mon"].checked==false &&
document.forms["form"]["tues"].checked==false &&
document.forms["form"]["wed"].checked==false &&
document.forms["form"]["thrs"].checked==false &&
document.forms["form"]["fri"].checked==false
) {
alert("Select at least One Day into Five Days");
return false;
}
A: How about this:
$(document).ready(function() {
$('#subscribeForm').submit(function() {
var $fields = $(this).find('input[name="list"]:checked');
if (!$fields.length) {
alert('You must check at least one box!');
return false; // The form will *not* submit
}
});
});
A: Good example without custom validate methods, but with metadata plugin and some extra html.
Demo from Jquery.Validate plugin author
A: How about this
$.validate.addMethod(cb_selectone,
function(value,element){
if(element.length>0){
for(var i=0;i<element.length;i++){
if($(element[i]).val('checked')) return true;
}
return false;
}
return false;
},
'Please select a least one')
Now you ca do
$.validate({rules:{checklist:"cb_selectone"}});
You can even go further a specify the minimum number to select with a third param in the callback function.I have not tested it yet so tell me if it works.
A: I had to do the same thing and this is what I wrote. I made it more flexible in my case, as I had multiple groups of checkboxes to check.
// param: reqNum number of checkboxes to select
$.fn.checkboxValidate = function(reqNum){
var fields = this.serializeArray();
return (fields.length < reqNum) ? 'invalid' : 'valid';
}
then you can pass this function to check multiple group of checkboxes with multiple rules.
// helper function to create error
function err(msg){
alert("Please select a " + msg + " preference.");
}
$('#reg').submit(function(e){
//needs at lease 2 checkboxes to be selected
if($("input.region, input.music").checkboxValidate(2) == 'invalid'){
err("Region and Music");
}
});
A: I had a slightly different scenario. My checkboxes were created dynamically and they were not in the same group, but at least one of them had to be checked. My approach (never say this is perfect): I created a generic validator for all of them:
jQuery.validator.addMethod("validatorName", function(value, element) {
if (($('input:checkbox[name=chkBox1]:checked').val() == "Val1") ||
($('input:checkbox[name=chkBox2]:checked').val() == "Val2") ||
($('input:checkbox[name=chkBox3]:checked').val() == "Val3"))
{
return true;
}
else
{
return false;
}
}, "Please Select any one value");
Now I had to associate each of the checkboxes with this one single validator.
Then I had to trigger the validation whenever any of the checkboxes was clicked, which fires the validator:
$('#piRequest input:checkbox[name=chkBox1]').click(function(e){
$("#myform").valid();
});
A: I checked all the answers here, and other similar questions too, trying to find an optimal way with the help of an HTML class and a custom rule.
My HTML structure for multiple checkboxes is shown after the rule below:
$.validator.addMethod('multicheckbox_rule', function (value, element) {
var $parent = $(element).closest('.checkbox_wrapper');
if($parent.find('.checkbox_item').is(':checked')) return true;
return false;
}, 'Please select at least one');
<div class="checkbox_wrapper">
<label for="checkbox-1"><input class="checkbox_item" id="checkbox-1" name="checkbox_item[1]" type="checkbox" value="1" data-rule-multicheckbox_rule="1" /> Checkbox_item 1</label>
<label for="checkbox-2"><input class="checkbox_item" id="checkbox-2" name="checkbox_item[2]" type="checkbox" value="1" data-rule-multicheckbox_rule="1" /> Checkbox_item 2</label>
</div> | unknown | |
d3276 | train | You can implement the AdListener interface to listen for AdMob events.
public interface AdListener {
public void onReceiveAd(Ad ad);
public void onFailedToReceiveAd(Ad ad, AdRequest.ErrorCode error);
public void onPresentScreen(Ad ad);
public void onDismissScreen(Ad ad);
public void onLeaveApplication(Ad ad);
}
Then you will want your AdView to listen to the AdListener.
// Assuming AdView is named adView and this class implemented AdListener.
adView.setAdListener(this);
In particular, you will be interested in the onFailedToReceiveAd callback. This is called if AdMob fails to load an ad. If you implement this method, you can take appropriate action in your application when an ad is not returned.
d3277 | train | Placemark is a class that contains information like place's name, locality, postalCode, country and other properties. See Properties in the documentation.
placemarkFromCoordinates is a method that returns a list of Placemark instances found for the supplied coordinates.
Placemark place = p[0] just gets the first Placemark from the list you got from placemarkFromCoordinates method.
The code inside the setState method just updates the _currentAddress to the place info you got from the Placemark place and then passes its value to the startAddressController.text and _startAddress.
A: Placemark() class helps you to get certain information like city name, country name, local code based on google map api.
Before you use Placemark() in your app, you need to get decoded string info from google map api
https://maps.googleapis.com/maps/api/geocode/json?latlng='.$request->lat.','.$request->lng.'&key='."your api key here"
Your server-side code should return a JSON response, and then:
_placeMark = Placemark(name: _address)
Now _placeMark would help you get access to city, country, local code etc.
For more go there
https://www.dbestech.com/tutorials/flutter-google-map-geocoding-and-geolocator | unknown | |
d3278 | train | If you just want to redirect example.com to www.example.com, then you only need:
RewriteEngine on
RewriteCond %{HTTP_HOST} ^example.com [NC]
RewriteRule ^(.*)$ http://www.example.com/$1 [L,R=302,NC]
You can also lay it out like this:
RewriteEngine On
RewriteCond %{HTTP_HOST} !^www\.
RewriteRule ^(.*)$ http://www.%{HTTP_HOST}%{REQUEST_URI} [R=302,L,NE]
Make sure you clear your cache before testing this. You will notice I've used the flag R=302. This is a temporary redirect; use it while you're testing. If you're happy with the RewriteRule and everything is working, change it to R=301, which is a permanent redirect.
A: Solved by using this:
RewriteEngine on
RewriteCond %{HTTP_HOST} ^example.com$ [NC]
RewriteRule (.*)$ http://www.example.com/$1 [R=301]
RedirectMatch 301 ^/web/$ http://www.example.com/ | unknown | |
d3279 | train | finally found at How do I reference a component in an inline web service?
<%@ Assembly Name="MyAssembly" %>
or
<%@ Assembly Src="path/myFile.cs" %>
The name syntax is for compiled DLLs and the src syntax is for open code. The dll seems to need to be in a bin directory under the root. | unknown | |
d3280 | train | In WPF it should be Children. You need to add items as children of layout panels like your main Grid. For example, if you have a Grid, set its name to grid1 and then in code-behind you can:
grid1.Children.Add(myControl); // myControl is any UIElement you created
A: You can add a component like your WebBrowser directly to the content of the Window.
In WPF you are doing it like this:
public partial class MainWindow : Window
{
private void Window_Loaded(object sender, RoutedEventArgs e)
{
WebBrowser wb = new WebBrowser();
this.Content = wb;
}
}
But I suggest to do this via the XAML. | unknown | |
d3281 | train | Using a lock can solve your concurrency problem and thus avoid the IOException, but you must remember to use the same lock object in both SaveToDisk and ReadFromDisk (I assume this is the reading function); otherwise it's totally useless to lock only when you read.
private static readonly object syncLock = new object();
public void SaveToDisk()
{
lock(syncLock)
{
... write code ...
}
}
public void ReadFromDisk()
{
lock(syncLock)
{
... read code ...
}
}
A: A static lock should do the job quickly and simply:
private static readonly object syncLock = new object();
then...
public void SaveToDisk()
{
lock(syncLock)
{
...your code...
}
}
You can also use [MethodImpl(MethodImplOptions.Synchronized)] (on a static method that accepts the instance as an argument - for example, an extension method), but an explicit lock is more versatile.
A: I'd actually use a ReaderWriterLock to maximise concurrency. You can allow multiple readers but only one writer at a time.
private ReaderWriterLock myLock = new ReaderWriterLock();
public void SaveToDisk()
{
myLock.AcquireWriterLock(Timeout.Infinite);
try
{
... write code ...
}
finally
{
myLock.ReleaseWriterLock();
}
}
public void ReadFromDisk()
{
myLock.AcquireReaderLock(Timeout.Infinite);
try
{
... read code ...
}
finally
{
myLock.ReleaseReaderLock();
}
}
Just make sure to open the file with FileShare.Read so that subsequent reads don't fail. | unknown | |
d3282 | train | This turned out to be a .NET version issue. Once I applied the 2.0 Service Pack 2 on the server my problems went away.
A: Do the validators work at all on the production machine? That is, do they prevent you from entering invalid data?
I have a vague recollection of something like this happening to me. It may have been an issue of the JavaScript file needed by the validators not being sent from the server. Do a View Source, or turn on debugging (FireBug or IE8's F12 command). See if you're maybe getting JavaScript errors you didn't know about. | unknown | |
d3283 | train | Did you find the answer to your question? I saw that you posted over on the Sensu forums as well.
In any case, the easiest thing to do in this case would be to stop the cluster, blow out /var/lib/sensu/sensu-backend/etcd/ and reconfigure the cluster. As it stands, the behavior you're seeing seems like the cluster members were started individually first, which is what is potentially causing the issue and would be the reason for blowing the etcd dir away. | unknown | |
d3284 | train | Use GET-INTERNAL-RUN-TIME (or GET-INTERNAL-REAL-TIME):
(setf a
(let ((start (get-internal-run-time)))
(+ 1 1) ;This is the computation you want to time.
(- (get-internal-run-time) start)))
Divide by INTERNAL-TIME-UNITS-PER-SECOND if you want the result in seconds.
You would probably want to make a function or macro if you do this a lot.
A: See the answer of Lars.
TIME writes implementation dependent information to the trace output. If you want its output as a string:
(with-output-to-string (*trace-output*)
(time (+ 1 1))) | unknown | |
d3285 | train | I think you are missing the transition property on the input; it should be like this:
input {
height: 30px;
width: 300px;
outline: none;
transition: all 0.5s ease;
}
input:hover {
width: 500px;
}
Read more about the CSS transition property.
A: Firstly, your code in the text and in the link are different. Secondly, you can't use a transition on display because it's not animatable. You can use opacity and visibility instead of display. Now, on to your question...
You want the input to grow when the user hovers over it, right? In your example the input has no width value, so the transition doesn't know where the change begins or where it should end; you need to give the input a starting width. The time you pass is the duration of your transition. So:
input {
height: 30px;
width: 300px;
outline: none;
transition: 0.3s;
}
input:hover {
width: 400px;
}
It will know that when user hover on it, it will grow to 400px from 300px in 0.3s. That's it. Hope you get it.
A: It's because you didn't specify the width of your input | unknown | |
d3286 | train | This looks like a ReSharper warning and as such you can ask ReSharper to be silent about these things.
You can either configure ReSharper to stop complaining about this overall, you do this simply by hitting Alt+Enter on the squiggly in question and use the bottom menu item that usually allows you to configure the inspection severity.
You can opt to save this in your global settings, which means it will affect every project you open from now on, or you can save it to a team-shared settings file which you can then check into source control alongside your project, to make it only count for this one solution.
Now, if you want to keep the warning overall but ask it to stop complaining about one or more particular types, methods, properties or the likes, you can use the attributes that ReSharper provides.
You have several ways of bringing these attributes into your project:
*
*Add a reference to the Nuget package "JetBrains ReSharper annotations"
*Use the options dialog for ReSharper and find the page where it allows you to grab a copy of the source for those attributes onto the clipboard, then simply paste this into a file in your project.
*Define just the one or two attributes you want, even in your own namespace (which you then have to tell ReSharper about)
The recommended way is option 1, use the nuget package.
Assuming you now have the attributes available you can use either PublicAPIAttribute or the UsedImplicitlyAttribute.
Either one should suffice but they may have different connotations. Since you're flagging objects being transferred to or from clients I would go with the PublicAPIAttribute first.
Since you say in a comment that the PublicAPIAttribute didn't work but UsedImplicitlyAttribute did then I guess they do have different meanings. | unknown | |
d3287 | train | I thought I'd share the workaround that I ended up using.
I just added an index.d.ts file in the node_modules/@ionic/angular/ directory, with the following contents:
export * from './dist';
Of course it isn't ideal to modify the contents of your dependencies, but this simple fix keeps my IDE from driving me crazy... :-) | unknown | |
d3288 | train | You must avoid SQL Injection with parameter binding:
$dbh->do( qq{
INSERT INTO $Stable(Date, RouteID)
VALUES (?, ?)
ON DUPLICATE KEY UPDATE Seats=Seats-?
},
undef,
$Tdate, $Rid, $tickettotal
); | unknown | |
d3289 | train | my sample code for an out of cluster config
var kubeconfig *string
kubeconfig = flag.String("kubeconfig", "./config", "(optional) relative path to the kubeconfig file")
flag.Parse()
// kubernetes config loaded from ./config or whatever the flag was set to
config, err := clientcmd.BuildConfigFromFlags("", *kubeconfig)
if err != nil {
panic(err)
}
// instantiate our client with config
clientset, err := kubernetes.NewForConfig(config)
if err != nil {
panic(err)
}
// get a list of our CRs
pl := PingerList{}
d, err := clientset.RESTClient().Get().AbsPath("/apis/pinger.hel.lo/v1/pingers").DoRaw(context.TODO())
if err != nil {
panic(err)
}
if err := json.Unmarshal(d, &pl); err != nil {
panic(err)
}
PingerList{} is an object generated from Kubebuilder that I unmarshal to later in the code. However, you could just straight up println(string(d)) to get that json.
The components in the AbsPath() call are "/apis/{group}/{version}/{plural resource name}".
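As a sketch of how those components combine (the group/version/plural values are the example CRD from the code above; the helper function name is my own):

```go
package main

import "fmt"

// crdListPath assembles the raw REST path for listing a custom resource:
// /apis/<group>/<version>/<plural resource name>
func crdListPath(group, version, plural string) string {
	return fmt.Sprintf("/apis/%s/%s/%s", group, version, plural)
}

func main() {
	// Matches the path passed to AbsPath() in the snippet above.
	fmt.Println(crdListPath("pinger.hel.lo", "v1", "pingers"))
}
```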
if you're using minikube, you can get the config file with kubectl config view
Kubernetes-related imports are the following
"k8s.io/client-go/tools/clientcmd"
"k8s.io/client-go/kubernetes"
A: Refer this page to get information on how to access the crd using this repo
and for more information refer this document
document
A: Either you need to use the Unstructured client, or generate a client stub. The dynamic client in the controller-runtime library is a lot nicer for this and I recommend it. | unknown | |
d3290 | train | I think you can use the macro variable parameters (variadics) :
#include <stdio.h>
#define DBG_ALOGD(fmt, ...) ALOGD("%s:%d: " fmt, __FUNCTION__, __LINE__, __VA_ARGS__ );
#define DBG_MSG(fmt, ...) do { if (debuggable ) {DBG_ALOGD(fmt, __VA_ARGS__ );} } while (0)
int main(void)
{
int debuggable = 1;
DBG_MSG("%s(%d)\n", "test", 0);
DBG_MSG("%s", "hello");
return 0;
}
I tested it with printf function instead of ALOGD, but I think that the result will be the same.
Warning: you will not be able to call DBG_MSG("simple string") directly, since the compiler expects something non-empty for the variadic (...) part. (GCC's ##__VA_ARGS__ extension relaxes this.)
d3291 | train | The two functions you are looking for are next() and prev(). Native PHP functions for doing exactly what you are after:
$previousPage = prev($array);
$nextPage = next($array);
These functions move the internal pointer so, for example if you are on $array['two'] and use prev($array) then you are now on $array['one']. What I'm getting at is if you need to get one and three then you need to call next() twice.
A: $array = array(
'one' => 'first',
'two' => 'second',
'three' => '3rd',
'four' => '4th'
);
function getPrevNext($haystack,$needle) {
$prev = $next = null;
$aKeys = array_keys($haystack);
$k = array_search($needle,$aKeys);
if ($k !== false) {
if ($k > 0)
$prev = array($aKeys[$k-1] => $haystack[$aKeys[$k-1]]);
if ($k < count($aKeys)-1)
$next = array($aKeys[$k+1] => $haystack[$aKeys[$k+1]]);
}
return array($prev,$next);
}
var_dump(getPrevNext($array,'two'));
var_dump(getPrevNext($array,'one'));
var_dump(getPrevNext($array,'four'));
A: You can try to whip something up with an implementation of a SPL CachingIterator.
A: You could define your own class that handles basic array operations:
Here is an example posted by adityabhai [at] gmail com [Aditya Bhatt] 09-May-2008 12:14 on php.net
<?php
class Steps {
private $all;
private $count;
private $curr;
public function __construct () {
$this->count = 0;
}
public function add ($step) {
$this->count++;
$this->all[$this->count] = $step;
}
public function setCurrent ($step) {
reset($this->all);
for ($i=1; $i<=$this->count; $i++) {
if ($this->all[$i]==$step) break;
next($this->all);
}
$this->curr = current($this->all);
}
public function getCurrent () {
return $this->curr;
}
public function getNext () {
self::setCurrent($this->curr);
return next($this->all);
}
public function getPrev () {
self::setCurrent($this->curr);
return prev($this->all);
}
}
?>
Demo Example:
<?php
$steps = new Steps();
$steps->add('1');
$steps->add('2');
$steps->add('3');
$steps->add('4');
$steps->add('5');
$steps->add('6');
$steps->setCurrent('4');
echo $steps->getCurrent()."<br />";
echo $steps->getNext()."<br />";
echo $steps->getPrev()."<br />";
$steps->setCurrent('2');
echo $steps->getCurrent()."<br />";
echo $steps->getNext()."<br />";
echo $steps->getPrev()."<br />";
?> | unknown | |
d3292 | train | After checking Windows's group policy settings this turned out to be
an anti-virus blocking problem.
Group Policy?: Does the log contain something like this:
Error 0x800704ec: Failed to launch clean room process: "C:\WINDOWS\Temp\{AB10C981-0D7D-4AA6-857F-CC37696DB4BE}\.cr\Bundle.exe" -burn.clean.room="C:\Test\Bundle.exe" -burn.filehandle.attached=652 -burn.filehandle.self=656 -log "C:\Test\bundle.log"
Error 0x800704ec: Failed to run untrusted mode.
Or does it say something else? There is a group policy that can cause similar issues. See WiX issue 5856.
Anti-Virus Grace Period?: If you are administrator, there should be a possiblity to get a temporary grace period from your anti virus I would think. So you can perform your testing. I would give your own support desk a call first and then hit the Kaspersky user forums if unsuccessful. Perhaps you have a Kaspersky support agreement with priority support available?
False Positives: I also insist that you upload your binaries to virustotal.com to test for false positives. That you should do no matter what. Antivirus Whitelisting Pains by Bogdan Mitrache.
False positives can actually be worse than actual malware at times (so far as the malware isn't devastating)
because you cannot just tell the user to rebuild their machine(s). Instead you actually have
to fix the problem for them in a general sense. Not only does the user have a problem to fix, you have one as the vendor as well. How do you whitelist your product with 60+ anti-malware suites? You try virustotal.com first I think (not affiliated) - to check if you actually have such a problem. | unknown | |
d3293 | train | *
*First, convert your GIF image to a PNG slice image sequence.
*Declare your progress bar as an ImageView.
<ImageView
android:id="@+id/main_progress"
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:layout_gravity="center"
android:visibility="visible" />
*Create an .xml file in the drawable folder using the .png sequence images generated from the GIF. In this case, loading_web_animation.xml:
<?xml version="1.0" encoding="utf-8"?>
<animation-list xmlns:android="http://schemas.android.com/apk/res/android"
android:oneshot="false">
<item
android:drawable="@mipmap/wblod_0"
android:duration="40" />
<item
android:drawable="@mipmap/wblod_1"
android:duration="40" />
<item
android:drawable="@mipmap/wblod_2"
android:duration="40" />
<item
android:drawable="@mipmap/wblod_3"
android:duration="40" />
<item
android:drawable="@mipmap/wblod_4"
android:duration="40" />
<item
android:drawable="@mipmap/wblod_5"
android:duration="40" />
<item
android:drawable="@mipmap/wblod_6"
android:duration="40" />
<item
android:drawable="@mipmap/wblod_7"
android:duration="40" />
<item
android:drawable="@mipmap/wblod_8"
android:duration="40" />
<item
android:drawable="@mipmap/wblod_9"
android:duration="40" />
<item
android:drawable="@mipmap/wblod_10"
android:duration="40" />
<item
android:drawable="@mipmap/wblod_11"
android:duration="40" />
</animation-list>
*In your main Activity, set it up like this:
private AnimationDrawable animationDrawable;
private ImageView mProgressBar;
mProgressBar.setBackgroundResource(R.drawable.loading_web_animation);
animationDrawable = (AnimationDrawable)mProgressBar.getBackground();
mProgressBar.setVisibility(View.VISIBLE);
animationDrawable.start();
mProgressBar.setVisibility(View.GONE);
animationDrawable.stop();
A: I think I'm late to answer this, but you can try this also.
XML
<FrameLayout
android:id="@+id/progress_container"
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:layout_centerInParent="true">
<ProgressBar
android:id="@+id/circular_progress"
android:layout_width="100dp"
android:layout_height="100dp"
android:indeterminateDrawable="@drawable/my_progress_indeterminate"
/>
<TextView
android:id="@+id/circular_progress_counter"
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:layout_gravity="center"
android:textSize="@dimen/text_size_big"
android:textColor="@color/white"
android:text="10"/>
</FrameLayout>
my_progress_indeterminate.xml
<?xml version="1.0" encoding="utf-8"?>
<animated-rotate xmlns:android="http://schemas.android.com/apk/res/android"
android:drawable="@drawable/process_icon"
android:pivotX="50%"
android:pivotY="50%" />
In the Java file, the logic to show the timer. Here I used a 10-second timer.
private void progressTimer() {
handler = new Handler();
if (maxCount >= 0) {
handler.postDelayed(new Runnable() {
@Override
public void run() {
/*
Logic to set Time inside Progressbar
*/
mCircularProgressCounter.setText(maxCount+"");
maxCount = maxCount - 1;
progressTimer();
}
}, 1000);
}
}
Result
A: Put your gif image in /res/raw folder
In your class declare mProgressDialog
TransparentProgressDialog mProgressDialog;
then use following code to show progress dialog
if (mProgressDialog == null)
mProgressDialog = new TransparentProgressDialog(this);
if (mProgressDialog.isShowing())
mProgressDialog.dismiss();
mProgressDialog.setTitle(getResources().getString(R.string.title_progress_dialog));
mProgressDialog.setCancelable(false);
mProgressDialog.show();
Create a TransparentProgressDialog class where the .gif is loaded using the Glide library.
public class TransparentProgressDialog extends Dialog {
private ImageView iv;
public TransparentProgressDialog(Context context) {
super(context, R.style.TransparentProgressDialog);
WindowManager.LayoutParams wlmp = getWindow().getAttributes();
wlmp.gravity = Gravity.CENTER_HORIZONTAL;
getWindow().setAttributes(wlmp);
setTitle(null);
setCancelable(false);
setOnCancelListener(null);
LinearLayout layout = new LinearLayout(context);
layout.setOrientation(LinearLayout.VERTICAL);
LinearLayout.LayoutParams params = new LinearLayout.LayoutParams(LinearLayout.LayoutParams.WRAP_CONTENT, LinearLayout.LayoutParams.WRAP_CONTENT);
iv = new ImageView(context);
GlideDrawableImageViewTarget imageViewTarget = new GlideDrawableImageViewTarget(iv);
Glide.with(context).load(R.raw.gif_loader).into(imageViewTarget);
layout.addView(iv, params);
addContentView(layout, params);
}
@Override
public void show() {
super.show();
}
}
A: I solved it before on this post easily:
Custom progress bar with GIF (animated GIF)
Use an ImageView that shows an animated GIF: make it visible when you need to show a waiting state, and once the work is done, set its visibility to Gone!
A: My solution is to use an Animated icon.
1°) Create an XML "animated" in Drawable :
<?xml version="1.0" encoding="utf-8"?>
<animation-list xmlns:android="http://schemas.android.com/apk/res/android">
<item android:drawable="@drawable/micro_1" android:duration="200" />
<item android:drawable="@drawable/micro_2" android:duration="200" />
<item android:drawable="@drawable/micro_3" android:duration="200" />
<item android:drawable="@drawable/micro_4" android:duration="200" />
<item android:drawable="@drawable/micro_3" android:duration="200" />
<item android:drawable="@drawable/micro_2" android:duration="200" />
</animation-list>
2°) Put an ImageView in your layout
3°) Put the following code in your activity:
import androidx.appcompat.app.AppCompatActivity;
import android.app.ProgressDialog;
import android.graphics.drawable.AnimationDrawable;
import android.os.Bundle;
import android.os.Handler;
import android.view.View;
import android.widget.ImageView;
public class MainActivity extends AppCompatActivity {
ProgressDialog progressBar;
private int progressBarStatus = 0;
private Handler progressBarHandler = new Handler();
private ImageView micButton;
//-- for testing progress bar
private long fileSize = 0;
@Override
public void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
setContentView(R.layout.activity_main);
//-- declare animation
final AnimationDrawable[] rocketAnimation = new AnimationDrawable[1];
micButton = findViewById(R.id.mic_button); // assign the field declared above
micButton.setBackgroundResource(R.drawable.micro_1);
//-- button listener
micButton.setOnClickListener(new View.OnClickListener() {
@Override
public void onClick(View view) {
//-- init button with animation
micButton.setBackgroundResource(R.drawable.mic_image);
rocketAnimation[0] = (AnimationDrawable) micButton.getBackground();
startprogress(rocketAnimation[0]);
}
});
}
public void startprogress(final AnimationDrawable rocketAnimation) {
rocketAnimation.start();
progressBar = new ProgressDialog(MainActivity.this);
//--reset filesize for demo
fileSize = 0;
//-- thread for demo
new Thread(new Runnable() {
public void run() {
while (progressBarStatus < 100) {
// process some tasks
progressBarStatus = doSomeTasks();
// your computer is too fast, sleep 1 second
try {
Thread.sleep(1000);
} catch (InterruptedException e) {
e.printStackTrace();
}
// Update the progress bar
progressBarHandler.post(new Runnable() {
public void run() {
progressBar.setProgress(progressBarStatus);
}
});
}
// ok, file is downloaded,
if (progressBarStatus >= 100) {
// sleep 2 seconds, so that you can see the 100%
try {
Thread.sleep(2000);
} catch (InterruptedException e) {
e.printStackTrace();
}
// close the progress bar dialog
progressBar.dismiss();
if(rocketAnimation.isRunning()){
rocketAnimation.stop();
}
}
}
}).start();
}
// file download simulator... a really simple one
public int doSomeTasks() {
while (fileSize <= 1000000) { //1000000
fileSize++;
if (fileSize == 100000) {
return 10;
} else if (fileSize == 200000) {
return 20;
} else if (fileSize == 300000) {
return 30;
} else if (fileSize == 400000) {
return 40;
} else if (fileSize == 500000) {
return 50;
} else if (fileSize == 600000) {
return 60;
}
}
return 100;
}
} | unknown | |
d3294 | train | Use np.minimum:
In [341]:
df['MinNote'] = np.minimum(1,df['note'])
df
Out[341]:
session note minValue MinNote
0 1 0.726841 0.726841 0.726841
1 2 3.163402 3.163402 1.000000
2 3 2.844161 2.844161 1.000000
3 4 NaN NaN NaN
Also, the built-in min doesn't understand array-like comparisons, hence your error.
A: The preferred way to do this in pandas is to use the Series.clip() method.
In your example:
import pandas
df = pandas.DataFrame({'session': [1, 2, 3, 4],
'note': [0.726841, 3.163402, 2.844161, float('NaN')]})
df['minVaue'] = df['note'].clip(upper=1.)
df
Will return:
note session minVaue
0 0.726841 1 0.726841
1 3.163402 2 1.000000
2 2.844161 3 1.000000
3 NaN 4 NaN
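As a self-contained illustration, here is clip() on a plain Series (the values below are made up, not from the question):

```python
import pandas as pd

s = pd.Series([-3.0, 0.5, 2.8, 12.0])

# Upper bound only, as in the example above
capped = s.clip(upper=1.0)
print(capped.tolist())   # [-3.0, 0.5, 1.0, 1.0]

# Lower and upper bounds applied at once
bounded = s.clip(lower=0.0, upper=10.0)
print(bounded.tolist())  # [0.0, 0.5, 2.8, 10.0]
```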
numpy.minimum will also work, but .clip() has some advantages:
*It is more readable
*You can apply simultaneously lower and upper bounds: df['note'].clip(lower=0., upper=10.)
*You can pipe it with other methods: df['note'].abs().clip(upper=1.).round() | unknown | |
d3295 | train | What kind of JavaScript syntax is this.
Anything starting with a // is a Javascript comment.
How is it able to process it ?
Sprockets on the server side scans the JS file for directives. //= is a special Sprocket directive. When it encounters that directive it asks the Directive Processor to process the command, require in this example. In the absence of Sprockets the //= require .. line would be a simple JS comment.
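For example, a typical Rails manifest at app/assets/javascripts/application.js mixes ordinary comments with Sprockets directives (the required file names here are just illustrative):

```javascript
// This is a plain JavaScript comment; Sprockets ignores it.
//= require jquery
//= require jquery_ujs
//= require_tree .
```

Sprockets reads the `//=` lines at asset-compile time and inlines the referenced files; a browser that received this file raw would treat every line as a comment.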
Ruby require vs Sprockets require
These are two completely different things. The one you link to is Ruby's require.
Why not use script tags to load JS files.
Usually, you want to concatenate all your app JS files and then minify them into 1 master JS file and then include that. I recommend reading the YSlow best practices on this.
I also recommend watching the Railscasts on Asset Pipline - http://railscasts.com/episodes/279-understanding-the-asset-pipeline
Cheers! | unknown | |
d3296 | train | offset could be what you want
$('#drop_1').on('click', function(){
var offset = $(this).offset();
alert('top - ' + offset.top + "\n left - " + offset.left);
});
This will alert the position of the element from the top and left of the document
jQuery offset()
Here is a Demo
A: Relatively parent element:
$('#drop_1').position().left // x coord
$('#drop_1').position().top // y coord
Relatively page:
$('#drop_1').offset().left // x coord
$('#drop_1').offset().top // y coord | unknown | |
d3297 | train | $(document).ready(function() {
//code here
});
will run a script when the document structure is ready, but before all of the images have loaded.
If you want to run a script before the document structure is ready, just put your code anywhere.
A: Sometimes if you only use $(document).ready(), there will be a flash of content.
To avoid the flash, you can hide the body with css then show it after the page is loaded.
*Add the line below to your CSS:
html { visibility:hidden; }
*And these to your JS:
$(document).ready(function() {
//your own JS code here
document.getElementsByTagName("html")[0].style.visibility = "visible";
});
Then the page will go from blank to showing all content when the page is loaded, no flash of content, no watching images load etc.
Inspired by this, thanks to the author.
A: Load the images after the page loads?
It may depend on what kind of jQuery code you're trying to run; what specifically are you trying to do?
A: Use
$(document).ready(function() {
//code here
});
and put the jQuery-Script tags at the end of the document.
.ready fires when the DOM is ready (which does not mean that the images are already loaded).
A: If your JavaScript does any work on your Dom elements, then you have to wait until the page loads.
If you need to run the scripts before the images are loaded, you can always lazy load the images. That way you don't have to wait for the images to load.
Lazy loading is basically loading the images through JavaScript, so you can control when they load. | unknown | |
d3298 | train | The general guidance by Henry is right, but it lacks some necessary details.
To get your expected result the following steps are required:
*rename code in df2 to name,
*melt df2 on name, setting var_name to description,
*merge df1 with the above melt result on name and description.
The code to do it is:
result = pd.merge(df1, df2.rename(columns={'code': 'name'}).melt(
'name', var_name='description'), on=['name', 'description'])
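To make the intermediate melt step concrete, here is a self-contained sketch with made-up frames whose shapes match the question (the r1/r2/l1/l2/l3 cell values are hypothetical):

```python
import pandas as pd

df1 = pd.DataFrame({'id': [1121, 1122, 1123, 1124, 1125],
                    'name': ['F.01', 'F.01', 'F.02', 'F.02', 'F.02'],
                    'description': ['r1', 'r2', 'l1', 'l2', 'l3']})
df2 = pd.DataFrame({'code': ['F.01', 'F.02'],
                    'r1': [1, None], 'r2': [2, None],
                    'l1': [None, 3], 'l2': [None, 4], 'l3': [None, 5]})

# Steps 1 + 2: rename, then melt into long (name, description, value) form
long_df2 = df2.rename(columns={'code': 'name'}).melt('name', var_name='description')

# Step 3: merge on both key columns
result = pd.merge(df1, long_df2, on=['name', 'description'])
print(result.sort_values('id'))
```

The merged frame from this sketch matches the result shown below, except that the values come out as floats here because the made-up df2 contains NaNs.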
The result is:
id name description value
0 1121 F.01 r1 1
1 1122 F.01 r2 2
2 1123 F.02 l1 3
3 1124 F.02 l2 4
4 1125 F.02 l3 5 | unknown | |
d3299 | train | I got the same error and tracked it down a little bit inside clr.dll.
The function internally calls GetSystemInfo (kernel32) to check the allocation granularity.
A quick and dirty fix for this issue: Detour GetSystemInfo
See the example code below (I used Process.NET for the detour; it's quick and easy):
using System;
using System.Collections.Generic;
using System.Diagnostics;
using System.Reflection;
using System.Runtime.InteropServices;
using System.IO;
using ProcessDotNet;
using ProcessDotNet.Applied.Detours;
namespace Bootstrap
{
[UnmanagedFunctionPointer(CallingConvention.Winapi)]
public delegate void GetSystemInfoDelegate(ref SYSTEM_INFO info);
[StructLayout(LayoutKind.Sequential)]
public struct SYSTEM_INFO
{
public ushort processorArchitecture;
ushort reserved;
public uint pageSize;
public IntPtr minimumApplicationAddress;
public IntPtr maximumApplicationAddress;
public IntPtr activeProcessorMask;
public uint numberOfProcessors;
public uint processorType;
public uint allocationGranularity;
public ushort processorLevel;
public ushort processorRevision;
}
class Loader
{
static void GetSystemInfoDetoured(ref SYSTEM_INFO info)
{
info = new SYSTEM_INFO();
GetSystemInfoDetour.Disable();
GetSystemInfo(ref info);
info.allocationGranularity = 1000000;
GetSystemInfoDetour.Enable();
}
static Detour GetSystemInfoDetour;
static GetSystemInfoDelegate GetSystemInfo;
static void Main(string[] args)
{
TestDomainCreation();
}
static void TestDomainCreation()
{
AppDomainSetup setup = new AppDomainSetup();
ProcessSharp s = new ProcessSharp(Process.GetCurrentProcess(), ProcessDotNet.Memory.MemoryType.Local);
GetSystemInfo = s["kernel32"]["GetSystemInfo"].GetDelegate<GetSystemInfoDelegate>();
DetourManager dtm = new DetourManager(s.Memory);
GetSystemInfoDetour = dtm.CreateAndApply(GetSystemInfo, new GetSystemInfoDelegate(GetSystemInfoDetoured), "GetSystemInfoDetour");
var tempDomain = AppDomain.CreateDomain("HappyDomain", null, setup);
GetSystemInfoDetour.Disable();
GetSystemInfoDetour.Dispose();
}
}
}
Regards | unknown | |
d3300 | train | Since the OP wants the answer regardless, I will use PHP for this.
iOS client side:
NSString *phpURLString = [NSString stringWithFormat:@"%@/getFile.php", serverAddress];
NSURL *phpURL = [NSURL URLWithString:phpURLString];
NSMutableURLRequest *request = [NSMutableURLRequest requestWithURL:phpURL];
NSString *post = [NSString stringWithFormat:@"filePath=%@", filePath];
NSData *postData = [post dataUsingEncoding:NSASCIIStringEncoding];
NSString *postLength = [NSString stringWithFormat:@"%lu", (unsigned long)[post length]];
[request setHTTPMethod:@"POST"];
[request setValue:postLength forHTTPHeaderField:@"Content-Length"];
[request setHTTPBody:postData];
NSData *responseData = [NSURLConnection sendSynchronousRequest:request returningResponse:nil error:nil];
For the PHP side:
<?php
$filePath = htmlspecialchars($_POST['filePath']);
$fileData = file_get_contents($filePath);
echo $fileData;
?>
This is very basic. Also for the iOS side you would want to wrap that entire request in a code block that is run asynchronously in the background. You could use GCD for that. Once you have the file as responseData in iOS you can save the file to the local container and then do many things with it. | unknown |