11aa03ee5a2357eec964eafc62d13b351f36af03
|
Stackoverflow Stackexchange
Q: Macros in the Airflow Python operator Can I use macros with the PythonOperator? I tried the following, but I was unable to get the macros rendered:
dag = DAG(
'temp',
default_args=default_args,
description='temp dag',
schedule_interval=timedelta(days=1))
def temp_def(a, b, **kwargs):
print '{{ds}}'
print '{{execution_date}}'
print 'a=%s, b=%s, kwargs=%s' % (str(a), str(b), str(kwargs))
ds = '{{ ds }}'
mm = '{{ execution_date }}'
t1 = PythonOperator(
task_id='temp_task',
python_callable=temp_def,
op_args=[mm , ds],
provide_context=False,
dag=dag)
A: In my opinion, a more native Airflow way of approaching this would be to use the included PythonOperator and its provide_context=True parameter, like so:
t1 = MyPythonOperator(
task_id='temp_task',
python_callable=temp_def,
provide_context=True,
dag=dag)
Now you have access to all of the macros, Airflow metadata and task parameters in the kwargs of your callable:
def temp_def(**kwargs):
print 'ds={}, execution_date={}'.format(str(kwargs['ds']), str(kwargs['execution_date']))
If you have custom-defined params associated with the task, you can access those as well via kwargs['params'].
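For example, a minimal sketch of that (the param name 'colour' and its value are made up for illustration; the operator setup otherwise follows the snippet above):
def temp_def(**kwargs):
    # kwargs['params'] is the dict handed to the operator below
    print('colour={}'.format(kwargs['params']['colour']))
t2 = PythonOperator(
    task_id='temp_task_with_params',
    python_callable=temp_def,
    provide_context=True,
    params={'colour': 'blue'},
    dag=dag)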
A: Macros only get processed for templated fields. To get Jinja to process this field, extend the PythonOperator with your own.
class MyPythonOperator(PythonOperator):
template_fields = ('templates_dict','op_args')
I added 'templates_dict' to template_fields because the PythonOperator itself already has this field templated (see the PythonOperator source).
Now you should be able to use a macro within that field:
ds = '{{ ds }}'
mm = '{{ execution_date }}'
t1 = MyPythonOperator(
task_id='temp_task',
python_callable=temp_def,
op_args=[mm , ds],
provide_context=False,
dag=dag)
|
stackoverflow
|
{
"language": "en",
"length": 221,
"provenance": "stackexchange_0000F.jsonl.gz:855627",
"question_score": "24",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44513308"
}
|
396e184acd28488e26bc269807a5ba97c5445280
|
Stackoverflow Stackexchange
Q: Enable Azure diagnostics logging using a CLI I am trying to achieve this, but I would like to enable diagnostics logging using whatever form of non-human interaction is possible. I would like to incorporate something that enables diagnostics logging during my build step; by default these seem to be turned off. Are any PowerShell (using Azure RM) or Visual Studio Team Services build tasks available to make this happen?
A: You can do this with Azure CLI 2.0 using the following command.
az webapp log config --name
--resource-group
[--application-logging {false, true}]
[--detailed-error-messages {false, true}]
[--failed-request-tracing {false, true}]
[--level {error, information, verbose, warning}]
[--slot]
[--web-server-logging {filesystem, off, storage}]
Get help with az webapp log config -h or see the official documentation.
#an example
az webapp log config --name shui -g shuiapp --application-logging true --web-server-logging filesystem
|
stackoverflow
|
{
"language": "en",
"length": 138,
"provenance": "stackexchange_0000F.jsonl.gz:855641",
"question_score": "7",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44513358"
}
|
e9414930e1355f7ecbcc48b579d4476a93c90e70
|
Stackoverflow Stackexchange
Q: How to add multiple statements inside a when statement in Kotlin I am trying to have multiple statements for one condition. For example, this is sample code for a when statement:
when (x) {
1 -> print("x == 1")
2 -> print("x == 2")
else -> { // Note the block
print("x is neither 1 nor 2")
}
}
When x is 1, I also want to have an extra statement like x += 10. How can I do it?
A: You have the solution in your question with the "Note the block" comment. Branches of when can be blocks that can contain any number of statements:
when(x) {
1 -> {
println("x == 1")
x += 10
println("x == 11")
}
2 -> { ... }
else -> { ... }
}
A single-statement branch just has a simplified syntax so that you don't need to surround it with {}.
|
stackoverflow
|
{
"language": "en",
"length": 154,
"provenance": "stackexchange_0000F.jsonl.gz:855642",
"question_score": "26",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44513360"
}
|
6b30f35e621109579dd49c9071ce666fa921b0a3
|
Stackoverflow Stackexchange
Q: Get names of *args in a Python method I am not sure if it is possible, but suppose I have some Python class with a constructor as follows:
class SomeClass(object):
def __init__(self, *args):
pass
# here I want to iterate over args
# get name of each arg
Suppose I use this class somewhere and create an instance of it:
some_var = SomeClass(user, person, client, doctor)
What I mean by get name of arg, is to get names ('user', 'person', 'client' and 'doctor')
I really just mean getting the string name of an argument, where user, person, etc. are Python objects with their own attributes; I only need the names these variables (objects) are given.
A: *args should be used when you are unsure how many arguments will be passed to your function.
**kwargs lets you handle named arguments that you have not defined in advance (kwargs = keyword arguments).
So **kwargs is a dictionary added to the parameters.
https://docs.python.org/3/tutorial/controlflow.html#arbitrary-argument-lists
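A short sketch of that distinction (the names and values here are hypothetical): positional values arrive in *args with no record of the caller's variable names, while keyword arguments arrive in **kwargs keyed by name.
def demo(*args, **kwargs):
    # args is just a tuple of values; the names user, person, ... are gone
    print(args)            # (1, 2)
    # kwargs is a dict, so the argument names survive as keys
    print(sorted(kwargs))  # ['client', 'doctor']

user, person = 1, 2
demo(user, person, client=3, doctor=4)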
A: Use **kwargs and setattr like this:
class SomeClass(object):
def __init__(self, **kwargs):
for key, value in kwargs.items():
setattr(self, key, value)
and you'll get access to the keywords and the values as well, no matter which type they are.
|
stackoverflow
|
{
"language": "en",
"length": 203,
"provenance": "stackexchange_0000F.jsonl.gz:855666",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44513452"
}
|
95306fe4d2fadcc255a1d7db5529584a881996b0
|
Stackoverflow Stackexchange
Q: How to remove multilevel index in pandas pivot table I have a dataframe as given:
df = {'TYPE' : pd.Series(['Advisory','Advisory1','Advisory2','Advisory3']),
'CNTRY' : pd.Series(['IND','FRN','IND','FRN']),
'VALUE' : pd.Series([1., 2., 3., 4.])}
df = pd.DataFrame(df)
df = pd.pivot_table(df,index=["CNTRY"],columns=["TYPE"]).reset_index()
After pivoting, how can I get the dataframe columns to look like the below, removing the multilevel index VALUE?
Type|CNTRY|Advisory|Advisory1|Advisory2|Advisory3
0 FRN NaN 2.0 NaN 4.0
1 IND 1.0 NaN 3.0 NaN
A: You can use set_index with unstack
df.set_index(['CNTRY', 'TYPE']).VALUE.unstack().reset_index()
TYPE CNTRY Advisory Advisory1 Advisory2 Advisory3
0 FRN NaN 2.0 NaN 4.0
1 IND 1.0 NaN 3.0 NaN
A: You can add the values parameter:
df = pd.pivot_table(df,index="CNTRY",columns="TYPE", values='VALUE').reset_index()
print (df)
TYPE CNTRY Advisory Advisory1 Advisory2 Advisory3
0 FRN NaN 2.0 NaN 4.0
1 IND 1.0 NaN 3.0 NaN
And to remove the columns name, use rename_axis:
df = pd.pivot_table(df,index="CNTRY",columns="TYPE", values='VALUE') \
.reset_index().rename_axis(None, axis=1)
print (df)
CNTRY Advisory Advisory1 Advisory2 Advisory3
0 FRN NaN 2.0 NaN 4.0
1 IND 1.0 NaN 3.0 NaN
But maybe only pivot is necessary:
df = df.pivot(index="CNTRY",columns="TYPE", values='VALUE') \
.reset_index().rename_axis(None, axis=1)
print (df)
CNTRY Advisory Advisory1 Advisory2 Advisory3
0 FRN NaN 2.0 NaN 4.0
1 IND 1.0 NaN 3.0 NaN
because pivot_table aggregates duplicates with the default aggregation function, mean:
df = {'TYPE' : pd.Series(['Advisory','Advisory1','Advisory2','Advisory1']),
'CNTRY' : pd.Series(['IND','FRN','IND','FRN']),
'VALUE' : pd.Series([1., 1., 3., 4.])}
df = pd.DataFrame(df)
print (df)
CNTRY TYPE VALUE
0 IND Advisory 1.0
1 FRN Advisory1 1.0 <-same FRN and Advisory1
2 IND Advisory2 3.0
3 FRN Advisory1 4.0 <-same FRN and Advisory1
df = df.pivot_table(index="CNTRY",columns="TYPE", values='VALUE', fill_value=0)
print (df)
TYPE Advisory Advisory1 Advisory2
CNTRY
FRN 0.0 2.5 0.0
IND 1.0 0.0 3.0
Alternative with groupby, aggregate function and unstack:
df = df.groupby(["CNTRY","TYPE"])['VALUE'].mean().unstack(fill_value=0) \
       .reset_index().rename_axis(None, axis=1)
print (df)
CNTRY Advisory Advisory1 Advisory2
0 FRN 0.0 2.5 0.0
1 IND 1.0 0.0 3.0
|
stackoverflow
|
{
"language": "en",
"length": 296,
"provenance": "stackexchange_0000F.jsonl.gz:855676",
"question_score": "19",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44513488"
}
|
b7cbb5b2e33e448d453f4a5212e7cf972b3e7145
|
Stackoverflow Stackexchange
Q: Why is an additional array of arrays created inside the JSON object array in JavaScript? I add JSON objects to an array using the push command. After adding, that array shows only two JSON objects. Then I add the whole array to another array that contains 7 other arrays. Finally, when I access the JSON object array, it shows two JSON objects and one additional array that contains the same objects and the array. I have attached the code and the result of the outcome below. How can I resolve this?
prevArrhythBeats.push(
{
x: annotationPacket[k].timestamp,
title: annotationPacket[k].annotBeat.a_type,
text: annotationPacket[k].annotBeat.a_desc
}
);
dataFactory.setPrevData(prevChOne, prevChTwo, prevGrid, prevLeadChange, prevMotion, prevArrhythBeats)
dataFactory.setPrevData = function (cOne, cTwo, grid, lead, mot, beat) {
prevData.push(cOne);
prevData.push(cTwo);
prevData.push(grid);
prevData.push(lead);
prevData.push(mot);
prevData.push(beat);
}
[screenshot: before adding to dataFactory.setPrevData]
[screenshot: after the dataFactory.setPrevData method]
A: You are pushing the array into itself.
|
stackoverflow
|
{
"language": "en",
"length": 134,
"provenance": "stackexchange_0000F.jsonl.gz:855704",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44513567"
}
|
63d71fda1007d88940074db5d568e677186f9222
|
Stackoverflow Stackexchange
Q: PowerShell: how to allow blocked content in Internet Explorer
Any idea how to enable and disable "Allow active content to run in files on my computer" in the IE properties using PowerShell?
A: This worked for me!
When the HTML page is loaded from the local machine (a local file path), the ActiveX control can be blocked by the Local Machine Lockdown feature. You should be able to work around it by adding a mark-of-the-web
(<!-- saved from url=(0014)about:internet -->) at the beginning of your HTML page.
A: You can achieve the same with a registry key. First create a DWORD value named iexplore.exe
in the path HKEY_CURRENT_USER\Software\Microsoft\Internet Explorer\Main\FeatureControl\FEATURE_LOCALMACHINE_LOCKDOWN
If you set the value to 1 the 'Allow active content to run in files on my computer' will be unchecked.
If you set the value to 0 the 'Allow active content to run in files on my computer' will be checked.
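A rough sketch of doing that from PowerShell, as the question asks (the path and value name are taken from the description above; creating the key first is an assumption in case it does not exist yet):
$path = "HKCU:\Software\Microsoft\Internet Explorer\Main\FeatureControl\FEATURE_LOCALMACHINE_LOCKDOWN"
if (-not (Test-Path $path)) { New-Item -Path $path -Force | Out-Null }
# 0 = option checked (active content allowed), 1 = option unchecked
Set-ItemProperty -Path $path -Name "iexplore.exe" -Value 0 -Type DWord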
|
stackoverflow
|
{
"language": "en",
"length": 149,
"provenance": "stackexchange_0000F.jsonl.gz:855711",
"question_score": "4",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44513580"
}
|
07b7a612afc7f0862cb81338e48b2703e8c28290
|
Stackoverflow Stackexchange
Q: laravel validate with user function blade.php
......
<tr>
<td>{{ Form::label('cameraMac', 'Mac: ') }}</td>
<td>{{ Form::text('cameraMac')}}</td>
</tr>
......
controller.php
$data = Input::all();
function transform($cameraMac) {
return strtoupper($cameraMac);
}
//validation here
$user = User::find($data['user_id']);
if($data["cameraMac"])
{
$user->camera_id = transform($data["cameraMac"]);
Cloud_camera::create(['id' => transform($data["cameraMac"]), 'user_id' => $user->id]);
}
$user->save();
I need to transform the cameraMac into the primary key to store in the db. How can I use the function transform() with the validation rule 'nullable|unique:cloud_cameras,id|size:6'? That is to say, how can I call this function in the validation process?
A: If you need to somehow transform the input data you can use the merge method:
Input::merge(['cameraMac' => transform(Input::get('cameraMac'))]);
$this->validate(Input::getFacadeRoot(), [
'cameraMac' => 'nullable|unique:cloud_cameras,id|size:6'
]);
As a personal preference I would instead type-hint Illuminate\Http\Request $request in the controller method and then
$request->merge(['cameraMac' => transform($request->cameraMac)]);
$this->validate($request, ['cameraMac' => 'nullable|unique:cloud_cameras,id|size:6'
]);
A: I'd consider defining a middleware.
The middleware will perform the transformation and merge the result back to the request before hitting the controller.
class TransformId {
public function handle(Request $request, Closure $next) {
// shout out to @alepeino
$request->merge(['cameraMac' => transform($request->cameraMac)]);
return $next($request);
}
}
A: Are you sure that you want the field cameraMac to be nullable as a (more or less) primary key?
You should also use the integer validation rule when using the size validation.
The third parameter of the unique validation rule is except, which will ignore the given ID.
Your validation in the controller could look like this
$except_id = $request->input('cameraMac', null);
$this->validate($request, [
'cameraMac' => 'nullable|unique:cloud_cameras,id,'.$except_id.'|integer|size:6'
]);
One example of how you can show your validation errors in the views would be this:
@if (count($errors) > 0)
<div class="alert alert-danger">
<ul>
@foreach ($errors->all() as $error)
<li>{{ $error }}</li>
@endforeach
</ul>
</div>
@endif
A: For now I transform the input data first, then validate it in the way below.
But I'm still looking for a way to call this function in the validation.
$data = Input::all();
$data['cameraMac'] = transform($data['cameraMac']);
$rules = array(
'id' => 'required|unique:cloud_printers,id|size:6',
'cameraMac' => 'unique:cloud_cameras,id|size:6',
);
$validator = Validator::make($data, $rules);
if ($validator->fails())
return Redirect::back()->withInput()->withErrors($validator);
A: You can define a custom validation rule:
Validator::extend('transformUpper', function ($attribute, $value, $parameters, $validator) {
return strtoupper($value);
});
Then use the new rule in the Request as usual:
$this->validate($request, [
'cameraMac' => 'required|transformUpper'
]);
|
stackoverflow
|
{
"language": "en",
"length": 364,
"provenance": "stackexchange_0000F.jsonl.gz:855722",
"question_score": "10",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44513613"
}
|
0e8b7db4c9d98f64892f0db18f560c599b734368
|
Stackoverflow Stackexchange
Q: How to config cron value of @Scheduled in application.properties I am using spring-schedule like this.
@Component
@EnableScheduling
public class ScheduledTasks {
@Autowired
private ISomeJob someJob;
/**
* do a Job every 5 minutes.
*/
@Scheduled(cron = "0 0/5 * * * ?")
public void foo(){
someJob.doSomething();
}
}
It worked. But there is a problem.
I have two profiles named debug and release.
I want to do this job every 5 minutes in debug but once per hour in release.
So is there any way to configure the value of cron in application.properties?
A: Just use a property placeholder, @Scheduled(cron = "${some.profile.cron}"), to swap the cron expression depending on the selected profile.
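A minimal sketch of the matching profile-specific property files (the key some.profile.cron comes from the snippet above; the file names assume Spring Boot's application-{profile}.properties convention, and the cron expressions mirror the question):
# application-debug.properties (every 5 minutes)
some.profile.cron=0 0/5 * * * ?
# application-release.properties (once per hour)
some.profile.cron=0 0 * * * ?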
|
stackoverflow
|
{
"language": "en",
"length": 108,
"provenance": "stackexchange_0000F.jsonl.gz:855736",
"question_score": "16",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44513666"
}
|
fcf90d371fd627663d68407fdfeb536e6952b096
|
Stackoverflow Stackexchange
Q: Perform a maven plugin execution only when a file changed since last build I want to execute a maven plugin during mvn clean install whenever a file is changed since the last build. If the file is not changed since the last build then plugin execution should be skipped during mvn clean install.
Is it possible to achieve this in maven 3.5.0?
A: Maven doesn't keep a record of all the modules it has ever built. However, that would be necessary for Maven to know whether some (source) files have changed.
Some plugins, like the maven-compiler-plugin, compare timestamps of source files with timestamps of the corresponding generated class files, which allows them to skip compilation if the class file is newer. However, if you execute mvn clean (as mentioned in the question), class files are removed and compilation thus has to be executed anyway.
So to conclude: your request cannot be fulfilled by Maven without major changes in Maven itself.
|
stackoverflow
|
{
"language": "en",
"length": 153,
"provenance": "stackexchange_0000F.jsonl.gz:855742",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44513683"
}
|
7d3f254a776e161ea010cd0f0416aa7a0994e7e8
|
Stackoverflow Stackexchange
Q: Browser bars (address & bottom bar) on iOS Safari auto-hide not getting enabled on flexbox layout I am working on a flexbox layout for a mobile website. Since the layout stays fit (flexible to 100%), the address bar and bottom bar always stay visible (no auto-hide). Is there any trick to make them auto-hide? Thanks in advance.
|
stackoverflow
|
{
"language": "en",
"length": 59,
"provenance": "stackexchange_0000F.jsonl.gz:855743",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44513684"
}
|
83d8121cd21cc11fde4a422a9abce46fe3551f7e
|
Stackoverflow Stackexchange
Q: Nginx in Docker terminates directly I created a docker image based on nginx and included the config in the image itself.
Then I start the container (docker run -p 2000:80 nginx_be:0.1).
When I then check for running containers with docker ps, there are none.
What am I missing?
My Dockerfile
FROM nginx
ADD /conf/nginx.conf /etc/nginx/nginx.conf:ro
WORKDIR /etc/nginx
EXPOSE 80
CMD ["nginx", "-c", "/etc/nginx/nginx.conf"]
A: The latest nginx Dockerfile does end with:
CMD ["nginx", "-g", "daemon off;"]
You should do the same in order to avoid nginx launching as a daemon (background), which would make the container stop immediately (because of the lack of a foreground process).
See also "How to Keep Docker Container Running After Starting Services?".
You can also see this docker run command, which overrides your Dockerfile CMD:
docker run -t -d --name my-nginx nginx /usr/sbin/nginx -g "daemon off;"
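A sketch of the asker's Dockerfile with that advice applied (same /conf/nginx.conf layout as in the question; the :ro suffix is dropped because it is not Dockerfile syntax):
FROM nginx
ADD /conf/nginx.conf /etc/nginx/nginx.conf
WORKDIR /etc/nginx
EXPOSE 80
# -g "daemon off;" keeps nginx in the foreground so the container stays up
CMD ["nginx", "-c", "/etc/nginx/nginx.conf", "-g", "daemon off;"]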
|
stackoverflow
|
{
"language": "en",
"length": 142,
"provenance": "stackexchange_0000F.jsonl.gz:855754",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44513705"
}
|
38e3a8bfc1360ff1d23a255c07ff2dc049c48489
|
Stackoverflow Stackexchange
Q: shell script to check if a glob pattern matches any file I wrote a shell script to check if a binary is present:
#!/usr/local/bin/bash
if [ -e "/path/to/a/binary/file/app.bin" ]; then
echo "INFO: application has been built successfully"
else
echo "ERROR: application couldn't be built".
fi
It works fine and gives the expected results. However, if there are many applications with similar names (say app1.bin, app2.bin, etc.) and I want to test for "app*.bin", the if condition fails:
#!/usr/local/bin/bash
if [ -e "/path/to/a/binary/file/app*.bin" ]; then
echo "INFO: application has been built successfully"
else
echo "ERROR: application couldn't be built".
fi
How can I correct the if condition to check for the existence of any binary whose name matches app*.bin?
A: An alternative is to use compgen -G: it will generate a list of matching files or it will exit with an error status if nothing matches. With compgen you need quotes to prevent expansion.
if compgen -G "/path/to/a/binary/file/app*.bin" > /dev/null; then
echo "INFO: application has been built successfully"
else
echo "ERROR: application couldn't be built".
fi
A: Glob matching does not work inside quotes. Even if you removed the quotes, the condition would fail if multiple files were returned from the glob match, because -e works for a single file only.
In bash you can do this:
# the nullglob will let the unmatched glob to handled properly
shopt -s nullglob
fileList=(/path/to/a/binary/file/app*.bin)
if [ "${#fileList[@]}" -ge 1 ]; then
echo "INFO: application has been built successfully"
else
echo "ERROR: application couldn't be built".
fi
# This line is optional. This is just to ensure the nullglob option is
# reset after our computation
shopt -u nullglob
|
stackoverflow
|
{
"language": "en",
"length": 277,
"provenance": "stackexchange_0000F.jsonl.gz:855755",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44513710"
}
|
1779508c63aa084f5aee4aa7cada0663a86508f5
|
Stackoverflow Stackexchange
Q: Empty slots in JavaScript objects? Lately I started seeing this in Firefox's console:
Object [ <6 empty slots>, false, <3 empty slots>, 1 more… ]
When I have an object like
{
6: false,
10: true
}
I simply want an object with numeric keys that I can access, but I am worried by this because if it keeps track of empty slots then this must mean that some memory is wasted?
Are my concerns valid and if yes what would be a correct way to define such an object?
A: Javascript uses sparse arrays. "Since an array's length can change at any time, and data can be stored at non-contiguous locations in the array, JavaScript arrays are not guaranteed to be dense; this depends on how the programmer chooses to use them." (source)
If the objects are of type Array, then the memory used is an implementation detail of the engine. In your case, the objects are objects, so it only takes the memory for the object itself, and to store the property names and references to property values.
A: The problem might be caused by how Firefox's console.log interpreted the input object. Somehow, it got evaluated as an array instead of a simple object. Chrome does it right. If you look deeper into how an array is managed in JavaScript, you find the following:
Arrays cannot use strings as element indexes (as in an associative array), but must use integers. Setting or accessing via non-integers using bracket notation (or dot notation) will not set or retrieve an element from the array list itself, but will set or access a variable associated with that array's object property collection. The array's object properties and list of array elements are separate, and the array's traversal and mutation operations cannot be applied to these named properties. src
A better way to comprehend this is to tinker with the Array's length property, especially when you have constructed your array using []. To add elements to the array, we have to use .push(...). This function uses the length property (check 15.4.4.7 Array.prototype.push). So in short (an interactive example is at the bottom):
const arr = []; // length = 0
arr.push('1stEl', '2ndEl', '3thEl'); // length = 3
// this isn't allowed, but you can do this
arr[7] = '7thEl'; // length = 8
You see that the length is now 8 and not 4. The indices 3..6 are reserved, but undefined. Below is the console output:
[
"1stEl",
"2ndEl",
"3thEl",
undefined,
undefined,
undefined,
undefined,
"7thEl"
]
If you use a .push method again, it will place the new element after the '7thEl' element (so on index 8).
To check the keys that are used by this object, we can use Object.keys() on the array. You will get:
[
"0",
"1",
"2",
"7"
]
You see that numeric values are used as keys. Like your object, which is
{
6: false,
10: true
}
Using Object.keys on this object gives ["6", "10"]. It has a similar shape to the output above. So Firefox's console.log has interpreted your object as an array, and displays it as one. In order to display the array correctly, it starts (logically; I still need to check the source code) at key 0 and ends at key array.length - 1. But the indexes 0..5 and 7..9 aren't "defined", which leads to this output:
Object [ <6 empty slots>, false, <3 empty slots>, 1 more… ]
I'm not sure whether to qualify this as a bug or glitch in Firefox's console API, or whether the console input (when initializing a variable) has read the object incorrectly.
--- live example --
const a = new Array(3);
console.log('using "new Array(...)" reserves memory space: ' + a.length);
console.log('---');
// using brackets
const b = [];
console.log('but what with [] ? At initial, we have ' + b.length);
b.push('1stEl', '2ndEl', '3thEl');
console.log('After push(\'1stEl\', \'2ndEl\', \'3thEl\'), we have ' + b.length);
// add to random index
b[7] = '7thEl';
console.log('After b[7] = \'7thEl\', we have ' + b.length);
console.log('displaying gives ', b);
console.log('using Object.keys: ', Object.keys(b));
// adding again
b.push('newEl');
console.log('After b.push(\'newEl\'), we have ' + b.length);
// object
const obj = {
6: false,
10: true
};
console.log('obj defined as {6: false, 10: true }');
console.log('using Object.keys: ', Object.keys(obj));
console.log('obj: ', obj);
|
stackoverflow
|
{
"language": "en",
"length": 723,
"provenance": "stackexchange_0000F.jsonl.gz:855793",
"question_score": "16",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44513811"
}
|
7476bda9b4acf12afd624ad0c4dc46d656955f55
|
Stackoverflow Stackexchange
Q: Expression using ternary operator a+=b>=300?b=100:a==100;
If a and b are initialized to 100 and 200 respectively,
what will be the values of a and b after executing the ternary operator?
The answer was a=101, b=200.
How is this possible?
A: Just add some parentheses and spaces to make it more readable and it should be obvious:
a += ((b >= 300) ? (b = 100) : (a == 100));
(Refer to a C operator precedence table to see why the parentheses can be placed where they are in the above expression.)
So this is essentially just:
a += 1;
A: The conditional operator has nothing to do with it; basically it just adds clutter here. Your program is equivalent to a += (a == 100), which gives a += 1, since the result of == is 1 (true).
A: First add some spaces to make this statement expression easier to parse visually:
a += b >= 300 ? b = 100 : a == 100;
Then parse it according to the C grammar (which is subtly different from the Java or JavaScript grammars in this particular case):
a +=
(b >= 300) ?
b = 100 :
a == 100
;
Since b = 200, the test b >= 300 evaluates to false and the first branch of the ternary operator is not evaluated, but the second branch is and a == 100 evaluates to 1 as a is indeed equal to 100. The result of the ternary operator, 1, is added to a, hence the new value for a is 101. b is unchanged.
|
stackoverflow
|
{
"language": "en",
"length": 265,
"provenance": "stackexchange_0000F.jsonl.gz:855825",
"question_score": "4",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44513926"
}
|
bb9aa3596a87423d33cbebdbacaf8f57d04f92cc
|
Stackoverflow Stackexchange
Q: Unsupported element in unpacked struct datatype in formal argument I'm having trouble passing a structure object from SV to C through SV-C DPI.
The code:
SV side:
/*svFile.sv*/
typedef struct {
int a;
int b;
} struct_sv;
import "DPI-C" function void reciever(input struct_sv a);
and on the C side
/*cFile.c*/
void reciever(const struct_sv *x){
printf("%d %d", x->a, x->b);
}
But when I compile and run, I get the following error:
ncvlog: *E,UNUSAG unsupported element in unpacked struct datatype in formal argument.
|
stackoverflow
|
{
"language": "en",
"length": 82,
"provenance": "stackexchange_0000F.jsonl.gz:855834",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44513951"
}
|
ad862cec631229fe30462fe18d8c969491dcbfaa
|
Stackoverflow Stackexchange
Q: Main menu NSMenuItem key equivalent not working until menu has been viewed I have a list of user-configurable things that show in a main menu submenu. The first 9 items get the shortcuts ⌘1 through ⌘9 assigned:
let item = theMenu.addItem(
withTitle: title,
action: #selector(itemSelected(_:)),
keyEquivalent: "1")
item.target = self
item.keyEquivalentModifierMask = [.command]
The shortcut ⌘1 doesn't work until you open the menu once. After that, it works as expected. This setup code is called on launch, by the way.
Could this be an issue of menu item validation? Or is this approach just inferior to a menu with a delegate?
|
stackoverflow
|
{
"language": "en",
"length": 100,
"provenance": "stackexchange_0000F.jsonl.gz:855841",
"question_score": "4",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44513971"
}
|
9f12bfa942553c947822554bd1606b4c6f777e62
|
Stackoverflow Stackexchange
Q: RXJS Observable Transform Array To Multiple Values I'm making an Angular 2 HTTP get request and in return I get
Observable<Message[]>. I want to transform this observable into multiple emits.
So let's say the server returned Message array with length 3.
I would like to get 3 notifications in my subscribe call (one for each value in the array) instead of getting one call with the array.
e.g :
['Hello','Hey','Howdy'] -> 'Hello', 'Hey', 'Howdy'
I found an operator that does transform the array (Observable.for), but this operator takes as an argument an array and not an Observable.
A: You can use the concatAll operator to flatten the array (mergeAll will work as well).
Observable.of(['Hello','Hey','Howdy'])
.concatAll()
.subscribe(console.log)
See demo: https://jsbin.com/gaxajex/3/edit?js,console
A: Try this:
Observable.from(yourRequest())
.flatMap(msgList => Observable.from(msgList))
.subscribe(msg => console.log(msg));
yourRequest() in this case should return an array.
If yourRequest() returns Observable then:
yourRequest().flatMap(msgList => Observable.from(msgList))
.subscribe()
A: If I understand you correctly, you want to use something like concatMap. By using concatMap, you can map values (the Message array in your case) to an inner observable, and then you can subscribe to these values in order.
yourRequest.concatMap(i: Message => Observable.of(i)).subscribe(i => {
//sendNotification(i);
});
You can also take a look at this page.
I hope it helps!
|
stackoverflow
|
{
"language": "en",
"length": 203,
"provenance": "stackexchange_0000F.jsonl.gz:855850",
"question_score": "7",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44513990"
}
|
f3b437701b843a11183eeb7e13135b755304fdc0
|
Stackoverflow Stackexchange
Q: Magento 2: How to run CLI command from another CLI command class? I'm working on a custom CLI command & I was wondering what's the best way to call other commands from the PHP code (without shell_exec() or similar).
For example:
When running "php bin/magento my:custom:command", it'll do it's thing & in the end will run "php bin/magento cache:flush".
Any Ideas?
Thanks.
A: The Magento CLI is built on top of Symfony Console. You can load up and run other commands with this component as such:
$arguments = new ArrayInput(['command' => 'my:custom:command']);
$this->getApplication()->find('my:custom:command')->run($arguments, $output);
$arguments = new ArrayInput(['command' => 'cache:flush']);
$this->getApplication()->find('cache:flush')->run($arguments, $output);
More information is available in the Symfony Console documentation on calling commands from your code. Although it's unlikely to be a problem for you, please note that the documentation suggests this is not always the best idea:
Most of the times, calling a command from code that is not executed on the command line is not a good idea. The main reason is that the command's output is optimized for the console and not to be passed to other commands.
|
stackoverflow
|
{
"language": "en",
"length": 171,
"provenance": "stackexchange_0000F.jsonl.gz:855857",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44514026"
}
|
6bf91a133a2fac4c5731547f2fb24a8f8f81e8d8
|
Stackoverflow Stackexchange
Q: Is there any workaround to avoid calling /refresh to reload the properties in a Spring Boot client? I learnt about Spring Boot's config-server and config-client approach. But every time I make changes to my properties, I have to call the refresh POST API for them to be reflected in the client, which I want to avoid. Can we call the refresh internally in code rather than externally?
Please help.
A: You can solve this problem via Spring Cloud Bus, as described in the official documentation. Also, you could use this blog entry as a step-by-step guide.
Another solution is less exotic but still valid. You can configure your service to call RefreshEndpoint.refresh() periodically as discussed in this topic.
A: You can do it manually with ContextRefresher when you want to reload properties from the Spring Cloud config server.
@Autowired
ContextRefresher contextRefresher;
public void yourMethod() {
contextRefresher.refresh();
}
Just call yourMethod whenever you want.
|
stackoverflow
|
{
"language": "en",
"length": 151,
"provenance": "stackexchange_0000F.jsonl.gz:855860",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44514052"
}
|
0665e58fbae16bc68c5f125869cbcd06402bad3f
|
Stackoverflow Stackexchange
Q: Permission denied error when installing gcloud components I am following the Google Pub/Sub quickstart guide. When I try to run gcloud components install beta I get the error below.
ERROR: (gcloud.components.install) Permission denied:
[/usr/local/google-cloud-sdk.staging]
Ensure you have the permissions to access the file and that the file is not in use.
How can I fix this?
A: Have you tried to sudo the command? You might not have permission to write to that folder. sudo gcloud components install beta.
|
stackoverflow
|
{
"language": "en",
"length": 79,
"provenance": "stackexchange_0000F.jsonl.gz:855898",
"question_score": "5",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44514177"
}
|
ec9a5f625088f4009603054777aa711d935df420
|
Stackoverflow Stackexchange
Q: Decode ByteArray with the Spring 5 WebFlux framework I'm trying to use the new Spring WebFlux framework with Kotlin, and I cannot find where I am wrong with this code (myService):
fun foo(): Flux<ByteArray> {
val client = WebClient.create("http://byte-array-service")
return client
.get()
.uri("/info")
.accept(MediaType.APPLICATION_OCTET_STREAM)
.exchange()
.flatMapMany {
r -> r.bodyToFlux(ByteArray::class.java)
}
}
This method returns a Flux with 7893 bytes, and I know that is not all of the bytes sent by byte-array-service. If I use the old-style RestTemplate, all is OK:
fun foo(): Flux<ByteArray> {
val rt = RestTemplate()
rt.messageConverters.add(
ByteArrayHttpMessageConverter())
val headers = HttpHeaders()
headers.accept = listOf(MediaType.APPLICATION_OCTET_STREAM)
val entity = HttpEntity<String>(headers)
val r = rt.exchange("http://byte-array-service/info", HttpMethod.GET,entity, ByteArray::class.java)
return Flux.just(r.body)
}
It returns all 274124 bytes sent from byte-array-service.
Here is my consumer:
fun doReadFromByteArrayService(req: ServerRequest): Mono<ServerResponse> {
return Mono.from(myService
.foo()
.flatMap {
accepted().body(fromObject(it.size))
})
}
A: If I understood your question right, and you just need to deliver the flux forward, this should work. I tested it on my own environment and had no problems reading all the bytes.
To get bytes:
fun foo(): Flux<ByteArray> =
WebClient.create("http://byte-array-service")
.get()
.uri("/info")
.accept(MediaType.APPLICATION_OCTET_STREAM)
.retrieve()
.bodyToFlux(ByteArray::class.java)
Return bytes with response:
fun doReadFromByteArrayService(req: ServerRequest): Mono<ServerResponse> =
ServerResponse.ok().body(foo())
|
stackoverflow
|
{
"language": "en",
"length": 191,
"provenance": "stackexchange_0000F.jsonl.gz:855917",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44514263"
}
|
a0e5a6be4255b4a69a59ef5bcf91d179acc35e63
|
Stackoverflow Stackexchange
Q: Time picker with UltraMaskedEdit (hh:MM tt) changes format to HH:MM when entering edit mode I am using the UltraMaskedEdit control from Infragistics to pick and show a time only, in the format hh:MM tt. It shows up fine normally, but when entering edit mode it changes its format to HH:MM, and here is the problem, since I don't want the format to change in edit mode. I am using these properties for the UltraMaskedEdit control:
UltraMaskedEdit1.EditAs=Infragistics.Win.UltraWinMaskedEdit.EditAsType.DateTime;
UltraMaskedEdit1.InputMask = "{time}";
UltraMaskedEdit1.FormatString = "hh:MM tt";
UltraMaskedEdit1.PromptChar = ' ';
UltraMaskedEdit1.SpinButtonDisplayStyle = Infragistics.Win.SpinButtonDisplayStyle.OnRight;
UltraMaskedEdit1.SpinWrap = true;
Please let me know if there is any way to achieve this.
A: Setting the FormatString to "hh:MM tt" will show hours, month and AM/PM. Is this what you really need?
If you need to show hours, minutes and AM/PM setting of InputMask to {time} should be enough. Therefore try to remove the FormatString.
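For reference, in .NET custom date/time format strings MM is the month and mm is the minutes, which is why "hh:MM tt" ends up showing the month. If a FormatString is kept at all, a hedged sketch (assuming the control accepts the standard .NET specifiers) would be:
UltraMaskedEdit1.InputMask = "{time}";
// mm = minutes (not MM = month); hh keeps the 12-hour clock, tt adds AM/PM
UltraMaskedEdit1.FormatString = "hh:mm tt";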
|
stackoverflow
|
{
"language": "en",
"length": 147,
"provenance": "stackexchange_0000F.jsonl.gz:855941",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44514349"
}
|
625bbb3d2bc5025a65fe2c55a2b3a3e89aeb8299
|
Stackoverflow Stackexchange
Q: unclear java.util.ConcurrentModificationException Why did it happen?
I have written this code and it throws java.util.ConcurrentModificationException
List<Integer> list = Stream.iterate(0, t -> t + 1).limit(10).collect(Collectors.toList());
System.out.println(list);
List<Integer> subList = list.subList(5, list.size());
list.removeAll(subList);
System.out.println(subList);
System.out.println(list);
But next code doesn't throw
List<Integer> list = Stream.iterate(0, t -> t + 1).limit(10).collect(Collectors.toList());
System.out.println(list);
List<Integer> subList = list.subList(5, list.size());
System.out.println(subList);
list.removeAll(subList);
System.out.println(list);
A: Looking at the Javadoc for the subList() method, it clearly states:
The semantics of the list returned by this method become undefined if
the backing list (i.e., this list) is structurally modified in any way
other than via the returned list.
In your first example, that's exactly what's happening: you're structurally modifying the backing list by calling removeAll(), so the behavior of your sub-list is now unspecified.
The fact that the subsequent call to print the list ends up throwing a ConcurrentModificationException is just an implementation detail.
If you want to avoid this, you would have to create a new list from the sub-list you retrieve, i.e.
List<Integer> subList = new ArrayList<>(list.subList(5, list.size()));
list.removeAll(subList);
Both lists can now be independently accessed and modified.
|
Q: unclear java.util.ConcurrentModificationException Why did it happen?
I have written this code and it throws java.util.ConcurrentModificationException
List<Integer> list = Stream.iterate(0, t -> t + 1).limit(10).collect(Collectors.toList());
System.out.println(list);
List<Integer> subList = list.subList(5, list.size());
list.removeAll(subList);
System.out.println(subList);
System.out.println(list);
But next code doesn't throw
List<Integer> list = Stream.iterate(0, t -> t + 1).limit(10).collect(Collectors.toList());
System.out.println(list);
List<Integer> subList = list.subList(5, list.size());
System.out.println(subList);
list.removeAll(subList);
System.out.println(list);
A: Looking at the Javadoc for the subList() method, it clearly states:
The semantics of the list returned by this method become undefined if
the backing list (i.e., this list) is structurally modified in any way
other than via the returned list.
In your first example, that's exactly what's happening: you're structurally modifying the backing list by calling removeAll(), so the behavior of your sub-list is now unspecified.
The fact that the subsequent call to print the list ends up throwing a ConcurrentModificationException is just an implementation detail.
If you want to avoid this, you would have to create a new list from the sub-list you retrieve, i.e.
List<Integer> subList = new ArrayList<>(list.subList(5, list.size()));
list.removeAll(subList);
Both lists can now be independently accessed and modified.
A: Thank you, Robby Cornelissen.
The following code fixes it:
List<Integer> subList = new ArrayList<>(list.subList(5, list.size()));
A: Instead of creating a new list, what about using CopyOnWriteArrayList?
List<Integer> subList = new CopyOnWriteArrayList<>(list.subList(5, list.size()));
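For completeness, a small self-contained sketch of the safe pattern (copying the view into an independent list before structurally modifying the backing list); the numbers are just the ones from the question:
import java.util.ArrayList;
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.Stream;

public class SubListDemo {
    public static void main(String[] args) {
        List<Integer> list = Stream.iterate(0, t -> t + 1).limit(10).collect(Collectors.toList());
        // Copy the sub-list view so it no longer depends on the backing list.
        List<Integer> subList = new ArrayList<>(list.subList(5, list.size()));
        list.removeAll(subList);
        System.out.println(subList); // [5, 6, 7, 8, 9]
        System.out.println(list);    // [0, 1, 2, 3, 4]
    }
}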
|
stackoverflow
|
{
"language": "en",
"length": 212,
"provenance": "stackexchange_0000F.jsonl.gz:855952",
"question_score": "4",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44514394"
}
|
9521dba8b8838bce08e5c944b593dcdd06d25746
|
Stackoverflow Stackexchange
Q: react router with negative lookahead Basically I am trying to catch <Route path="/:slug" /> but other than /not-found.
I tried to use path-to-regex's Custom Match Parameters, which look like /:slug(^(?!.*not-found).*$), but it doesn't work.
It does not work here http://forbeslindesay.github.io/express-route-tester/, but it does work in https://regex101.com/
I am wondering how can I exclude a path when I am trying to catch /:slug?
A: I found this issue in the path-to-regex repo saying this is not supported, but using a RegExp is. However, react-router currently only supports path as a string.
Also, you can test path-to-regex matching in this demo
So to get this behaviour, I think the way to go is to look at the match or location props of the component rendered by <Route path="/:slug" /> and conditionally render a not-found component.
|
Q: react router with negative lookahead Basically I am trying to catch <Route path="/:slug" /> but other than /not-found.
I tried to use path-to-regex's Custom Match Parameters, which look like /:slug(^(?!.*not-found).*$), but it doesn't work.
It does not work here http://forbeslindesay.github.io/express-route-tester/, but it does work in https://regex101.com/
I am wondering how can I exclude a path when I am trying to catch /:slug?
A: I found this issue in the path-to-regex repo saying this is not supported, but using a RegExp is. However, react-router currently only supports path as a string.
Also, you can test path-to-regex matching in this demo
So to get this behaviour, I think the way to go is to look at the match or location props of the component rendered by <Route path="/:slug" /> and conditionally render a not-found component.
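A minimal sketch of that idea; the component name and the inline markup are illustrative, not taken from the question:
// Catches every /:slug but renders a not-found view for the reserved slug.
const SlugPage = ({ match }) =>
  match.params.slug === 'not-found'
    ? <h1>Not found</h1>
    : <h1>Slug: {match.params.slug}</h1>;

// Usage: <Route path="/:slug" component={SlugPage} />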
|
stackoverflow
|
{
"language": "en",
"length": 131,
"provenance": "stackexchange_0000F.jsonl.gz:855991",
"question_score": "4",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44514510"
}
|
95236f5a85b24965d1e9f6dbfd1694dc229e22a6
|
Stackoverflow Stackexchange
Q: How do I hide or show content with CSS depending on screen size? Just like Bootstrap, Ionic (Ionic 3) lets us resize the width of a column based on screen size using col-sm-8, col-md-6, col-lg-4. Bootstrap also comes with classes like visible-xs, hidden-sm, etc. that enables us to show or hide content according to the screen size. Does Ionic 3 ship with anything that lets us do the same?
A: Example:
<div hidden-xs visible-block-md>Hidden on small screen</div>
A SCSS solution would be:
$screen-breakpoints: (
xs: 0,
sm: 576px,
md: 768px,
lg: 992px,
xl: 1200px
) !default;
@each $keySize, $valueSize in $screen-breakpoints {
[hidden-#{$keySize}] {
@media (min-width: $valueSize) {
display: none;
}
}
}
@each $keySize, $valueSize in $screen-breakpoints {
[visible-block-#{$keySize}] {
@media (min-width: $valueSize) {
display: block;
}
}
}
@each $keySize, $valueSize in $screen-breakpoints {
[visible-inline-block-#{$keySize}] {
@media (min-width: $valueSize) {
display: inline-block;
}
}
}
If you're using Ionic you could go with something like:
@each $breakpoint in map-keys($screen-breakpoints) {
$infix: breakpoint-infix($breakpoint, $screen-breakpoints);
@include media-breakpoint-up($breakpoint, $screen-breakpoints) {
// Provide `[hidden-{bp}]` attributes for floating the element based
// on the breakpoint
[hidden#{$infix}] {
display: none !important;
}
}
}
|
Q: How do I hide or show content with CSS depending on screen size? Just like Bootstrap, Ionic (Ionic 3) lets us resize the width of a column based on screen size using col-sm-8, col-md-6, col-lg-4. Bootstrap also comes with classes like visible-xs, hidden-sm, etc. that enables us to show or hide content according to the screen size. Does Ionic 3 ship with anything that lets us do the same?
A: Example:
<div hidden-xs visible-block-md>Hidden on small screen</div>
A SCSS solution would be:
$screen-breakpoints: (
xs: 0,
sm: 576px,
md: 768px,
lg: 992px,
xl: 1200px
) !default;
@each $keySize, $valueSize in $screen-breakpoints {
[hidden-#{$keySize}] {
@media (min-width: $valueSize) {
display: none;
}
}
}
@each $keySize, $valueSize in $screen-breakpoints {
[visible-block-#{$keySize}] {
@media (min-width: $valueSize) {
display: block;
}
}
}
@each $keySize, $valueSize in $screen-breakpoints {
[visible-inline-block-#{$keySize}] {
@media (min-width: $valueSize) {
display: inline-block;
}
}
}
If you're using Ionic you could go with something like:
@each $breakpoint in map-keys($screen-breakpoints) {
$infix: breakpoint-infix($breakpoint, $screen-breakpoints);
@include media-breakpoint-up($breakpoint, $screen-breakpoints) {
// Provide `[hidden-{bp}]` attributes for floating the element based
// on the breakpoint
[hidden#{$infix}] {
display: none !important;
}
}
}
A: I copied the following CSS classes from Bootstrap 4 Alpha into my project and they work perfectly.
.invisible {
visibility: hidden !important;
}
.hidden-xs-up {
display: none !important;
}
@media (max-width: 575px) {
.hidden-xs-down {
display: none !important;
}
}
@media (min-width: 576px) {
.hidden-sm-up {
display: none !important;
}
}
@media (max-width: 767px) {
.hidden-sm-down {
display: none !important;
}
}
@media (min-width: 768px) {
.hidden-md-up {
display: none !important;
}
}
@media (max-width: 991px) {
.hidden-md-down {
display: none !important;
}
}
@media (min-width: 992px) {
.hidden-lg-up {
display: none !important;
}
}
@media (max-width: 1199px) {
.hidden-lg-down {
display: none !important;
}
}
@media (min-width: 1200px) {
.hidden-xl-up {
display: none !important;
}
}
.hidden-xl-down {
display: none !important;
}
Docs:
https://v4-alpha.getbootstrap.com/layout/responsive-utilities/
|
stackoverflow
|
{
"language": "en",
"length": 317,
"provenance": "stackexchange_0000F.jsonl.gz:856001",
"question_score": "6",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44514556"
}
|
4fe2001038cc14f89cc321b82cfa755c85a820b2
|
Stackoverflow Stackexchange
Q: DecimalFormat not working properly after windows update Up until recently my code was working fine on my development machine as well as on the deployment server.
Now out of the blue, the DecimalFormat does not work as expected and I am pretty sure that is after the windows 10 Creators Update.
My code is:
double x = 22.44;
DecimalFormat df = new DecimalFormat("0.00");
System.out.println(df.format(x));
Output: 22,44
Instead of 22.44
If i change it to :
double x = 22.44;
DecimalFormat df = new DecimalFormat("0,00");
System.out.println(df.format(x));
Output is: 0.22
I am using netbeans 7.4 with jdk 1.7.0_79u (64 bit)
Tried changing my jdk to 1.7.0_80u (32 bit) but made no difference.
Also changed the locale setting for Decimal Symbol and Digit Grouping Symbol but still the same problem.
Anyone with ideas on how to solve this issue?
A: This will be your system locale; different countries use different characters for the decimal and thousand separators.
You can set the locale in the decimal format to override your system default. Or you can change your system default.
|
Q: DecimalFormat not working properly after windows update Up until recently my code was working fine on my development machine as well as on the deployment server.
Now out of the blue, the DecimalFormat does not work as expected and I am pretty sure that is after the windows 10 Creators Update.
My code is:
double x = 22.44;
DecimalFormat df = new DecimalFormat("0.00");
System.out.println(df.format(x));
Output: 22,44
Instead of 22.44
If i change it to :
double x = 22.44;
DecimalFormat df = new DecimalFormat("0,00");
System.out.println(df.format(x));
Output is: 0.22
I am using netbeans 7.4 with jdk 1.7.0_79u (64 bit)
Tried changing my jdk to 1.7.0_80u (32 bit) but made no difference.
Also changed the locale setting for Decimal Symbol and Digit Grouping Symbol but still the same problem.
Anyone with ideas on how to solve this issue?
A: This will be your system locale; different countries use different characters for the decimal and thousand separators.
You can set the locale in the decimal format to override your system default. Or you can change your system default.
A: This is likely a locale issue - your current code uses the default locale of the system, which may be done differently in Java 7 and Java 8. If you want to use a specific locale you can use:
double x = 22.44;
DecimalFormat df = new DecimalFormat("0.00", new DecimalFormatSymbols(Locale.FRANCE));
System.out.println(df.format(x));
df = new DecimalFormat("0.00", new DecimalFormatSymbols(Locale.UK));
System.out.println(df.format(x));
which outputs:
22,44 (with a comma)
22.44 (with a dot)
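If the goal is to keep one behaviour everywhere rather than per-format, the JVM default locale can also be forced, either in code as below or with the -Duser.language/-Duser.country JVM options. A sketch; the chosen locale is only an example:
import java.text.DecimalFormat;
import java.util.Locale;

public class LocaleDefaultDemo {
    public static void main(String[] args) {
        // Force the default locale for the whole JVM before any formatting happens.
        Locale.setDefault(Locale.UK);
        DecimalFormat df = new DecimalFormat("0.00");
        System.out.println(df.format(22.44)); // 22.44 (dot as decimal separator)
    }
}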
|
stackoverflow
|
{
"language": "en",
"length": 244,
"provenance": "stackexchange_0000F.jsonl.gz:856031",
"question_score": "4",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44514659"
}
|
d766044e944479cd741e0d134b0030442932c0b7
|
Stackoverflow Stackexchange
Q: Change text in PDF I have a PDF and I want to programmatically change text, not fonts, colors, just letters.
I tried
*
*pdf-toolkit - just metadata
*prawn - templates not supported any more
*combine_pdf - some fonts not supported
Is there an easier way to change just the text?
Just decode the XML inside the PDF file, change it and encode it back?
|
Q: Change text in PDF I have a PDF and I want to programmatically change text, not fonts, colors, just letters.
I tried
*
*pdf-toolkit - just metadata
*prawn - templates not supported any more
*combine_pdf - some fonts not supported
Is there an easier way to change just the text?
Just decode the XML inside the PDF file, change it and encode it back?
|
stackoverflow
|
{
"language": "en",
"length": 61,
"provenance": "stackexchange_0000F.jsonl.gz:856035",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44514670"
}
|
ec98d2f521042b7cc3a18e3990bfd3a384f8d81c
|
Stackoverflow Stackexchange
Q: (Docker) Getting error: docker-php-source: no such file or directory when building docker file When I'm trying to build the docker file at: https://github.com/docker-library/php/blob/3f43309a0d5a427f54dc885e0812068ee767c03e/7.1/Dockerfile
command: docker build -t php_image .
I'm encountering the following error:
Step 14 : COPY docker-php-source /usr/local/bin/
lstat docker-php-source: no such file or directory
Could anybody help me to figure out something wrong here?
Thanks
A: You don't have the proper context of the docker build.
Just clone the repo to be sure to have all the files (with the right permissions):
git clone https://github.com/docker-library/php
docker build . -t php_image
But if you need to customize that image, it's easier to make your own Dockerfile based on the official build:
FROM php:7
RUN #your commands
RUN ...
|
Q: (Docker) Getting error: docker-php-source: no such file or directory when building docker file When I'm trying to build the docker file at: https://github.com/docker-library/php/blob/3f43309a0d5a427f54dc885e0812068ee767c03e/7.1/Dockerfile
command: docker build -t php_image .
I'm encountering the following error:
Step 14 : COPY docker-php-source /usr/local/bin/
lstat docker-php-source: no such file or directory
Could anybody help me to figure out something wrong here?
Thanks
A: You don't have the proper context of the docker build.
Just clone the repo to be sure to have all the files (with the right permissions):
git clone https://github.com/docker-library/php
docker build . -t php_image
But if you need to customize that image, it's easier to make your own Dockerfile based on the official build:
FROM php:7
RUN #your commands
RUN ...
|
stackoverflow
|
{
"language": "en",
"length": 121,
"provenance": "stackexchange_0000F.jsonl.gz:856049",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44514720"
}
|
3a1764978108c5044aac5e9451b246faafff50c1
|
Stackoverflow Stackexchange
Q: HTTP Basic: Access denied fatal: Authentication failed I use GitLab Community Edition 9.1.3 2e4e522 on Windows 10 Pro x64. With Git client.
Error
Cloning into 'project_name'...
remote: HTTP Basic: Access denied
fatal: Authentication failed for 'http://[email protected]/my_user_name/project_name.git/'
How to fix it?
A: Open CMD (Run as administrator)
type command:
git config --system --unset credential.helper
then enter new password for Git remote server.
|
Q: HTTP Basic: Access denied fatal: Authentication failed I use GitLab Community Edition 9.1.3 2e4e522 on Windows 10 Pro x64. With Git client.
Error
Cloning into 'project_name'...
remote: HTTP Basic: Access denied
fatal: Authentication failed for 'http://[email protected]/my_user_name/project_name.git/'
How to fix it?
A: Open CMD (Run as administrator)
type command:
git config --system --unset credential.helper
then enter new password for Git remote server.
A: I coped with the same error and my suggestions are:
*
*Start by trying to create another user in GitLab
*Recheck username & password (although it sounds obvious)
*Validate the Windows credentials (Start -> "cred")
*Copy & paste the same URL as you get from GitLab; the structure should be:
http://{srvName}/{userInGitLab}/{Repository.git}
no '/' at the end
*Recheck the authorization in GitLab
*Pay attention to case sensitivity
Hope one of the above will solve it.
A: If username and password are prompted, just add the GitLab username and password for the clone.
For a pop-up dialog asking for credentials, follow the steps below.
*
*Go to "control panel"
*user accounts
*manage credentials
*windows credentials
*git:https://[email protected]
*click on down arrow
*Click remove.
Hope this helps!
A: This can also happen because of a change in the password, since Git Credential Manager caches it. If that's the case:
1. Open Credential Manager in Windows
2. Search for your Git credential and reset it to the new password.
A: Go to your Credential manager => git credentials
Check your git credentials and check your password.
This worked for me.
A: *
*Generate an access token with a never-expire date, and select all the options available.
*Remove the existing SSH keys.
*Clone the repo with HTTPS instead of SSH.
*Use the username, but use the generated access token instead of the password.
Alternatively, you can switch the remote to HTTP in the existing repo with: git remote set-url origin https://gitlab.com/[username]/[repo-name].git
A: In Ubuntu 20.04, the VS Code terminal was unable to pull/push and returned this error:
remote: HTTP Basic: Access denied
fatal: Authentication failed for
Simply opening a regular terminal in the folder location and running git pull/push worked properly.
A: Before digging into the solution, let's first see why this happens.
Before any transaction, Git checks your authentication, which can be done using
*
*An SSH key present on your machine and shared with the Git repo (most preferred)
OR
*Your username/password (mostly used)
Why did this happen
In simple words, this happened because the credentials stored on your machine are no longer valid, i.e. there is a chance that the password stored on the machine differs from what is in Git.
Solution
Head to Control Panel, search for Credential Manager, look for your Git URL and change the credentials.
There you go; this works with most of the credentials that Windows keeps track of.
A: In my case I was using Git Credential Manager for Windows (it was installed by default, I didn't install it manually).
Credential Manager had saved my old password, but I had changed it recently.
If you are in the same situation, to solve this problem:
Go to Control Panel -> Credentials Manager and delete git account.
After that it will ask you again for the credentials.
A: For my case, I initially tried with
git config --system --unset credential.helper
But I was getting error
error: could not lock config file C:/Program Files/Git/etc/gitconfig: Permission denied
Then tried with
git config --global --unset credential.helper
No error, but still got access denied error while git pulling.
Then went to Control Panel -> Credentials Manager > Windows Credential and deleted git account.
After that, when I tried git pull again, it asked for the credentials and a new git account was added in Credentials Manager.
A: A simple git fetch/pull command will throw an authentication failed message. But run the same git fetch/pull command a second time, and it should prompt a window asking for credentials (username/password). Enter your ID and new password and it should save and move on.
A: I use VS Code on macOS and GitLab for my project. I tried many approaches, but it worked for me simply by resetting the remote origin of the project repository with the command below:
cd <local-project-repo-on-machine>
git remote set-url <remote-name> <remote-url>
for ex: git remote set-url origin https://<project-repository>.git
Hope it helps someone.
A: Edit the entry (possibly the password field that may have changed) for git: inside the Generic Credentials section of Windows Credentials, which can be accessed from Control Panel. Please note this is for Windows OS.
A: I solved it with:
*
*Deleting the github credential from the windows credentials,
*Adding a new credential like in this answer (the password is the PAT)
A: *
*Try whether it works in Git Bash
*Have you added an SSH key to your account? If yes, remove it and try
again. If not, add one and try the SSH URL.
*You don't necessarily need Tortoise Git but it may also work around
your problem
*Try to re-install Git without the Git Credential Manager for Windows
When you've fixed the push problem you will also be able to clone it when it is private or internal.
A: The updating of the password in the windows credential manager was not the solution for me.
I had to set a different remote url, by:
git remote set-url origin https://gitlab....git
The url in this case was the one that could be found in Gitlab under Clone -> Clone with HTTPS. It was not the one in the command line instructions.
|
stackoverflow
|
{
"language": "en",
"length": 920,
"provenance": "stackexchange_0000F.jsonl.gz:856050",
"question_score": "59",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44514728"
}
|
e92977509d7e7cd772c2cec3dfdeb1b469a1033c
|
Stackoverflow Stackexchange
Q: How to send multipartFile and Json with postman and spring boot I want to send a file and a json model at one post request.
My Request Mapping looks like that:
@ResponseBody
@RequestMapping(value = "/sftp/upload", method = RequestMethod.POST)
public ResponseEntity<SftpModel> upload(@RequestPart("file") MultipartFile file, @RequestPart("sftpModel") SftpModel sftpModel) {
My Json has this structure:
{
"sftpHost": "ftp01.Host.de",
"sftpPort": 22,
"sftpUser": "anyUser",
"sftpPassword": "anyPass",
"sftpRemoteDirectory": "/"
}
And the file is on my system.
I'm able to send the file or the sftpModel seperatly but not together. The error I receive is:
{
"timestamp": 1497336812907,
"status": 415,
"error": "Unsupported Media Type",
"exception": "org.springframework.web.HttpMediaTypeNotSupportedException",
"message": "Content type 'application/octet-stream' not supported",
"path": "/secure-data-transfer-service/sftp/upload"
}
I tried it with postman and curl. But no chance.
curl --form "[email protected]" --form "sftpModel={"sftpHost":"ftp01.Host.de","sftpPort":22,"sftpUser":"anyUser","sftpPassword":"anyPass","sftpRemoteDirectory":"/"}" http://localhost:8080/secure-data-transfer-service/sftp/upload
Is there any way to send both?
A: Your Java code looks perfect.
@ResponseBody
@RequestMapping(value = "/sftp/upload", method = RequestMethod.POST)
public ResponseEntity<SftpModel> upload(@RequestPart("file") MultipartFile file, @RequestPart("sftpModel") SftpModel sftpModel) { }
You can write your SftpModel json string in one json file and try uploading with that json file.
Click here to see the postman image
|
Q: How to send multipartFile and Json with postman and spring boot I want to send a file and a json model at one post request.
My Request Mapping looks like that:
@ResponseBody
@RequestMapping(value = "/sftp/upload", method = RequestMethod.POST)
public ResponseEntity<SftpModel> upload(@RequestPart("file") MultipartFile file, @RequestPart("sftpModel") SftpModel sftpModel) {
My Json has this structure:
{
"sftpHost": "ftp01.Host.de",
"sftpPort": 22,
"sftpUser": "anyUser",
"sftpPassword": "anyPass",
"sftpRemoteDirectory": "/"
}
And the file is on my system.
I'm able to send the file or the sftpModel seperatly but not together. The error I receive is:
{
"timestamp": 1497336812907,
"status": 415,
"error": "Unsupported Media Type",
"exception": "org.springframework.web.HttpMediaTypeNotSupportedException",
"message": "Content type 'application/octet-stream' not supported",
"path": "/secure-data-transfer-service/sftp/upload"
}
I tried it with postman and curl. But no chance.
curl --form "[email protected]" --form "sftpModel={"sftpHost":"ftp01.Host.de","sftpPort":22,"sftpUser":"anyUser","sftpPassword":"anyPass","sftpRemoteDirectory":"/"}" http://localhost:8080/secure-data-transfer-service/sftp/upload
Is there any way to send both?
A: Your Java code looks perfect.
@ResponseBody
@RequestMapping(value = "/sftp/upload", method = RequestMethod.POST)
public ResponseEntity<SftpModel> upload(@RequestPart("file") MultipartFile file, @RequestPart("sftpModel") SftpModel sftpModel) { }
You can write your SftpModel json string in one json file and try uploading with that json file.
Click here to see the postman image
A: Please try with the below code:
public ResponseEntity<?> uploadFile(@RequestPart MultipartFile file, @RequestPart String user) throws IOException {
    // The JSON part arrives as a plain String and is mapped manually.
    User users = new ObjectMapper().readValue(user, User.class);
    return ResponseEntity.ok(users);
}
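A fuller sketch of the same idea applied to the question's SftpModel, receiving the JSON part as a String and mapping it with Jackson (the return body is illustrative). Alternatively, attaching the JSON as a file or setting that part's Content-Type to application/json in Postman lets the original @RequestPart SftpModel signature work:
@ResponseBody
@RequestMapping(value = "/sftp/upload", method = RequestMethod.POST)
public ResponseEntity<SftpModel> upload(@RequestPart("file") MultipartFile file,
                                        @RequestPart("sftpModel") String sftpModelJson) throws IOException {
    // Map the plain-text part manually so curl/Postman can send it without a JSON content type.
    SftpModel sftpModel = new ObjectMapper().readValue(sftpModelJson, SftpModel.class);
    // ... hand file.getInputStream() and sftpModel to the SFTP upload logic ...
    return ResponseEntity.ok(sftpModel);
}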
|
stackoverflow
|
{
"language": "en",
"length": 205,
"provenance": "stackexchange_0000F.jsonl.gz:856075",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44514814"
}
|
6101aa65f7b59e73b56121c522cdc6941ec15fab
|
Stackoverflow Stackexchange
Q: TRAVIS_PULL_REQUEST_BRANCH is not defined during Travis build I'm trying to get my js script to pull the TRAVIS_PULL_REQUEST_BRANCH value but it comes up undefined when my js script runs. Not sure what I'm doing wrong here:
deploy-pull-request.js
const chalk = require('chalk'),
_ = require('lodash'),
green = chalk.green,
info = chalk.yellow,
Deploy = require('./deployApi')
// require('dotenv').config()
// const { env } = process
let options = {
awsKey: process.env.AWS_ACCESS_KEY_ID,
awsSecret: process.env.AWS_SECRET_ACCESS_KEY,
localBuildFolder: 'build',
domain: 'admin-'
}
const branch = process.env.FAKE_PULL_REQUEST_BRANCH || process.env.TRAVIS_PULL_REQUEST_BRANCH
.. rest of the code...
travis.yml
language: node_js
node_js:
- 8
cache:
yarn: true
directories:
- node_modules
deploy:
provider: s3
access_key_id: $AWS_ACCESS_KEY_ID
secret_access_key: $AWS_SECRET_ACCESS_KEY
before_script:
- pip install --user awscli
- yarn run build
- yarn run test
script:
- babel-node ./src/client/deploy/deploy-pull-request.js
|
Q: TRAVIS_PULL_REQUEST_BRANCH is not defined during Travis build I'm trying to get my js script to pull the TRAVIS_PULL_REQUEST_BRANCH value but it comes up undefined when my js script runs. Not sure what I'm doing wrong here:
deploy-pull-request.js
const chalk = require('chalk'),
_ = require('lodash'),
green = chalk.green,
info = chalk.yellow,
Deploy = require('./deployApi')
// require('dotenv').config()
// const { env } = process
let options = {
awsKey: process.env.AWS_ACCESS_KEY_ID,
awsSecret: process.env.AWS_SECRET_ACCESS_KEY,
localBuildFolder: 'build',
domain: 'admin-'
}
const branch = process.env.FAKE_PULL_REQUEST_BRANCH || process.env.TRAVIS_PULL_REQUEST_BRANCH
.. rest of the code...
travis.yml
language: node_js
node_js:
- 8
cache:
yarn: true
directories:
- node_modules
deploy:
provider: s3
access_key_id: $AWS_ACCESS_KEY_ID
secret_access_key: $AWS_SECRET_ACCESS_KEY
before_script:
- pip install --user awscli
- yarn run build
- yarn run test
script:
- babel-node ./src/client/deploy/deploy-pull-request.js
|
stackoverflow
|
{
"language": "en",
"length": 124,
"provenance": "stackexchange_0000F.jsonl.gz:856081",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44514844"
}
|
8da827272855c3611ae1806dd1ce918e2b30949a
|
Stackoverflow Stackexchange
Q: JSF not null condition Using JSF & EL, I'm basically trying to check if a variable is null (or not).
Here is a code snippet:
<p:dataGrid value="#{bean.graphiques}"
var="graphique"
rows="1" columns="3">
<c:if test="#{not empty graphique}">
<p:chart type="line" model="#{graphique}"/>
</c:if>
<c:if test="#{empty graphique}">
<p:outputLabel>
Add a new chart.
</p:outputLabel>
</c:if>
</p:dataGrid>
First check, #{not empty graphique} is always false, even if graphique is not null. I tried with #{graphique ne null} and #{graphique != null}, but it's false, too.
When I remove the c:if statement, the chart is displayed. Thus, graphique is not null.
I looked for a solution on a lot of websites - including SO - but didn't manage to find a solution.
Do you know what's going on and how to solve my problem?
Thanks!
A: Did you try...
<p:chart type="line" model="#{graphique}" rendered="#{graphique != null}"/>
Sometimes I had issues with primefaces tags in <c:if>
|
Q: JSF not null condition Using JSF & EL, I'm basically trying to check if a variable is null (or not).
Here is a code snippet:
<p:dataGrid value="#{bean.graphiques}"
var="graphique"
rows="1" columns="3">
<c:if test="#{not empty graphique}">
<p:chart type="line" model="#{graphique}"/>
</c:if>
<c:if test="#{empty graphique}">
<p:outputLabel>
Add a new chart.
</p:outputLabel>
</c:if>
</p:dataGrid>
First check, #{not empty graphique} is always false, even if graphique is not null. I tried with #{graphique ne null} and #{graphique != null}, but it's false, too.
When I remove the c:if statement, the chart is displayed. Thus, graphique is not null.
I looked for a solution on a lot of websites - including SO - but didn't manage to find a solution.
Do you know what's going on and how to solve my problem?
Thanks!
A: Did you try...
<p:chart type="line" model="#{graphique}" rendered="#{graphique != null}"/>
Sometimes I had issues with primefaces tags in <c:if>
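Applied to the dataGrid from the question, a sketch of replacing both c:if branches with rendered would look like this (only a sketch, reusing the question's markup):
<p:dataGrid value="#{bean.graphiques}" var="graphique" rows="1" columns="3">
    <p:chart type="line" model="#{graphique}" rendered="#{not empty graphique}"/>
    <p:outputLabel value="Add a new chart." rendered="#{empty graphique}"/>
</p:dataGrid>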
|
stackoverflow
|
{
"language": "en",
"length": 146,
"provenance": "stackexchange_0000F.jsonl.gz:856094",
"question_score": "7",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44514880"
}
|
b18115c38cd50b4fe42b2cddbe5390191cf46f37
|
Stackoverflow Stackexchange
Q: Reproducing UnknownTopicOrPartitionException: This server does not host this topic-partition We have encountered a few exceptions on the production environment:
UnknownTopicOrPartitionException: This server does not host this topic-partition
As per my analysis, one possible workaround for this issue is increasing the number of retries, since this is a retriable exception.
I am facing some difficulties reproducing this issue locally. I tried bringing down a broker while producing, but it fails with a TimeoutException.
I am looking for suggestions to reproduce this issue.
A: If you get this error log during topic creation process, there is an open issue for this:
KAFKA-6221 ReplicaFetcherThread throws UnknownTopicOrPartitionException on topic creation
at some point of time during batch creating topics, it's likely that UpdateMetadata requests got processed later than FetchRequest, therefore metadata cache was not updated on a timely basis.
issue was about log messages that have no impact on cluster health.
|
Q: Reproducing UnknownTopicOrPartitionException: This server does not host this topic-partition We have encountered a few exceptions on the production environment:
UnknownTopicOrPartitionException: This server does not host this topic-partition
As per my analysis, one possible workaround for this issue is increasing the number of retries, since this is a retriable exception.
I am facing some difficulties reproducing this issue locally. I tried bringing down a broker while producing, but it fails with a TimeoutException.
I am looking for suggestions to reproduce this issue.
A: If you get this error log during topic creation process, there is an open issue for this:
KAFKA-6221 ReplicaFetcherThread throws UnknownTopicOrPartitionException on topic creation
at some point of time during batch creating topics, it's likely that UpdateMetadata requests got processed later than FetchRequest, therefore metadata cache was not updated on a timely basis.
issue was about log messages that have no impact on cluster health.
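Regarding the retry workaround mentioned in the question, a minimal producer-side fragment raising the retry settings (broker address and values are placeholders):
Properties props = new Properties();
props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
// Retriable errors such as UnknownTopicOrPartitionException are retried up to this many times.
props.put(ProducerConfig.RETRIES_CONFIG, 10);
props.put(ProducerConfig.RETRY_BACKOFF_MS_CONFIG, 500);
KafkaProducer<String, String> producer = new KafkaProducer<>(props);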
|
stackoverflow
|
{
"language": "en",
"length": 145,
"provenance": "stackexchange_0000F.jsonl.gz:856112",
"question_score": "19",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44514923"
}
|
4a753f9924d6cd0e06ab9d57c1f10065f0529e50
|
Stackoverflow Stackexchange
Q: Is Kotlin "pass-by-value" or "pass-by-reference"? As I know, Java is pass-by-value from this post. I am from a Java background and I wonder what Kotlin uses for passing values in between, like in extensions or methods etc.
A: For primitives value is passed, and for non-primitives a reference to the object is passed. I'll explain with an example:
The code:
fun main() {
var a = 5
var b = a
a = 6
println("b = $b")
}
prints: b = 5
Kotlin passes the value of a to b, because a is a primitive. So changing a afterwards won't impact b.
The code:
fun main() {
var a = Dog(5)
var b = a
a.value = 6
println("b = ${b.value}")
}
class Dog (var value: Int)
prints b = 6, because this time a is not a primitive and so the reference to the object (Dog) was passed to b and not its value. Therefore changing a would affect all objects that point to it.
|
Q: Is Kotlin "pass-by-value" or "pass-by-reference"? As I know, Java is pass-by-value from this post. I am from a Java background and I wonder what Kotlin uses for passing values in between, like in extensions or methods etc.
A: For primitives value is passed, and for non-primitives a reference to the object is passed. I'll explain with an example:
The code:
fun main() {
var a = 5
var b = a
a = 6
println("b = $b")
}
prints: b = 5
Kotlin passes the value of a to b, because a is a primitive. So changing a afterwards won't impact b.
The code:
fun main() {
var a = Dog(5)
var b = a
a.value = 6
println("b = ${b.value}")
}
class Dog (var value: Int)
prints b = 6, because this time a is not a primitive and so the reference to the object (Dog) was passed to b and not its value. Therefore changing a would affect all objects that point to it.
A: Every time I hear about the "pass-by-value" vs "pass-by-reference" Java debate I always think the same. The answer I give: "Java passes a copy (pass-by-value) of the reference (pass-by-reference)". So everyone is happy. I would say Kotlin does the same, as it is a JVM-based language.
UPDATE
OK, so it's been a while since this answer and I think some clarification should be included. As @robert-liberatore is mentioning in the comments, the behaviour I'm describing is true for objects. Whenever your methods expect any object, you can assume that the JVM internally will make a copy of the reference to the object and pass it to your method. That's why having code like
void doSomething(List<Integer> x) {
    x = new ArrayList<Integer>();
}
List<Integer> x = Arrays.asList(1, 2, 3);
doSomething(x);
x.size() == 3
behaves like it does. You're copying the reference to the list, so "reassigning it" will take no effect in the real object. But since you're referring to the same object, modifying its inner content will affect the outer object.
This is something you may miss when defining your attributes as final in order to achieve immutability. You won't be able to reassign them, but there's nothing preventing you from changing its content
Of course, this is true for objects where you have a reference. In case of primitives, which are not a reference to an object containing something but "something" themselves, the thing is different. Java will still make a copy of the whole value (as it does with the whole reference) and pass it to the method. But primitives are just values, you can't "modify its inner values". So any change inside a method will not have effect in the outer values
Now, talking about Kotlin
In Kotlin you "don't have" primitive values. But you "do have" primitive classes. Internally, the compiler will try to use JVM primitive values where needed but you can assume that you always work with the boxed version of the JVM primitives. Because of that, when possible the compiler will just make a copy of the primitive value and, in other scenarios, it will copy the reference to the object. Or with code
fun aJvmPrimitiveWillBeUsedHere(x: Int): Int = x * 2
fun aJvmObjectWillBeUsedHere(x: Int?): Int = if (x != null) x * 2 else 1
I'd say that Kotlin scenario is a bit safer than Java because it forces its arguments to be final. So you can modify its inner content but not reassign it
fun doSomething(x: MutableList<Int>) {
x.add(2) // this works, you can modify the inner state
x = mutableListOf(1, 2) // this doesn't work, you can't reassign an argument
}
A: In Java, primitive types like int, float, double and boolean are passed to a method by value; if you modify them inside the receiving method, they don't change in the calling method. But if the property/variable type isn't a primitive, like arrays of primitives or other classes, when they are changed inside the method that receives them as a parameter, they also change in the caller method.
But with Kotlin nothing seems to be primitive, so I think everything is passed by reference.
A: It uses the same principles as Java. It is always pass-by-value; you can imagine that a copy is passed. For primitive types, e.g. Int, this is obvious: the value of such an argument will be passed into a function and the outer variable will not be modified. Please note that parameters in Kotlin cannot be reassigned since they act like vals:
fun takeInt(a: Int) {
a = 5
}
This code will not compile because a cannot be reassigned.
For objects it's a bit more difficult but it's also call-by-value. If you call a function with an object, a copy of its reference is passed into that function:
data class SomeObj(var x: Int = 0)
fun takeObject(o: SomeObj) {
o.x = 1
}
fun main(args: Array<String>) {
val obj = SomeObj()
takeObject(obj)
println("obj after call: $obj") // SomeObj(x=1)
}
You can use a reference passed into a function to change the actual object.
A: The semantics is identical to Java.
In Java, when you have an instance of an object, and you pass it to a method, that method can change the state of that object, and when the method is done, the changes would have been applied to the object at the call site.
The same applies in Kotlin.
A: This might be a little bit confusing.
The correct answer, IMHO, is that everything passes by reference, but no assignment is possible so it will be similar to passing by value in C++.
Note that function parameters are constant, i.e., they cannot be assigned.
Remember that in Kotlin there are no primitive types. Everything is an object.
When you write:
var x: Int = 3
x += 10
You actually create an object of type Int, assign it the value 3, and get a reference, or pointer, named x.
When you write
x += 10
You reassign a new Int object, with the value 13, to x. The older object becomes a garbage (and garbage-collected).
Of course, the compiler optimizes it, and creates no objects in the heap in this particular case, but conceptually it is as explained.
So what is the meaning of passing by reference function parameters?
*
*Since no assignment is possible for function parameters, the main advantage of passing by reference in C++ does not exist in Kotlin.
*If the object (passed to the function) has a method which changes its internal state, it will affect the original object.
*
*No such method exists for Int, String, etc. They are immutable objects.
*No copy is ever generated when passing objects to functions.
A: Since Kotlin is a new language for the JVM, like Java it is pass-by-value. The confusing part is with objects: at first it looks like they are passed by reference, but in actuality the reference/pointer itself is passed by value (a copy of the reference is passed to a method); hence, when a method receives a reference to an object, the method can manipulate the original object.
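A small illustration of that point (a sketch; the Dog class mirrors the one from the first answer):
class Dog(var value: Int)

// The callee receives a copy of the reference: it can mutate the original
// object, but it cannot rebind the caller's variable.
fun mutate(d: Dog) {
    d.value = 42
    // d = Dog(0)   // would not compile: parameters cannot be reassigned
}

fun main() {
    val dog = Dog(5)
    mutate(dog)
    println(dog.value) // prints 42
}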
A: Bear in mind, I am quite new to Kotlin. In my opinion, primitives are passed by value, but objects are passed by reference.
A primitive passed to a class works by default, but if you pass an object from a list, for example, and that object changes, the class object changes too, because, in fact, it is the same object.
Additionally, if the object gets removed from the list, the class object IS STILL A REFERENCE. So it can still change due to references in other places.
Example below explaines. You can run it here.
fun main() {
val listObjects = mutableListOf(ClassB(), ClassB(), ClassB())
val listPrimitives = mutableListOf(111, 222, 333)
val test = ClassA()
test.ownedObject = listObjects[0]
test.ownedPrimitive = listPrimitives[0]
println("ownedObject: " + test.ownedObject.isEnabled +", ownedPrimitive: " +
test.ownedPrimitive)
listObjects[0].isEnabled = true
println("ownedObject: " + test.ownedObject.isEnabled +", ownedPrimitive: " +
test.ownedPrimitive)
listPrimitives[0] = 999
println("ownedObject: " + test.ownedObject.isEnabled +", ownedPrimitive: " +
test.ownedPrimitive)
}
class ClassA {
var ownedObject: ClassB = ClassB()
var ownedPrimitive: Int = 0
}
class ClassB {
var isEnabled = false
}
|
stackoverflow
|
{
"language": "en",
"length": 1355,
"provenance": "stackexchange_0000F.jsonl.gz:856143",
"question_score": "99",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44515031"
}
|
0ac17f428c91f1cd694eee7784f204f7658f3c35
|
Stackoverflow Stackexchange
Q: Android - Black BG color issue in canvas masking I am creating an Android application in which I am going to crop a bitmap image using a path in a canvas.
I am able to cut the bitmap using the path, but it leaves a black background on the remaining portion of the bitmap.
Below is my code to cut a bitmap with path and mask in canvas.
public Bitmap cropBitmap(Path path){
Bitmap maskImage = Bitmap.createBitmap(bitmap.getWidth(), bitmap.getHeight(), Bitmap.Config.ARGB_8888);
Canvas maskCanvas = new Canvas(maskImage);
maskCanvas.drawColor(0, PorterDuff.Mode.CLEAR);
Paint pathPaint = new Paint();
pathPaint.setAntiAlias(true);
pathPaint.setXfermode(null);
pathPaint.setStyle(Style.FILL);
pathPaint.setColor(Color.WHITE);
maskCanvas.drawPath(path,pathPaint);
Bitmap resultImg = Bitmap.createBitmap(bitmap.getWidth(), bitmap.getHeight(), Bitmap.Config.ARGB_8888);
Canvas mCanvas = new Canvas(resultImg);
Paint paint = new Paint(Paint.ANTI_ALIAS_FLAG);
mCanvas.drawBitmap(bitmap, 0, 0, null);
paint.setXfermode(new PorterDuffXfermode(PorterDuff.Mode.DST_IN));
mCanvas.drawBitmap(maskImage, 0, 0, paint);
return resultImg;
}
and below is the input image with path.
and below is the result which i am getting right now.
I want to remove that black background portion.
that black portion should be transparent.
Is there any way i can remove that black portion and make it transparent?
|
Q: Android - Black BG color issue in canvas masking I am creating an Android application in which I am going to crop a bitmap image using a path in a canvas.
I am able to cut the bitmap using the path, but it leaves a black background on the remaining portion of the bitmap.
Below is my code to cut a bitmap with path and mask in canvas.
public Bitmap cropBitmap(Path path){
Bitmap maskImage = Bitmap.createBitmap(bitmap.getWidth(), bitmap.getHeight(), Bitmap.Config.ARGB_8888);
Canvas maskCanvas = new Canvas(maskImage);
maskCanvas.drawColor(0, PorterDuff.Mode.CLEAR);
Paint pathPaint = new Paint();
pathPaint.setAntiAlias(true);
pathPaint.setXfermode(null);
pathPaint.setStyle(Style.FILL);
pathPaint.setColor(Color.WHITE);
maskCanvas.drawPath(path,pathPaint);
Bitmap resultImg = Bitmap.createBitmap(bitmap.getWidth(), bitmap.getHeight(), Bitmap.Config.ARGB_8888);
Canvas mCanvas = new Canvas(resultImg);
Paint paint = new Paint(Paint.ANTI_ALIAS_FLAG);
mCanvas.drawBitmap(bitmap, 0, 0, null);
paint.setXfermode(new PorterDuffXfermode(PorterDuff.Mode.DST_IN));
mCanvas.drawBitmap(maskImage, 0, 0, paint);
return resultImg;
}
and below is the input image with path.
and below is the result which i am getting right now.
I want to remove that black background portion.
that black portion should be transparent.
Is there any way i can remove that black portion and make it transparent?
|
stackoverflow
|
{
"language": "en",
"length": 168,
"provenance": "stackexchange_0000F.jsonl.gz:856145",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44515038"
}
|
90b0c5a93ce2c44b4426871d75657f1120fa7190
|
Stackoverflow Stackexchange
Q: Running selenium test cases(Test cases are in robot framework) using jenkins I have test cases which I have written in Robot Framework. I have written one library for Robot Framework, but it is all for Selenium. I am using the Firefox browser. These test cases work fine if I run them through the command line.
If I start the test cases using Jenkins, this error shows. I am using a shell command to start Robot Framework.
NoSuchElementException: Message: Unable to locate element: {"method":"link text","selector":"Config Box"}
Stacktrace:
at FirefoxDriver.prototype.findElementInternal_ (file:///tmp/tmpkRQ7Lc/extensions/[email protected]/components/driver-component.js:10770)
at FirefoxDriver.prototype.findElement (file:///tmp/tmpkRQ7Lc/extensions/[email protected]/components/driver-component.js:10779)
at DelayedCommand.prototype.executeInternal_/h (file:///tmp/tmpkRQ7Lc/extensions/[email protected]/components/command-processor.js:12661)
at DelayedCommand.prototype.executeInternal_ (file:///tmp/tmpkRQ7Lc/extensions/[email protected]/components/command-processor.js:12666)
at DelayedCommand.prototype.execute/< (file:///tmp/tmpkRQ7Lc/extensions/[email protected]/components/command-processor.js:12608)
A: When running tests with Jenkins there are different timings for when the elements are available. Try to use keywords such as Wait For ... or Sleep.
|
Q: Running selenium test cases(Test cases are in robot framework) using jenkins I have test cases which I have written in Robot Framework. I have written one library for Robot Framework, but it is all for Selenium. I am using the Firefox browser. These test cases work fine if I run them through the command line.
If I start the test cases using Jenkins, this error shows. I am using a shell command to start Robot Framework.
NoSuchElementException: Message: Unable to locate element: {"method":"link text","selector":"Config Box"}
Stacktrace:
at FirefoxDriver.prototype.findElementInternal_ (file:///tmp/tmpkRQ7Lc/extensions/[email protected]/components/driver-component.js:10770)
at FirefoxDriver.prototype.findElement (file:///tmp/tmpkRQ7Lc/extensions/[email protected]/components/driver-component.js:10779)
at DelayedCommand.prototype.executeInternal_/h (file:///tmp/tmpkRQ7Lc/extensions/[email protected]/components/command-processor.js:12661)
at DelayedCommand.prototype.executeInternal_ (file:///tmp/tmpkRQ7Lc/extensions/[email protected]/components/command-processor.js:12666)
at DelayedCommand.prototype.execute/< (file:///tmp/tmpkRQ7Lc/extensions/[email protected]/components/command-processor.js:12608)
A: When running tests with Jenkins there are different timings for when the elements are available. Try to use keywords such as Wait For ... or Sleep.
|
stackoverflow
|
{
"language": "en",
"length": 125,
"provenance": "stackexchange_0000F.jsonl.gz:856161",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44515096"
}
|
4acf9c9c5471f938ea84078aca64124f0fe006fa
|
Stackoverflow Stackexchange
Q: How to synchronize ios device contact with qt app? I want to sync up iOS device contacts with an application based on the Qt platform. I am in search of an API which can do this with Qt.
I found a workaround (Qt-JNI-Java bridge) for Android:
http://doc.qt.io/qt-5/qandroidjniobject.html#details
It is working fine, but I didn't find any bridge for iOS.
Note: Qt has suggested the following link:
https://wiki.qt.io/Category:Developing_with_Qt::QtMobility
But it is related to Nokia Ovi store (which doesn't exist anymore) and the last date mentioned is 2011. The link to a release is dead.
If you come across any workaround for iOS-Qt, please suggest it to me.
Thanks in advance.
A: You can use the V-Play API for a cross-platform solution to handle phone contacts on iOS & Android.
These are the APIs:
https://v-play.net/doc/nativeutils/#getContacts-method
https://v-play.net/doc/nativeutils/#storeContacts-method
|
Q: How to synchronize ios device contact with qt app? I want to sync up iOS device contacts with an application based on the Qt platform. I am in search of an API which can do this with Qt.
I found a workaround (Qt-JNI-Java bridge) for Android:
http://doc.qt.io/qt-5/qandroidjniobject.html#details
It is working fine, but I didn't find any bridge for iOS.
Note: Qt has suggested the following link:
https://wiki.qt.io/Category:Developing_with_Qt::QtMobility
But it is related to Nokia Ovi store (which doesn't exist anymore) and the last date mentioned is 2011. The link to a release is dead.
If you come across any workaround for iOS-Qt, please suggest it to me.
Thanks in advance.
A: You can use the V-Play API for a cross-platform solution to handle phone contacts on iOS & Android.
These are the APIs:
https://v-play.net/doc/nativeutils/#getContacts-method
https://v-play.net/doc/nativeutils/#storeContacts-method
|
stackoverflow
|
{
"language": "en",
"length": 132,
"provenance": "stackexchange_0000F.jsonl.gz:856172",
"question_score": "4",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44515140"
}
|
b9dcd583eb446daf5a1f22dde4911ba48b043f6d
|
Stackoverflow Stackexchange
Q: List jobs on specific queue in Laravel + RabbitMQ I'm currently using Laravel 5.1 and RabbitMQ. My task requires that I list all the jobs in a specific queue, select at least one, and manipulate (purge/delete) it.
Is there a way to do it programmatically?
A: This is not possible; RabbitMQ allows you to get the first message from the queue and process it.
There is a way to get the total message count when declaring a queue, but to me it looks suspicious.
|
Q: List jobs on specific queue in Laravel + RabbitMQ I'm currently using Laravel 5.1 and RabbitMQ. My task requires that I list all the jobs in a specific queue, select at least one, and manipulate (purge/delete) it.
Is there a way to do it programmatically?
A: This is not possible; RabbitMQ allows you to get the first message from the queue and process it.
There is a way to get the total message count when declaring a queue, but to me it looks suspicious.
|
stackoverflow
|
{
"language": "en",
"length": 85,
"provenance": "stackexchange_0000F.jsonl.gz:856178",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44515150"
}
|
396101bacade001d159dccfd4541bea9bd638a16
|
Stackoverflow Stackexchange
Q: How to project the shapefile with GDAL ogr2ogr GDAL ogr2ogr projects the shapefile with EPSG:28991 and creates a .prj file near Amersfoort, but the actual place of the shp file should be in Amsterdam.
How can I reproject the shapefile to locate it in Amsterdam with the help of xmin, ymin, xmax, ymax?
A: This command can help you convert the shapefile projection when you have two different EPSGs:
ogr2ogr -f "ESRI Shapefile" -t_srs EPSG:NEW_EPSG_NUMBER -s_srs EPSG:OLD_EPSG_NUMBER output.shp input.shp
|
Q: How to project the shapefile with GDAL ogr2ogr GDAL ogr2ogr projects the shapefile with EPSG:28991 and creates a .prj file near Amersfoort, but the actual place of the shp file should be in Amsterdam.
How can I reproject the shapefile to locate it in Amsterdam with the help of xmin, ymin, xmax, ymax?
A: This command can help you convert the shapefile projection when you have two different EPSGs:
ogr2ogr -f "ESRI Shapefile" -t_srs EPSG:NEW_EPSG_NUMBER -s_srs EPSG:OLD_EPSG_NUMBER output.shp input.shp
A: There are a lot of reasons why this didn't work. Do you know what the original projection of the shapefile was? Wrong placement doesn't necessarily mean wrong projection. Is the actual data correct?
Ogr2ogr and other command-line tools are not the best solution for one-time actions and troubleshooting such problems. A much more user-friendly tool, which actually runs on the same engines, is QGIS, for example. You will get a more visual perception of the problem and you will troubleshoot everything much faster.
A: Regarding GDAL OGR, there is no problem in the tool itself. The .prj file created contains the EPSG projection that you have assigned to the shapefile. The problem in such cases is that the map and the shapefile layer are not in the same coordinate system, which causes the shifting. To solve this problem, make sure that the generated .prj file contains the targeted coordinate system and, more importantly, make sure that the map's coordinate system is exactly the same.
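To make the distinction in the answers concrete: assigning a CRS (when the coordinates are already right but the declared projection is wrong) is different from reprojecting (when the coordinates themselves must change). A hedged sketch, with the EPSG codes as placeholders:
# Assign (overwrite) the declared CRS without touching the coordinates
ogr2ogr -f "ESRI Shapefile" -a_srs EPSG:CORRECT_EPSG output.shp input.shp
# Reproject the coordinates from a source CRS to a target CRS
ogr2ogr -f "ESRI Shapefile" -s_srs EPSG:SOURCE_EPSG -t_srs EPSG:TARGET_EPSG output.shp input.shp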
|
stackoverflow
|
{
"language": "en",
"length": 250,
"provenance": "stackexchange_0000F.jsonl.gz:856217",
"question_score": "5",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44515255"
}
|
48296bdaf2d24fca3eb9f9c60660ec2d9c00716c
|
Stackoverflow Stackexchange
Q: Getting values from cells in google scripts I am trying to make working sheets for my work. In Google Scripts, I've created a "Custom Menu" for my sheet which sends email correctly. But now I want to get the value from a specific cell, check if it is below, for example, 2, and send an email with that value. For now, I have this:
function onOpen() {
var ui = SpreadsheetApp.getUi();
// Or DocumentApp or FormApp.
ui.createMenu('Custom Menu')
.addItem('First item', 'menuItem1')
.addSeparator()
.addToUi();
}
function menuItem1() {
SpreadsheetApp.getUi() // Or DocumentApp or FormApp.
.alert('You clicked the first menu item!');
if( 'A1' > 3){
MailApp.sendEmail('[email protected]', 'subject', 'message');
}
}
I don't know how to get this value from this cell. This 'if' is just an example of what I am trying to do; I know it is not working. Thank you in advance for any kind of help.
A: First, You need to find the sheet:
var sheet = SpreadsheetApp.getActiveSheet();
Then, you need to specify a cell range and get the value(s):
var value = sheet.getRange("A1").getValue();
You can browse the API for more functions here: https://developers.google.com/apps-script/reference/spreadsheet/spreadsheet-app
|
Q: Getting values from cells in google scripts I am trying to make working sheets for my work. In Google Scripts, I've created a "Custom Menu" for my sheet which sends email correctly. But now I want to get the value from a specific cell, check if it is below, for example, 2, and send an email with that value. For now, I have this:
function onOpen() {
var ui = SpreadsheetApp.getUi();
// Or DocumentApp or FormApp.
ui.createMenu('Custom Menu')
.addItem('First item', 'menuItem1')
.addSeparator()
.addToUi();
}
function menuItem1() {
SpreadsheetApp.getUi() // Or DocumentApp or FormApp.
.alert('You clicked the first menu item!');
if( 'A1' > 3){
MailApp.sendEmail('[email protected]', 'subject', 'message');
}
}
I don't know how to get this value from this cell. This 'if' is just an example of what I am trying to do; I know it is not working. Thank you in advance for any kind of help.
A: First, You need to find the sheet:
var sheet = SpreadsheetApp.getActiveSheet();
Then, you need to specify a cell range and get the value(s):
var value = sheet.getRange("A1").getValue();
You can browse the API for more functions here: https://developers.google.com/apps-script/reference/spreadsheet/spreadsheet-app
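Putting the two pieces together inside the menu handler from the question, a sketch (the cell, threshold, subject and message are placeholders):
function menuItem1() {
  var sheet = SpreadsheetApp.getActiveSheet();
  var value = sheet.getRange("A1").getValue();
  // Send the mail only when the cell value is below the threshold.
  if (value < 2) {
    MailApp.sendEmail('[email protected]', 'Low value alert', 'A1 is ' + value);
  }
}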
|
stackoverflow
|
{
"language": "en",
"length": 185,
"provenance": "stackexchange_0000F.jsonl.gz:856362",
"question_score": "4",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44515670"
}
|
b11e4be26dbdad10a06642754c865c0a4a8f31a6
|
Stackoverflow Stackexchange
Q: Akka stream stops when AMQP server restarts I'm having a really weird issue using the Alpakka AMQP connector and Akka Streams.
When my RabbitMQ message broker restarts, the source seems to restart fine. However, once it's restarted, the stream never completes, and the message gets lost in a partition farther down the stream. When I start the AMQP server first, my Akka app works fine, but the other way around everything is messed up.
Here's how I initialize my AMQPSource:
val amqpMessageSource = builder.add {
val amqpSource = AmqpSource(
NamedQueueSourceSettings(connectionDetails, amqpInMessageQueue).withDeclarations(queueDeclaration),
bufferSize = 10
).map { message =>
fromIncomingMessage(message)
}.initialDelay(5.seconds)
amqpSource.recoverWithRetries(-1, { case _ => amqpSource }) // Retry every 5 seconds an infinity of times
}
I've tried to remove the partition where the issue occurs to send the stream straight to the flow that is relevant for my example, and it's even weirder: in this case, the AMQP client doesn't even read messages from RabbitMQ anymore.
I'm obviously missing something here but I've tried a lot of different things that didn't solve my problem at all.
|
Q: Akka stream stops when AMQP server restarts I'm having a really weird issue using the Alpakka AMQP connector and Akka Streams.
When my RabbitMQ message broker restarts, the source seems to restart fine. However, once it's restarted, the stream never completes, and the message gets lost in a partition farther down the stream. When I start the AMQP server first, my Akka app works fine, but the other way around everything is messed up.
Here's how I initialize my AMQPSource:
val amqpMessageSource = builder.add {
val amqpSource = AmqpSource(
NamedQueueSourceSettings(connectionDetails, amqpInMessageQueue).withDeclarations(queueDeclaration),
bufferSize = 10
).map { message =>
fromIncomingMessage(message)
}.initialDelay(5.seconds)
amqpSource.recoverWithRetries(-1, { case _ => amqpSource }) // Retry every 5 seconds an infinity of times
}
I've tried to remove the partition where the issue occurs to send the stream straight to the flow that is relevant for my example, and it's even weirder: in this case, the AMQP client doesn't even read messages from RabbitMQ anymore.
I'm obviously missing something here but I've tried a lot of different things that didn't solve my problem at all.
|
stackoverflow
|
{
"language": "en",
"length": 178,
"provenance": "stackexchange_0000F.jsonl.gz:856384",
"question_score": "5",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44515718"
}
|
6cd171ccc3b46116c6973b2edb06f52915f493b9
|
Stackoverflow Stackexchange
Q: 'Conda' is not recognized as internal or external command I installed Anaconda3 4.4.0 (32 bit) on my Windows 7 Professional machine and imported NumPy and Pandas on Jupyter notebook so I assume Python was installed correctly. But when I type conda list and conda --version in command prompt, it says conda is not recognized as internal or external command.
I have set environment variable for Anaconda3; Variable Name: Path, Variable Value: C:\Users\dipanwita.neogy\Anaconda3
How do I make it work?
A: In addition to adding C:\Users\yourusername\Anaconda3 and C:\Users\yourusername\Anaconda3\Scripts, as recommended by Raja (above), also add C:\Users\yourusername\Anaconda3\Library\bin to your path variable. This will prevent an SSL error that is bound to happen if you're performing this on a fresh install of Anaconda.
|
Q: 'Conda' is not recognized as internal or external command I installed Anaconda3 4.4.0 (32 bit) on my Windows 7 Professional machine and imported NumPy and Pandas on Jupyter notebook so I assume Python was installed correctly. But when I type conda list and conda --version in command prompt, it says conda is not recognized as internal or external command.
I have set environment variable for Anaconda3; Variable Name: Path, Variable Value: C:\Users\dipanwita.neogy\Anaconda3
How do I make it work?
A: In addition to adding C:\Users\yourusername\Anaconda3 and C:\Users\yourusername\Anaconda3\Scripts, as recommended by Raja (above), also add C:\Users\yourusername\Anaconda3\Library\bin to your path variable. This will prevent an SSL error that is bound to happen if you're performing this on a fresh install of Anaconda.
A: If you have a newer version of the Anaconda Navigator, open the Anaconda Prompt program that came in the install. Type all the usual conda update/conda install commands there.
I think the answers above explain this, but I could have used a very simple instruction like this. Perhaps it will help others.
A: For conda --version greater than 4.6, from the base of your Anaconda promt, run
conda update conda
conda init
This will update your conda root environment and setup the stuff you need to run it on both cwd and powershell.
After this, you can start any terminal and it will be conda ready.
A: I found the solution.
Variable value should be C:\Users\dipanwita.neogy\Anaconda3\Scripts
A: When you install anaconda on windows now, it doesn't automatically add Python or Conda to your path.
While during the installation process you can check this box, you can also add python and/or python to your path manually (as you can see below the image)
If you don’t know where your conda and/or python is, you type the following commands into your anaconda prompt
where python
where conda
Next, you can add Python and Conda to your path by using the setx command in your command prompt (replace C:\Users\mgalarnyk\Anaconda2 with the results you got when running where python and where conda).
SETX PATH "%PATH%;C:\Users\mgalarnyk\Anaconda2\Scripts;C:\Users\mgalarnyk\Anaconda2"
Next close that command prompt and open a new one. Congrats you can now use conda and python
Source: https://medium.com/@GalarnykMichael/install-python-on-windows-anaconda-c63c7c3d1444
A: Go to the Anaconda Prompt (type "anaconda" in the search box on your computer) and type the following command:
where conda
Add that location to your PATH environment variable. Close the cmd window and open it again.
A: This problem arose for me when I installed Anaconda multiple times. I was careful to do an uninstall but there are some things that the uninstall process doesn't undo.
In my case, I needed to remove a file Microsoft.PowerShell_profile.ps1 from ~\Documents\WindowsPowerShell\. I identified that this file was the culprit by opening it in a text editor. I saw that it referenced the old installation location C:\Anaconda3\.
A: I was faced with the same issue on Windows 10. After updating the environment variables with the following steps, it's working fine.
I know it is a lengthy answer for a simple environment setup, but I thought it may be useful for new Windows 10 users.
1) Open Anaconda Prompt:
2) Check Conda Installed Location.
where conda
3) Open Advanced System Settings
4) Click on Environment Variables
5) Edit Path
6) Add New Path
C:\Users\RajaRama\Anaconda3\Scripts
C:\Users\RajaRama\Anaconda3
C:\Users\RajaRama\Anaconda3\Library\bin
7) Open Command Prompt and Check Versions
8) After 7th step type
conda install anaconda-navigator in cmd then press y
A: Just to be clear, you need to go to the controlpanel\System\Advanced system settings\Environment Variables\Path,
then hit edit and add:
C:\Users\user.user\Anaconda3\Scripts
to the end and restart the cmd line
A: Although you were offered a good solution by others I think it is helpful to point out what is really happening. As per the Anaconda 4.4 changelog, https://docs.anaconda.com/anaconda/reference/release-notes/#what-s-new-in-anaconda-4-4:
On Windows, the PATH environment variable is no longer changed by default, as this can cause trouble with other software. The recommended approach is to instead use Anaconda Navigator or the Anaconda Command Prompt (located in the Start Menu under “Anaconda”) when you wish to use Anaconda software.
(Note: recent Win 10 does not assume you have privileges to install or update. If the command fails, right-click on the Anaconda Command Prompt, choose "More", chose "Run as administrator")
This is a change from previous installations. It is suggested to use Navigator or the Anaconda Prompt although you can always add it to your PATH as well. During the install the box to add Anaconda to the PATH is now unchecked but you can select it.
A: If you don't want to add Anaconda to env. path and you are using Windows try this:
*
*Open cmd;
*Type the path to your installation folder. It's something like:
C:\Users\your_home folder\Anaconda3\Scripts
*Test Anaconda; for example, type conda --version.
*Update Anaconda: conda update conda or conda update --all or conda update anaconda.
Update Spyder:
*
*conda update qt pyqt
*conda update spyder
A: I have Windows 10 64 bit, this worked for me,
This solution can work for both (Anaconda/MiniConda) distributions.
*
*First of all try to uninstall anaconda/miniconda which is causing problem.
*After that delete '.anaconda' and '.conda' folders from 'C:\Users\'
*If you have any antivirus software installed then try to exclude all the folders,subfolders inside 'C:\ProgramData\Anaconda3\' from
*
*Behaviour detection.
*Virus detection.
*DNA scan.
*Suspicious files scan.
*Any other virus protection mode.
*(Note: 'C:\ProgramData\Anaconda3' this folder is default installation folder, you can change it just replace your excluded path at installation destination prompt while installing Anaconda)*
*Now install Anaconda with admin privileges.
*
*Set the installation path as 'C:\ProgramData\Anaconda3' or you can specify your custom path just remember it should not contain any white space and it should be excluded from virus detection.
*At Advanced Installation Options you can check "Add Anaconda to my PATH environment variable(optional)" and "Register Anaconda as my default Python 3.6"
*Install it with further default settings. Click on finish after done.
*Restart your computer.
Now open Command prompt or Anaconda prompt and check installation using following command
conda list
If you get any package list then the anaconda/miniconda is successfully installed.
A: I have just launched anaconda-navigator and run the conda commands from there.
A: For those who didn't check "Add Anaconda to my PATH environment variable", in Windows 10 it looks like this:
5 paths:
C:\Users\shtosh\anaconda3
C:\Users\shtosh\anaconda3\Library\mingw-w64\bin
C:\Users\shtosh\anaconda3\Library\usr\bin
C:\Users\shtosh\anaconda3\Library\bin
C:\Users\shtosh\anaconda3\Scripts
A: if you use chocolatey, conda is in C:\tools\Anaconda3\Scripts
A: I had this problem in windows. Most of the answers are not as recommended by anaconda, you should not add the path to the environment variables as it can break other things. Instead you should use anaconda prompt as mentioned in the top answer.
However, this may also break. In this case right click on the shortcut, go to shortcut tab, and the target value should read something like:
%windir%\System32\cmd.exe "/K" C:\Users\myUser\Anaconda3\Scripts\activate.bat C:\Users\myUser\Anaconda3
|
stackoverflow
|
{
"language": "en",
"length": 1120,
"provenance": "stackexchange_0000F.jsonl.gz:856400",
"question_score": "244",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44515769"
}
|
adff070f35c42876b2f41392779210e55ef8ef83
|
Stackoverflow Stackexchange
Q: Using React-file-viewer I'm trying to use React-file-viewer. Npm tutorial is here
But I have an error in the console : "you may need an appropriate loader to handle this file type"
This is my code :
import FileViewer from 'react-file-viewer';
import { CustomErrorComponent } from 'custom-error';
const file = 'http://example.com/image.png'
const type = 'png'
onError = (e) => {
logger.logError(e, 'error in file-viewer');
}
<FileViewer
fileType={type}
filePath={file}
errorComponent={CustomErrorComponent}
onError={this.onError}/>
To clarify, I have babel-preset-es2015 and I use it.
How can I do this?
Thank you
A: You have to include babel in webpack config as
loaders: [
{test: /\.js$/, include: path.join(__dirname, 'src'), loaders: ['babel']},
{ test: /\.jsx$/, exclude: /node_modules/, loader: 'babel-loader' }]
As you are using
onError = (e) => {
logger.logError(e, 'error in file-viewer');
}
which is ES6 syntax. To make it browser-compatible you have to add
{test: /\.js$/, include: path.join(__dirname, 'src'), loaders: ['babel']}
|
Q: Using React-file-viewer I'm trying to use React-file-viewer. Npm tutorial is here
But I have an error in the console : "you may need an appropriate loader to handle this file type"
This is my code :
import FileViewer from 'react-file-viewer';
import { CustomErrorComponent } from 'custom-error';
const file = 'http://example.com/image.png'
const type = 'png'
onError = (e) => {
logger.logError(e, 'error in file-viewer');
}
<FileViewer
fileType={type}
filePath={file}
errorComponent={CustomErrorComponent}
onError={this.onError}/>
To clarify, I have babel-preset-es2015 and I use it.
How can I do this?
Thank you
A: You have to include babel in webpack config as
loaders: [
{test: /\.js$/, include: path.join(__dirname, 'src'), loaders: ['babel']},
{ test: /\.jsx$/, exclude: /node_modules/, loader: 'babel-loader' }]
As you are using
onError = (e) => {
logger.logError(e, 'error in file-viewer');
}
which is ES6 syntax. To make it browser-compatible you have to add
{test: /\.js$/, include: path.join(__dirname, 'src'), loaders: ['babel']}
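If you are on webpack 2 or later, the loaders array above is replaced by module.rules; a rough equivalent (preset names are assumptions based on the babel-preset-es2015 mentioned in the question) would be:
module: {
  rules: [
    {
      test: /\.jsx?$/,               // handle both .js and .jsx
      exclude: /node_modules/,
      use: {
        loader: 'babel-loader',
        options: { presets: ['es2015', 'react'] }
      }
    }
  ]
}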
A: module: {
loaders: [
// .ts(x) files should first pass through the Typescript loader, and then through babel
{ test: /\.tsx?$/, loaders: ['babel', 'ts-loader'] },
{ test: /\.css$/, loaders: ['style', 'css-loader'] },
{ test: /\.scss$/, loaders: ['style', 'css-loader?modules&importLoaders=1&localIdentName=[local]-[hash:base64:5]', 'postcss-loader', 'sass'] },
{ test: /\.(png|svg|gif|jpg|jpeg)$/, loaders: [ 'url-loader', 'image-webpack?bypassOnDebug'] },
{ test: /\.(eot|woff|ttf|woff2)$/, loader: "file?name=[name].[ext]" }
]
}
A: Make sure you have installed the following presets and plugins, as listed in node-modules/react-file-viewer/.babelrc file:
{
"presets": [
"react",
"es2015",
"stage-0"
],
"plugins": [
"transform-class-properties",
"transform-es2015-classes",
"transform-es2015-object-super",
"transform-runtime"
]
}
Assuming you already have the react and es2015 in your project, the npm command will be:
npm install --save-dev babel-preset-stage-0 \
babel-plugin-transform-class-properties \
babel-plugin-transform-es2015-classes \
babel-plugin-transform-es2015-object-super \
babel-plugin-transform-runtime
A: You need to import logger from Logging library,
something like this:
import logger from 'logging-Lib';
see more:
https://www.npmjs.com/package/react-file-viewer
A: You need to import the file using require.
// import FileViewer from "react-file-viewer";
const FileViewer = require('react-file-viewer');
After that if you are getting any error like Module not found: Can't resolve 'console'. you can run
npm install console
|
stackoverflow
|
{
"language": "en",
"length": 321,
"provenance": "stackexchange_0000F.jsonl.gz:856408",
"question_score": "6",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44515802"
}
|
68eaafea6c01ff67cca9c2e08487f4cb6f5a1576
|
Stackoverflow Stackexchange
Q: Combining functions and consumers with double-colon notation I often use the double-colon notation for brevity.
I am writing the following method that takes a short list of entities, validates them, and saves them back to the database.
@Override@Transactional
public void bulkValidate(Collection<Entity> transactions)
{
Consumer<Entity> validator = entityValidator::validate;
validator = validator.andThen(getDao()::update);
if (transactions != null)
transactions.forEach(validator);
}
I'd like to know if there is a shorthand syntax avoiding to instantiate the validator variable
Following syntax is invalid ("The target type of this expression must be a functional interface")
transactions.forEach((entityValidator::validate).andThen(getDao()::update));
A: You could do that, but you would need to cast explicitly...
transactions.forEach(((Consumer<Entity>)(entityValidator::validate))
.andThen(getDao()::update));
The thing is that a method reference like this entityValidator::validate does not have a type, it's a poly expression and it depends on the context.
You could also define a method to combine these Consumers:
@SafeVarargs
private static <T> Consumer<T> combine(Consumer<T>... consumers) {
return Arrays.stream(consumers).reduce(s -> {}, Consumer::andThen);
}
And use it:
transactions.forEach(combine(entityValidator::validate, getDao()::update))
|
Q: Combining functions and consumers with double-colon notation I often use the double-colon notation for brevity.
I am writing the following method that takes a short list of entities, validates them, and saves them back to the database.
@Override@Transactional
public void bulkValidate(Collection<Entity> transactions)
{
Consumer<Entity> validator = entityValidator::validate;
validator = validator.andThen(getDao()::update);
if (transactions != null)
transactions.forEach(validator);
}
I'd like to know if there is a shorthand syntax avoiding to instantiate the validator variable
Following syntax is invalid ("The target type of this expression must be a functional interface")
transactions.forEach((entityValidator::validate).andThen(getDao()::update));
A: You could do that, but you would need to cast explicitly...
transactions.forEach(((Consumer<Entity>)(entityValidator::validate))
.andThen(getDao()::update));
The thing is that a method reference like this entityValidator::validate does not have a type, it's a poly expression and it depends on the context.
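A small illustration of that point, using String::toUpperCase so no extra classes are needed: the very same method reference compiles against different functional interfaces depending on the target type, which is why it cannot stand on its own:
Function<String, String> f = String::toUpperCase;  // fine, target type is Function
UnaryOperator<String> u = String::toUpperCase;     // also fine, same reference, different type
// (String::toUpperCase).andThen(...)              // does not compile: no target type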
You could also define a method to combine these Consumers:
@SafeVarargs
private static <T> Consumer<T> combine(Consumer<T>... consumers) {
return Arrays.stream(consumers).reduce(s -> {}, Consumer::andThen);
}
And use it:
transactions.forEach(combine(entityValidator::validate, getDao()::update))
|
stackoverflow
|
{
"language": "en",
"length": 155,
"provenance": "stackexchange_0000F.jsonl.gz:856457",
"question_score": "6",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44515968"
}
|
1116f40abb1d1f4136cd6c6e1cb42250ce9b9afb
|
Stackoverflow Stackexchange
Q: How to handle window scroll event in Angular 4? I can't seem to be able to capture the Window scroll event.
On several sites I found code similar to this:
@HostListener("window:scroll", [])
onWindowScroll() {
console.log("Scrolling!");
}
The snippets often come from version 2. This doesn't seem to work (anymore?) in Angular 4.2.2. If I replace "window:scroll" with "window:touchmove" for example, then the touchmove event is handled fine.
Does anyone know what I'm missing? Thank you very much!
A: In angular 8, implement this code, in my case it worked correctly to change the color of the navbar using scroll...
your template:
<div class="level" (scroll)="scrolling($event)" [ngClass]="{'level-trans': scroll}">
<!-- your template -->
</div>
your .ts
export class HomeNavbarComponent implements OnInit {
scroll:boolean=false;
constructor() { }
ngOnInit() {
window.addEventListener('scroll', this.scrolling, true)
}
scrolling=(s)=>{
let sc = s.target.scrollingElement.scrollTop;
console.log();
if(sc >=100){this.scroll=true}
else{this.scroll=false}
}
your css
.level{
width: 100%;
height: 57px;
box-shadow: 0 0 5px 0 rgba(0, 0,0,0.7);
background: transparent;
display: flex;
position: fixed;
top: 0;
z-index: 5;
transition: .8s all ease;
}
.level-trans{
background: whitesmoke;
}
|
Q: How to handle window scroll event in Angular 4? I can't seem to be able to capture the Window scroll event.
On several sites I found code similar to this:
@HostListener("window:scroll", [])
onWindowScroll() {
console.log("Scrolling!");
}
The snippets often come from version 2. This doesn't seem to work (anymore?) in Angular 4.2.2. If I replace "window:scroll" with "window:touchmove" for example, then the touchmove event is handled fine.
Does anyone know what I'm missing? Thank you very much!
A: In angular 8, implement this code, in my case it worked correctly to change the color of the navbar using scroll...
your template:
<div class="level" (scroll)="scrolling($event)" [ngClass]="{'level-trans': scroll}">
<!-- your template -->
</div>
your .ts
export class HomeNavbarComponent implements OnInit {
scroll:boolean=false;
constructor() { }
ngOnInit() {
window.addEventListener('scroll', this.scrolling, true)
}
scrolling=(s)=>{
let sc = s.target.scrollingElement.scrollTop;
console.log();
if(sc >=100){this.scroll=true}
else{this.scroll=false}
}
your css
.level{
width: 100%;
height: 57px;
box-shadow: 0 0 5px 0 rgba(0, 0,0,0.7);
background: transparent;
display: flex;
position: fixed;
top: 0;
z-index: 5;
transition: .8s all ease;
}
.level-trans{
background: whitesmoke;
}
A: Just in case I was looking to capture the wheel action over an element that had no way to scroll since it didn't have a scroll bar ...
So, what I needed was this:
@HostListener('mousewheel', ['$event'])
onMousewheel(event) {
console.log(event)
}
A: If you happen to be using Angular Material, you can do this:
import { ScrollDispatchModule } from '@angular/cdk/scrolling';
In Ts:
import { ScrollDispatcher } from '@angular/cdk/scrolling';
constructor(private scrollDispatcher: ScrollDispatcher) {
this.scrollDispatcher.scrolled().subscribe(x => console.log('I am scrolling'));
}
And in Template:
<div cdkScrollable>
<div *ngFor="let one of manyToScrollThru">
{{one}}
</div>
</div>
Reference: https://material.angular.io/cdk/scrolling/overview
A: I am not allowed to comment yet. @PierreDuc your answer is spot on, except as @Robert said the document does not scroll. I modified your answer a little bit to use the event sent by the listener and then monitor the source element.
ngOnInit() {
window.addEventListener('scroll', this.scrollEvent, true);
}
ngOnDestroy() {
window.removeEventListener('scroll', this.scrollEvent, true);
}
scrollEvent = (event: any): void => {
const n = event.srcElement.scrollingElement.scrollTop;
}
A: Probably your document isn't scrolling, but a div inside it is. The scroll event only bubbles up to the window if it's called from document. Also if you capture the event from document and call something like stopPropagation, you will not receive the event in window.
If you want to capture all the scroll events inside your application, which will also be from tiny scrollable containers, you have to use the default addEventListener method with useCapture set to true.
This will fire the event when it goes down the DOM, instead of the bubble stage. Unfortunately, and quite frankly a big miss, angular does not provide an option to pass in the event listener options, so you have to use the addEventListener:
export class WindowScrollDirective {
ngOnInit() {
window.addEventListener('scroll', this.scroll, true); //third parameter
}
ngOnDestroy() {
window.removeEventListener('scroll', this.scroll, true);
}
scroll = (event): void => {
//handle your scroll here
//notice the 'odd' function assignment to a class field
//this is used to be able to remove the event listener
};
}
Now this is not all there is to it, because all major browsers (except IE and Edge, obviously) have implemented the new addEventListener spec, which makes it possible to pass an object as third parameter.
With this object you can mark an event listener as passive. This is a recommended thing to do on an event which fires a lot of the time and can interfere with UI performance, like the scroll event. To implement this, you should first check if the current browser supports this feature. On mozilla.org they've posted a method passiveSupported, with which you can check for browser support. You can only use this, though, when you are sure you are not going to use event.preventDefault().
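For reference, a sketch of such a feature check along the lines of the MDN example (not Angular-specific; the property getter just records whether the browser ever reads the passive option):
function passiveSupported(): boolean {
  let supported = false;
  try {
    const options = {
      get passive() { supported = true; return false; }
    };
    window.addEventListener('test', null as any, options);
    window.removeEventListener('test', null as any, options);
  } catch (err) {
    supported = false; // ignore: if anything throws, assume no support
  }
  return supported;
}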
Before I show you how to do that, there is another performance feature you could think of. To prevent change detection from running (the DoCheck gets called every time something async happens within the zone, like an event firing), you should run your event listener outside the zone, and only enter it when it's really necessary. So, let's combine all these things:
export class WindowScrollDirective {
private eventOptions: boolean|{capture?: boolean, passive?: boolean};
constructor(private ngZone: NgZone) {}
ngOnInit() {
if (passiveSupported()) { //use the implementation on mozilla
this.eventOptions = {
capture: true,
passive: true
};
} else {
this.eventOptions = true;
}
this.ngZone.runOutsideAngular(() => {
window.addEventListener('scroll', this.scroll, <any>this.eventOptions);
});
}
ngOnDestroy() {
window.removeEventListener('scroll', this.scroll, <any>this.eventOptions);
//unfortunately the compiler doesn't know yet about this object, so cast to any
}
scroll = (): void => {
if (somethingMajorHasHappenedTimeToTellAngular) {
this.ngZone.run(() => {
this.tellAngular();
});
}
};
}
A: An alternative to window in HostListener: use body.
In core.d.ts:
The global target names that can be used to prefix an event name are
document:, window: and body:
@HostListener('body:scroll', ['$event']) onScroll(event: any) {
console.log(event);
}
|
stackoverflow
|
{
"language": "en",
"length": 808,
"provenance": "stackexchange_0000F.jsonl.gz:856472",
"question_score": "59",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44516017"
}
|
e4b186632e55f8180fb8343b88fe09faf9c8d401
|
Stackoverflow Stackexchange
Q: Error:(81, 0) getMainOutputFile is no longer supported. Use getOutputFileName if you need to determine the file name of the output. I am trying to customize the build process using below code
android.applicationVariants.all { variant ->
def appName = "MyApplication.apk"
variant.outputs.each { output ->
output.outputFile = new File(output.outputFile.parent, appName)
}
}
But from Android Studio 3.0 it is not working; I am getting the below error:
Error:(81, 0) getMainOutputFile is no longer supported. Use getOutputFileName if you need to determine the file name of the output.
A: Just do it like this:
buildTypes {
release {
minifyEnabled false
proguardFiles getDefaultProguardFile('proguard-android.txt'), 'proguard-rules.pro'
signingConfig getSigningConfig()
android.applicationVariants.all { variant ->
def date = new Date();
def formattedDate = date.format('dd MMMM yyyy')
variant.outputs.all {
def newApkName
newApkName = "MyApp-${variant.versionName}, ${formattedDate}.apk"
outputFileName = newApkName;
}
}
}
}
|
Q: Error:(81, 0) getMainOutputFile is no longer supported. Use getOutputFileName if you need to determine the file name of the output. I am trying to customize the build process using below code
android.applicationVariants.all { variant ->
def appName = "MyApplication.apk"
variant.outputs.each { output ->
output.outputFile = new File(output.outputFile.parent, appName)
}
}
But from Android Studio 3.0 it is not working; I am getting the below error:
Error:(81, 0) getMainOutputFile is no longer supported. Use getOutputFileName if you need to determine the file name of the output.
A: Just do it like this:
buildTypes {
release {
minifyEnabled false
proguardFiles getDefaultProguardFile('proguard-android.txt'), 'proguard-rules.pro'
signingConfig getSigningConfig()
android.applicationVariants.all { variant ->
def date = new Date();
def formattedDate = date.format('dd MMMM yyyy')
variant.outputs.all {
def newApkName
newApkName = "MyApp-${variant.versionName}, ${formattedDate}.apk"
outputFileName = newApkName;
}
}
}
}
A: This is covered in the Android Gradle Plugin v3 migration guide:
Using the Variant API to manipulate variant outputs is broken with the
new plugin. It still works for simple tasks, such as changing the APK
name during build time, as shown below:
// If you use each() to iterate through the variant objects,
// you need to start using all(). That's because each() iterates
// through only the objects that already exist during configuration time—
// but those object don't exist at configuration time with the new model.
// However, all() adapts to the new model by picking up object as they are
// added during execution.
android.applicationVariants.all { variant ->
variant.outputs.all {
outputFileName = "${project.name}-${variant.name}-${variant.versionName}.apk"
}
}
There will be a new api for more complex use cases than renaming the output file name.
|
stackoverflow
|
{
"language": "en",
"length": 268,
"provenance": "stackexchange_0000F.jsonl.gz:856474",
"question_score": "4",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44516023"
}
|
0f025ad2f0a2c9afa4cd7de6d140eb5a8be6849d
|
Stackoverflow Stackexchange
Q: Escape quotes in jq I've the following two bash lines
TMPFILE="$(mktemp)" || exit 1
< package.json jq '. + {"foo":'"${BOO}"'}' > "$TMPFILE"
but I get the following error:
jq: error: syntax error, unexpected '}' (Unix shell quoting issues?) at <top-level>, line 1:
. + {"foo":}
jq: 1 compile error
Any idea how to properly escape that part, keeping the double quotes there, to silence the shellcheck error?
A: Just use a variable and save yourself the hassle:
< package.json jq --arg b "$BOO" '. + { foo: $b }'
--arg b "$BOO" creates a variable $b that you can use inside jq, without having to deal with quoting issues.
That said, the reason that your attempt was failing was that you were missing some literal double quotes:
< package.json jq '. + { foo: "'"$BOO"'" }'
The extra double quotes inside each of the single-quoted parts of the command are needed, as the other ones are consumed by the shell before the command string is passed to jq.
This will still fail in the case that the shell variable contains any quotes, so the first approach is the preferred one.
|
Q: Escape quotes in jq I've the following two bash lines
TMPFILE="$(mktemp)" || exit 1
< package.json jq '. + {"foo":'"${BOO}"'}' > "$TMPFILE"
but I get the following error:
jq: error: syntax error, unexpected '}' (Unix shell quoting issues?) at <top-level>, line 1:
. + {"foo":}
jq: 1 compile error
Any idea how to properly escape that part, keeping the double quotes there, to silence the shellcheck error?
A: Just use a variable and save yourself the hassle:
< package.json jq --arg b "$BOO" '. + { foo: $b }'
--arg b "$BOO" creates a variable $b that you can use inside jq, without having to deal with quoting issues.
That said, the reason that your attempt was failing was that you were missing some literal double quotes:
< package.json jq '. + { foo: "'"$BOO"'" }'
The extra double quotes inside each of the single-quoted parts of the command are needed, as the other ones are consumed by the shell before the command string is passed to jq.
This will still fail in the case that the shell variable contains any quotes, so the first approach is the preferred one.
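For example, a quick check you can run in bash (the value of BOO is just an illustration):
BOO='say "hi"'
jq -n --arg b "$BOO" '{foo: $b}'
# {
#   "foo": "say \"hi\""
# }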
|
stackoverflow
|
{
"language": "en",
"length": 193,
"provenance": "stackexchange_0000F.jsonl.gz:856477",
"question_score": "15",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44516029"
}
|
87fe7f9b17b55c8f8a79c35f2288d89cbdde885d
|
Stackoverflow Stackexchange
Q: Can I change the simulator order in Xcode? As title...
The simulator list is too long,
and the default order of the list seems to be alphabetical.
If I want "iPhone Simulator" to show at the top, is there any method?
A: It's a very old question, but I will answer if anyone needs it.
Yes, it is totally possible.
Xcode with sorted simulators
It's not easy, but it's possible. You will need to delete all existing simulators through "Devices and Simulators".
Devices and Simulators
Then you must recreate them one at a time, in the order you want, from top to bottom, numbering them so that they stay in order.
Renaming is no use; a simulator stays in the order in which it was created. Create them numbered, because at creation time they will be placed in the order you consider necessary.
|
Q: Can I change the simulator order in Xcode? As title...
The simulator list is too long,
and the default order of the list seems to be alphabetical.
If I want "iPhone Simulator" to show at the top, is there any method?
A: It's a very old question, but I will answer if anyone needs it.
Yes, it is totally possible.
Xcode with sorted simulators
It's not easy, but it's possible. You will need to delete all existing simulators through "Devices and Simulators".
Devices and Simulators
Then you must recreate them one at a time, in the order you want, from top to bottom, numbering them so that they stay in order.
Renaming is no use; a simulator stays in the order in which it was created. Create them numbered, because at creation time they will be placed in the order you consider necessary.
A: No, it is not ordered alphabetically; I checked. I think you can just remove redundant simulators.
|
stackoverflow
|
{
"language": "en",
"length": 156,
"provenance": "stackexchange_0000F.jsonl.gz:856482",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44516049"
}
|
0695c4b0e6549e5d08daae6d2f832c6bdb738285
|
Stackoverflow Stackexchange
Q: User-Scalable meta attribute not working I tried to use "user-scalable=no" in the meta tag to disable zooming on my webpage. I tested it and the webpage was zoomed in. How can I keep it at the original size but still disable zooming? I also tried initial-scale=1.0, but it doesn't work. So what do I have to put in the meta tag to disable zooming but keep the original size?
thanks in advance
A: Unfortunately there is no easy workable solution yet (as of the moment of writing this). iOS simply ignores user-scalable.
Btw, this seems like a duplicate question. Please see:
disable viewport zooming iOS 10+ safari?
|
Q: User-Scalable meta attribute not working I tried to use "user-scalable=no" in the meta tag to disable zooming on my webpage. I tested it and the webpage was zoomed in. How can I keep it at the original size but still disable zooming? I also tried initial-scale=1.0, but it doesn't work. So what do I have to put in the meta tag to disable zooming but keep the original size?
thanks in advance
A: Unfortunately there is no easy workable solution yet (as of the moment of writing this). iOS simply ignores user-scalable.
Btw, this seems like a duplicate question. Please see:
disable viewport zooming iOS 10+ safari?
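For completeness, the meta tag in question usually looks like the first line below; since iOS 10+ Safari ignores the user-scalable part, the linked question's workarounds fall back to JavaScript, e.g. cancelling the WebKit gesturestart event. Treat the script as a sketch, not a guaranteed fix:
<meta name="viewport" content="width=device-width, initial-scale=1.0, maximum-scale=1.0, user-scalable=no">
<script>
  // iOS Safari workaround sketch: block the pinch gesture before it starts
  document.addEventListener('gesturestart', function (e) { e.preventDefault(); });
</script>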
|
stackoverflow
|
{
"language": "en",
"length": 104,
"provenance": "stackexchange_0000F.jsonl.gz:856483",
"question_score": "5",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44516052"
}
|
39e2cd2f2304b306d2d67091a8ed6b1f0375756b
|
Stackoverflow Stackexchange
Q: Disable screen offset when displaying the software keyboard A QML/C++ project for Android/iOS. When the software keyboard is displayed, we sometimes get an unwanted shift of the entire page upwards.
What can be done? Is it possible to prevent our page from shifting when the software keyboard is displayed?
|
Q: Disable screen offset when displaying the software keyboard A QML/C++ project for Android/iOS. When the software keyboard is displayed, we sometimes get an unwanted shift of the entire page upwards.
What can be done? Is it possible to prevent our page from shifting when the software keyboard is displayed?
|
stackoverflow
|
{
"language": "en",
"length": 55,
"provenance": "stackexchange_0000F.jsonl.gz:856492",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44516085"
}
|
4a3b35159001f2f1a459d6a7b7a2a8affce0f620
|
Stackoverflow Stackexchange
Q: How to enforce types on another interface after implementing an interface I want to create a custom Function which enforces the types.
public interface StringGroupFunction implements Function<String, String> {
}
This is not allowed. The only possibility I found was making StringGroupFunction an abstract class. Any other ideas?
A: This is a common misunderstanding: An interface does not implement another interface, it extends it, as it doesn't provide a body to the functions.
As said in the doc:
If you want to add additional methods to an interface, you have several options. You could create a DoItPlus interface that extends DoIt:
public interface DoItPlus extends DoIt {
boolean didItWork(int i, double x, String s);
}
You can read more about interfaces in the java specification:
If an extends clause is provided, then the interface being declared extends each of the other named interfaces and therefore inherits the member types, methods, and constants of each of the other named interfaces.
These other named interfaces are the direct superinterfaces of the interface being declared.
Any class that implements the declared interface is also considered to implement all the interfaces that this interface extends.
|
Q: How to enforce types on another interface after implementing an interface I want to create a custom Function which enforces the types.
public interface StringGroupFunction implements Function<String, String> {
}
This is not allowed. The only possibility I found was making StringGroupFunction an abstract class. Any other ideas?
A: This is a common misunderstanding: An interface does not implement another interface, it extends it, as it doesn't provide a body to the functions.
As said in the doc:
If you want to add additional methods to an interface, you have several options. You could create a DoItPlus interface that extends DoIt:
public interface DoItPlus extends DoIt {
boolean didItWork(int i, double x, String s);
}
You can read more about interfaces in the java specification:
If an extends clause is provided, then the interface being declared extends each of the other named interfaces and therefore inherits the member types, methods, and constants of each of the other named interfaces.
These other named interfaces are the direct superinterfaces of the interface being declared.
Any class that implements the declared interface is also considered to implement all the interfaces that this interface extends.
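So for the case in the question, a minimal sketch would be (Function here being java.util.function.Function):
public interface StringGroupFunction extends Function<String, String> {
}
Any class implementing StringGroupFunction must then provide String apply(String t), which enforces the types as intended.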
|
stackoverflow
|
{
"language": "en",
"length": 192,
"provenance": "stackexchange_0000F.jsonl.gz:856495",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44516088"
}
|
82677d05240f0f50d1a02424bb83b3030280fc6f
|
Stackoverflow Stackexchange
Q: how to create a login API in node js? I am developing a web application which is completely written in node js.
All the functionalities and the web UI has been completed.
What is happening is that I am able to hit the service APIs without logging into the application. It is as if you can use the services without logging in to the app.
Can anyone tell me how I can create a login API so that the user can't use the services until they log in?
Any help will be appreciated.
A: I recommend using passportjs, a great library for implementing login strategies, and it is really well documented. I hope it will be helpful.
http://passportjs.org/
Regards.
|
Q: how to create a login API in node js? I am developing a web application which is completely written in node js.
All the functionalities and the web UI has been completed.
What is happening is that I am able to hit the service APIs without logging into the application. It is as if you can use the services without logging in to the app.
Can anyone tell me how I can create a login API so that the user can't use the services until they log in?
Any help will be appreciated.
A: I recommend using passportjs, a great library for implementing login strategies, and it is really well documented. I hope it will be helpful.
http://passportjs.org/
Regards.
A: Create a custom middleware where you check whether the user is logged in or not, and then redirect accordingly (to the login page).
Something like below, (I used express-session here)
var isLoggedIn = function(req, res, next) {
    if (!req.session.username) {
        // return here so the request does not fall through to next()
        return res.redirect('/login');
    }
    next();
}
And then use it in the following form with your API.
app.get('/home', isLoggedIn, function(req, res) {
res.sendFile(path.join(__dirname+'/views/home.html'));
});
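For the login side, a minimal sketch of a route that creates the session checked above (validateUser is a placeholder for your own credential check, and body-parsing middleware plus express-session are assumed to be configured):
app.post('/login', function(req, res) {
  if (validateUser(req.body.username, req.body.password)) { // placeholder check
    req.session.username = req.body.username; // mark the session as logged in
    res.redirect('/home');
  } else {
    res.redirect('/login');
  }
});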
A: From what you have written I understand that you have written an API using Node.js. As others have stated, passportJS is the right tool for this, but it comes with many strategies. Try using this one: https://github.com/themikenicholson/passport-jwt. It is intended to be used to secure RESTful endpoints without sessions.
|
stackoverflow
|
{
"language": "en",
"length": 228,
"provenance": "stackexchange_0000F.jsonl.gz:856517",
"question_score": "4",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44516163"
}
|
84b3b82da8218288dd2928c90a302cbd51028b81
|
Stackoverflow Stackexchange
Q: AJAX data not passing to PHP I am having trouble passing AJAX data to PHP. I am experienced with PHP but new to JavaScript.
HTML / JavaScript
<input type="text" id="commodity_code"><button id="button"> = </button>
<script id="source" language="javascript" type="text/javascript">
$('#button').click(function()
{
var commodity_code = $('#commodity_code').val();
$.ajax({
url: 'get_code.php',
data: "commodity_code: commodity_code",
dataType: 'json',
success:function(data) {
var commodity_desc = data[0];
alert(commodity_desc);
}
});
});
</script>
PHP
$commodity_code = $_POST['commodity_code'];
$result = mysql_query("SELECT description FROM oc_commodity_codes WHERE code = '$commodity_code'");
$array = mysql_fetch_row($result);
echo json_encode($array);
I know the general AJAX fetch and PHP code is working as I can manually create the $commodity_code variable and the script works fine. I think my issue lies somewhere in passing the AJAX data to my PHP script.
A: You forgot to add the method: 'POST' in your AJAX Call. And you have some issues with your call. Check below:
$.ajax({
url: 'get_code.php',
method: "POST", // Change here.
data: {commodity_code: commodity_code}, // Change here.
dataType: 'json',
success:function(data) {
var commodity_desc = data[0];
alert(commodity_desc);
}
});
Or to make it simple, use the shorthand function:
$.post('get_code.php', {commodity_code: commodity_code}, function(data) {
var commodity_desc = data[0];
alert(commodity_desc);
});
|
Q: AJAX data not passing to PHP I am having trouble passing AJAX data to PHP. I am experienced with PHP but new to JavaScript.
HTML / JavaScript
<input type="text" id="commodity_code"><button id="button"> = </button>
<script id="source" language="javascript" type="text/javascript">
$('#button').click(function()
{
var commodity_code = $('#commodity_code').val();
$.ajax({
url: 'get_code.php',
data: "commodity_code: commodity_code",
dataType: 'json',
success:function(data) {
var commodity_desc = data[0];
alert(commodity_desc);
}
});
});
</script>
PHP
$commodity_code = $_POST['commodity_code'];
$result = mysql_query("SELECT description FROM oc_commodity_codes WHERE code = '$commodity_code'");
$array = mysql_fetch_row($result);
echo json_encode($array);
I know the general AJAX fetch and PHP code is working as I can manually create the $commodity_code variable and the script works fine. I think my issue lies somewhere in passing the AJAX data to my PHP script.
A: You forgot to add the method: 'POST' in your AJAX Call. And you have some issues with your call. Check below:
$.ajax({
url: 'get_code.php',
method: "POST", // Change here.
data: {commodity_code: commodity_code}, // Change here.
dataType: 'json',
success:function(data) {
var commodity_desc = data[0];
alert(commodity_desc);
}
});
Or to make it simple, use the shorthand function:
$.post('get_code.php', {commodity_code: commodity_code}, function(data) {
var commodity_desc = data[0];
alert(commodity_desc);
});
A: The error is in this line: data: "commodity_code: commodity_code". You should pass commodity_code as a key/value pair in the data object:
$.ajax({
url: 'get_code.php',
method: "POST",
    data: {commodity_code: commodity_code},
dataType: 'json',
success:function(data) {
var commodity_desc = data[0];
alert(commodity_desc);
}
});
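On the PHP side, note that the mysql_* functions used in the question are removed in PHP 7; a sketch of the same lookup with PDO and a prepared statement (the DSN and credentials are placeholders) would be:
$pdo = new PDO('mysql:host=localhost;dbname=mydb', 'user', 'pass');
$stmt = $pdo->prepare('SELECT description FROM oc_commodity_codes WHERE code = ?');
$stmt->execute([$_POST['commodity_code']]);
echo json_encode($stmt->fetch(PDO::FETCH_NUM)); // same shape as mysql_fetch_row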
|
stackoverflow
|
{
"language": "en",
"length": 222,
"provenance": "stackexchange_0000F.jsonl.gz:856533",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44516216"
}
|
f9deed1b10af3906a24d5b9fa76393eacd99f42e
|
Stackoverflow Stackexchange
Q: is for/while loop from python is a generator In an interview, the interviewer asked me about some of the generators used in Python. I know a generator is like a function which yields values instead of returning them.
So can anyone tell me whether a for/while loop is an example of a generator?
A: Short answer: No, but there are other forms of generators.
A for/while loop is a loop structure: it does not emit values and thus is not a generator.
Nevertheless, there are other ways to construct generators.
Your example with yield is for instance a generator:
def some_generator(xs):
for x in xs:
if x:
yield x
But there are also generator expressions, like:
(x for x in xs if x)
Furthermore in python-3.x the range(..), map(..), filter(..) constructs are generators as well.
And of course you can make an iterable (by using an iterable pattern):
class some_generator(object):
def __init__(self, xs):
        self.xs = xs
        self.num = 0
def __iter__(self):
return self
def __next__(self):
return self.next()
def next(self):
while self.num < len(self.xs) and not self.xs[self.num]:
self.num += 1
if self.num < len(self.xs):
res = self.xs[self.num]
self.num += 1
return res
else:
raise StopIteration()
|
Q: is for/while loop from python is a generator In an interview, the interviewer asked me about some of the generators used in Python. I know a generator is like a function which yields values instead of returning them.
So can anyone tell me whether a for/while loop is an example of a generator?
A: Short answer: No, but there are other forms of generators.
A for/while loop is a loop structure: it does not emit values and thus is not a generator.
Nevertheless, there are other ways to construct generators.
Your example with yield is for instance a generator:
def some_generator(xs):
for x in xs:
if x:
yield x
But there are also generator expressions, like:
(x for x in xs if x)
Furthermore in python-3.x the range(..), map(..), filter(..) constructs are generators as well.
And of course you can make an iterable (by using an iterable pattern):
class some_generator(object):
def __init__(self, xs):
        self.xs = xs
        self.num = 0
def __iter__(self):
return self
def __next__(self):
return self.next()
def next(self):
while self.num < len(self.xs) and not self.xs[self.num]:
self.num += 1
if self.num < len(self.xs):
res = self.xs[self.num]
self.num += 1
return res
else:
raise StopIteration()
A: Neither while nor for are themselves generators or iterators. They are control constructs that perform iteration. Certainly, you can use for or while to iterate over the items yielded by a generator, and you can use for or while to perform iteration inside the code of a generator. But neither of those facts make for or while generators.
A: The first line in the python wiki for generators:
Generators functions allow you to declare a function that behaves like an iterator, i.e. it can be used in a for loop.
So in the context of your interview I'd believe they were looking for you to answer about the creation of an iterable.
The wiki for a for loop
In Python this is controlled instead by generating the appropriate sequence.
So you could get pedantic but generally, no, a for loop isn't a generator.
A: for and while are loop structures, and you can use them to iterate over generators. You can take certain elements of a generator by converting it to a list.
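A tiny illustration of that distinction:
gen = (x * x for x in range(5))   # this expression is the generator
for value in gen:                 # the for loop merely consumes it
    print(value)

squares = list(x * x for x in range(5))  # materialise a generator into a list
print(squares)  # [0, 1, 4, 9, 16]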
|
stackoverflow
|
{
"language": "en",
"length": 367,
"provenance": "stackexchange_0000F.jsonl.gz:856556",
"question_score": "6",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44516289"
}
|
af2742a7779404eb5c4b3b622e82c94ee76fe2d8
|
Stackoverflow Stackexchange
Q: Builtin way to transform asynchronous iterable to synchronous iterable list Python 3.6 now has asynchronous iterables. Is there a builtin way to transform an asynchronous iterable into a synchronous iterable (such as a list)?
I currently have this helper function, but it feels very un-pythonic. Is there a better way to do this?
async def aiter_to_list(aiter):
l = []
async for i in aiter:
l.append(i)
return l
A: You can use aiostream.stream.list:
from aiostream import stream
async def agen():
yield 1
yield 2
yield 3
async def main():
lst = await stream.list(agen())
print(lst) # prints [1, 2, 3]
More operators and examples in the documentation.
|
Q: Builtin way to transform asynchronous iterable to synchronous iterable list Python 3.6 now has asynchronous iterables. Is there a builtin way to transform an asynchronous iterable into a synchronous iterable (such as a list)?
I currently have this helper function, but it feels very un-pythonic. Is there a better way to do this?
async def aiter_to_list(aiter):
l = []
async for i in aiter:
l.append(i)
return l
A: You can use aiostream.stream.list:
from aiostream import stream
async def agen():
yield 1
yield 2
yield 3
async def main():
lst = await stream.list(agen())
print(lst) # prints [1, 2, 3]
More operators and examples in the documentation.
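Since main() above is a coroutine, it still has to be driven by the event loop; on Python 3.6 that would look something like:
import asyncio

loop = asyncio.get_event_loop()
loop.run_until_complete(main())  # prints [1, 2, 3]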
A: Your "asynchronous to synchronous" helper is itself asynchronous; not a big change at all. In general: no, you cannot make something asynchronous synchronous. An asynchronous value will be supplied "sometime later"; you cannot make that into "now" because the value doesn't exist "now" and you will have to wait for it, asynchronously.
A: From Python 3.6 you can use Asynchronous Comprehensions
async def async_iter():
for i in range(0,5):
yield i
# async comprehension
sync_list = [gen async for gen in async_iter()]
print(sync_list) # [0, 1, 2, 3, 4]
A: These functions allow you to convert from / to iterable <==> async iterable, not just simple lists.
Basic imports
import asyncio
import threading
import time
DONE = object()
TIMEOUT = 0.001
The function to_sync_iterable will convert any async iterable to a sync iterable:
def to_sync_iterable(async_iterable, maxsize = 0):
def sync_iterable():
queue = asyncio.Queue(maxsize=maxsize)
loop = asyncio.get_event_loop()
t = threading.Thread(target=_run_coroutine, args=(loop, async_iterable, queue))
t.daemon = True
t.start()
while True:
if not queue.empty():
x = queue.get_nowait()
if x is DONE:
break
else:
yield x
else:
                time.sleep(TIMEOUT)
t.join()
return sync_iterable()
def _run_coroutine(loop, async_iterable, queue):
loop.run_until_complete(_consume_async_iterable(async_iterable, queue))
async def _consume_async_iterable(async_iterable, queue):
async for x in async_iterable:
await queue.put(x)
await queue.put(DONE)
You can use it like this:
async def slow_async_generator():
yield 0
await asyncio.sleep(1)
yield 1
await asyncio.sleep(1)
yield 2
await asyncio.sleep(1)
yield 3
for x in to_sync_iterable(slow_async_generator()):
print(x)
The function to_async_iterable will convert any sync iterable to an async iterable:
def to_async_iterable(iterable, maxsize = 0):
async def async_iterable():
queue = asyncio.Queue(maxsize=maxsize)
loop = asyncio.get_event_loop()
task = loop.run_in_executor(None, lambda: _consume_iterable(loop, iterable, queue))
while True:
x = await queue.get()
if x is DONE:
break
else:
yield x
await task
return async_iterable()
def _consume_iterable(loop, iterable, queue):
for x in iterable:
while True:
if not queue.full():
loop.call_soon_threadsafe(queue.put_nowait, x)
break
else:
time.sleep(TIMEOUT)
while True:
if not queue.full():
loop.call_soon_threadsafe(queue.put_nowait, DONE)
break
else:
time.sleep(TIMEOUT)
This one is especially useful for asyncio programs because it won't block the event loop even if the sync iterable blocks. You can use it like this:
def slow_sync_generator():
yield 0
time.sleep(1)
yield 1
time.sleep(1)
yield 2
time.sleep(1)
yield 3
async def async_task():
async for x in to_async_iterable(slow_sync_generator()):
print(x)
asyncio.get_event_loop().run_until_complete(async_task())
|
stackoverflow
|
{
"language": "en",
"length": 453,
"provenance": "stackexchange_0000F.jsonl.gz:856580",
"question_score": "5",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44516372"
}
|
356e22f131cf4f4643a2c0755154b699f1cab76e
|
Stackoverflow Stackexchange
Q: How to join/merge a list of dataframes with common keys in PySpark?
df1
uid1 var1
0 John 3
1 Paul 4
2 George 5
df2
uid1 var2
0 John 23
1 Paul 44
2 George 52
df3
uid1 var3
0 John 31
1 Paul 45
2 George 53
df_lst=[df1,df2,df3]
How do I merge/join the 3 dataframes in the list based on common key uid1 ?
Edit:
Expected output
df1
uid1 var1 var2 var3
0 John 3 23 31
1 Paul 4 44 45
2 George 5 52 53
A: You can join a list of dataframe. Below is the simple example
import spark.implicits._
val df1 = spark.sparkContext.parallelize(Seq(
(0,"John",3),
(1,"Paul",4),
(2,"George",5)
)).toDF("id", "uid1", "var1")
import spark.implicits._
val df2 = spark.sparkContext.parallelize(Seq(
(0,"John",23),
(1,"Paul",44),
(2,"George",52)
)).toDF("id", "uid1", "var2")
import spark.implicits._
val df3 = spark.sparkContext.parallelize(Seq(
(0,"John",31),
(1,"Paul",45),
(2,"George",53)
)).toDF("id", "uid1", "var3")
val df = List(df1, df2, df3)
df.reduce((a,b) => a.join(b, Seq("id", "uid1")))
Output:
+---+------+----+----+----+
| id| uid1|var1|var2|var3|
+---+------+----+----+----+
| 1| Paul| 4| 44| 45|
| 2|George| 5| 52| 53|
| 0| John| 3| 23| 31|
+---+------+----+----+----+
Hope this helps!
|
Q: How to join/merge a list of dataframes with common keys in PySpark?
df1
uid1 var1
0 John 3
1 Paul 4
2 George 5
df2
uid1 var2
0 John 23
1 Paul 44
2 George 52
df3
uid1 var3
0 John 31
1 Paul 45
2 George 53
df_lst=[df1,df2,df3]
How do I merge/join the 3 dataframes in the list based on common key uid1 ?
Edit:
Expected output
df1
uid1 var1 var2 var3
0 John 3 23 31
1 Paul 4 44 45
2 George 5 52 53
A: You can join a list of dataframe. Below is the simple example
import spark.implicits._
val df1 = spark.sparkContext.parallelize(Seq(
(0,"John",3),
(1,"Paul",4),
(2,"George",5)
)).toDF("id", "uid1", "var1")
import spark.implicits._
val df2 = spark.sparkContext.parallelize(Seq(
(0,"John",23),
(1,"Paul",44),
(2,"George",52)
)).toDF("id", "uid1", "var2")
import spark.implicits._
val df3 = spark.sparkContext.parallelize(Seq(
(0,"John",31),
(1,"Paul",45),
(2,"George",53)
)).toDF("id", "uid1", "var3")
val df = List(df1, df2, df3)
df.reduce((a,b) => a.join(b, Seq("id", "uid1")))
Output:
+---+------+----+----+----+
| id| uid1|var1|var2|var3|
+---+------+----+----+----+
| 1| Paul| 4| 44| 45|
| 2|George| 5| 52| 53|
| 0| John| 3| 23| 31|
+---+------+----+----+----+
Hope this helps!
A: Let me suggest a Python answer:
from pyspark import SparkContext
from pyspark.sql import SQLContext  # needed for SQLContext below

SparkContext._active_spark_context.stop()
sc = SparkContext()
sqlcontext = SQLContext(sc)
import pyspark.sql.types as t
rdd_list = [sc.parallelize([('John',i+1),('Paul',i+2),('George',i+3)],1) \
for i in [100,200,300]]
df_list = []
for i,r in enumerate(rdd_list):
schema = t.StructType().add('uid1',t.StringType())\
.add('var{}'.format(i+1),t.IntegerType())
df_list.append(sqlcontext.createDataFrame(r, schema))
df_list[-1].show()
+------+----+
| uid1|var1|
+------+----+
| John| 101|
| Paul| 102|
|George| 103|
+------+----+
+------+----+
| uid1|var2|
+------+----+
| John| 201|
| Paul| 202|
|George| 203|
+------+----+
+------+----+
| uid1|var3|
+------+----+
| John| 301|
| Paul| 302|
|George| 303|
+------+----+
df_res = df_list[0]
for df_next in df_list[1:]:
df_res = df_res.join(df_next,on='uid1',how='inner')
df_res.show()
+------+----+----+----+
| uid1|var1|var2|var3|
+------+----+----+----+
| John| 101| 201| 301|
| Paul| 102| 202| 302|
|George| 103| 203| 303|
+------+----+----+----+
One more option:
from functools import reduce  # needed on Python 3, where reduce is not a builtin

def join_red(left, right):
    return left.join(right, on='uid1', how='inner')

res = reduce(join_red, df_list)
res.show()
+------+----+----+----+
| uid1|var1|var2|var3|
+------+----+----+----+
| John| 101| 201| 301|
| Paul| 102| 202| 302|
|George| 103| 203| 303|
+------+----+----+----+
A: Merge and join are two different things for dataframes. According to what I understand from your question, join would be the one.
Joining them as
df1.join(df2, df1.uid1 == df2.uid1).join(df3, df1.uid1 == df3.uid1)
should do the trick, but I also suggest changing the column names of the df2 and df3 dataframes to uid2 and uid3 so that conflicts don't arise in the future.
|
stackoverflow
|
{
"language": "en",
"length": 382,
"provenance": "stackexchange_0000F.jsonl.gz:856591",
"question_score": "5",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44516409"
}
|
e1e475959d8928b15bd717cbefd5f57ad61db8da
|
Stackoverflow Stackexchange
Q: `fullVisitorId` => clientId, one-to-many mapping? I'm under the impression that fullVisitorId is just a hash of clientId, so there should be a one-to-one mapping between the two. But here I have a situation where a few fullVisitorIds are mapped to two different client IDs (we're collecting the GA Client ID into a User-scoped custom dimension).
Is that possible? Under what circumstances?
Thanks for any clarification on this
Cheers!
[edit: ] attaching screenshot
A: You may be interested in reading about the Google Analytics schema for BigQuery. Some of the relevant parts are:
*
*fullVisitorId: The unique visitor ID (also known as client ID).
*visitId: An identifier for this session. This is part of the value usually stored as the _utmb cookie. This is only unique to the user. For a completely unique ID, you should use a combination of fullVisitorId and visitId.
So client ID and full visitor ID are synonymous, and if you want a unique ID for a particular visit, you should use a combination of fullVisitorId and visitId.
|
Q: `fullVisitorId` => clientId, one-to-many mapping? I'm under the impression that fullVisitorId is just a hash of clientId, so there should be a one-to-one mapping between the two. But here I have a situation where a few fullVisitorIds are mapped to two different client IDs (we're collecting the GA Client ID into a User-scoped custom dimension).
Is that possible? Under what circumstances?
Thanks for any clarification on this
Cheers!
[edit: ] attaching screenshot
A: You may be interested in reading about the Google Analytics schema for BigQuery. Some of the relevant parts are:
*
*fullVisitorId: The unique visitor ID (also known as client ID).
*visitId: An identifier for this session. This is part of the value usually stored as the _utmb cookie. This is only unique to the user. For a completely unique ID, you should use a combination of fullVisitorId and visitId.
So client ID and full visitor ID are synonymous, and if you want a unique ID for a particular visit, you should use a combination of fullVisitorId and visitId.
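For illustration, a minimal BigQuery standard SQL sketch of that combination; the table reference is a placeholder for your own GA export dataset:
SELECT
  CONCAT(fullVisitorId, '-', CAST(visitId AS STRING)) AS unique_session_id,
  fullVisitorId,
  visitId
FROM `my_project.my_dataset.ga_sessions_20170613`
LIMIT 10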
|
stackoverflow
|
{
"language": "en",
"length": 169,
"provenance": "stackexchange_0000F.jsonl.gz:856604",
"question_score": "5",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44516452"
}
|
4922f89a411e0af66c45de8ae0175ced8ff34f06
|
Stackoverflow Stackexchange
Q: Jenkins Dashboard of Pipeline steps I'm using the pipeline plugin (with pipeline stage view) on Jenkins and I want to know if a plugin can make a dashboard of the stage view information as the dashboard plugin does for jobs.
A: I am looking for the same dashboard too; I just want to show all pipelines with their stage views on one page.
The closest is this:
https://github.com/jenkinsci/pipeline-aggregator-view-plugin
And,
https://wiki.jenkins-ci.org/display/JENKINS/Delivery+Pipeline+Plugin
|
Q: Jenkins Dashboard of Pipeline steps I'm using the pipeline plugin (with pipeline stage view) on Jenkins and I want to know if a plugin can make a dashboard of the stage view information as the dashboard plugin does for jobs.
A: I am looking for the same dashboard too; I just want to show all pipelines with their stage views on one page.
The closest is this:
https://github.com/jenkinsci/pipeline-aggregator-view-plugin
And,
https://wiki.jenkins-ci.org/display/JENKINS/Delivery+Pipeline+Plugin
|
stackoverflow
|
{
"language": "en",
"length": 70,
"provenance": "stackexchange_0000F.jsonl.gz:856610",
"question_score": "4",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44516477"
}
|
aed41eaa79190e6b0eb51138febc3f9879fee0a1
|
Stackoverflow Stackexchange
Q: Set Allowed Countries on Store View In Magento 1.4, I was able to set allowed countries on the Store View level, therefore I could have a Website with one Store and multiple Store Views for each of my countries:
Now in Magento 2, I can only set the Allowed Countries on the Website and not on the Store View, the Store View setting looks as follows:
Why do I want to change that? I need to be able to set a different store contact address for each of these Store Views, because I e.g. have an Argentinian and a Bulgarian Store View, so I want to set different addresses but use the same Website/Store.
Unfortunately, I'm also not able to change the Store Contact Address per Store View anymore, this also only works on Website Level.
Am I missing something? Was there a logical change from 1.X to 2.X about the Store Views?
A: I don't know why the allowed country option was removed from settings in store view. But looking in the code shows that the information is used if present. So you can just enter the data into core_config_data (scope: stores, scope_id: your_store_id, value: AT,AB,AC...
|
Q: Set Allowed Countries on Store View In Magento 1.4, I was able to set allowed countries on the Store View level, therefore I could have a Website with one Store and multiple Store Views for each of my countries:
Now in Magento 2, I can only set the Allowed Countries on the Website and not on the Store View, the Store View setting looks as follows:
Why do I want to change that? I need to be able to set a different store contact address for each of these Store Views, because I e.g. have an Argentinian and a Bulgarian Store View, so I want to set different addresses but use the same Website/Store.
Unfortunately, I'm also not able to change the Store Contact Address per Store View anymore, this also only works on Website Level.
Am I missing something? Was there a logical change from 1.X to 2.X about the Store Views?
A: I don't know why the allowed country option was removed from settings in store view. But looking in the code shows that the information is used if present. So you can just enter the data into core_config_data (scope: stores, scope_id: your_store_id, value: AT,AB,AC...
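As an illustration only, a SQL sketch of such an insert; the config path general/country/allow, the store id and the country codes are assumptions you should verify against your own installation:
INSERT INTO core_config_data (scope, scope_id, path, value)
VALUES ('stores', 2, 'general/country/allow', 'AR,BG');
Here scope_id is the id of the store view the setting should apply to.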
A: the correct answer that respects Magento 2 standardization is overloading the system.xml of the magento/Backend/etc/adminhtml.
you should try:
Vendor/ModuleName/etc/adminhtml/system.xml
<?xml version="1.0"?>
<config xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:noNamespaceSchemaLocation="urn:magento:module:Magento_Config:etc/system_file.xsd">
<system>
<section id="general">
<group id="country" translate="label" type="text" sortOrder="1" showInDefault="1" showInWebsite="1" showInStore="1">
<label>Country Options</label>
<field id="allow" translate="label" type="multiselect" sortOrder="2" showInDefault="1" showInWebsite="1" showInStore="1" canRestore="1">
<label>Allow Countries</label>
<source_model>Magento\Directory\Model\Config\Source\Country</source_model>
<can_be_empty>1</can_be_empty>
</field>
</group>
</section>
</system>
</config>
Remember to add overridden module - Magento_Backend
Vendor/ModuleName/etc/module.xml
<?xml version="1.0"?>
<config xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:noNamespaceSchemaLocation="urn:magento:framework:Module/etc/module.xsd">
<module name="Vendor_YourModule" setup_version="1.0.0">
<sequence>
<module name="Magento_Backend"/>
</sequence>
</module>
</config>
|
stackoverflow
|
{
"language": "en",
"length": 277,
"provenance": "stackexchange_0000F.jsonl.gz:856617",
"question_score": "5",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44516494"
}
|
a3f6a2b46c6bc63da53258411bd25328cc354a1d
|
Stackoverflow Stackexchange
Q: What is maximum purchaseToken length provided by google after in app purchases in android? Can anyone tell me the exact length of the purchaseToken provided by Google after a successful in-app purchase in Android?
A: This token is an opaque character sequence that may be up to 1,000 characters long.
|
Q: What is maximum purchaseToken length provided by google after in app purchases in android? Can anyone tell me the exact length of the purchaseToken provided by Google after a successful in-app purchase in Android?
A: This token is an opaque character sequence that may be up to 1,000 characters long.
|
stackoverflow
|
{
"language": "en",
"length": 50,
"provenance": "stackexchange_0000F.jsonl.gz:856620",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44516501"
}
|
14bdb0505333ab11f3401e83b43ff37464771296
|
Stackoverflow Stackexchange
Q: Why is there no std equivalent of boost::enable_if? In C++11, std::enable_if was added to the Standard Library. It is equivalent to boost::enable_if_c whose condition is a bool. This is suitable for rather simple conditions, but as soon as you use predicates that hold their result in a value constant, you have to use the more verbose construct my_predicate<MyArgs>::value to turn it into bool.
This is exactly what boost::enable_if (without _c suffix) was made for.
Why is there no equivalent in Standard Library?
A: The standard library goes a different route here. C++17 added variable templates shortcuts for all the type traits that return a ::value. The pattern is always
template <typename... Args>
inline constexpr bool some_trait_v = some_trait<Args...>::value;
For instance you can write
std::enable_if<std::is_same_v<T1,T2>>
Further the argument for enable_if could be the result of constexpr expressions, for instance
std::enable_if<some_constexpr_function<T1,T2>()>
This way is more generic and does not depend on passing something that must have a value member.
|
Q: Why is there no std equivalent of boost::enable_if? In C++11, std::enable_if was added to the Standard Library. It is equivalent to boost::enable_if_c whose condition is a bool. This is suitable for rather simple conditions, but as soon as you use predicates that hold their result in a value constant, you have to use the more verbose construct my_predicate<MyArgs>::value to turn it into bool.
This is exactly what boost::enable_if (without _c suffix) was made for.
Why is there no equivalent in Standard Library?
A: The standard library goes a different route here. C++17 added variable templates shortcuts for all the type traits that return a ::value. The pattern is always
template <typename... Args>
inline constexpr bool some_trait_v = some_trait<Args...>::value;
For instance you can write
std::enable_if<std::is_same_v<T1,T2>>
Further the argument for enable_if could be the result of constexpr expressions, for instance
std::enable_if<some_constexpr_function<T1,T2>()>
This way is more generic and does not depend on passing something that must have a value member.
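As a hedged illustration (the function name is made up), a complete C++17 usage that combines both ideas:
#include <type_traits>

// Overload participates in overload resolution only when T1 and T2 are the same type.
template <typename T1, typename T2,
          typename = std::enable_if_t<std::is_same_v<T1, T2>>>
bool same_typed_equal(const T1& a, const T2& b) {
    return a == b;
}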
|
stackoverflow
|
{
"language": "en",
"length": 155,
"provenance": "stackexchange_0000F.jsonl.gz:856692",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44516718"
}
|
0a82baa435ba1429a61a3c9b44fc1145af9fd972
|
Stackoverflow Stackexchange
Q: "No locale data has been provided" regardless of what is passed in I am trying to use intl to do some formatting but no matter what I pass in as the locale, I always get the following error message:
ReferenceError: No locale data has been provided for this object yet
I have tried the following:
new Intl.NumberFormat('en-ZA', { minimumFractionDigits: percentDecimals });
as well as
new Intl.NumberFormat(['en-ZA'], { minimumFractionDigits: percentDecimals });
and I am not sure what else to do.
I have added the package to the package.json
"intl": "latest"
and I do import it
import Intl from "intl";
A: Depending on the environment in which you are running this code, you might need to import the locale data as well to polyfill the locale:
import 'intl/locale-data/jsonp/en-ZA'
This import has the side effect of registering the en-ZA locale (IntlPolyfill.__addLocaleData({locale:"en-ZA", ...})) when the polyfill is required.
|
Q: "No locale data has been provided" regardless of what is passed in I am trying to use intl to do some formatting but no matter what I pass in as the locale, I always get the following error message:
ReferenceError: No locale data has been provided for this object yet
I have tried the following:
new Intl.NumberFormat('en-ZA', { minimumFractionDigits: percentDecimals });
as well as
new Intl.NumberFormat(['en-ZA'], { minimumFractionDigits: percentDecimals });
and I am not sure what else to do.
I have added the package to the package.json
"intl": "latest"
and I do import it
import Intl from "intl";
A: Depending on the environment in which you are running this code, you might need to import the locale data as well to polyfill the locale:
import 'intl/locale-data/jsonp/en-ZA'
This import has the side effect of registering the en-ZA locale (IntlPolyfill.__addLocaleData({locale:"en-ZA", ...})) when the polyfill is required.
A: For me it was solved by importing "intl" at the top of the app's source code.
In my case I was doing the import in i18n.ts, which is where i18next is initialized.
I had to move the import to App.tsx instead.
// Polyfill Intl as it is not included in RN
import "intl";
|
stackoverflow
|
{
"language": "en",
"length": 193,
"provenance": "stackexchange_0000F.jsonl.gz:856704",
"question_score": "9",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44516753"
}
|
e3824bea64d96da4912af4f718534aa26bed8adb
|
Stackoverflow Stackexchange
Q: React Native image performance In my React Native app I use image resources in the bundle for some of the UI elements. On some of the screens, I download images from the web for FlatList items. When I switch to a UI that is image heavy, the UI element images loaded from the bundle only show up after almost all the images in the FlatList have loaded, which seems very unprofessional for a native app.
I searched the web for performant image loaders, found some but since I use Expo for development, I can't use native plugins.
Can someone direct me to useful resources to overcome this problem?
|
Q: React Native image performance In my React Native app I use image resources in the bundle for some of the UI elements. On some of the screens, I download images from the web for FlatList items. When I switch to a UI that is image heavy, the UI element images loaded from the bundle only show up after almost all the images in the FlatList have loaded, which seems very unprofessional for a native app.
I searched the web for performant image loaders, found some but since I use Expo for development, I can't use native plugins.
Can someone direct me to useful resources to overcome this problem?
|
stackoverflow
|
{
"language": "en",
"length": 111,
"provenance": "stackexchange_0000F.jsonl.gz:856706",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44516769"
}
|
e5f412b35caf5edd5343938e71d93a0986f18aba
|
Stackoverflow Stackexchange
Q: Install Chrome Headless using NPM Chrome Headless is fantastic!!!
But is there a way I can install Chrome Headless using NPM so that I can use it for my unit tests in automated test environments?
Is there an alternative way of doing this?
Many thanks in advance!!!
A: https://www.npmjs.com/package/chromium
npm install chromium
For windows:
\node_modules\chromium\lib\chromium\chrome-win\chrome.exe
|
Q: Install Chrome Headless using NPM Chrome Headless is fantastic!!!
But is there a way I can install Chrome Headless using NPM so that I can use it for my unit tests in automated test environments?
Is there an alternative way of doing this?
Many thanks in advance!!!
A: https://www.npmjs.com/package/chromium
npm install chromium
For windows:
\node_modules\chromium\lib\chromium\chrome-win\chrome.exe
A: chrome is bundled with the puppeteer package on npm by default. Puppeteer provides a nice API for using chrome headless for automated tests or even regular chrome (headless turned off).
https://www.npmjs.com/package/puppeteer
https://github.com/GoogleChrome/puppeteer
npm install puppeteer
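As a small, hedged illustration of what using the bundled headless Chrome looks like in practice (the URL and file name are placeholders):
// headless-example.js
const puppeteer = require('puppeteer');

(async () => {
  const browser = await puppeteer.launch(); // headless by default
  const page = await browser.newPage();
  await page.goto('https://example.com');
  console.log(await page.title());
  await browser.close();
})();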
A: This worked for me (on Windows Subsystem for Linux, Ubuntu):
npm install puppeteer
sudo apt-get install gconf-service libasound2 libatk1.0-0 libatk-bridge2.0-0 libc6 libcairo2 libcups2 libdbus-1-3 libexpat1 libfontconfig1 libgcc1 libgconf-2-4 libgdk-pixbuf2.0-0 libglib2.0-0 libgtk-3-0 libnspr4 libpango-1.0-0 libpangocairo-1.0-0 libstdc++6 libx11-6 libx11-xcb1 libxcb1 libxcomposite1 libxcursor1 libxdamage1 libxext6 libxfixes3 libxi6 libxrandr2 libxrender1 libxss1 libxtst6 ca-certificates fonts-liberation libappindicator1 libnss3 lsb-release xdg-utils wget
Dependency list is from:
https://github.com/GoogleChrome/puppeteer/blob/master/docs/troubleshooting.md#chrome-headless-doesnt-launch-on-unix
|
stackoverflow
|
{
"language": "en",
"length": 153,
"provenance": "stackexchange_0000F.jsonl.gz:856719",
"question_score": "49",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44516816"
}
|
cd8849670fd6b02007a18726c22b7635576dd31c
|
Stackoverflow Stackexchange
Q: Cannot authenticate with Yahoo OAuth I'm trying to authorize with Yahoo using a link like this:
https://api.login.yahoo.com/oauth2/request_auth?client_id=dj0yJmk9ZHNUWExxZmhHckFDJmQ9WVdrOVdsQmtNa3BKTlRZbWNHbzlNQS0tJnM9Y29uc3VtZXJzZWNyZXQmeD03MA--&redirect_uri=https%3A%2F%2Flastlink.com%2Fauthorize&response_type=code
However it responds with:
Please check the redirect URI in your request and submit again
I tried to search for this topic on the YDN forums but they seem to be broken.
A: The domain of the redirect_uri has to be the same as the callback domain for the YDN App.
I can get a code using redirect_uri=oob:
https://api.login.yahoo.com/oauth2/request_auth?client_id=dj0yJmk9ZHNUWExxZmhHckFDJmQ9WVdrOVdsQmtNa3BKTlRZbWNHbzlNQS0tJnM9Y29uc3VtZXJzZWNyZXQmeD03MA--&redirect_uri=oob&response_type=code
|
Q: Cannot authenticate with Yahoo OAuth I'm trying to authorize with Yahoo using a link like this:
https://api.login.yahoo.com/oauth2/request_auth?client_id=dj0yJmk9ZHNUWExxZmhHckFDJmQ9WVdrOVdsQmtNa3BKTlRZbWNHbzlNQS0tJnM9Y29uc3VtZXJzZWNyZXQmeD03MA--&redirect_uri=https%3A%2F%2Flastlink.com%2Fauthorize&response_type=code
However it responds with:
Please check the redirect URI in your request and submit again
I tried to search for this topic on the YDN forums but they seem to be broken.
A: The domain of the redirect_uri has to be the same as the callback domain for the YDN App.
I can get a code using redirect_uri=oob:
https://api.login.yahoo.com/oauth2/request_auth?client_id=dj0yJmk9ZHNUWExxZmhHckFDJmQ9WVdrOVdsQmtNa3BKTlRZbWNHbzlNQS0tJnM9Y29uc3VtZXJzZWNyZXQmeD03MA--&redirect_uri=oob&response_type=code
|
stackoverflow
|
{
"language": "en",
"length": 76,
"provenance": "stackexchange_0000F.jsonl.gz:856720",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44516819"
}
|
2dc19cdac5ca76b2d13786c2a9a96a126cdcaf95
|
Stackoverflow Stackexchange
Q: telegram private channel unique invite link I created a private channel in Telegram.
I want to know if there is any way to create a unique invite link that I can share with the people I want to join my channel, unique as in single-use.
Telegram actually gives you an invite link, but it is always the same, so if I give it to a person he can give it to anyone he wants. I need a method to avoid this. I tried some URL shortening services to hide the invite link, but in the end they still show the initial invite link.
Any suggestion?
I tried http://once.ly/index.html
A: Edit:
Now you can generate unique links for different people, and limit how many people can join and change the expiring time!
Original answer (2017):
There is no way to create a unique invite link at this time.
But if I were you, I would create a bot, send link via bot with inline button, which is default hiding link behind text.
For example, you give your user a link like t.me/bot?start=channel_link, and when your bot received /start channel_link, send a message with inline button with url parameter.
|
Q: telegram private channel unique invite link I created a private channel in Telegram.
I want to know if there is any way to create a unique invite link that I can share with the people I want to join my channel, unique as in single-use.
Telegram actually gives you an invite link, but it is always the same, so if I give it to a person he can give it to anyone he wants. I need a method to avoid this. I tried some URL shortening services to hide the invite link, but in the end they still show the initial invite link.
Any suggestion?
I tried http://once.ly/index.html
A: Edit:
Now you can generate unique links for different people, and limit how many people can join and change the expiring time!
Original answer (2017):
There is no way to create a unique invite link at this time.
But if I were you, I would create a bot, send link via bot with inline button, which is default hiding link behind text.
For example, you give your user a link like t.me/bot?start=channel_link, and when your bot received /start channel_link, send a message with inline button with url parameter.
A: Try this one from the documentation.
<a href="https://t.me/botname_bot?start=vCH1vGWJxfSeofSAs0K5PA">start=channel_link</a>
A: Ok, now this is possible using tg bots: the createchatinvitelink endpoint is available and has member_limit param. Set the latter to 1 and the link becomes "unique".
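A minimal sketch of that call using Python and requests; the bot token and chat id are placeholders you have to replace:
import requests

BOT_TOKEN = "123456:ABC..."   # placeholder bot token
CHAT_ID = "-1001234567890"    # placeholder channel id

# createChatInviteLink with member_limit=1 yields a link only one person can use
resp = requests.post(
    f"https://api.telegram.org/bot{BOT_TOKEN}/createChatInviteLink",
    data={"chat_id": CHAT_ID, "member_limit": 1},
)
print(resp.json()["result"]["invite_link"])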
|
stackoverflow
|
{
"language": "en",
"length": 233,
"provenance": "stackexchange_0000F.jsonl.gz:856731",
"question_score": "4",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44516853"
}
|
a3707ceafee2d3b998ff2d0c04ad67d8d672753f
|
Stackoverflow Stackexchange
Q: Shell script to monitor postgresql database status and alerts I need a PostgreSQL shell script to alert me if the database goes down.
A: pg_isready is a utility for checking the connection status of a PostgreSQL database server. The exit status specifies the result of the connection check.
Example:
while true; do
if ! /usr/bin/pg_isready &>/dev/null; then
echo 'alert';
fi;
sleep 3;
done;
This will check the status of the PostgreSQL database every 3 seconds and echoes "alert" if it is down.
https://www.postgresql.org/docs/9.3/static/app-pg-isready.html
|
Q: Shell script to monitor postgresql database status and alerts I need a PostgreSQL shell script to alert me if the database goes down.
A: pg_isready is a utility for checking the connection status of a PostgreSQL database server. The exit status specifies the result of the connection check.
Example:
while true; do
if ! /usr/bin/pg_isready &>/dev/null; then
echo 'alert';
fi;
sleep 3;
done;
This will check the status of the PostgreSQL database every 3 seconds and echoes "alert" if it is down.
https://www.postgresql.org/docs/9.3/static/app-pg-isready.html
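A hedged variation that actually sends an alert instead of just echoing; it assumes a working mail command (or any configured MTA) and uses placeholder host, port and address:
if ! /usr/bin/pg_isready -h localhost -p 5432 &>/dev/null; then
    echo "PostgreSQL is down on $(hostname) at $(date)" \
        | mail -s "PostgreSQL DOWN" dba@example.com
fi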
|
stackoverflow
|
{
"language": "en",
"length": 82,
"provenance": "stackexchange_0000F.jsonl.gz:856796",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44517022"
}
|
3708c61a575273c6ef4b126b63e5d4807fc37840
|
Stackoverflow Stackexchange
Q: Running two scripts in parallel in Pycharm I have two scripts, server.py and client.py.
I want to be able to start them running, in that order, with one action.
How can I achieve that in PyCharm? Please note that I want to be able to set breakpoints.
A: You can give Multirun a try:
Allows to run multiple run configurations at once: group multiple run
configurations and start them in a single click. Not only application
and test run configurations can be grouped, but other Multirun
configurations can be organized into single run configuration.
It will let you run all configurations in Debug mode and use breakpoints.
|
Q: Running two scripts in parallel in Pycharm I have two scripts, server.py and client.py.
I want to be able to start them running, in that order, with one action.
How can I achieve that in PyCharm? Please note that I want to be able to set breakpoints.
A: You can give Multirun a try:
Allows to run multiple run configurations at once: group multiple run
configurations and start them in a single click. Not only application
and test run configurations can be grouped, but other Multirun
configurations can be organized into single run configuration.
It will let you run all configurations in Debug mode and use breakpoints.
A: You can also do:
*
*Run -> Edit Configurations...
*Find and add a new "Compound" project
*Add your configurations into a single component
*Run it
A: Run -> Edit Configurations...
Edit your script configuration
Check the "Allow parallel run" checkbox at the top
Then just run then normally.
You can also pin run tab to not be closed
|
stackoverflow
|
{
"language": "en",
"length": 166,
"provenance": "stackexchange_0000F.jsonl.gz:856805",
"question_score": "21",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44517055"
}
|
c18e4a4f71fa415bb68c4beebfb33d503b4b14ab
|
Stackoverflow Stackexchange
Q: Angular4: How do I *ngFor, for every number in an int? I have an int property named "count" in my component.
I would like to display a p tag X amount of times, where X is the int my count property equals. Is there really no simple way to do this, besides messing with fake arrays?
A: You could easily do it with a pipe filter which transforms an empty array into a given number of items depending on a numeric filter parameter.
Pipe filter
import { Pipe, PipeTransform } from '@angular/core';
@Pipe({
name: 'range',
pure: false
})
export class RangePipe implements PipeTransform {
transform(items: any[], quantity: number): any {
items.length = 0;
for (let i = 0; i < quantity; i++) {
items.push(i);
}
return items;
}
}
View
<div *ngFor="let n of [] | range:100"></div>
|
Q: Angular4: How do I *ngFor, for every number in an int? I have an int property named "count" in my component.
I would like to display a p tag X amount of times, where X is the int my count property equals. Is there really no simple way to do this, besides messing with fake arrays?
A: You could easily do it with a pipe filter which transforms an empty array into a given number of items depending on a numeric filter parameter.
Pipe filter
import { Pipe, PipeTransform } from '@angular/core';
@Pipe({
name: 'range',
pure: false
})
export class RangePipe implements PipeTransform {
transform(items: any[], quantity: number): any {
items.length = 0;
for (let i = 0; i < quantity; i++) {
items.push(i);
}
return items;
}
}
View
<div *ngFor="let n of [] | range:100"></div>
A: Plnkr: https://plnkr.co/edit/Yn775KSbBeUPeyaI9sep?p=preview
You can create another variable called countObservable
countObservable = Observable.range(0, this.count).toArray();
Use async for in HTML
<p *ngFor="let num of countObservable | async">Hello {{num}}</p>
Update
If we need to update the number we can use flatMap.
Instead of above code for countObservable, use this
count$= new BehaviorSubject(10);
countObservable$ =
this.count$.flatMap(count => Observable.range(0, count).toArray()) ;
To change the number value, just update count$
this.count$.next(newNum);
A: I kind of disliked the approach of creating an empty array of size n every time that I wanted to render an element n times, so I created a custom structural directive:
import { Directive, Input, TemplateRef, ViewContainerRef, isDevMode, EmbeddedViewRef } from '@angular/core';
export class ForNumberContext {
constructor(public count: number, public index: number) { }
get first(): boolean { return this.index === 0; }
get last(): boolean { return this.index === this.count - 1; }
get even(): boolean { return this.index % 2 === 0; }
get odd(): boolean { return !this.even; }
}
@Directive({
selector: '[ForNumber]'
})
export class ForNumberDirective {
@Input() set forNumberOf(n: number) {
this._forNumberOf = n;
this.generate();
}
private _forNumberOf: number;
constructor(private _template: TemplateRef<ForNumberContext>,
private _viewContainer: ViewContainerRef) { }
@Input()
set ngForTemplate(value: TemplateRef<ForNumberContext>) {
if (value) {
this._template = value;
}
}
private generate() {
for (let i = 0; i < this._forNumberOf; i++) {
this._viewContainer.createEmbeddedView(this._template, new ForNumberContext(this._forNumberOf, i));
}
}
}
And then u can use it as follows:
<ng-template ForNumber [forNumberOf]="count" let-index="index">
<span>Iteration: {{index}}!</span></ng-template>
Please note, I haven't tested it extensively so I can't promise that it's bulletproof :)
A: I solved it using :
In TS:
months = [...Array(12).keys()];
In Template:
<p *ngFor="let month of months">{{month+1}}</p>
A: According to the Angular documentation:
createEmbeddedView() instantiates an embedded view and inserts it into this container. It accepts a context object as its second parameter:
abstract createEmbeddedView(templateRef: TemplateRef, context?: C, index?: number): EmbeddedViewRef.
When Angular creates the template by calling createEmbeddedView, it can also pass a context that will be used inside the ng-template.
Using the optional context parameter, you can extract its values within the template
just as you would with *ngFor.
app.component.html:
<p *for="randomNumber; let i = index; let first = first; let last = last; let even = even, let odd = odd; length = length">
index :{{i}},
length:{{length}},
is first : {{first}},
is last : {{last}},
is even : {{even}},
is odd : {{odd}}
</p>
for.directive.ts:
import { Directive, Input, TemplateRef, ViewContainerRef, EventEmitter } from '@angular/core';
@Directive({
selector: '[for]'
})
export class ForDirective {
constructor(
private templateRef: TemplateRef<any>,
private viewContainer: ViewContainerRef) { }
@Input('for') set loop(num: number) {
for (var i = 0; i < num; i++)
this.viewContainer.createEmbeddedView(
this.templateRef,
{
index: i,
odd: i % 2 == 1,
even: i % 2 == 0,
first: i == 0,
last: i == num - 1,
length: num,
}
);
}
}
|
stackoverflow
|
{
"language": "en",
"length": 602,
"provenance": "stackexchange_0000F.jsonl.gz:856809",
"question_score": "6",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44517077"
}
|
84eae98590504c0c80a72e3f1ee7ceb589b02a3d
|
Stackoverflow Stackexchange
Q: Twig - format date I am working on a project built with craft cms, that uses twig templating, and I would like to format the date in to this kind of format:
13. June 2017
I am not sure how to do that, I have tried with php date functions:
{{ entry.dateUpdated.date('j. F Y') }}
and also with:
{{ entry.dateUpdated.localeDate('j. F Y') }}
But, none of them worked. How can I do that?
A: try this
{{ entry.dateUpdated | date('j. F Y') }}
|
Q: Twig - format date I am working on a project built with craft cms, that uses twig templating, and I would like to format the date in to this kind of format:
13. June 2017
I am not sure how to do that, I have tried with php date functions:
{{ entry.dateUpdated.date('j. F Y') }}
and also with:
{{ entry.dateUpdated.localeDate('j. F Y') }}
But, none of them worked. How can I do that?
A: try this
{{ entry.dateUpdated | date('j. F Y') }}
|
stackoverflow
|
{
"language": "en",
"length": 84,
"provenance": "stackexchange_0000F.jsonl.gz:856814",
"question_score": "5",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44517097"
}
|
dc96bfe200f7f88bf5fdbb4efc7188d84e955c8d
|
Stackoverflow Stackexchange
Q: How to find the python list items that start with a prefix I have a list that contains some items like:
"GFS01_06-13-2017 05-10-18-38.csv"
"Metadata_GFS01_06-13-2017 05-10-18-38.csv"
How do I find the list items that start with "GFS01_"?
In SQL I use query: select item from list where item like 'GFS01_%'
A: You have several options, but most obvious are:
Using list comprehension with a condition:
result = [i for i in some_list if i.startswith('GFS01_')]
Using filter (which returns iterator)
result = filter(lambda x: x.startswith('GFS01_'), some_list)
|
Q: How to find the python list items that start with a prefix I have a list that contains some items like:
"GFS01_06-13-2017 05-10-18-38.csv"
"Metadata_GFS01_06-13-2017 05-10-18-38.csv"
How do I find the list items that start with "GFS01_"?
In SQL I use query: select item from list where item like 'GFS01_%'
A: You have several options, but most obvious are:
Using list comprehension with a condition:
result = [i for i in some_list if i.startswith('GFS01_')]
Using filter (which returns iterator)
result = filter(lambda x: x.startswith('GFS01_'), some_list)
A: You should try something like this :
[item for item in my_list if item.startswith('GFS01_')]
where "my_list" is your list of items.
A: If you really want the string output like this "GFS01_06-13-2017 05-10-18-38.csv","GFS01_xxx-xx-xx.csv", you could try this:
', '.join([item for item in myList if item.startswith('GFS01_')])
Or with quotes
', '.join(['"%s"' % item for item in myList if item.startswith('GFS01_')])
Filtering a list gives you a list again, and that then needs to be handled as per your requirements.
|
stackoverflow
|
{
"language": "en",
"length": 160,
"provenance": "stackexchange_0000F.jsonl.gz:856846",
"question_score": "33",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44517191"
}
|
727dfb60b3c99f67ed0e79e855f3d38da0c0d099
|
Stackoverflow Stackexchange
Q: How do I rebase a list of commits? Say I have the commits
*
*bb8d6cc3c7
*aa213lk321
*f0j9123r9j
I want to do an interactive rebase for each of them. The only way I can think of doing this is typing git rebase -i 'commit_hash' for each commit and doing the rebases one by one.
Is there an easier way to rebase all at once, smt. like
git rebase -i 'commit_hash_1' 'commit_hash_2' 'commit_hash_3' ?
A: This is the list of the commits you want to squash.
1. bb8d6cc3c7 <- This is the HEAD or latest commit
2. aa213lk321
3. f0j9123r9j
Command
git rebase -i HEAD~3, if all three commits above are in sequence.
Then you need to pick bb8d6cc3c7 and squash the other two commits by entering s, which means squash.
If the commits are not in sequence, then just do git rebase -i; this will open an editor where you pick one commit and squash the other two commits onto it.
|
Q: How do I rebase a list of commits? Say I have the commits
*
*bb8d6cc3c7
*aa213lk321
*f0j9123r9j
I want to do an interactive rebase for each of them. The only way I can think of doing this is typing git rebase -i 'commit_hash' for each commit and doing the rebases one by one.
Is there an easier way to rebase all at once, smt. like
git rebase -i 'commit_hash_1' 'commit_hash_2' 'commit_hash_3' ?
A: This is the list of the commits you want to squash.
1. bb8d6cc3c7 <- This is the HEAD or latest commit
2. aa213lk321
3. f0j9123r9j
Command
git rebase -i HEAD~3, if all three commits above are in sequence.
Then you need to pick bb8d6cc3c7 and squash the other two commits by entering s, which means squash.
If the commits are not in sequence, then just do git rebase -i; this will open an editor where you pick one commit and squash the other two commits onto it.
A: NOTE: This answer depends on capabilities of the text-editor opened by git rebase. I'm using vim for which it works.
Say we have a range of commits in branch my-branch on top of a revision (i.e. branch or tag) named my-branch-base.
*
*a8d737b50 (my-branch)
*bb8d6cc3c7
*79a2e5cc5
*fec125378
*193cf566b
*aa213lk321
*f0j9123r9j
*ea781de38
*61785bd55
*04428cafd (my-branch-base)
In this range there are some commits to edit:
*
*bb8d6cc3c7
*aa213lk321
*f0j9123r9j
Perhaps they come from a file, perhaps they come from a command (such as git log -G"MyFunction"). I'll show how to select them via a command.
Make a copy of your branch HEAD pre rebase
(because: you will have a backup in case your rebase does not work out as you intended; and, if you select commits to edit via git log, git needs a reference to them during the rebase):
git branch -c my-branch my-branch-prerebase
Then start the rebase:
git rebase --interactive my-branch-base
Opens your configured editor with:
pick a8d737b50 {commit msg}
pick bb8d6cc3c7 {commit msg}
pick 79a2e5cc5 {commit msg}
pick fec125378 {commit msg}
pick 193cf566b {commit msg}
pick aa213lk321 {commit msg}
pick f0j9123r9j {commit msg}
pick ea781de38 {commit msg}
pick 61785bd55 {commit msg}
Open a second buffer (in the editor started by git rebase) with the output of the revision selection command, or the file with the hashes to edit.
In vim you can do
(note you need to specify my-branch-prerebase to git log, since HEAD now points to my-branch-base and git log would not select any applicable commit hashes.):
:vsplit
:enew
:r !git log --oneline -G"MyFunction" my-branch-prerebase
Change the (generated) list of revisions into a format that your editor can use as an expression to select lines to edit.
In vim you can do (in the right window)
(note: \x selects hex digits and OP's hashes contain non-hex digits such as j and k, so OP might need \w; and, in my situation git log --oneline produced 9-digit hash abbreviations, so you might need a different length; the made-up example needs {9,10}):
:%s/\v(\x{7,9}).*\n/\1|/
The \n concatenates the lines to a single line.
In vim we copy this to a register and paste it later into the minibuffer via CTRL+r.
Return to the buffer with rebase actions. Search and replace 'pick' with 'edit' on the lines matching the selected commit hashes.
In vim:
:g/\vbb8d6cc3c7|aa213lk321|f0j9123r9j/ :s/pick/edit/
Save the list of rebase actions and discard the temporary buffer with the selected commit hashes. Close the editor. git rebase starts rebasing and stops on the marked commits.
A: Well, suppose you have a master branch and a some-feature branch.
If someone updates the master branch and you want to rebase your some-feature branch, ...
Just
git checkout some-feature
git rebase -i master
All of some-feature's commits are replayed on top of the master branch.
|
stackoverflow
|
{
"language": "en",
"length": 616,
"provenance": "stackexchange_0000F.jsonl.gz:856873",
"question_score": "5",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44517286"
}
|
53ff9e72920b97fb8ddf3e1fadcf189254bf9ee3
|
Stackoverflow Stackexchange
Q: How can I re-upload package to pypi? I uploaded a package to PyPI, but I ran into some trouble after the upload, so I deleted it completely and tried to re-upload, but I got the following error after uploading again:
HTTP Error 400: This filename has previously been used, you should use a different version.
error: HTTP Error 400: This filename has previously been used, you should use a different version.
It seems PyPI tracks upload activity; I deleted the project and account and uploaded again, but I can still see the previous record. Why?
How can I solve the problem?
A:
Yes, you can re-upload the package with the same name.
I had faced a similar issue; what I did was increase the version number in setup.py, delete the folders generated by running python setup.py sdist (i.e. dist and your_package_name.egg-info), and run python setup.py sdist again to make the package ready for upload.
I think PyPI tracks the package from the folders generated by sdist (i.e. dist and your_package_name.egg-info), so you have to delete them.
|
Q: How can I re-upload package to pypi? I uploaded a package to PyPI, but I ran into some trouble after the upload, so I deleted it completely and tried to re-upload, but I got the following error after uploading again:
HTTP Error 400: This filename has previously been used, you should use a different version.
error: HTTP Error 400: This filename has previously been used, you should use a different version.
It seems PyPI tracks upload activity; I deleted the project and account and uploaded again, but I can still see the previous record. Why?
How can I solve the problem?
A:
Yes, you can re-upload the package with the same name.
I had faced a similar issue; what I did was increase the version number in setup.py, delete the folders generated by running python setup.py sdist (i.e. dist and your_package_name.egg-info), and run python setup.py sdist again to make the package ready for upload.
I think PyPI tracks the package from the folders generated by sdist (i.e. dist and your_package_name.egg-info), so you have to delete them.
A: In short, you cannot reupload a distribution with the same name due to stability reasons. Here you can read more about this issue at https://github.com/pypa/packaging-problems/issues/74.
You need to change the distribution's file name, usually done by increasing the version number, and upload it again.
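For illustration, the usual sequence after bumping the version in setup.py looks roughly like this (twine is assumed to be installed and the package name is a placeholder):
rm -rf dist/ your_package_name.egg-info/
python setup.py sdist
twine upload dist/*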
A: If you are running your local pypi server then you can use -o,--overwrite option which will allow overwriting existing package files.
pypi-server -p 8080 --overwrite ~/packages &
|
stackoverflow
|
{
"language": "en",
"length": 247,
"provenance": "stackexchange_0000F.jsonl.gz:856880",
"question_score": "16",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44517306"
}
|
3b398f8b1bd4ae7151a0e202ac38037d1d8693eb
|
Stackoverflow Stackexchange
Q: Android Emulator "To Start Android, enter your password" and it reminds me the password is wrong" I just installed Android Studio; when I run the Android Emulator it says: "To Start Android, enter your password" and then tells me the password is wrong.
How can I fix it?
I would be very glad if someone has an answer for this
A: In VS2017 version 15.9.3,perform the following steps :
*
*Open Tools > Android Device Manager and find the device that you are debugging
*Click with Right Click on it
*Choose "Factory Reset"
*And you will get brand new device, ready to debug.
Good Luck!
|
Q: Android Emulator "To Start Android, enter your password" and it reminds me the password is wrong" I just installed Android Studio; when I run the Android Emulator it says: "To Start Android, enter your password" and then tells me the password is wrong.
How can I fix it?
I would be very glad if someone has an answer for this
A: In VS2017 version 15.9.3,perform the following steps :
*
*Open Tools > Android Device Manager and find the device that you are debugging
*Click with Right Click on it
*Choose "Factory Reset"
*And you will get brand new device, ready to debug.
Good Luck!
A: Wiping data in Android Virtual Device Manager works for me.
Tools -> Android -> AVD Manager -> Actions (triangle down) -> Wipe Data
A: Setting the ANDROID_SDK_HOME as described in a previous answer didn't work for me (although it did start using the folder I specified, placing a new .android folder there and using it)
Neither did doing a Factory Reset by right-clicking on the device in the Android Device Manager in Visual Studio 2019.
The only thing that worked for me was making sure that the checkbox for Google APIs was checked, and making sure it was on x86 of course; selecting x86_64 grays out the two checkboxes.
Note that it doesn't matter whether I use a Pixel or some other device such as a Nexus; it apparently always needs to use the image with the Google APIs.
A: I had this problem on my Visual Studio 2017 Community development setup and here is how I solved it:
*
*Shut down the AVD if it's running.
*While in the Android Device Manager, select the AVD with the problem.
*Right click or select the context sensitive menu and select "Factory Reset".
*Start the AVD again and it should boot normally.
*Just be sure that all the data you've saved on the AVD will be lost once you factory-reset it.
A: Windows
Hyosoo Kim is right about the problem (non-ascii characters in the user directory), but I think this is an easier solution :
Add a new User Environment Variable :
Control Panel -> System -> Advanced System Settings -> Environment Variables
Add a new user variable
Name : ANDROID_SDK_HOME
Value : the path to a directory not in your user home (C:\Android for example)
There you go !
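If you prefer the command line, the same user variable can presumably be set with setx (the path here is just an example):
setx ANDROID_SDK_HOME "C:\Android"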
A: If your user directory contains non-ASCII characters, then try to change the user name to an English name.
I had the same problem and it was solved when I logged on to the root account.
A: My case was the one described by Hyosoo Kim. I solved the problem by renaming my user profile folder (following these instructions: Change the Name of a User Profile Folder in Windows 10) and deleting the .android folder under my profile folder.
A: I had the same issue but was using Visual Studio 2017 and Xamarin rather than Android Studio. Doing a Factory reset in the Android Device Manager in Visual Studio didn't work for me.
So I installed Android Studio and as per Yao's answer, launched the AVD Manager from in there then selected 'Wipe Data' from the triangle dropdown. This resolved the issue and I was able to use the emulator from within Visual Studio again.
The Android Device Manager in Visual Studio seems to be missing some features compared to the one in Android Studio, so this solution is the only one I could find that worked.
A: I had this issue when trying to use the x86_64 processor, even when I created a brand new device. Using the x86 processor in the device properties resolved the issue for me.
A: Open
...\USER\.android\AVD_NAME.avd\emulator-user.ini
There's a "uuid" which contains the password.
For example:
uuid = 1568486155483
And 1568486155483 is the password.
Worked 4 me :)
A: This solution working for me on VS2017/Xamarin
*
*Turn Off your device
*Open MS-DOS console
*Start your device with following command:
emulator.exe -verbose -feature WindowsHypervisorPlatform -avd
"My_device_name" -prop emu.uuid="your_id" -skip-adb-auth -wipe-data
*Turn Off your device (no more password is needed)
*Launch your device with 'Android Device Manager'
More information will be available in your log file:
See most recent file here:
C:\Users\"my_account_name"\AppData\Roaming\XamarinDeviceManager
A: Try this:
*
*Shutdown the AVD (if it's running).
*Go to the config file of this AVD, The default location is the following (where AVD_NAME is the AVD name, and USER is your system user):
*
*Mac and Linux: ~/.android/avd/AVD_NAME.avd/config.ini
*XP: C:\Documents and Settings\USER\.android\AVD_NAME.avd\config.ini
*Vista - C:\Users\USER\.android\AVD_NAME.avd\config.ini
*Win10, C:\Users\<user>\.android\avd\Android_Accelerated_x86_Oreo.avd\config.ini
*Remove any blank line from the beginning and the end of the file.
*Finally, run your AVD directly from the AVD Manager.
hope this helps.
A: Go to your Users folder > .android > avd folder > then delete the avds configuration or wipe out everything.
Then go back to your VS and click Android Emulator. VS will show you their recommended configuration. Use that and run your project with that emulator.
|
stackoverflow
|
{
"language": "en",
"length": 818,
"provenance": "stackexchange_0000F.jsonl.gz:856915",
"question_score": "44",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44517420"
}
|
b59a2a0d554f829f184564d4ad8827ca61810cf3
|
Stackoverflow Stackexchange
Q: MySQL doesn't yet support 'LIMIT & IN/ALL/ANY/SOME subquery' While I am running this query
SELECT * FROM
tb_emp_daily_status d
where
d.shiftdate>='2017-06-07' and
shiftdate<='2017-06-13' and
emp_id in (SELECT id FROM tb_employee e WHERE e.user_id = '18145' LIMIT 20)
order by d.id asc
How to write this query in mysql
A: Try this:
SELECT * FROM
tb_emp_daily_status d
where
d.shiftdate>='2017-06-07' and
shiftdate<='2017-06-13' and
emp_id in (SELECT * FROM (SELECT id FROM tb_employee e WHERE e.user_id = '18145' LIMIT 20)
as t)
order by d.id asc
|
Q: MySQL doesn't yet support 'LIMIT & IN/ALL/ANY/SOME subquery' While I am running this query
SELECT * FROM
tb_emp_daily_status d
where
d.shiftdate>='2017-06-07' and
shiftdate<='2017-06-13' and
emp_id in (SELECT id FROM tb_employee e WHERE e.user_id = '18145' LIMIT 20)
order by d.id asc
How to write this query in mysql
A: Try this:
SELECT * FROM
tb_emp_daily_status d
where
d.shiftdate>='2017-06-07' and
shiftdate<='2017-06-13' and
emp_id in (SELECT * FROM (SELECT id FROM tb_employee e WHERE e.user_id = '18145' LIMIT 20)
as t)
order by d.id asc
A: Instead of an IN subselect you can always use a join with the subselect result:
SELECT * FROM
tb_emp_daily_status d
INNER JOIN (
SELECT id FROM tb_employee e WHERE e.user_id = '18145' LIMIT 20
) t on d.emp_id = t.id
where d.shiftdate>='2017-06-07' and
shiftdate<='2017-06-13'
order by d.id asc
|
stackoverflow
|
{
"language": "en",
"length": 135,
"provenance": "stackexchange_0000F.jsonl.gz:856924",
"question_score": "10",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44517450"
}
|
db01483ffb9c4f146ae49cea00062571078612e8
|
Stackoverflow Stackexchange
Q: How does the new Java 9 Platform Module System integrate into Maven? I have recently looked at the new Java Platform Module System which provides functionality that seems to overlap with what Maven offers in terms of dependency management between jar files. I was wondering how this new Java feature will affect Maven and if it has already been integrated into Maven or a similar tool, and if so, what would be a hello-world usage example.
Cheers
A: You seem to have misunderstood the module system. Maven offers a dependency management system. The module system offers a way to define modules and the exported/required interfaces of modules (something like OSGi without the dynamic part of OSGi). Apart from that, you have been able to compile modules using a module-info.java file for quite some time.
http://blog.soebes.de/blog/2017/06/06/howto-create-a-java-run-time-image-with-maven/
https://maven.apache.org/plugins/maven-compiler-plugin/examples/module-info.html
https://www.slideshare.net/RobertScholte/java-9-and-the-impact-on-maven-projects
http://blog.joda.org/2017/04/java-se-9-jpms-modules-are-not-artifacts.html
|
Q: How does the new Java 9 Platform Module System integrate into Maven? I have recently looked at the new Java Platform Module System which provides functionality that seems to overlap with what Maven offers in terms of dependency management between jar files. I was wondering how this new Java feature will affect Maven and if it has already been integrated into Maven or a similar tool, and if so, what would be a hello-world usage example.
Cheers
A: You seem to have misunderstood the module system. Maven offers a dependency management system. The module system offers a way to define modules and the exported/required interfaces of modules (something like OSGi without the dynamic part of OSGi). Apart from that, you have been able to compile modules using a module-info.java file for quite some time.
http://blog.soebes.de/blog/2017/06/06/howto-create-a-java-run-time-image-with-maven/
https://maven.apache.org/plugins/maven-compiler-plugin/examples/module-info.html
https://www.slideshare.net/RobertScholte/java-9-and-the-impact-on-maven-projects
http://blog.joda.org/2017/04/java-se-9-jpms-modules-are-not-artifacts.html
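For illustration, a minimal module-info.java sketch; the module and package names here are made up:
// src/main/java/module-info.java
module com.example.greeter {
    requires java.logging;           // a module this module depends on
    exports com.example.greeter.api; // packages made visible to other modules
}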
|
stackoverflow
|
{
"language": "en",
"length": 133,
"provenance": "stackexchange_0000F.jsonl.gz:856925",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44517454"
}
|
0529d5332434cd8eca5f18136a0920fa6a54ddd0
|
Stackoverflow Stackexchange
Q: Null-conditional boolean in if statement I have an event which returns a boolean. To make sure the event is only fired if anyone is listening, I call it using the null-conditional operator (question mark).
However, this means that I have to add the null-conditional operator to the returned boolean as well. And that means that I cannot figure out how to use it in an if statement afterwards. Does anyone know how to handle this?
switch (someInt)
{
case 1:
// Validate if the form is filled correctly.
// The event returns true if that is the case.
bool? isValid = ValidateStuff?.Invoke();
if (isValid)
// If passed validation go to next step in form
GoToNextStep?.Invoke();
break;
// There are more cases, but you get the point
(...)
}
A: The problem is that in the case of a Nullable bool? you have three-valued logic: true, false and null, and thus you have to state explicitly whether null should be treated as true, e.g.:
if (isValid != false) // either true or null
GoToNextStep?.Invoke();
or null shall be considered as false:
if (isValid == true) // only true
GoToNextStep?.Invoke();
|
Q: Null-conditional boolean in if statement I have an event which returns a boolean. To make sure the event is only fired if anyone is listening, I call it using the null-conditional operator (question mark).
However, this means that I have to add the null-conditional operator to the returned boolean as well. And that means that I cannot figure out how to use it in an if statement afterwards. Does anyone know how to handle this?
switch (someInt)
{
case 1:
// Validate if the form is filled correctly.
// The event returns true if that is the case.
bool? isValid = ValidateStuff?.Invoke();
if (isValid)
// If passed validation go to next step in form
GoToNextStep?.Invoke();
break;
// There are more cases, but you get the point
(...)
}
A: The problem is that in the case of a Nullable bool? you have three-valued logic: true, false and null, and thus you have to state explicitly whether null should be treated as true, e.g.:
if (isValid != false) // either true or null
GoToNextStep?.Invoke();
or null shall be considered as false:
if (isValid == true) // only true
GoToNextStep?.Invoke();
A: You can use this:
if (isValid.HasValue && isValid.Value)
A: You could use
if (isValid.GetValueOrDefault())
which will give false if isValid is null.
or use the ?? operator
if (isValid ?? false)
which returns the value of the left operand if it is not null and the value of the right operand otherwise. So basically a shorthand for
if (isValid == null ? false : isValid)
A: One option would be to test wether isValid has a value:
if (isValid.HasValue && (bool)isValid)
Another option is to give isValid a default value when nobody is listening to your event. This can be done with the null coalescing operator:
bool isValid = ValidateStuff?.Invoke() ?? true; // assume it is valid when nobody listens
|
stackoverflow
|
{
"language": "en",
"length": 306,
"provenance": "stackexchange_0000F.jsonl.gz:856973",
"question_score": "8",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44517614"
}
|
bfc4fa444a641fdb2692c02fb2d93dd073caab1f
|
Stackoverflow Stackexchange
Q: Spring MVC Controller Unit Testing : How do I set private instance boolean field? I have a Spring MVC REST controller class that has a private instance boolean field injected via @Value ,
@Value("${...property_name..}")
private boolean isFileIndex;
Now to unit test this controller class, I need to inject this boolean.
How do I do that with MockMvc?
I can use reflection but MockMvc instance doesn't give me underlying controller instance to pass to Field.setBoolean() method.
Test class runs without mocking or injecting this dependency with value always being false. I need to set it to true to cover all paths.
Set up looks like below.
@RunWith(SpringRunner.class)
@WebMvcTest(value=Controller.class,secure=false)
public class IndexControllerTest {
@Autowired
private MockMvc mockMvc;
....
}
A: You can use @TestPropertySource
@TestPropertySource(properties = {
"...property_name..=testValue",
})
@RunWith(SpringRunner.class)
@WebMvcTest(value=Controller.class,secure=false)
public class IndexControllerTest {
@Autowired
private MockMvc mockMvc;
}
You can also load your test properties form a file
@TestPropertySource(locations = "classpath:test.properties")
EDIT: Some other possible alternative
@RunWith(SpringRunner.class)
@WebMvcTest(value=Controller.class,secure=false)
public class IndexControllerTest {
@Autowired
private MockMvc mockMvc;
@Autowired
private Controller controllerUnderTheTest;
@Test
public void test(){
ReflectionTestUtils.setField(controllerUnderTheTest, "isFileIndex", Boolean.TRUE);
//..
}
}
|
Q: Spring MVC Controller Unit Testing : How do I set private instance boolean field? I have a Spring MVC REST controller class that has a private instance boolean field injected via @Value ,
@Value("${...property_name..}")
private boolean isFileIndex;
Now to unit test this controller class, I need to inject this boolean.
How do I do that with MockMvc?
I can use reflection but MockMvc instance doesn't give me underlying controller instance to pass to Field.setBoolean() method.
Test class runs without mocking or injecting this dependency with value always being false. I need to set it to true to cover all paths.
Set up looks like below.
@RunWith(SpringRunner.class)
@WebMvcTest(value=Controller.class,secure=false)
public class IndexControllerTest {
@Autowired
private MockMvc mockMvc;
....
}
A: You can use @TestPropertySource
@TestPropertySource(properties = {
"...property_name..=testValue",
})
@RunWith(SpringRunner.class)
@WebMvcTest(value=Controller.class,secure=false)
public class IndexControllerTest {
@Autowired
private MockMvc mockMvc;
}
You can also load your test properties form a file
@TestPropertySource(locations = "classpath:test.properties")
EDIT: Some other possible alternative
@RunWith(SpringRunner.class)
@WebMvcTest(value=Controller.class,secure=false)
public class IndexControllerTest {
@Autowired
private MockMvc mockMvc;
@Autowired
private Controller controllerUnderTheTest;
@Test
public void test(){
ReflectionTestUtils.setField(controllerUnderTheTest, "isFileIndex", Boolean.TRUE);
//..
}
}
A: My preferred option would be to set it in the constructor and annotate the constructor parameter with the @Value. You could then pass in whatever you want in the test.
See this answer
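For illustration, a hedged sketch of that constructor-injection variant (the property placeholder is taken from the question; everything else is an assumption):
@RestController
public class Controller {

    private final boolean isFileIndex;

    // Spring injects the property here; a plain unit test can simply call new Controller(true).
    public Controller(@Value("${...property_name..}") boolean isFileIndex) {
        this.isFileIndex = isFileIndex;
    }
}
This keeps the field private and final while making it trivial to set in a test, with no reflection needed.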
|
stackoverflow
|
{
"language": "en",
"length": 215,
"provenance": "stackexchange_0000F.jsonl.gz:856983",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44517648"
}
|
aa6552356ab60bf8359f15e87813fd0ba334d8d7
|
Stackoverflow Stackexchange
Q: Android: How to remove divider lines in number picker How can I remove divider lines in number picker, I tried setShowDivider to none(seems none doesn't exist) through xml and code noting worked
picker.setShowDividers(LinearLayout.SHOW_DIVIDER_NONE);
XML:
android:showDividers="none"
A: Add one of these lines to your NumberPicker:
XML : android:selectionDividerHeight="0dp"
OR
JAVA: picker.setSelectionDividerHeight(0)
|
Q: Android: How to remove divider lines in number picker How can I remove divider lines in number picker, I tried setShowDivider to none(seems none doesn't exist) through xml and code noting worked
picker.setShowDividers(LinearLayout.SHOW_DIVIDER_NONE);
XML:
android:showDividers="none"
A: Add one of these lines to your NumberPicker:
XML : android:selectionDividerHeight="0dp"
OR
JAVA: picker.setSelectionDividerHeight(0)
A: this code would be better
private void changeDividerColor(NumberPicker picker, int color) {
try {
Field mField = NumberPicker.class.getDeclaredField("mSelectionDivider");
mField.setAccessible(true);
ColorDrawable colorDrawable = new ColorDrawable(color);
mField.set(picker, colorDrawable);
} catch (Exception e) {
e.printStackTrace();
}
}
A: There is a simple solution. Calling picker.setSelectionDividerHeight(0) could do the trick.
A: Set Theme for Numberpicker
<NumberPicker
...
android:theme="@style/DefaultNumberPickerTheme" />
style.xml
<style name="DefaultNumberPickerTheme" parent="AppTheme">
<item name="colorControlNormal">@color/transparent</item>
</style>
OR
private void changeDividerColor(NumberPicker picker, int color) {
java.lang.reflect.Field[] pickerFields = NumberPicker.class.getDeclaredFields();
for (java.lang.reflect.Field pf : pickerFields) {
if (pf.getName().equals("mSelectionDivider")) {
pf.setAccessible(true);
try {
ColorDrawable colorDrawable = new ColorDrawable(color);
pf.set(picker, colorDrawable);
} catch (IllegalArgumentException e) {
e.printStackTrace();
} catch (Resources.NotFoundException e) {
e.printStackTrace();
}
catch (IllegalAccessException e) {
e.printStackTrace();
}
break;
}
}
}
And Set it as
changeDividerColor(yournumberpicker, Color.parseColor("#00ffffff"));
|
stackoverflow
|
{
"language": "en",
"length": 174,
"provenance": "stackexchange_0000F.jsonl.gz:856994",
"question_score": "17",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44517686"
}
|
1d172a4ec323bf19834e5942a717a13916539d7a
|
Stackoverflow Stackexchange
Q: How do i disable URL encoding using retrofit2 with Okhttp3 I have a simple Google Places Query string from https://developers.google.com/places/web-service/search.
The following URL shows a search for restaurants near Sydney.
https://maps.googleapis.com/maps/api/place/textsearch/xml?query=restaurants+in+Sydney&key=YOUR_API_KEY
But then my retrofit2 & Okhttp3 encodes it like this below:
https://maps.googleapis.com/maps/api/place/textsearch/xml?query=restaurants%2Bin%2BSydney&key=YOUR_API_KEY
It replaces every occurrence of "+" with "%2B", and I wish to stop this.
How do I achieve this please?
Edit
I just finished reading the Github issue https://github.com/square/retrofit/issues/1407 , No answer found
A: Does this work for you?
Call<List<Articles>> getArticles(@QueryMap(encoded = true) Map<String, String> options);
encoded = true should tell Retrofit that the parameter is already encoded.
|
Q: How do i disable URL encoding using retrofit2 with Okhttp3 I have a simple Google Places Query string from https://developers.google.com/places/web-service/search.
The following URL shows a search for restaurants near Sydney.
https://maps.googleapis.com/maps/api/place/textsearch/xml?query=restaurants+in+Sydney&key=YOUR_API_KEY
But then my retrofit2 & Okhttp3 encodes it like this below:
https://maps.googleapis.com/maps/api/place/textsearch/xml?query=restaurants%2Bin%2BSydney&key=YOUR_API_KEY
It replaces every occurrence of "+" with "%2B", and I wish to stop this.
How do I achieve this please?
Edit
I just finished reading the Github issue https://github.com/square/retrofit/issues/1407 , No answer found
A: Does this work for you?
Call<List<Articles>> getArticles(@QueryMap(encoded = true) Map<String, String> options);
encoded = true should tell Retrofit that the parameter is already encoded.
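As a rough sketch of the full call site (the interface, endpoint, and response type are assumptions, not from the question):
import java.util.HashMap;
import java.util.Map;

import okhttp3.ResponseBody;
import retrofit2.Call;
import retrofit2.http.GET;
import retrofit2.http.GET;
import retrofit2.http.QueryMap;

public interface PlacesApi {
    // encoded = true tells Retrofit the values are already encoded, so it leaves the '+' alone
    @GET("maps/api/place/textsearch/xml")
    Call<ResponseBody> textSearch(@QueryMap(encoded = true) Map<String, String> options);
}
The options map would then be built with query=restaurants+in+Sydney and key=YOUR_API_KEY, and the '+' characters are sent as-is.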
|
stackoverflow
|
{
"language": "en",
"length": 99,
"provenance": "stackexchange_0000F.jsonl.gz:857003",
"question_score": "5",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44517709"
}
|
128b4e14fb5aaee3082f00739175475bc9b9a223
|
Stackoverflow Stackexchange
Q: XCode 9 beta showing error when app launch I am trying to open my existing project in the Xcode 9 beta version. The code compiles without any error; however, the simulator shows a warning alert when the app launches.
Please let me know what is going wrong.
Failed to change owner of
file:///Users/stiga/Library/Developer/CoreSimulator/Devices/2A6099D8-6743-4551-AE73-CE7AFCAEE9FE/data/Library/Caches/com.apple.mobile.installd.staging/temp.opEVCA/TestWifog.app:
Error Domain=MIInstallerErrorDomain Code=4 "Failed to remove ACL"
UserInfo={NSUnderlyingError=0x7fdb12706dc0 {Error
Domain=NSPOSIXErrorDomain Code=13 "Permission denied"
UserInfo={SourceFileLine=392, NSLocalizedDescription=open of
/Users/stiga/Library/Developer/CoreSimulator/Devices/2A6099D8-6743-4551-AE73-CE7AFCAEE9FE/data/Library/Caches/com.apple.mobile.installd.staging/temp.opEVCA/TestWifog.app/GoogleSignIn.bundle/ar.lproj/GoogleSignIn.strings
failed: Permission denied, FunctionName=-[MIFileManager
removeACLAtPath:isDir:error:]}}, FunctionName=-[MIFileManager
removeACLAtPath:isDir:error:], SourceFileLine=392,
NSLocalizedDescription=Failed to remove ACL}
A: The problem happens when files in your target are marked read-only. One common cause is a copy-files script where the files it is copying are read-only.
You can try adding a chmod u+w command to the script to ensure the files are read-write after being copied into the target.
For Cocoapods, you can try chmod -R u+w /path/to/your/project/Pods to make all files in the pods subdirectory writable.
|
Q: XCode 9 beta showing error when app launch I am trying to open my existing project in the Xcode 9 beta version. The code compiles without any error; however, the simulator shows a warning alert when the app launches.
Please let me know what is going wrong.
Failed to change owner of
file:///Users/stiga/Library/Developer/CoreSimulator/Devices/2A6099D8-6743-4551-AE73-CE7AFCAEE9FE/data/Library/Caches/com.apple.mobile.installd.staging/temp.opEVCA/TestWifog.app:
Error Domain=MIInstallerErrorDomain Code=4 "Failed to remove ACL"
UserInfo={NSUnderlyingError=0x7fdb12706dc0 {Error
Domain=NSPOSIXErrorDomain Code=13 "Permission denied"
UserInfo={SourceFileLine=392, NSLocalizedDescription=open of
/Users/stiga/Library/Developer/CoreSimulator/Devices/2A6099D8-6743-4551-AE73-CE7AFCAEE9FE/data/Library/Caches/com.apple.mobile.installd.staging/temp.opEVCA/TestWifog.app/GoogleSignIn.bundle/ar.lproj/GoogleSignIn.strings
failed: Permission denied, FunctionName=-[MIFileManager
removeACLAtPath:isDir:error:]}}, FunctionName=-[MIFileManager
removeACLAtPath:isDir:error:], SourceFileLine=392,
NSLocalizedDescription=Failed to remove ACL}
A: The problem happens when files in your target are marked read-only. One common cause is a copy-files script where the files it is copying are read-only.
You can try adding a chmod u+w command to the script to ensure the files are read-write after being copied into the target.
For Cocoapods, you can try chmod -R u+w /path/to/your/project/Pods to make all files in the pods subdirectory writable.
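For example, a hedged sketch of a Run Script build phase (or an addition to an existing copy-files script); the Pods path is an assumption:
# Clear the read-only bit on copied resources so the simulator's installer can strip ACLs
chmod -R u+w "${SRCROOT}/Pods"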
|
stackoverflow
|
{
"language": "en",
"length": 149,
"provenance": "stackexchange_0000F.jsonl.gz:857015",
"question_score": "5",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44517741"
}
|
44c7397375c2c84179f896407db23fe7fcf23815
|
Stackoverflow Stackexchange
Q: Cannot Find name error coming in angular2 I have a component called customComponent
import {Component} from '@angular/core';
import { appService } from './app.service';
@Component({
selector: 'custom-root',
template: `<h1>how</h1>
<ul *ngFor="let hero of heroes"><li>{{hero}}</li></ul>`,
})
export class CustomComponent {
heroes = ['first','second','third'];
//heroes;
value: string = "";
// constructor(appService: appService) { }
/* ngOnInit(): void {
this.value = this._appService.getApp();
} */
}
In appService i have
import {
Injectable
} from '@angular/core';
@Injectable()
export class appService {
getApp(): string {
return "Hello world";
}
}
In app.module.ts i am importing the app service
import { appService } from './app.service';
@NgModule({
declarations: [
AppComponent,CustomComponent
],
imports: [
BrowserModule,
FormsModule,
HttpModule,
RouterModule
// appService
],
providers: [appService],
bootstrap: [AppComponent]
})
export class AppModule { }
Now I am getting an error
cannot find name appService in custom Component line number 15
How can I solve this?
A: In CustomComponent file import:
import { Inject } from '@angular/core';
Then use it in constructor for DI:
constructor(@Inject(appService) private appService) { }
After this make sure you uncomment appService in providers of your module.
imports: [
BrowserModule,
FormsModule,
HttpModule,
RouterModule,
appService
],
|
Q: Cannot Find name error coming in angular2 I have a component called customComponent
import {Component} from '@angular/core';
import { appService } from './app.service';
@Component({
selector: 'custom-root',
template: `<h1>how</h1>
<ul *ngFor="let hero of heroes"><li>{{hero}}</li></ul>`,
})
export class CustomComponent {
heroes = ['first','second','third'];
//heroes;
value: string = "";
// constructor(appService: appService) { }
/* ngOnInit(): void {
this.value = this._appService.getApp();
} */
}
In appService i have
import {
Injectable
} from '@angular/core';
@Injectable()
export class appService {
getApp(): string {
return "Hello world";
}
}
In app.module.ts i am importing the app service
import { appService } from './app.service';
@NgModule({
declarations: [
AppComponent,CustomComponent
],
imports: [
BrowserModule,
FormsModule,
HttpModule,
RouterModule
// appService
],
providers: [appService],
bootstrap: [AppComponent]
})
export class AppModule { }
Now I am getting an error
cannot find name appService in custom Component line number 15
How can I solve this?
A: In CustomComponent file import:
import { Inject } from '@angular/core';
Then use it in constructor for DI:
constructor(@Inject(appService) private appService) { }
After this make sure you uncomment appService in providers of your module.
imports: [
BrowserModule,
FormsModule,
HttpModule,
RouterModule,
appService
],
A: Replace this code block
export class CustomComponent {
heroes = ['first','second','third'];
//heroes;
value: string = "";
constructor(private appService: appService) { }
ngOnInit(): void {
this.value = this.appService.getApp();
}
}
Issues:
*You are calling this._appService.getApp();
*You haven't defined _appService.
*You haven't set up DI for appService.
|
stackoverflow
|
{
"language": "en",
"length": 232,
"provenance": "stackexchange_0000F.jsonl.gz:857016",
"question_score": "6",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44517752"
}
|
d080cc6aad26c517fd280006691817d69ee03d90
|
Stackoverflow Stackexchange
Q: Laravel validation Error messages to string I want to convert the Laravel validation error array to a comma-separated string. This is for use in an API service for an iOS application, so that the iOS developer can process error messages easily.
I tried,
$valArr = [];
foreach ($validator->errors() as $key => $value) {
$errStr = $key.' '.$value[0];
array_push($valArr, $errStr);
}
if(!empty($valArr)){
$errStrFinal = implode(',', $valArr);
}
But it is not working.
A: The $validator->errors() returns a MessageBag,
see: https://laravel.com/api/5.3/Illuminate/Support/MessageBag.html.
You are close, you need to call the getMessages() function on errors(), so:
foreach ($validator->errors()->getMessages() as $key => $value) {
Hope this helps :)
|
Q: Laravel validation Error messages to string I want to convert the Laravel validation error array to a comma-separated string. This is for use in an API service for an iOS application, so that the iOS developer can process error messages easily.
I tried,
$valArr = [];
foreach ($validator->errors() as $key => $value) {
$errStr = $key.' '.$value[0];
array_push($valArr, $errStr);
}
if(!empty($valArr)){
$errStrFinal = implode(',', $valArr);
}
But it is not working.
A: The $validator->errors() returns a MessageBag,
see: https://laravel.com/api/5.3/Illuminate/Support/MessageBag.html.
You are close, you need to call the getMessages() function on errors(), so:
foreach ($validator->errors()->getMessages() as $key => $value) {
Hope this helps :)
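Putting that together with the loop from the question, a hedged sketch of the corrected version:
$valArr = [];
foreach ($validator->errors()->getMessages() as $key => $value) {
    // $value is an array of messages for the field $key; take the first one
    $valArr[] = $key.' '.$value[0];
}

$errStrFinal = '';
if (!empty($valArr)) {
    $errStrFinal = implode(',', $valArr);
}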
A: You are not converting the validation errors to an array. Please use the below function and pass the validation errors as a parameter.
public function validationErrorsToString($errArray) {
$valArr = array();
foreach ($errArray->toArray() as $key => $value) {
$errStr = $key.' '.$value[0];
array_push($valArr, $errStr);
}
if(!empty($valArr)){
$errStrFinal = implode(',', $valArr);
}
return $errStrFinal;
}
//Function call.
$result = $this->validationErrorsToString($validator->errors());
A: You should do like this :
$errorString = implode(",",$validator->messages()->all());
P.S. Assuming
$validator = Validator::make($dataToBeChecked,$validationArray,$messageArray)
A: If you are doing it like me, without your own validator, and you are pulling the messages from the exception, you can use the Laravel helper Arr::flatten($array).
The link and code are for Laravel 8.x, but I tested this with 5.7 ;) It works.
From documentation:
use Illuminate\Support\Arr;
$array = ['name' => 'Joe', 'languages' => ['PHP', 'Ruby']];
$flattened = Arr::flatten($array);
// ['Joe', 'PHP', 'Ruby']
My code:
try {
$request->validate([
'test1' => 'required|integer',
'test2' => 'required|integer',
'test3' => 'required|string',
]);
} catch (ValidationException $validationException) {
return response()->json([
'type' => 'error',
'title' => $validationException->getMessage(),
'messages' => Arr::flatten($validationException->errors())
], $validationException->status);
} catch (\Exception $exception) {
return response()->json([
'type' => 'error',
'title' => $exception->getMessage(),
], $exception->getCode());
}
As you can see, I am pulling the message and setting it as my title. Then I am using Arr::flatten($validationException->errors()) to get the validation messages and to flatten my array for SweetAlert2 on the frontend.
I know I am late but I hope it will help someone that comes across these problems.
Greetings! :)
|
stackoverflow
|
{
"language": "en",
"length": 338,
"provenance": "stackexchange_0000F.jsonl.gz:857019",
"question_score": "8",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44517760"
}
|
32aece12e110c1cea6424e03e07c5ea5def432e7
|
Stackoverflow Stackexchange
Q: concatenate multiple numpy arrays in one array? Assume I have many numpy array:
a = ([1,2,3,4,5])
b = ([2,3,4,5,6])
c = ([3,4,5,6,7])
and I want to generate a new 2-D array:
d = ([[1,2,3,4,5],[2,3,4,5,6],[3,4,5,6,7]])
What should I code?
I tried used:
d = np.concatenate((a,b),axis=0)
d = np.concatenate((d,c),axis=0)
It returns:
d = ([1,2,3,4,5,2,3,4,5,6,3,4,5,6,7])
A: As mentioned in the comments you could just use the np.array function:
>>> import numpy as np
>>> a = ([1,2,3,4,5])
>>> b = ([2,3,4,5,6])
>>> c = ([3,4,5,6,7])
>>> np.array([a, b, c])
array([[1, 2, 3, 4, 5],
[2, 3, 4, 5, 6],
[3, 4, 5, 6, 7]])
In the general case that you want to stack based on a "not-yet-existing" dimension, you can also use np.stack:
>>> np.stack([a, b, c], axis=0)
array([[1, 2, 3, 4, 5],
[2, 3, 4, 5, 6],
[3, 4, 5, 6, 7]])
>>> np.stack([a, b, c], axis=1) # not what you want, this is only to show what is possible
array([[1, 2, 3],
[2, 3, 4],
[3, 4, 5],
[4, 5, 6],
[5, 6, 7]])
|
Q: concatenate multiple numpy arrays in one array? Assume I have many numpy array:
a = ([1,2,3,4,5])
b = ([2,3,4,5,6])
c = ([3,4,5,6,7])
and I want to generate a new 2-D array:
d = ([[1,2,3,4,5],[2,3,4,5,6],[3,4,5,6,7]])
What should I code?
I tried used:
d = np.concatenate((a,b),axis=0)
d = np.concatenate((d,c),axis=0)
It returns:
d = ([1,2,3,4,5,2,3,4,5,6,3,4,5,6,7])
A: As mentioned in the comments you could just use the np.array function:
>>> import numpy as np
>>> a = ([1,2,3,4,5])
>>> b = ([2,3,4,5,6])
>>> c = ([3,4,5,6,7])
>>> np.array([a, b, c])
array([[1, 2, 3, 4, 5],
[2, 3, 4, 5, 6],
[3, 4, 5, 6, 7]])
In the general case that you want to stack based on a "not-yet-existing" dimension, you can also use np.stack:
>>> np.stack([a, b, c], axis=0)
array([[1, 2, 3, 4, 5],
[2, 3, 4, 5, 6],
[3, 4, 5, 6, 7]])
>>> np.stack([a, b, c], axis=1) # not what you want, this is only to show what is possible
array([[1, 2, 3],
[2, 3, 4],
[3, 4, 5],
[4, 5, 6],
[5, 6, 7]])
|
stackoverflow
|
{
"language": "en",
"length": 174,
"provenance": "stackexchange_0000F.jsonl.gz:857035",
"question_score": "14",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44517809"
}
|
e376a4a7659e14a1b4678ad709a366511512fd74
|
Stackoverflow Stackexchange
Q: Image Button background not transparent ionic I have been trying for an hour to add an image as a button in Ionic. The image is a PNG with a transparent background, but it always shows a white background. My code is:
<p align="center"><button (click)="buttonTapped()">
<ion-img style="width: 150px; height: 150px; background: transparent
!important;" src="img/btn1.png"></ion-img> </button>
A: You can remove the <button>-element and place the (click)-handler on the <ion-img>-tag directly. Also to set background color use background-color not background
Your code would look something like this:
<p align="center"><ion-img (click)="buttonTapped()"style="width: 150px;
height: 150px; background: transparent !important;" src="img/btn1.png">
</ion-img>
|
Q: Image Button background not transparent ionic I have been trying for an hour to add an image as a button in Ionic. The image is a PNG with a transparent background, but it always shows a white background. My code is:
<p align="center"><button (click)="buttonTapped()">
<ion-img style="width: 150px; height: 150px; background: transparent
!important;" src="img/btn1.png"></ion-img> </button>
A: You can remove the <button>-element and place the (click)-handler on the <ion-img>-tag directly. Also to set background color use background-color not background
Your code would look something like this:
<p align="center"><ion-img (click)="buttonTapped()"style="width: 150px;
height: 150px; background: transparent !important;" src="img/btn1.png">
</ion-img>
|
stackoverflow
|
{
"language": "en",
"length": 94,
"provenance": "stackexchange_0000F.jsonl.gz:857055",
"question_score": "4",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44517881"
}
|
067f93ac81aacea2db036b94b68f61483c6959f1
|
Stackoverflow Stackexchange
Q: is Celery Task initialized per each worker process, or once per app? I have a heavy external library class which takes time to initialize and consumes a lot of memory. I want to create it once per task instance, at minimum.
class NlpTask(Task):
def __init__(self):
print('initializing NLP parser')
self._parser = nlplib.Parser()
print('done initializing NLP parser')
@property
def parser(self):
return self._parser
@celery.task(base=NlpTask)
def my_task(arg):
x = my_task.parser.process(arg)
# etc.
Celery starts 32 worker processes, so I'd expect the printing "initializing ... done" 32 times, as I assume that a task instance is created per each worker. Surprisingly, I'm getting the printing once. What actually happens there? Thanks.
A: Your NlpTask is initializing once when it is getting registered with the worker.
If you have two tasks like
@celery.task(base=NlpTask)
def foo(arg):
pass
@celery.task(base=NlpTask)
def bar(arg):
pass
Then when you start a worker, you will see 2 initializations.
If you want to initialize it once for every worker, you can use worker_process_init signal.
from celery.signals import worker_process_init
@worker_process_init.connect()
def setup(**kwargs):
print('initializing NLP parser')
# setup
print('done initializing NLP parser')
Now, when you start a worker, you will see setup is being called by each process once.
|
Q: is Celery Task initialized per each worker process, or once per app? I have a heavy external library class which takes time to initialize and consumes a lot of memory. I want to create it once per task instance, at minimum.
class NlpTask(Task):
def __init__(self):
print('initializing NLP parser')
self._parser = nlplib.Parser()
print('done initializing NLP parser')
@property
def parser(self):
return self._parser
@celery.task(base=NlpTask)
def my_task(arg):
x = my_task.parser.process(arg)
# etc.
Celery starts 32 worker processes, so I'd expect the printing "initializing ... done" 32 times, as I assume that a task instance is created per each worker. Surprisingly, I'm getting the printing once. What actually happens there? Thanks.
A: Your NlpTask is initializing once when it is getting registered with the worker.
If you have two tasks like
@celery.task(base=NlpTask)
def foo(arg):
pass
@celery.task(base=NlpTask)
def bar(arg):
pass
Then when you start a worker, you will see 2 initializations.
If you want to initialize it once for every worker, you can use worker_process_init signal.
from celery.signals import worker_process_init
@worker_process_init.connect()
def setup(**kwargs):
print('initializing NLP parser')
# setup
print('done initializing NLP parser')
Now, when you start a worker, you will see setup is being called by each process once.
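To actually keep the parser around for the tasks, a hedged sketch that stores it in a module-level variable per worker process (the variable name is an assumption, and nlplib and the celery app object are taken from the question's context):
from celery.signals import worker_process_init

parser = None  # one instance per worker process

@worker_process_init.connect()
def setup(**kwargs):
    global parser
    print('initializing NLP parser')
    parser = nlplib.Parser()
    print('done initializing NLP parser')

@celery.task
def my_task(arg):
    # uses the parser created once for this worker process
    return parser.process(arg)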
A: For this comment:
that's my point - I'd expect once per worker, and it seems like once per celery instance. I edited the question – @davka
the answer should be to use a sender filter in connect, like:
@worker_process_init.connect(sender='xx')
def func(sender, **kwargs):
if sender == 'xx':
# dosomething
but I found that it's not working in celery 4.0.2.
|
stackoverflow
|
{
"language": "en",
"length": 251,
"provenance": "stackexchange_0000F.jsonl.gz:857079",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44517952"
}
|
1ba4ff873402c3db6a74befd538a74e2d22b967b
|
Stackoverflow Stackexchange
Q: Align button to the right in td element I have a <td> element with content and a button. The width of the content should be everything except the width of the button, which will be fixed. The button should be aligned to the right of the content. How can I achieve this? The following doesn't work:
<table border="1px">
<tr>
<td>
<div>
<div style="width: auto; overflow: auto;">
<span>
<form>
<textarea></textarea>
</form>
</span>
</div>
<div style="float: right; width: 32px;">
<button type="button" class="btn btn-primary">
Click
</button>
</div>
</div>
</td>
<td>Another cell</td>
</tr>
</table>
A: Use style="float: right;"
<table border="1px">
<tr>
<td>
<div style="display:inline-block">
<div style="display: inherit;width: calc(100% - 32px);overflow: auto;">
<span>
<form>
<textarea></textarea>
</form>
</span>
</div>
<div style="float: right; width: 32px;">
<button type="button" class="btn btn-primary" title="Select Financial Instrument" style="width:100%; word-wrap: break-word;">
Click
</button>
</div>
</div>
</td>
<td>Another cell</td>
</tr>
</table>
|
Q: Align button to the right in td element I have a <td> element with content and a button. The width of the content should be everything except the width of the button, which will be fixed. The button should be aligned to the right of the content. How can I achieve this? The following doesn't work:
<table border="1px">
<tr>
<td>
<div>
<div style="width: auto; overflow: auto;">
<span>
<form>
<textarea></textarea>
</form>
</span>
</div>
<div style="float: right; width: 32px;">
<button type="button" class="btn btn-primary">
Click
</button>
</div>
</div>
</td>
<td>Another cell</td>
</tr>
</table>
A: Use style="float: right;"
<table border="1px">
<tr>
<td>
<div style="display:inline-block">
<div style="display: inherit;width: calc(100% - 32px);overflow: auto;">
<span>
<form>
<textarea></textarea>
</form>
</span>
</div>
<div style="float: right; width: 32px;">
<button type="button" class="btn btn-primary" title="Select Financial Instrument" style="width:100%; word-wrap: break-word;">
Click
</button>
</div>
</div>
</td>
<td>Another cell</td>
</tr>
</table>
A: Based on Gerard's answer the following worked best for me:
<table border="1px">
<tr>
<td>
<div style="display: inline-flex; align-items: center; width: 100%;">
<div>
<span>
<form>
<textarea></textarea>
</form>
</span>
</div>
<div style="padding: 5px;">
<button type="button">
Click
</button>
</div>
</div>
</td>
<td>Another cell</td>
</tr>
</table>
A: You could decrease the font-size of the button so it fits inside the desired width (32px) you've set on the parent div.
I don't really understand the logic behind this HTML structure, but here is your solution:
<table border="1px">
<tr>
<td>
<div>
<div style="width:calc(100% - 32px);overflow: auto;float:left">
<span>
<form>
<textarea></textarea>
</form>
</span>
</div>
<div style="float: right;">
<button type="button" class="btn btn-primary" style="width:32px;font-size:8px;" title="Select Financial Instrument">
Click
</button>
</div>
</div>
</td>
<td>Another cell</td>
</tr>
</table>
A: Is this what you need?
<table border="1px">
<tr>
<td>
<div style="display: flex;">
<form>
<textarea></textarea>
</form>
<button type="button" class="btn btn-primary" title="Select Financial Instrument">
Click
</button>
</div>
</td>
<td>Another cell</td>
</tr>
</table>
A: You can use a <table> and place the form in one <td> and the button in the other. Then you can fix the width of the <td> containing the button. This will force the first <td> to adjust its width according to the <table> width.
Check this jsFiddle, try to change the width of the main table and see how the <textarea> (and the <td>) adjust the width accordingly.
Update:
How about this, with no changes to your HTML structure:
* { -moz-box-sizing: border-box; -webkit-box-sizing: border-box; box-sizing: border-box }
<table border="1px" style="width: 450px">
<tr>
<td>
<div style="display: table; width: 100%">
<div style="display: table-row">
<div style="display: table-cell">
<span style="display: block; width: 100%">
<form style="display: block; width: 100%">
<textarea style="display: block; width: 100%"></textarea>
</form>
</span>
</div><!-- table-cell -->
<div style="display: table-cell; width: 32px;">
<button type="button" class="btn btn-primary">
Click
</button>
</div><!-- table-cell -->
</div><!-- table-row -->
</div><!-- table -->
</td>
<td>Another cell</td>
</tr>
</table>
|
stackoverflow
|
{
"language": "en",
"length": 437,
"provenance": "stackexchange_0000F.jsonl.gz:857096",
"question_score": "4",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44517998"
}
|
f73a214cdeecdb6dd455723d3e366361b35cdf25
|
Stackoverflow Stackexchange
Q: jpa/hibernate criteria query with treat and join on downcasted entity I have the following domain model with inheritance:
@Entity
class A {
@OneToMany
Set<B> b;
}
@Entity
@Inheritance(strategy = InheritanceType.SINGLE_TABLE)
@DiscriminatorColumn(name = "TYPE", discriminatorType = DiscriminatorType.STRING, length = 3)
abstract class B {
@OneToMany
Set<C> c;
}
@Entity
@DiscriminatorValue("B1")
class B1 extends B {
@OneToMany
Set<C> c;
}
@Entity
@DiscriminatorValue("B2")
class B2 extends B {
}
@Entity
class C {
}
I want to build a query that load some fields from C as follow:
class Dto {
String a;
String b;
String c;
public Dto(String a, String b, String c) {
....
}
}
CriteriaBuilder cb = em.getCriteriaBuilder();
CriteriaQuery query = cb.createQuery(Dto.class);
Root<A> from = query.from(A.class);
Join<A, B> joinB = from.join(A_.b);
Join<A, B1> joinB1 = cb.treat(joinB, B1.class);
Join<B1, C> joinC = joinB1.join(B1_.c);
query.where(cb.equal(joinC.get(C_.id), cid));
query.select(cb.construct(Dto.class, joinC.get(C_.value), joinB1.get(B1_.value), from.get(A_.value)));
em.createQuery(query).getResultList();
It seems it is not possible to use treat with further joins; the treated join must be a leaf join.
see eclipselink, or SO this.
What happens is that hibernate (4.3.6, the version I'm using) doesn't generate the alias in the jpql query.
Have I misunderstood the use of treat operator?
Is there a way to workaround this issue?
|
Q: jpa/hibernate criteria query with treat and join on downcasted entity I have the following domain model with inheritance:
@Entity
class A {
@OneToMany
Set<B> b;
}
@Entity
@Inheritance(strategy = InheritanceType.SINGLE_TABLE)
@DiscriminatorColumn(name = "TYPE", discriminatorType = DiscriminatorType.STRING, length = 3)
abstract class B {
@OneToMany
Set<C> c;
}
@Entity
@DiscriminatorValue("B1")
class B1 extends B {
@OneToMany
Set<C> c;
}
@Entity
@DiscriminatorValue("B2")
class B2 extends B {
}
@Entity
class C {
}
I want to build a query that load some fields from C as follow:
class Dto {
String a;
String b;
String c;
public Dto(String a, String b, String c) {
....
}
}
CriteriaBuilder cb = em.getCriteriaBuilder();
CriteriaQuery query = cb.createQuery(Dto.class);
Root<A> from = query.from(A.class);
Join<A, B> joinB = from.join(A_.b);
Join<A, B1> joinB1 = cb.treat(joinB, B1.class);
Join<B1, C> joinC = joinB1.join(B1_.c);
query.where(cb.equal(joinC.get(C_.id), cid));
query.select(cb.construct(Dto.class, joinC.get(C_.value), joinB1.get(B1_.value), from.get(A_.value)));
em.createQuery(query).getResultList();
It seems it is not possible to use treat with further joins; the treated join must be a leaf join.
see eclipselink, or SO this.
What happens is that hibernate (4.3.6, the version I'm using) doesn't generate the alias in the jpql query.
Have I misunderstood the use of treat operator?
Is there a way to workaround this issue?
|
stackoverflow
|
{
"language": "en",
"length": 198,
"provenance": "stackexchange_0000F.jsonl.gz:857142",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44518129"
}
|
e68c768d7dce2dadb95fc78e3caac2e053dafe86
|
Stackoverflow Stackexchange
Q: How to add contacts custom app account like below screen shot
I'm developing an app whose own contacts need to be visible in the user's Contacts app. Like WhatsApp, I need to show which contact numbers are registered in my app.
For example, if you add a friend's WhatsApp number to contacts, it will show Found in WhatsApp. I need the same kind of functionality for my app.
When I install my app, it should be added to the above list (have a look at the above screenshot).
I've searched and found nothing about this. So any help would be greatly appreciated.
A: Just add the below snippet into your Info.plist to get your app name in the list
<key>NSUserActivityTypes</key>
<array>
<string>INStartAudioCallIntent</string>
</array>
Check this link
|
Q: How to add contacts custom app account like below screen shot
I'm developing an app whose own contacts need to be visible in the user's Contacts app. Like WhatsApp, I need to show which contact numbers are registered in my app.
For example, if you add a friend's WhatsApp number to contacts, it will show Found in WhatsApp. I need the same kind of functionality for my app.
When I install my app, it should be added to the above list (have a look at the above screenshot).
I've searched and found nothing about this. So any help would be greatly appreciated.
A: Just add the below snippet into your Info.plist to get your app name in the list
<key>NSUserActivityTypes</key>
<array>
<string>INStartAudioCallIntent</string>
</array>
Check this link
A: Finally I found the solution: if you add CallKit and configure it, your app will be added to the list. Download the sample from https://developer.apple.com/library/content/samplecode/Speakerbox/SpeakerboxUsingCallKittocreateaVoIPapp.zip and run the project; after installation completes, open Contacts and add a social profile, and you will find 'Speakerbox' in the social profile list.
|
stackoverflow
|
{
"language": "en",
"length": 181,
"provenance": "stackexchange_0000F.jsonl.gz:857150",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44518149"
}
|
c3ea010fcd643fbeb310227db51622160061b1d1
|
Stackoverflow Stackexchange
Q: Trying to do `askForSignIn` fails for linked account I'm trying to implement account linking against our OAuth service.
I tried logging in using gala-demo.appspot.com and that seems to work.
Calling askForSignIn() seems to fail when invoked; I don't get any calls back to my web service, so the error seems to be upstream.
The response I see in the debug info when using the Assistant simulator is:
expected_inputs[0].possible_intents[0]: intent 'actions.intent.SIGN_IN' is only supported for version 2 and above.
Any ideas?
On another note: if I set signInRequired on the action configuration for the welcome intent, it seems to get further, but it gives a bad sign-in redirect link in the simulator, and on a device it opens a dialog that just disappears (looks like a successful login) with no response back to the web service.
A: That happens because you're probably using the old v1 API. I suggest you to check the migration guide:
https://developers.google.com/actions/reference/v1/migration
Cheers!
|
Q: Trying to do `askForSignIn` fails for linked account I'm trying to implement account linking against our OAuth service.
I tried logging in using gala-demo.appspot.com and that seems to work.
Calling askForSignIn() seems to fail when invoked; I don't get any calls back to my web service, so the error seems to be upstream.
The response I see in the debug info when using the Assistant simulator is:
expected_inputs[0].possible_intents[0]: intent 'actions.intent.SIGN_IN' is only supported for version 2 and above.
Any ideas?
On another note: if I set signInRequired on the action configuration for the welcome intent, it seems to get further, but it gives a bad sign-in redirect link in the simulator, and on a device it opens a dialog that just disappears (looks like a successful login) with no response back to the web service.
A: That happens because you're probably using the old v1 API. I suggest you to check the migration guide:
https://developers.google.com/actions/reference/v1/migration
Cheers!
A: The sign-in intent doesn't work at the moment, as explained in the docs; it's just something you can use for testing in the emulator, but it's not available in production.
|
stackoverflow
|
{
"language": "en",
"length": 188,
"provenance": "stackexchange_0000F.jsonl.gz:857163",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44518192"
}
|
e305e26aa350e86c348509c42518894b6d457c7f
|
Stackoverflow Stackexchange
Q: Value split is not a member of (String, String) I am trying to read data from Kafka and store it into Cassandra tables through Spark RDDs.
Getting error while compiling the code:
/root/cassandra-count/src/main/scala/KafkaSparkCassandra.scala:69: value split is not a member of (String, String)
[error] val lines = messages.flatMap(line => line.split(',')).map(s => (s(0).toString, s(1).toDouble,s(2).toDouble,s(3).toDouble))
[error] ^
[error] one error found
[error] (compile:compileIncremental) Compilation failed
Below is the code: when I run it manually through the interactive spark-shell it works fine, but the error comes up while compiling the code for spark-submit.
// Create direct kafka stream with brokers and topics
val topicsSet = Set[String] (kafka_topic)
val kafkaParams = Map[String, String]("metadata.broker.list" -> kafka_broker)
val messages = KafkaUtils.createDirectStream[String, String, StringDecoder, StringDecoder]( ssc, kafkaParams, topicsSet)
// Create the processing logic
// Get the lines, split
val lines = messages.map(line => line.split(',')).map(s => (s(0).toString, s(1).toDouble,s(2).toDouble,s(3).toDouble))
lines.saveToCassandra("stream_poc", "US_city", SomeColumns("city_name", "jan_temp", "lat", "long"))
A: All messages in kafka are keyed. The original Kafka stream, in this case messages, is a stream of tuples (key,value).
And as the compile error points out, there's no split method on tuples.
What we want to do here is:
messages.map{ case (key, value) => value.split(','))} ...
|
Q: Value split is not a member of (String, String) I am trying to read data from Kafka and store it into Cassandra tables through Spark RDDs.
Getting error while compiling the code:
/root/cassandra-count/src/main/scala/KafkaSparkCassandra.scala:69: value split is not a member of (String, String)
[error] val lines = messages.flatMap(line => line.split(',')).map(s => (s(0).toString, s(1).toDouble,s(2).toDouble,s(3).toDouble))
[error] ^
[error] one error found
[error] (compile:compileIncremental) Compilation failed
Below is the code: when I run it manually through the interactive spark-shell it works fine, but the error comes up while compiling the code for spark-submit.
// Create direct kafka stream with brokers and topics
val topicsSet = Set[String] (kafka_topic)
val kafkaParams = Map[String, String]("metadata.broker.list" -> kafka_broker)
val messages = KafkaUtils.createDirectStream[String, String, StringDecoder, StringDecoder]( ssc, kafkaParams, topicsSet)
// Create the processing logic
// Get the lines, split
val lines = messages.map(line => line.split(',')).map(s => (s(0).toString, s(1).toDouble,s(2).toDouble,s(3).toDouble))
lines.saveToCassandra("stream_poc", "US_city", SomeColumns("city_name", "jan_temp", "lat", "long"))
A: All messages in kafka are keyed. The original Kafka stream, in this case messages, is a stream of tuples (key,value).
And as the compile error points out, there's no split method on tuples.
What we want to do here is:
messages.map{ case (key, value) => value.split(','))} ...
A: KafkaUtils.createDirectStream returns a tuple of key and value (since messages in Kafka are optionally keyed). In your case it's of type (String, String). If you want to split the value, you have to first take it out:
val lines =
messages
.map(line => line._2.split(','))
.map(s => (s(0).toString, s(1).toDouble,s(2).toDouble,s(3).toDouble))
Or using partial function syntax:
val lines =
messages
.map { case (_, value) => value.split(',') }
.map(s => (s(0).toString, s(1).toDouble,s(2).toDouble,s(3).toDouble))
|
stackoverflow
|
{
"language": "en",
"length": 259,
"provenance": "stackexchange_0000F.jsonl.gz:857174",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44518232"
}
|
918e23963988a317fefe51e1846f6b3d02e4f452
|
Stackoverflow Stackexchange
Q: Lift canvas on top of other canvas(s) in Tkinter I have created a number of canvases, and they overlap. I would like to bring a particular canvas to the front.
I don't seem to find a way to do it. The lift method does not seem to work, e.g
import tkinter as Tk
w=tk.Tk()
a=tk.Canvas(w,width=20, height=30)
a.place(x=20, y=30)
b=tk.Canvas(w,width=20, height=30)
b.place(x=25, y=35)
w.lift(b) # try to bring b to the front, but nothing happens
A: Your canvases are there, the problem is, their color is the same as the rest of the window. You can add background colors to differentiate them.
To change stacking orders on widget level, you should use Tkinter.Misc class.
import tkinter as tk #fixed typo in here
w=tk.Tk()
a=tk.Canvas(w,width=20, height=30, bg="red")
a.place(x=20, y=30)
b=tk.Canvas(w,width=20, height=30, bg="blue")
b.place(x=25, y=35)
tk.Misc.lift(a)
w.mainloop() #even if some IDEs adds mainloop, it's always better to add it explicitly
|
Q: Lift canvas on top of other canvas(s) in Tkinter I have created a number of canvases, and they overlap. I would like to bring a particular canvas to the front.
I don't seem to find a way to do it. The lift method does not seem to work, e.g
import tkinter as Tk
w=tk.Tk()
a=tk.Canvas(w,width=20, height=30)
a.place(x=20, y=30)
b=tk.Canvas(w,width=20, height=30)
b.place(x=25, y=35)
w.lift(b) # try to bring b to the front, but nothing happens
A: Your canvases are there, the problem is, their color is the same as the rest of the window. You can add background colors to differentiate them.
To change stacking orders on widget level, you should use Tkinter.Misc class.
import tkinter as tk #fixed typo in here
w=tk.Tk()
a=tk.Canvas(w,width=20, height=30, bg="red")
a.place(x=20, y=30)
b=tk.Canvas(w,width=20, height=30, bg="blue")
b.place(x=25, y=35)
tk.Misc.lift(a)
w.mainloop() #even if some IDEs adds mainloop, it's always better to add it explicitly
A: The problem you are having is because you are choosing to use a Canvas. Canvases have a lift method that overrides the default lift function. The lift method of the canvas is for lifting something drawn on the canvas rather than the canvas itself. If you had chosen to use a frame rather than a canvas, your code would have worked.
You can use the lift method that is part of the Misc library in the case of using a canvas:
tk.Misc.lift(a)
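A minimal sketch of the frame alternative mentioned above, with background colors added so the stacking order is visible:
import tkinter as tk

w = tk.Tk()
a = tk.Frame(w, width=20, height=30, bg="red")
a.place(x=20, y=30)
b = tk.Frame(w, width=20, height=30, bg="blue")
b.place(x=25, y=35)
a.lift()  # Frame inherits lift from Misc, so this raises the widget itself
w.mainloop()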
|
stackoverflow
|
{
"language": "en",
"length": 230,
"provenance": "stackexchange_0000F.jsonl.gz:857188",
"question_score": "4",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44518273"
}
|
3b8f4c351eb14f45fc23b5119e7984e2d17f7785
|
Stackoverflow Stackexchange
Q: PowerShell: How to access a file on an FTP server as a string? I need to download a piece of text from an FTP server via PowerShell and get it as a string. After performing the required task, I need to upload it to the same server as a different file. The file must not be saved to the local storage at any point.
For a file on a regular HTTP server, the code would be (New-Object Net.WebClient).DownloadString($uri); for downloading and (New-Object Net.WebClient).UploadString($uri, $output);" for sending it to the server for processing via a POST request.
A: DownloadString and UploadString, as all WebClient methods, work for ftp:// URLs too:
By default, the .NET Framework supports URIs that begin with http:, https:, ftp:, and file: scheme identifiers.
So unless you need some fancy options, it's as simple as:
$webclient = New-Object System.Net.WebClient
$contents = $webclient.DownloadString("ftp://ftp.example.com/file.txt")
If you need to authenticate to the FTP server, either add credentials to the URL:
ftp://username:[email protected]/file.txt
Or use WebClient.Credentials:
$webclient = New-Object System.Net.WebClient
$webclient.Credentials = New-Object System.Net.NetworkCredential("user", "mypassword")
$contents = $webclient.DownloadString("ftp://ftp.example.com/file.txt")
|
Q: PowerShell: How to access a file on an FTP server as a string? I need to download a piece of text from an FTP server via PowerShell and get it as a string. After performing the required task, I need to upload it to the same server as a different file. The file must not be saved to the local storage at any point.
For a file on a regular HTTP server, the code would be (New-Object Net.WebClient).DownloadString($uri); for downloading and (New-Object Net.WebClient).UploadString($uri, $output);" for sending it to the server for processing via a POST request.
A: DownloadString and UploadString, as all WebClient methods, work for ftp:// URLs too:
By default, the .NET Framework supports URIs that begin with http:, https:, ftp:, and file: scheme identifiers.
So unless you need some fancy options, it's as simple as:
$webclient = New-Object System.Net.WebClient
$contents = $webclient.DownloadString("ftp://ftp.example.com/file.txt")
If you need to authenticate to the FTP server, either add credentials to the URL:
ftp://username:[email protected]/file.txt
Or use WebClient.Credentials:
$webclient = New-Object System.Net.WebClient
$webclient.Credentials = New-Object System.Net.NetworkCredential("user", "mypassword")
$contents = $webclient.DownloadString("ftp://ftp.example.com/file.txt")
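For the upload half mentioned in the question, a hedged sketch along the same lines (the file names and the processing step are placeholders); UploadString also accepts ftp:// URIs:
$webclient = New-Object System.Net.WebClient
$webclient.Credentials = New-Object System.Net.NetworkCredential("user", "mypassword")

# Download, transform in memory, then upload the result as a different file
$contents = $webclient.DownloadString("ftp://ftp.example.com/file.txt")
$output = $contents.ToUpper()   # placeholder for the real processing
$webclient.UploadString("ftp://ftp.example.com/file-processed.txt", $output)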
|
stackoverflow
|
{
"language": "en",
"length": 176,
"provenance": "stackexchange_0000F.jsonl.gz:857230",
"question_score": "4",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44518399"
}
|
d5275b9abfc26ea5b96bc25ebe79a64600b9a771
|
Stackoverflow Stackexchange
Q: Worksheet freezes when scrolling on task pane during task execution We are developing an Office Add-in with the office.js API.
A recurrent problem damages our reputation in the store.
The problem is that the worksheet in an Excel Add-in (office.js) freezes after scrolling over it.
I've written a simple Script Lab snippet which reproduces the worksheet freezing problem. All the steps to reproduce it are described in the snippet.
The snippet is available at : https://gist.github.com/Nassim33/5eaf0bdb4a5b0b1a8db99f58b6de101e
A: This issue was fixed. I just verified the gist and it does not reproduce on my side. Please validate, and kindly create a new issue or report it via the office-js issues on GitHub if you still observe an unresponsive worksheet.
|
Q: Worksheet freezes when scrolling on task pane during task execution We are developing an Office Add-in with the office.js API.
A recurrent problem damages our reputation in the store.
The problem is that the worksheet in an Excel Add-in (office.js) freezes after scrolling over it.
I've written a simple Script Lab snippet which reproduces the worksheet freezing problem. All the steps to reproduce it are described in the snippet.
The snippet is available at : https://gist.github.com/Nassim33/5eaf0bdb4a5b0b1a8db99f58b6de101e
A: This issue was fixed. I just verified the gist and it does not reproduce on my side. Please validate, and kindly create a new issue or report it via the office-js issues on GitHub if you still observe an unresponsive worksheet.
|
stackoverflow
|
{
"language": "en",
"length": 114,
"provenance": "stackexchange_0000F.jsonl.gz:857243",
"question_score": "9",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44518443"
}
|
bc235c4b19d06ad21f8a3b431954d137be8a107d
|
Stackoverflow Stackexchange
Q: How to show combined regions/continents in TeeChart map series? I am using TeeChart(4.1.2015.8062) for Winforms to display world map in our applicaton. This map can be displayed for predefined regions/continents. Teechart refers to this as world map types. e.g. available map types are -
Africa, Asia, Australia, CentralAmerica, Europe, Europe15, Europe27, MiddleEast, NorthAmerica, SouthAmerica, Spain, USA, USAHawaiiAlaska, World.
We want to show a combined map for Asia and Australia. TeeChart currently displays separate maps for Asia and Australia, but it does not have any provision to combine and display these two continents.
Is there any way to combine and display regions like Asia and Australia, or Asia and the Middle East?
Thank you,
Sharad
|
Q: How to show combined regions/continents in TeeChart map series? I am using TeeChart(4.1.2015.8062) for Winforms to display world map in our applicaton. This map can be displayed for predefined regions/continents. Teechart refers to this as world map types. e.g. available map types are -
Africa, Asia, Australia, CentralAmerica, Europe, Europe15, Europe27, MiddleEast, NorthAmerica, SouthAmerica, Spain, USA, USAHawaiiAlaska, World.
We want to show a combined map for Asia and Australia. TeeChart currently displays separate maps for Asia and Australia, but it does not have any provision to combine and display these two continents.
Is there any way to combine and display regions like Asia and Australia, or Asia and the Middle East?
Thank you,
Sharad
|
stackoverflow
|
{
"language": "en",
"length": 114,
"provenance": "stackexchange_0000F.jsonl.gz:857256",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44518477"
}
|
535e369271b1a15ea30a0cd8a26e23b145547350
|
Stackoverflow Stackexchange
Q: Qt 5.9 OpenGL buffers cleanup I recently updated to Qt 5.9. The application I'm working on uses QOpenGLWidget and QOpenGLBuffers. I noticed that since Qt 5.9, the QOpenGLWidget destruction is really slow and makes the application exit really slowly.
Any suggestions/ideas to help?
[EDIT] It seems to be linked to the number of VAO (QOpenGLVertexArrayObject) created. Qt 5.9 must have changed the way the VAOs are cleaned up. Using the destroy() function does not change anything.
|
Q: Qt 5.9 OpenGL buffers cleanup I recently updated to Qt 5.9. The application I'm working on uses QOpenGLWidget and QOpenGLBuffers. I noticed that since Qt 5.9, the QOpenGLWidget destruction is really slow and makes the application exit really slowly.
Any suggestions/ideas to help?
[EDIT] It seems to be linked to the number of VAO (QOpenGLVertexArrayObject) created. Qt 5.9 must have changed the way the VAOs are cleaned up. Using the destroy() function does not change anything.
|
stackoverflow
|
{
"language": "en",
"length": 77,
"provenance": "stackexchange_0000F.jsonl.gz:857258",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44518493"
}
|
7f97516a34d9e0a49b137772f8264df806e5528c
|
Stackoverflow Stackexchange
Q: Why can we not use Prometheus as a billing system? I want to know why Prometheus is not suitable for a billing system.
The Prometheus overview page says
If you need 100% accuracy, such as for per-request billing, Prometheus is not a good choice as the collected data will likely not be detailed and complete enough.
I don't really understand 100% accuracy. Does it mean "the prometheus's monitoring data is not accurate"?
A: Prometheus prefers reliability over 100% accuracy, so there are tradeoffs where a tiny amount of data may be lost rather than taking out the whole system. This is fine for monitoring, but rarely okay when money is involved.
See also https://www.robustperception.io/monitoring-without-consensus/
|
Q: Why can we not use Prometheus as a billing system? I want to know why Prometheus is not suitable for a billing system.
The Prometheus overview page says
If you need 100% accuracy, such as for per-request billing, Prometheus is not a good choice as the collected data will likely not be detailed and complete enough.
I don't really understand 100% accuracy. Does it mean "the prometheus's monitoring data is not accurate"?
A: Prometheus prefers reliability over 100% accuracy, so there are tradeoffs where a tiny amount of data may be lost rather than taking out the whole system. This is fine for monitoring, but rarely okay when money is involved.
See also https://www.robustperception.io/monitoring-without-consensus/
|
stackoverflow
|
{
"language": "en",
"length": 113,
"provenance": "stackexchange_0000F.jsonl.gz:857284",
"question_score": "10",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44518575"
}
|
93264b6bd275e97ec9b951ce0f11dfacaacbd42c
|
Stackoverflow Stackexchange
Q: Loop simultaneously over two lists in R I have written a function that takes three arguments:
create.template <- function(t.list, x, y){
temp <- cbind(get(t.list[x]), get(t.list[y]), NA)
}
The output of this function is a data.frame with 11 columns and 17 rows.
Now I would like to create a loop over the function with two lists, one for x and one for y. Thereby
x.list <- list(1,2,3)
y.list <- list(4,5,6)
In the final step I would like to establish something like
for (x in x.list and y in y.list){
create.template(t.list, x, y)
}
and possibly combine the resulting dataframes (3 dataframes with 11 columns each) rowwise in one final dataframe.
I know that you can do this in Python with the zip() function and then append the results easily by append() and concatenate(), but I have not found an equivalent in R so far. Any help is highly appreciated!
A: We can get the values of multiple objects with mget, use either Reduce or do.call to cbind the list of vectors
Reduce(cbind, c(mget(ls(pattern = "\\.list")), NA))
Or
do.call(cbind, c(mget(c("x.list", "y.list")), NA))
|
Q: Loop simultaneously over two lists in R I have written a function that takes three arguments:
create.template <- function(t.list, x, y){
temp <- cbind(get(t.list[x]), get(t.list[y]), NA)
}
The output of this function is a data.frame with 11 columns and 17 rows.
Now I would like to create a loop over the function with two lists, one for x and one for y. Thereby
x.list <- list(1,2,3)
y.list <- list(4,5,6)
In the final step I would like to establish something like
for (x in x.list and y in y.list){
create.template(t.list, x, y)
}
and possibly combine the resulting dataframes (3 dataframes with 11 columns each) rowwise in one final dataframe.
I know that you can do this in Python with the zip() function and then append the results easily by append() and concatenate(), but I have not found an equivalent in R so far. Any help is highly appreciated!
A: We can get the values of multiple objects with mget, use either Reduce or do.call to cbind the list of vectors
Reduce(cbind, c(mget(ls(pattern = "\\.list")), NA))
Or
do.call(cbind, c(mget(c("x.list", "y.list")), NA))
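If you specifically want the zip-style pairing from the question, a hedged sketch using Map (base R's closest analogue to Python's zip), assuming create.template returns a data.frame:
# Pair x.list and y.list element-wise, call the function, then stack the results row-wise
results <- Map(function(x, y) create.template(t.list, x, y), x.list, y.list)
final <- do.call(rbind, results)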
|
stackoverflow
|
{
"language": "en",
"length": 181,
"provenance": "stackexchange_0000F.jsonl.gz:857313",
"question_score": "5",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44518654"
}
|
98455fb88130d429e55de015a18a0933331a2150
|
Stackoverflow Stackexchange
Q: How to self describe a REST api with Spring? I'm using spring-mvc to create servlets like:
@RestController
public class MyServlet {
@GetMapping("/test")
public MyRsp test(MyReq req) {
//...
}
}
Now if the user accesses the root of my app localhost:8080/my-app, it should show a list of GET and POST methods available. At best with possible input parameters, acceptable headers etc.
Question: is that possible with any spring framework, like HATEOAS?
I'd expect a framework to auto detect any @RestController and included methods.
Or would I have to create that overview page myself?
A: You should look into this.
To integrate it in Spring you can refer to this.
Swagger is one of the best frameworks for exposing RESTful APIs.
|
Q: How to self describe a REST api with Spring? I'm using spring-mvc to create servlets like:
@RestController
public class MyServlet {
@GetMapping("/test")
public MyRsp test(MyReq req) {
//...
}
}
Now if the user accesses the root of my app localhost:8080/my-app, it should show a list of GET and POST methods available. At best with possible input parameters, acceptable headers etc.
Question: is that possible with any spring framework, like HATEOAS?
I'd expect a framework to auto detect any @RestController and included methods.
Or would I have to create that overview page myself?
A: You should look into this.
To integrate it in Spring you can refer to this.
Swagger is one of the best frameworks for exposing RESTful APIs.
A: Swagger 2 is another option. Read the following to learn more about Swagger and how to set it up.
Setting Up Swagger 2 with a Spring REST API
You can also create a Swagger definition for your REST APIs, which can be used by clients to generate client classes.
The Swagger UI can also be used to test/invoke your APIs; Swagger provides a user interface where you can enter all the API inputs such as query params, path params, request body, and headers.
Sample Swagger UI
A: You can check this project Spring Restdocs (github), which allows you to generate ready to use REST documentation. It's officially maintained by Spring Team:
The primary goal of this project is to make it easy to document
RESTful services by combining content that's been hand-written using
Asciidoctor with auto-generated examples produced with the Spring MVC
Test framework. The result is intended to be an easy-to-read user
guide, akin to GitHub's API documentation for example, rather than the
fully automated, dense API documentation produced by tools like
Swagger.
The other option is to use Swagger, it supports bottom-up approach as well:
A bottom-up approach where you have an existing REST API for which you
want to create a Swagger definition. Either you create the definition
manually (using the same Swagger Editor mentioned above), or if you
are using one of the supported frameworks (JAX-RS, node.js, etc), you
can get the Swagger definition generated automatically for you.
Some examples of swagger are mentioned here: 1 2
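As a rough sketch of such a setup (assuming the springfox library; package names and versions may differ), the minimal Swagger 2 configuration often looks like this:
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import springfox.documentation.builders.PathSelectors;
import springfox.documentation.builders.RequestHandlerSelectors;
import springfox.documentation.spi.DocumentationType;
import springfox.documentation.spring.web.plugins.Docket;
import springfox.documentation.swagger2.annotations.EnableSwagger2;

@Configuration
@EnableSwagger2
public class SwaggerConfig {

    @Bean
    public Docket api() {
        // Scans every @RestController and exposes the generated spec plus the Swagger UI
        return new Docket(DocumentationType.SWAGGER_2)
                .select()
                .apis(RequestHandlerSelectors.any())
                .paths(PathSelectors.any())
                .build();
    }
}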
|
stackoverflow
|
{
"language": "en",
"length": 373,
"provenance": "stackexchange_0000F.jsonl.gz:857326",
"question_score": "5",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44518680"
}
|