Title: SQL: convert Where clause to Inner Join
Tags: sql;oracle;inner-join
Question: I am trying to rewrite a query by converting it to JOIN syntax, but it keeps failing with the same errors.
```SELECT TRIM(T007.CUENTA) AS CUENTA,
T007.IDENTCLI AS IDENTTIT,
T013.IDENTCLI AS IDENTADIC,
TRIM(T026.DESTIPT) AS DESTIPT,
T043.PRODUCTO, T043.SUBPRODU,
TRIM(T043.DESPROD) AS DESPROD,
T175.CODESTCTA,
TRIM(T175.DESESTCTA) AS DESESTCTA,
T043.LINEAPR
FROM MPDT007 T007, MPDT013 T013, MPDT043 T043, MPDT175 T175 ,MPDT026 T026
WHERE T007.CUENTA = 000005433752
AND T007.CUENTA = T013.CUENTA
AND T013.NUMBENCTA = 2
AND T013.CODENT = 0001
AND T026.INDTIPT = 3
AND T007.PRODUCTO = T043.PRODUCTO
AND T007.SUBPRODU = T043.SUBPRODU
AND T007.CODESTCTA = T175.CODESTCTA
AND T043.LINEAPR = T175.LINEA
AND T043.LINEAPR <> 0003
```
to
```SELECT TRIM(T007.CUENTA) AS CUENTA,
T007.IDENTCLI AS IDENTTIT,
T013.IDENTCLI AS IDENTADIC,
T043.PRODUCTO AS PRODUCTO,
T043.SUBPRODU AS SUBPRODU,
TRIM(T043.DESPROD) AS DESPROD
FROM MPDT007 T007
JOIN MPDT013 T013 ON (T013.CUENTA = T007.CUENTA AND T013.NUMBENCTA = 2 AND T013.CODENT = 0001)
JOIN MPDT043 T043 ON (T043.LINEAPR <> 0001 AND T043.PRODUCTO = T007.PRODUCTO AND T007.SUBPRODU = T043.SUBPRODU)
WHERE
T007.CUENTA = 000005433752
```
The result of the second query does not return any rows. I have tried multiple variations of the inner join (left/right) with no apparent solution.
Thanks for your comments.
Comment: Your queries are not the same because of the parentheses, which change the way things are evaluated. One of your queries is very wrong, but without having the data you're going to need to inspect the results you're getting to figure out which one it is that isn't correct.
Comment: Check your join; it's in the wrong format.
Here is another answer: Try checking your inner join; you put opening and closing parentheses around the conditions.
```SELECT TRIM(T007.CUENTA) AS CUENTA,
T007.IDENTCLI AS IDENTTIT,
T013.IDENTCLI AS IDENTADIC,
T043.PRODUCTO AS PRODUCTO,
T043.SUBPRODU AS SUBPRODU,
TRIM(T043.DESPROD) AS DESPROD
FROM MPDT007 T007
JOIN MPDT013 T013
ON (T013.CUENTA = T007.CUENTA AND T013.NUMBENCTA = 2 AND T013.CODENT = 0001) --This is your ERROR
JOIN MPDT043 T043
ON (T043.LINEAPR <> 0001 AND T043.PRODUCTO = T007.PRODUCTO AND T007.SUBPRODU = T043.SUBPRODU) --This is your ERROR
WHERE T007.CUENTA = 000005433752
```
try something like this:
```SELECT TRIM(T007.CUENTA) AS CUENTA,
T007.IDENTCLI AS IDENTTIT,
T013.IDENTCLI AS IDENTADIC,
T043.PRODUCTO AS PRODUCTO,
T043.SUBPRODU AS SUBPRODU,
TRIM(T043.DESPROD) AS DESPROD
FROM MPDT007 T007
JOIN MPDT013 T013
ON T013.CUENTA = T007.CUENTA
AND T013.NUMBENCTA = 2
AND T013.CODENT = 0001
JOIN MPDT043 T043
ON T043.LINEAPR <> 0001
AND T043.PRODUCTO = T007.PRODUCTO
AND T043.SUBPRODU = T007.SUBPRODU
WHERE T007.CUENTA = 000005433752
```
You just need to remove the opening and closing parentheses, because they make the conditions be treated as one group.
Here is another answer: You seem to be missing tables. I think this is the equivalent:
```SELECT . . .
FROM MPDT007 T007 JOIN
MPDT013 T013
ON T007.CUENTA = T013.CUENTA JOIN
MPDT043 T043
ON T007.PRODUCTO = T043.PRODUCTO AND
T007.SUBPRODU = T043.SUBPRODU JOIN
MPDT175 T175
ON T043.LINEAPR = T175.LINEA AND
T007.CODESTCTA = T175.CODESTCTA CROSS JOIN
MPDT026 T026
WHERE T007.CUENTA = '000005433752' AND
T013.NUMBENCTA = 2 AND
T013.CODENT = '0001' AND
T026.INDTIPT = 3 AND
T043.LINEAPR <> '0003';
```
Note that ```T026``` does not appear to have a ```JOIN``` condition, only a filtering condition.
I added single quotes for the comparisons to numbers that start with a zero.
|
Title: Specify build branch based on branch used in triggering pipeline Azure Devops
Tags: azure;azure-devops;azure-pipelines
Question: Say I have two pipelines, PL1 and PL2, where PL2 is triggered by PL1. If I run PL1 on the master branch, I want PL2 to also get triggered for the master branch. When PL1 is triggered on another branch, like releases/X.X.X, I want PL2 to also be triggered on the releases/X.X.X branch. Can I do that? Right now PL2 always uses master when it gets triggered.
Here is the accepted answer: I tried to replicate the scenario.
First pipeline:
```trigger:
- none
stages:
- stage: first
jobs:
- job: firstJob
continueOnError: false
steps:
- bash: echo "first from test branch"
```
Second Pipeline:
```resources:
pipelines:
- pipeline: firstPipeline
source: first-pipeline
trigger:
branches:
include:
- ${Build.SourceBranch}
stages:
- stage: second
jobs:
- job: secondJob
steps:
- bash: echo Test-branch
```
I tested these from two different branches, and each time the source code of the second pipeline was picked up based on the first pipeline's branch.
PS: Both of my pipeline YAML files are in the same repository.
Comment for this answer: Thank you, I had something similar but it wasn't working; this helped though!
|
Title: arm and thumb instruction set
Tags: arm
Question: How to differentiate arm instruction and thumb instruction? For example:
```add r1, r2, r3 ;add r2 and r3, then store the result in r1 register
```
How does the above instruction work in terms of arm and thumb instruction?
Comment: Are you asking how they are different? Or if they are different? Or how they do the addition? Your actual question is not clear to me. :-)
Here is another answer: Well, go to infocenter.arm.com and get the architectural reference manual for the architecture in question, or just get the ARMv7 manual (not the -M but the -A or -R), which will include all instruction encodings to date from ARMv4 to ARMv7, including thumb and the most mature thumb2 extensions. (You might need multiple architectural reference manuals and/or technical reference manuals, as the encoding of instructions is hit or miss in the arm manuals.)
Under the thumb instructions, look at the register-based ADD instruction. There is one encoding with three registers, Encoding T1, which is listed for all thumb variants (ARMv4T to the present: ARMv4T, ARMv5, ARMv6, ARMv7 and likely ARMv8).
Bits 15 down to 9 are 0b0001100, then three bits of rm, three bits of rn and three bits of rd (typically thumb instructions are limited to r0-r7, needing three bits to encode; thumb2 extensions and a few special thumb instructions allow higher-numbered registers, using four bits of encoding).
The instruction is listed as ADDS rd,rn,rm in the description. The S means save flags, which comes from the parent ARM instruction from which the thumb instruction was derived; for ARM instructions you have the choice to modify flags or not, while for thumb instructions you do not (thumb2 has a way to control this, but it has limitations for the add instruction).
ADDS rd,rn,rm
0001100 rm rn rd
So ADDS r1,r2,r3 would be this chunk of bits
0001100 011 010 001 = 0001100011010001 = 0001 1000 1101 0001 = 0x18D1
Looking at the ADD instruction in ARM mode, you start with a condition field; as you have written your question, this is an ALways, or 1110, pattern (always execute). Also as you have written your question, you wrote add, not adds, so flags are not saved and the s bit is zero in the encoding.
So for add rd,rn,shifter_operand we start off with the bit pattern 0b111000I01000, then four bits for rn, four bits for rd, and twelve bits (11 down to 0) for the shifter operand.
Yes, that is an I at bit position 25, not a one; the I is part of the shifter operand encoding.
Now go to the section of the manual that describes the shifter operand encoding. In the encoding that is just a register rm, bit 25 (the I bit) is zero, and bits 11 to 4 are zero, with bits 3 to 0 being rm. So for add rd,rn,rm:
1110 00 0 01000 rn rd 00000 000 rm
1110 00 0 01000 0010 0001 00000 000 0011 = 1110 0000 1000 0010 0001 0000 0000 0011 = 0xE0821003
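As a quick cross-check (my own sketch, not part of the original answer), a few lines of Python can pack the fields exactly as described above and reproduce both machine words:
```def encode_thumb_adds(rd, rn, rm):
    # ADDS rd,rn,rm : 0b0001100, then rm, rn, rd (three bits each)
    return (0b0001100 << 9) | (rm << 6) | (rn << 3) | rd
def encode_arm_add(rd, rn, rm):
    # cond=1110 (always), I=0, opcode=0100 (ADD), S=0,
    # rn and rd are four bits each, the shifter operand is just rm
    return (0b1110 << 28) | (0b0100 << 21) | (rn << 16) | (rd << 12) | rm
print(hex(encode_thumb_adds(1, 2, 3)))  # 0x18d1
print(hex(encode_arm_add(1, 2, 3)))     # 0xe0821003
```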
Now we can test this, take this program
```add r1,r2,r3
.thumb
add r1,r2,r3
```
call it add.s assemble then disassemble
```arm-none-eabi-as add.s -o add.o
arm-none-eabi-objdump -D add.o
```
and get
```Disassembly of section .text:
00000000 <.text>:
0: e0821003 add r1, r2, r3
4: 18d1 adds r1, r2, r3
```
which matches the hand encoding.
Now if you are trying to disassemble a chunk of bytes and you don't know what kind they are, that is a different story; it can be very difficult at best. Ideally you want to disassemble the whole binary by following the execution and mode changes (which you might not be able to figure out without simulating the execution). One clue is that ARM instructions generally use the ALways condition, which is 0xE at the beginning of the instruction, so if you see lots of 32-bit words of the form 0xExxxxxxx those are likely arm instructions, not data and not thumb instructions. Pure thumb will have a not-so-typical pattern, say 0x6xxx and 0x7xxx, but also a mixture of all other starting values. Thumb2 extensions can start on either halfword boundary and will have a more distinctive start pattern for 32-bit words, but because they are mixed with non-thumb2 instructions and not always aligned on 32-bit boundaries, thumb with or without thumb2 extensions is not so easy to visually isolate from data; only the ARM instructions are easy to visually isolate.
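As a rough sketch of that visual heuristic (my addition, not from the answer), you could count how many aligned 32-bit words begin with the ALways condition:
```import struct
def arm_word_ratio(blob):
    # Fraction of aligned little-endian 32-bit words whose top nibble is
    # 0xE (the ARM 'always' condition); high values suggest ARM code.
    n = len(blob) // 4
    words = struct.unpack("<%dI" % n, blob[:n * 4])
    return sum(1 for w in words if (w >> 28) == 0xE) / len(words) if words else 0.0
```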
Comment for this answer: +1, great answer. I was trying something different.. I was compiling add.s with "gcc -mthumb" and it still didn't create a thumb binary. Do you know why?
Comment for this answer: I just noticed something else to do diff between arm/thumb in elf files. If you `readelf -s add.o`, in .symtab you get $a or $t depending on the encoding, see http://sourceware.org/binutils/docs/as/ARM-Mapping-Symbols.html
Comment for this answer: notice the .thumb in my code that tells the assembler that the code that follows is thumb. .code 32 tells the assembler that the code is arm. There is also the common syntax stuff and other things you can look up on your own. Also notice that I used gnu assembler not gnu C compiler to assemble the assembly language. (as instead of gcc) even though gcc is going to pass it on to as, there may be pre-processing and the experience can be different than straight assembly.
Here is another answer: In practice, no reason to compile a library as arm unless you intentionally choose to make everything harder.
Switching between the arm and thumb modes takes some nanoseconds, it's hardware backed, and is by the way much faster than switching between the kernel and user mode you usually experience.
If you ask me why the whole set of Google's libraries is arm, I'll tell you that there's absolutely no reason, other than that they should keep things backward compatible and consistent.
|
Title: Set the value of second Bootstrap slider based on change the value of first bootstrap slider dynamically
Tags: jquery;twitter-bootstrap
Question: I have implemented two Bootstrap sliders in my project, and I want to change the value of the second slider dynamically when the first slider is slid, but it is not working.
How can I achieve this?
My HTML code:-
```<td><input id="ex1" type="text" value="" class="slider form-control" data-slider-min="0" data-slider-max="1500" data-slider-step="5" data-slider-value="1000" data-slider-orientation="vertical" data-slider-selection="after" data-slider-tooltip="show" data-slider-id="red"></td>
<td><input id="ex2" type="text" value="" class="slider form-control" data-slider-min="0" data-slider-max="2000" data-slider-step="5" data-slider-value="1200" data-slider-orientation="vertical" data-slider-selection="after" data-slider-tooltip="show" data-slider-id="blue"></td>
```
My jQuery code is as follows, but none of the following methods are working:
```$("#ex1").on("slide", function() {
$("#ex2").attr('data-slider-value', '200');
});
$("#ex1").on("slide", function() {
$("#ex2").val('200');
});
$("#ex1").on("slide", function() {
$("#ex2").attr('value', '200');
});
```
Comment: Does the event fire? E.g. test it with an alert for example.
Comment: This should help you: http://stackoverflow.com/questions/33658932/bootstrap-slider-set-value-issue
Comment: Well you should be able to remove it...
Comment: Yes @frankenapps, Event is working
Comment: Thanks @Frankenapps, it's working fine, but with this code there is one extra slider showing on my page.
Here is the accepted answer: Now I have fixed my problem like this:
```$("#ex1").on("slide", function() {
var minSliderValue = $("#ex2").data("slider-min");
var maxSliderValue = $("#ex2").data("slider-max");
$('#ex2').slider({
value : 0,
reversed:true,
formatter: function(value) {
return 'Current value: ' + value;
}
});
$("#aqua").hide(); // hide the duplicate slider based on its id
var val = Math.abs(parseFloat(ownesAmt) || minSliderValue); // ownesAmt is defined elsewhere in my page; parseFloat takes no radix argument
this.value = val > maxSliderValue ? maxSliderValue : val;
$('#ex2').slider('setValue', val);
});
```
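A more minimal version of the same idea (a sketch, relying on the bootstrap-slider ```slide``` event carrying the new value in ```slideEvt.value```):
```$("#ex1").on("slide", function(slideEvt) {
    // Mirror the first slider's value onto the second one.
    $("#ex2").slider('setValue', slideEvt.value);
});
```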
|
Title: SetInputToDefaultAudioDevice throws 'System.InvalidOperationException' occurred in Microsoft.Speech.dll
Tags: c#;speech-recognition;microsoft-speech-api
Question: Executing this code on Windows 10 Pro with microphone enabled throws an exception.
Any idea why?
```using Microsoft.Speech.Recognition;
...
static void Main(string[] args)
{
// Create a SpeechRecognitionEngine object for the default recognizer in the en-US locale.
using (
SpeechRecognitionEngine recognizer =
new SpeechRecognitionEngine(
new System.Globalization.CultureInfo("en-GB")))
{
// Create a grammar for finding services in different cities.
Choices locations = new Choices(new string[] { "office" });
Choices devices = new Choices(new string[] { "lights" ,"shades"});
Choices actions = new Choices(new string[] { "off" , "on", "up", "down"});
GrammarBuilder findServices = new GrammarBuilder("Jarvis");
findServices.Append(locations);
findServices.Append(devices);
findServices.Append(actions);
// Create a Grammar object from the GrammarBuilder and load it to the recognizer.
Grammar servicesGrammar = new Grammar(findServices);
recognizer.LoadGrammarAsync(servicesGrammar);
// Add a handler for the speech recognized event.
recognizer.SpeechRecognized +=
new EventHandler<SpeechRecognizedEventArgs>(recognizer_SpeechRecognized);
// Configure the input to the speech recognizer.
recognizer.SetInputToDefaultAudioDevice();
// throws:
// An unhandled exception of type 'System.InvalidOperationException' occurred in Microsoft.Speech.dll
// Cannot find the requested data item, such as a data key or value.
// Start asynchronous, continuous speech recognition.
recognizer.RecognizeAsync(RecognizeMode.Multiple);
// Keep the console window open.
while (true)
{
Console.ReadLine();
}
}
}
```
Microphone enabled:
Exception:
```'System.InvalidOperationException'
{"Cannot find the requested data item, such as a data key or value."}
at Microsoft.Speech.Recognition.RecognizerBase.SetInputToDefaultAudioDevice()
at Napoleon.Program.Main(String[] args) in C:\voice_recognition\Program.cs:line 42
at System.AppDomain._nExecuteAssembly(RuntimeAssembly assembly, String[] args)
at System.AppDomain.ExecuteAssembly(String assemblyFile, Evidence assemblySecurity, String[] args)
at Microsoft.VisualStudio.HostingProcess.HostProc.RunUsersAssembly()
at System.Threading.ExecutionContext.RunInternal(ExecutionContext executionContext, ContextCallback callback, Object state, Boolean preserveSyncCtx)
at System.Threading.ExecutionContext.Run(ExecutionContext executionContext, ContextCallback callback, Object state, Boolean preserveSyncCtx)
at System.Threading.ExecutionContext.Run(ExecutionContext executionContext, ContextCallback callback, Object state)
at System.Threading.ThreadHelper.ThreadStart()
```
Comment: Can you be more specific about the exception , a stack trace would be helpful as well.
Comment: Already asked http://stackoverflow.com/questions/32961817/microsoft-speech-recognition-setinputtodefaultaudiodevice-throws-exception and http://stackoverflow.com/questions/33318596/microsoft-speech-speechrecognitionengine-setinputtodefaultaudiodevice-method-t
Comment: You need to use System.Speech API, not Microsoft.Speech API to work with a microphone.
Comment: updated the question. not much in it.
Comment: @NikolayShmyrev if you post it as an answer I will accept it.
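For reference, a rough sketch of the change @NikolayShmyrev suggests: the System.Speech API mirrors the Microsoft.Speech one, so mainly the namespace changes (the dictation grammar below is just a placeholder for the question's custom grammar):
```using System;
using System.Speech.Recognition; // instead of Microsoft.Speech.Recognition
...
static void Main(string[] args)
{
    using (var recognizer = new SpeechRecognitionEngine(
        new System.Globalization.CultureInfo("en-GB")))
    {
        recognizer.LoadGrammar(new DictationGrammar()); // placeholder grammar
        recognizer.SpeechRecognized += (s, e) => Console.WriteLine(e.Result.Text);
        recognizer.SetInputToDefaultAudioDevice(); // works against the microphone here
        recognizer.RecognizeAsync(RecognizeMode.Multiple);
        Console.ReadLine();
    }
}
```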
|
Title: Replace Property-Setter Method Ptr
Tags: c#;pointers;replace;setter;methodinfo
Question: ```public delegate void SetProp(object obj);
void Main()
{
TestObj obj = new TestObj();
SetProp setPropDel = (SetProp)SetProp.CreateDelegate(typeof(SetProp),
obj, obj.GetSetPropDelegate().GetMethodInfo());
MethodInfo setPropMethod = setPropDel.GetMethodInfo();
///Replacing Count Set-Method Pointer with new Method
typeof(TestObj).GetProperty("Count").GetSetMethod().ReplaceMethodPtr(setPropMethod);
obj.Count = 1;
}
```
```public class TestObj
{
public SetProp GetSetPropDelegate() => SetPropValue;
private void SetPropValue(object obj) => Console.WriteLine(obj); ///<--- NullReferenceException obj is null.
public int Count { get; set; }
}
```
Hi, I was trying to replace the private set-method for the auto-property "Count" in my class TestObj.
The replacement itself works, because when I use the setter like 'obj.Count = 1' the new method is called.
My question is: Is it possible to pass a parameter into this new method?
I'd at least need the new value that is being allocated to the Count-Property. In this case: 1.
My goal is to make it possible to replace Set-Methods of automatic properties to raise an event, when the new Set-Method is being called, and also keep basic function of the Set-Method.
I hope it's clear what I want to achieve. I might've run into something impossible here, but I'm sure there are lots of people who have a much better understanding of what's going on.
Thanks for reading.
Here is another answer: I want to share how I solved this problem.
I wrote a whole library with more functionalities, like events that are being raised on set or get occasions. But for the answer I want to break it down into the simplest example.
```void Main()
{
MethodInfo targetMethod = typeof(TargetObj).GetProperty("Count").GetSetMethod();
MethodInfo injectMethod = typeof(InjectObj).GetMethod("SetCount");
targetMethod.InjectMethod(injectMethod);
var targetInstance = new TargetObj();
targetInstance.Count = 3;
}
public class TargetObj
{
public int Count { get; set; }
}
public class InjectObj
{
public void SetCount(int value)
{
GetBackingField(this, "Count").SetValue(this, value);
Console.WriteLine($"Count has been set to [{value}]");
}
}
public static FieldInfo GetBackingField(object obj, string propertyName) => obj.GetType().GetField($"<{propertyName}>k__BackingField", BindingFlags.Instance | BindingFlags.NonPublic);
```
If this code is run, the console-output is:
Count has been set to [3]
The possibilities using this approach are endless but I wanted to give a quick look in how I managed to achieve this.
Note that I did this example using Linqpad.
I don't want to lengthen the post unnecessarily, but I need to include the MethodInfo extension method "InjectMethod" for a full understanding of what is going on.
```public static class MethodInfoExtensions
{
public static void InjectMethod(this MethodInfo target, MethodInfo inject)
{
RuntimeHelpers.PrepareMethod(inject.MethodHandle);
RuntimeHelpers.PrepareMethod(target.MethodHandle);
unsafe
{
if (IntPtr.Size == 4)
{
int* inj = (int*)inject.MethodHandle.Value.ToPointer() + 2;
int* tar = (int*)target.MethodHandle.Value.ToPointer() + 2;
if (Debugger.IsAttached)
{
byte* injInst = (byte*)*inj;
byte* tarInst = (byte*)*tar;
int* injSrc = (int*)(injInst + 1);
int* tarSrc = (int*)(tarInst + 1);
*tarSrc = (((int)injInst + 5) + *injSrc) - ((int)tarInst + 5);
}
else
{
*tar = *inj;
}
}
else
{
long* inj = (long*)inject.MethodHandle.Value.ToPointer() + 1;
long* tar = (long*)target.MethodHandle.Value.ToPointer() + 1;
if (Debugger.IsAttached)
{
byte* injInst = (byte*)*inj;
byte* tarInst = (byte*)*tar;
int* injSrc = (int*)(injInst + 1);
int* tarSrc = (int*)(tarInst + 1);
*tarSrc = (((int)injInst + 5) + *injSrc) - ((int)tarInst + 5);
}
else
{
*tar = *inj;
}
}
}
}
}
```
I'd recommend using this kind of method replacement only for test purposes, or maybe as a sort of debugging tool.
I'm pretty sure implementing it in real release software would generate more problems than it solves.
|
Title: Detect which figure was closed with Matplotlib
Tags: python;matplotlib
Question: I'm using matplotlib embedded in a GUI application using the qt4 backend.
I need to store a list of figures that the user plots and keeps open, i.e. multiple figures can be plotted separately with different clicks of the plot button.
However when the user closes a figure I need to remove it from the list of figures.
How do I tell which figure is closed?
I am using the event handler to detect a figure has been closed but I cannot tell which one.
Here is some trivial example code:
```from __future__ import print_function
import matplotlib.pyplot as plt
import numpy as np
figs = []
figNum = len(figs)
def handle_close(evt):
evt.canvas.figure.axes[0].has_been_closed = True
print ('Closed Figure')
fig = plt.figure()
figs.append(fig)
ax = figs[figNum].add_subplot(1, 1, 1)
ax.has_been_closed = False
# fig2 = plt.figure()
# ax2 = fig2.add_axes([0.15, 0.1, 0.7, 0.3])
t = np.arange(0.0, 1.0, 0.01)
s = np.sin(2*np.pi*t)
line, = ax.plot(t, s, color='blue', lw=2)
# fig2 = plt.figure()
# figs.append(fig2)
# fig2.canvas.mpl_connect('close_event', handle_close)
fig.canvas.mpl_connect('close_event', handle_close)
plt.show()
print (ax.has_been_closed)
```
Comment: @tcaswell: Are you suggesting creating a list and appending figure numbers to the list? If so, then when closing the figure how would I know which figure number or label to pop from the list if I'm managing it myself?
Comment: If you are embedding I would suggest _not_ using pyplot and doing the figure state management your self. You can then use the standard qt singnals/slots for this sort of thing.
Comment: The qt window closed signal
Here is the accepted answer: The ```evt``` that is passed to your handler has a member ```canvas```, which in turn has a member ```figure```, which of course is the figure the event refers to.
So if you want to remove that figure from your list, you would do
```figs.remove(evt.canvas.figure)
```
If you want to get the figure number, you access
```evt.canvas.figure.number
```
and you could get the name from
```evt.canvas.figure._label
```
However, the leading ```_``` probably means you shouldn't rely on that.
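Putting those pieces together with the ```figs``` list from the question, the handler could look like this (a sketch):
```def handle_close(evt):
    fig = evt.canvas.figure   # the figure that was just closed
    print('Closed figure', fig.number)
    if fig in figs:
        figs.remove(fig)
```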
Comment for this answer: @Daniel can you please explain how that would be used to return the figure name/number or some sort of reference as to which figure was closed, i.e. there may be multiple figures open
Comment for this answer: @DanielSk Thanks so much. That's what I was looking for. It's not entirely clear in the documentation.
Comment for this answer: @RicDavimes: I hope this makes the answer more clear. If not, please update your question so we can see what exactly you are having trouble with.
Comment for this answer: Welcome to SO. Generally it is considered good practice to explain your answer and not just add links. You can refer to [this guide](http://stackoverflow.com/help/how-to-answer)
|
Title: Create wordpress shortcode for jQuery UI Tabs
Tags: php;jquery;wordpress;jquery-ui;shortcode
Question: I am trying to create a WordPress shortcode that allows users to create jQuery UI tabs. I found this great script online which claims to do it - however, it has some quite unwelcome results on my page.
Firstly, the script I am currently using is as follows:
```add_shortcode( 'tabgroup', 'jqtools_tab_group' );
function jqtools_tab_group( $atts, $content )
{
$GLOBALS['tab_count'] = 0;
do_shortcode( $content );
if( is_array( $GLOBALS['tabs'] ) ){
foreach( $GLOBALS['tabs'] as $tab ){
$tabs[] = '<li><a class="" href="#">'.$tab['title'].'</a></li>';
$panes[] = '<div class="pane"><h3>'.$tab['title'].'</h3>'.$tab['content'].'</div>';
}
$return = "\n".'<!-- the tabs --><ul class="tabs">'.implode( "\n", $tabs ).'</ul>'."\n".'<!-- tab "panes" --><div class="panes">'.implode( "\n", $panes ).'</div>'."\n";
}
return $return;
}
add_shortcode( 'tab', 'jqtools_tab' );
function jqtools_tab( $atts, $content )
{
extract(shortcode_atts(array(
'title' => 'Tab %d'
), $atts));
$x = $GLOBALS['tab_count'];
$GLOBALS['tabs'][$x] = array( 'title' => sprintf( $title, $GLOBALS['tab_count'] ), 'content' => $content );
$GLOBALS['tab_count']++;
}
```
However, the HTML output I am TRYING to achieve is this:
```<div class="tabs">
<ul>
<li><a href="#tabs-1">Tab 1</a></li>
<li><a href="#tabs-2">Tab 2</a></li>
<li><a href="#tabs-3">Tab 3</a></li>
</ul>
<div id="tabs-1">
<p>Content</p>
</div>
<div id="tabs-2">
<p>Content</p>
</div>
<div id="tabs-3">
<p>Content</p>
</div>
</div>
```
The shortcode php gives a slightly different output to what I need in order to make this work - here is my current outputted HTML:
```<!-- the tabs -->
<ul class="tabs">
<li><a class="" href="#">Tab 0</a></li>
<li><a class="" href="#">Tab 1</a></li>
<li><a class="" href="#">Tab 2</a></li>
</ul>
<!-- tab "panes" -->
<div class="panes">
<div class="pane">
<h3>Tab 0</h3>Content
</div>
<div class="pane">
<h3>Tab 1</h3>Content
</div>
<div class="pane">
<h3>Tab 2</h3>Content
</div>
</div>
```
Finally, the shortcode I am using looks like this:
```[tabgroup]
[tab title="tab1"]Content[/tab]
[tab title="tab2"]Content[/tab]
[tab title="tab3"]Content[/tab]
[/tabgroup]
```
My question is, how do I need to change my php code to make the outputted HTML look like the HTML needed to make the tabs work?
Comment: Please ask the question on wordpress.stackexchange.com; you will get the best and quickest answers there.
Here is the accepted answer: OK this is what you have to do.
```if( is_array( $GLOBALS['tabs'] ) ){
foreach( $GLOBALS['tabs'] as $k=>$tab ){
$tabs[] = '<li><a href="#tab-'.$k.'">'.$tab['title'].'</a></li>';
$panes[] = '<div id="tab-'.$k.'"><p>'.$tab['content'].'</p></div>';
}
$return = "\n".'<!-- the tabs --><div class="tabs"><ul>'.implode( "\n", $tabs ).'</ul>'."\n".'<!-- tab "panes" -->'.implode( "\n", $panes ).'</div>'."\n";
}
```
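Note that the generated markup still has to be initialized as jQuery UI tabs somewhere on the page; a minimal sketch:
```jQuery(function($) {
    $('.tabs').tabs(); // turn the shortcode output into jQuery UI tabs
});
```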
|
Title: How to gracefully shutdown android service with threads running in the service
Tags: android;android-intent;service;android-activity;message
Question: The Situation:
Note: This is a followup of a previous question that takes care of how to setup the handler/message communication between activity/service. See: link
Activity connects to a started Service using binding. Activity gets a local binder with a reference to a service Handler back. Activity and Service exchange Message objects through each other's Handlers. When the user is done with the App, the user signals the Service to quit (shut down the service). Within the service (=mainThread) another thread is running, the serviceThread. The serviceThread is capable of running more subthreads, fully controlled by the serviceThread. Communication is handled between the activity and the serviceThread, not through the mainThread of the service!!!
The Problem:
How do I gracefully shut down the Service when inside the Service several threads are running endlessly (until I signal a message saying: "pack your bags, go home!", aka EXIT_SERVICE)?
Solution candidates:
Scenario 1: from the activity side, send an EXIT_SERVICE message to the serviceThread that is running within the service (not the mainThread!). When all subthreads of the serviceThread have been cleaned up/stopped, send a message back to the activity indicating that it is now safe to call the stopService(Intent) method, which actually stops the service. The activity can now call finish() and the App is exited gracefully.
Scenario 2: from the activity side, send an EXIT_SERVICE message to the serviceThread, which will clean up all subthreads. After that is done, the serviceThread sends a message to the activity that the service is about to be shut down completely and, after that message is sent, sends a message to the mainThread handler of the service to actually shut down the service. The service receives the shutdown message, cleans up variables and calls stopSelf(int).
The service is stopped and the activity knows that it can stop too, WITHOUT calling stopService(Intent)! The App is exited gracefully.
Scenario 3: from the activity side, call the stopService(Intent) method, which will deliver the Intent to the service to stop the service. In the service this Intent is intercepted (I don't know if this is possible and how to do that yet... but assuming this can be done) before the actual service code that stops the service is executed. Before the service actually stops, other code is executed first which cleans up the threads by sending a EXIT_SERVICE message to the serviceThread. After the serviceThread has cleaned up, it sends a message back to the mainThread (the service itself) and the code continues to execute the normal code that was normally executed when the Intent to stop the service wasn't intercepted. The App is exited gracefully.
So, I have three options on how to gracefully stop the Service. The problem is which scenario is the "best" (less error prone, quickest in shutting down, easiest to implement). For example: what happens when the activity is destroyed because the user switched portrait/landscape mode right at the moment the "stop service" message or Intent was sent?
I think scenario 3 is a nice solution, because it doesn't need a lot of extra coding on the activiy side, only stopService(Intent) and finish(). The service can also be in the process of stopping while the GUI is already gone. Only thing is how to intercept the stop Intent and act upon that signal.
Please share your thoughts....
Comment: i would opt on #3.... or 2
Comment: maybe you should override onDestroy, see its documentation?
Comment: they are right: "don't call us, we will call you"; in onDestroy you should clean your threads up, and afaik it's an async operation (I mean the cleaning)
Comment: sounds good (if it works as expected of course)
Comment: hmm if you join main thread with service thread you can end up with ANR but i am not sure if it applies to services
Comment: i didnt work with services that much that i could answer your questions, sorry, but there is something you get it wrong: super.onDestroy does nothing - its empty method, see Service#onDestroy implementation
Comment: That's weird: can't seem to locate where stopService(Intent) is leading to in code, only Context.stopService() which is abstract. Can't find an implementation in Eclipse Type Hierarchy for stopService(Intent)? If I can't figure out option #3, then I go for #2 because "Stop the service, if it was previously started. This is the same as calling stopService(Intent) for this particular service.", see [link](http://developer.android.com/reference/android/app/Service.html#stopSelf%28%29)
Comment: I understand that I could call onDestroy(), but it doesn't wait for all the threads to finish on time. That means the service is already gone before the threads have cleaned up. Or doesn't that matter? I could wait with calling super.onDestroy() so that I "buy some time" to finish/cleanup threads, but the docs say: "... Do not call this method directly." It doesn't say why that is. Another thing is the "public **final** void stopSelf ()". it's final, so it has to be done in onDestroy(). Is there a problem with delaying the onDestroy method before I call the super.onDestroy()?
Comment: That means the following: I receive a msg with an EXIT_SERVICE int, handle the message in the handleMessage() call stopSelf() and return true in the handleMessage() method. The android code takes over from there: it will call some internal code and then reaches the overridden onDestroy() in my service. I send msg to serviceThread "EXIT_ALL_THREADS" which takes care of msg being sent to all subthread and waits for them to respond with "I_AM_GONE" msg. Then the serviceThread knows it can safely exit the while(!cancelled) endless-loop by setting cancelled=true. Something like that?
Comment: The only thing that "worries" me is that the threads are still trying to cancel and the mainthread is already finished (super.onDestroy() is async). I hope not that the process in which the service runs is killed by android because the mainthread has stopped running while other threads (that do not sent a msg to the mainthread) are still trying to stop. Is that possible?
Comment: I could of course call stopService(Intent) also instead of sending an EXIT_SERVICE msg to the service/serviceThread. They both end up calling stopSelf().
Comment: Shouldn't I join the mainthread with the serviceThread together: one waits for the other, so that when the serviceThread ends, mainThread ends and NOT vice versa.
Comment: Good thinking! Services also run on the ui-thread == main thread. So I guess that is not an option. Can't I just NOT finish in the onDestroy with a super.onDestroy() call, but set a boolean to false, and when the serviceThread then sends an I_AM_GONE msg to the mainthread, I call the stopSelf() method again, which this time executes for the second time the onDestroy() in my service code that now also executes the if (!cancelled) { super.onDestroy() } code, making the service REALLY stop (gracefully and 100% sure in the correct order)??? What if stopSelf() implementation changes over time?
Comment: Grr. assumptions...I do that all the time (thinking how I would implement it instead of how it is implemented). So that is not going to work. When I have called stopSelf() there is no way back (it seems) because super.onDestroy() docs say: Upon return, there will be no more calls in to this Service object and it is effectively dead. Just assume that process is still alive even when mainthread is already dead?
Comment: I think I found my answer: "**In cases like this, stopService() or stopSelf() does not actually stop the service until all clients unbind.**" (see [link](http://developer.android.com/guide/components/services.html#Lifecycle)). This is good news, because I bind to the service from the MainActivity, call stopService(Intent) from MainActivity (= last activity before user exits), service enters onDestroy(), in that method I clean up my thread mess, service no longer useful (it's still there, but stopped). I call finish() in MainActivity. activity unbinds, destroys --> service gone too.
Comment: Maybe I can even send back ONE last message from serviceThread to activity (both still "in the air") that it is safe to finish the activity. That way, I know everything is cleaned up in the service. When the last message is sent, I can't communicate any longer with the serviceThread from the activity because that one is gone too and there is no need for communicating any longer. So I guess I just found my solution :)
|
Title: Use @Path-Annotation from abstract Superclass in Swagger
Tags: java;resteasy;wildfly;swagger
Question: I am currently testing the integration of swagger in my webframework. For better understanding, I will first describe the architecture a little bit:
We have a basic web-api as a maven project which provides basic webservices based on Generic-Types. Then I have an API maven project where an abstract webservice class extends the webservice described above, provides the generic types (timeseries in my case) and adds a few additional webservices. And third, I have my example implementation where a webservice class with a @Path-Annotation on class level extends the abstract class from the API and hence gets the Paths from the superclasses (create, get and other standard operations from the first framework, and context-relevant operations from the second API).
For documentation of this API I want to use swagger. I added the needed dependencies for swagger-jaxrs and swagger-annotations (both in version 1.5.3-M1) and put the BeanConfig in my Application. I also get the generated API if, and only if, I annotate my implementing classes' methods with @Path-Annotations. The problem with this is that I can not access my API, because Wildfly can't resolve the method to use (because I declared the @Path at two different levels, I guess).
So my question is: is such a setup possible with swagger, or do I have to have @Path-Annotations at the level of the @ApiX-Annotations of swagger?
Side note: the setup is intended so that the API has the given paths as a MUST, so you can change the implementations but don't have to change the clients (e.g. for different databases or business models)
Thanks for any answers :)
EDIT: OK, so I found my initial error: if I specify the path at the method in the subclass, I need to specify all other jax-rs annotations as well. But my question stays the same: can I get swagger to use the jax-rs annotations of the superclasses in any way?
|
Title: How to check each header file includes required include files?
Tags: c++;cmake;dependencies;include
Question: I'm developing my application using C++ and cmake.
I'd like to check that each C++ header file includes its required include files correctly.
Here is an example:
a.hpp
```inline void func_a() {
}
```
b.hpp
```// #include "a.hpp" is missing
inline void func_b() {
func_a();
}
```
main.cpp
```#include "a.hpp"
#include "b.hpp"
int main() {}
```
Demo: https://wandbox.org/permlink/kZqoNHMYARIB3bc1
b.hpp should include a.hpp. Let's say b.hpp is missing the include of a.hpp. If main.cpp includes a.hpp before b.hpp, no compile error occurs. If the include order is the opposite, a compile error occurs.
I'd like to check this kind of problem.
I'm using flycheck on Emacs. It checks for this problem well. I'd like to add some checking mechanism to my cmake build system.
For example, if I execute ```make depcheck```, then compile error is detected.
I think that if I set up a cmake target that compiles all header files individually but does not link, the expected compile error would be reported.
I couldn't find how to set that up, so far.
Is there any way to do that? Or other ways to achieve the goal ?
My header file inclusion policy:
Each header file should include the header files that contain the required elements. In other words, each header file should compile individually.
What I want to achieve:
I want to know a way to automatically detect that b.hpp is missing `#include "a.hpp"` with tool assistance (by "tool" I mean something other than the editor). I guess that cmake could do that. I'm trying to find the way.
Comment: try https://include-what-you-use.org/
Comment: @AlanBirtles, thank you. I will check it.
Comment: @VTT, I guess that flycheck compiles the currently opened file directly in the background. It doesn't compile from main.cpp. So it can detect that func_a() is undeclared in b.hpp.
Comment: @RobertAndrzejuk, My goal is to check for missing include files, not to decide what the best inclusion rule is.
Comment: related: https://stackoverflow.com/q/3644293/4770166 https://stackoverflow.com/q/41214159/4770166
Comment: *"I'm using fly-check on emacs. It checks this problem well."* - are you sure that it does? How does it deal with problem of indirect includes (when `b.hpp` includes `c.hpp` that includes `a.hpp`) and with problem of private includes (when `a.hpp` must not be included directly and `c.hpp` must be included instead)?
Comment: Possible duplicate of [Google C++ Style Guide include order](https://stackoverflow.com/questions/54347804/google-c-style-guide-include-order)
Here is the accepted answer: @StoryTeller 's answer
```
Conventional wisdom is to add source files to every header.
```
is an appropriate way to achieve the goal. However, it requires adding many source files, which is annoying work, especially since I am developing a header-only library.
How can that process be automated?
I found a way to check for missing include files with cmake. The strategy is to compile each header file individually and directly.
Here is CMakeLists.txt
```cmake_minimum_required(VERSION 3.8.2)
project(test_checker)
add_custom_target(chkdeps)
file(GLOB HDR_ROOT "*.hpp")
FOREACH (HDR ${HDR_ROOT})
message(STATUS "${HDR}")
get_filename_component(HDR_WE ${HDR} NAME_WE)
SET(CHK_TARGET "${HDR_WE}.chk")
add_custom_target(
${CHK_TARGET}
COMMAND ${CMAKE_CXX_COMPILER} -c ${HDR}
VERBATIM
)
add_dependencies(chkdeps ${CHK_TARGET})
ENDFOREACH ()
```
To check for missing include files, execute ```make chkdeps```.
In order to only compile, I use add_custom_target. The custom target name is ```chkdeps``` (check dependencies). This is the umbrella target for checking the dependencies of all header files.
I get the list of *.hpp files using ```file(GLOB HDR_ROOT "*.hpp")```. For each file found, I add a compile-only custom target using add_custom_target.
I add the extension ```.chk``` to avoid name conflicts. For example, if the file name is ```a.hpp``` then the target name is ```a.chk```.
I execute the COMMAND ```${CMAKE_CXX_COMPILER}``` with the ```-c``` option, which compiles without linking. I have only tested this cmake code on Linux. I know that setting a compiler option directly is not good for cross-platform development; cmake might provide a cross-platform compile-only mechanism, but I couldn't find it, so far.
Then I make ```chkdeps``` depend on each target using add_dependencies. Due to these dependencies, when I execute ```make chkdeps```, all custom targets (```a.chk``` and ```b.chk```) run.
When I run ```make chkdeps```, I get the expected error "'func_a' was not declared in this scope", as follows.
```make chkdeps
Built target a.chk
/home/kondo/work/tmp/fly_check/b.hpp: In function 'void func_b()':
/home/kondo/work/tmp/fly_check/b.hpp:3:5: error: 'func_a' was not declared in this scope; did you mean 'func_b'?
3 | func_a();
| ^~~~~~
| func_b
make[3]: *** [CMakeFiles/b.chk.dir/build.make:57: CMakeFiles/b.chk] Error 1
make[2]: *** [CMakeFiles/Makefile2:78: CMakeFiles/b.chk.dir/all] Error 2
make[1]: *** [CMakeFiles/Makefile2:113: CMakeFiles/chkdeps.dir/rule] Error 2
make: *** [Makefile:131: chkdeps] Error 2
```
Here is another answer: Conventional wisdom is to add source files to every header. Even if ```b.cpp``` includes only this line:
```#include "b.hpp" // Note, this should be the first include
```
That way, you can compile every cpp file in isolation, and a successful compilation means the corresponding header is self-contained.
Of course, if you have an implementation file already, then moving the corresponding header to be included first goes towards ensuring that.
Comment for this answer: @VTT - One can find chinks in the armor of every method. It's still conventional wisdom that works for non-pathological cases.
Comment for this answer: @StoryTeller, thank you for the answer. But I'm looking for a way that automatically find these kind of missing inclusion. Even if main.cpp successfully compiled by accident.
Comment for this answer: I believe this is what naïve include checkers do. Successful compilation does not actually mean that file includes necessary include files, for example lack of config include or lack of include of some template specialization will cause compilation to yield incorrect result so the header checked is not actually self-contained.
|
Title: Invalid argument: TypeError: object of type <class 'numpy.float64'> cannot be safely interpreted as an integer. Traceback (most recent call last):
Tags: python;tensorflow;google-colaboratory
Question: I am getting the above error while training the model.
I am training it on Google Colab.
tensorflow version = 1.15
numpy version = 1.18.0
```!python3 object_detection/model_main.py \
--pipeline_config_path=/content/models/research/object_detection/samples/configs/ssd_mobilenet_v2_coco.config \
--model_dir=training/
```
I am totally new to tensorflow and not sure how to solve it. Can anyone please help me?
This is the error:
```Traceback (most recent call last):
File "/usr/local/lib/python3.6/dist-packages/numpy/core/function_base.py", line 117, in linspace
num = operator.index(num)
TypeError: 'numpy.float64' object cannot be interpreted as an integer
```
Comment: Post the code where error occurred instead of giving link to notebook
Comment: @Rei Moriaty, Is your issue resolved now? Else, can you share the code to reproduce your issue so that we can help you. Thanks!
Comment: @TensorflowWarriors Yeah the problem has been solved. I changed my numpy version to 1.17.0.
Here is another answer: Providing the solution here (Answer Section), even though it is present in the comments section, for the benefit of the community.
Downgrading the ```numpy``` version from ```1.18.0``` to ```1.17.0``` has resolved the problem.
```%tensorflow_version 1.x
import tensorflow as tf
print(tf.__version__)
!pip install numpy==1.17.0
```
Here is another answer: Find where the linspace command is used in the code; the ```num``` argument needs to be passed as an integer, as shown below:
```np.linspace(0,1,int(10.))
```
Downgrading may not be required if one finds where linspace is used.
|
Title: How to disable conversion when syncing iPod with Banshee
Tags: conversion;mp3;ipod-touch;banshee
Question: Banshee starts a conversion process whenever I copy some songs to my iPod touch. How can I disable that? I just want it to copy the files to the player without conversion.
Here is another answer: For me too, Banshee always performs an mp3-to-mp3 conversion when I copy and paste mp3 files to my iPod nano 4th gen. Playlist sync also doesn't work for me: it hangs.
So I use Banshee to manage my collection and create playlists; when I want to sync with the iPod, I export my playlist to .m3u, then close Banshee and use gtkpod to sync my iPod:
On gtkpod:
Click on your iPod
Right click add playlist to device
Right click on save
Right click on eject
Unmount the device if it is already connected
I have gtkpod localized, so the menu labels can be different. Also, this is my first try using an iPod with Linux, so the procedure may not be exact, but it works for me.
|
Title: Violation of PRIMARY KEY constraint in Entity Framework Core
Tags: c#;entity-framework;blazor-server-side;.net-5;ef-core-5.0
Question: I have a .NET5 Blazor Server application using EFCore5.
I have two entities that have a many-to-many relationship with each other.
Let's call them 'Books' and 'Tags'.
EFCore provides the BookTag join table.
On my page, I have a form that is populated with the book details and has a multi select box for the tags.
I include the tags when I get the book:
```context.Books.Include(x=>x.Tags);```
In debug mode, I see the Book item has several tags. But when I update some details of the book and save the form, calling ```context.Books.Update(book);``` I get an error because of duplicate keys in the BookTag join table.
It seems EFCore tries to insert entries in that table, but it should do a merge or insert/update.
I've read numerous articles about the many-to-many but all of them just show how to retrieve data. I can't find an example of how to update.
Edit:
I'm using the GenericRepository as explained by @carl-franklin in his excellent BlazorTrain series.
Edit2:
Here is some relevant code, I'm now not using the GenericRepo:
Startup.cs
```services.AddDbContextFactory<ApplicationDbContext>(options => options.UseSqlServer(Configuration.GetConnectionString("DbConnection")));```
ApplicationDbContext.cs has only the ```DbSet```s.
Book.cs
```[Key]
public string Code { get; set; }
public string Name { get; set; } = "";
public virtual ICollection<Tag> TagCollection { get; set; }
```
Tag.cs
```[Key]
public string Code { get; set; }
public string Name { get; set; } = "";
public virtual ICollection<Book> BookCollection { get; set; }
```
Book.razor.cs
```[Inject] private IDbContextFactory<ApplicationDbContext> contextFactory { get; set; }
private ApplicationDbContext context { get; set; }
protected IEnumerable<Book> BookList;
protected override async Task OnInitializedAsync()
{
context = contextFactory.CreateDbContext();
context.ChangeTracker.QueryTrackingBehavior = QueryTrackingBehavior.NoTracking;
BookList = await context.Books.Include(x => x.TagCollection).ToListAsync();
}
private async Task UpdateBook(Book book)
{
// Called after submitting form
context.Books.Update(book);
await context.SaveChangesAsync();
}
```
Even if I don't change anything and just update, I get the duplicate error:
```Microsoft.EntityFrameworkCore.DbUpdateException: An error occurred while updating the entries. See the inner exception for details. ---> Microsoft.Data.SqlClient.SqlException (0x80131904): Violation of PRIMARY KEY constraint 'PK_BookTag'. Cannot insert duplicate key in object 'dbo.BookTag'```
In UpdateBook() I see the book has tags, as expected. It looks like EFCore is inserting them in the join table without checking if they already exist.
Comment: Show us your code.
Comment: `Violation of PRIMARY KEY constraint 'PK_BookTag'. Cannot insert duplicate key in object 'dbo.BookTag'` -- Kinda says it all. Not sure why you're turning off change tracking; that's what makes all this work. `Update` does not mean `Upsert` or `Merge`.
Comment: When you initially read data from database, add `.AsNoTracking()`.
Comment: Thanks Ergis. I'm already doing that. In fact I'm doing it on the context level: `context.ChangeTracker.QueryTrackingBehavior = QueryTrackingBehavior.NoTracking;`
Comment: Thanks @RobertHarvey I've updated my post and added relevant code.
Comment: Change tracking was turned off in the GenericRepo I got from BlazorTrain. When in my extracted code I don't turn it off my problems disappear and I can add the tags without errors. As expected. @RobertHarvey If you turn your comment into an answer I'll accept it.
Here is the accepted answer: Change tracking must be turned on for this to work. Otherwise, you have to track the changes yourself; Entity Framework thinks you're adding a new record, but the key is the same as an existing record, which is why you're getting the error.
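Applied to the question's code, a minimal sketch of the tracked approach (the only change is dropping the NoTracking setting, so the context can diff the join table on save):
```protected override async Task OnInitializedAsync()
{
    context = contextFactory.CreateDbContext();
    // No QueryTrackingBehavior.NoTracking here: the entities stay tracked.
    BookList = await context.Books.Include(x => x.TagCollection).ToListAsync();
}
private async Task UpdateBook(Book book)
{
    // 'book' came from the tracked query above, so the change tracker
    // already knows which tags were added or removed; Update() is not needed.
    await context.SaveChangesAsync();
}
```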
|
Title: Return date and time in ("YYYY-MM-DD HH:mm") format when using moment.js (timezone)
Tags: javascript;angularjs;datetime;timezone;momentjs
Question: I am using moment-timezone so I can convert from selected timezone to timezone of a client.
I wasn't able to implement it in a better way than this:
```convertSelectedTimeZoneToClients() {
let timeZoneInfo = {
usersTimeZone: this.$rootScope.mtz.tz.guess(),
utcOffset: this.formData.timeZone.offset,
selectedDateTime: this.toJSONLocal(this.formData.sessionDate) + " " + this.formData.sessionTime
};
let utcTime = this.$rootScope.mtz.utc(timeZoneInfo.selectedDateTime).utcOffset(timeZoneInfo.utcOffset).format("YYYY-MM-DD HH:mm");
let convertedTime = this.$rootScope.mtz.tz(utcTime, timeZoneInfo.usersTimeZone).format("Z");
return convertedTime;
}
```
So basically I am using the ```guess()``` function (```usersTimeZone: this.$rootScope.mtz.tz.guess()```) to find out the timezone from the browser.
Then I get values from the datetime picker and dropdown and convert them to a UTC value by using utcOffset.
At the end I want to convert that UTC value to the user's timezone value.
I get a moment object in which _d represents the correct value after conversion. I have tried adding a bunch of different .format() patterns on the convertedTime variable, but I am not able to retrieve the time in this format: "YYYY-MM-DD HH:mm". I guess it works differently than when using the .utcOffset() function.
Can anybody help me with this?
Comment: As general rule when using moment: all properties starting with `_` (like `_d`) are for [internal use](http://stackoverflow.com/a/28132227/4131048) and should not be used. Note that the code you showed is different from the code in the screenshot. Finally I think that could be useful to know what `toJSONLocal` returns (what is the value of `timeZoneInfo.selectedDateTime`?)
Here is the accepted answer: You don't need to guess the client time zone to convert to local time. Just use the ```local``` function.
For example:
```moment.tz('2016-01-01 00:00', 'America/New_York').local().format('YYYY-MM-DD HH:mm')
```
For users located in the Pacific time zone, this converts from Eastern to Pacific and you get an output string of ```"2015-12-31 21:00"```. For users in other time zones, the output would be different, as expected.
You don't need to format to a string and re-parse it, or manually manipulate the UTC offset either. That is almost never warranted.
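Applying that to the helper from the question, a sketch (assuming the selected time zone is available as an IANA zone name such as "America/New_York" — the hypothetical ```timeZone.name``` below — rather than only as an offset) could look like:
```convertSelectedTimeZoneToClients() {
    let selected = this.toJSONLocal(this.formData.sessionDate) + " " + this.formData.sessionTime;
    // timeZone.name is assumed to hold an IANA zone id (not in the original code).
    return this.$rootScope.mtz.tz(selected, this.formData.timeZone.name)
        .local()
        .format("YYYY-MM-DD HH:mm");
}
```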
Comment for this answer: Thanks for the answer. Just one more question. When I am passing this value: 'America/New_York' How can I find it out? That is why I was using .guess(), only to find out the timezone of the user. @MattJohnson
Comment for this answer: I don't understand what you're asking. In the example I gave, `America/New_York` is the *source* time zone - the time zone that the given value is "in". You wouldn't want to guess that, or the whole thing would be no-op.
|
Title: Having CMake build, but not install, an external project
Tags: build;cmake;installation
Question: I'm trying to use this external project in a CMake project of mine; the external project is also a CMake project.
Now, the external project produces static libraries and a bunch of headers; but I don't want those to be installed anywhere, I just need them to build stuff as part of my main project (and the files are not necessary after build and/or install of the main project).
How do I get CMake to "fetch, build, but not install" an external project? I was thinking I might hackishly do this by forcing an empty install command, but I'm sure there's a better, more elegant way.
Comment: You are aware of CMake's command ExternalProject, right? https://cmake.org/cmake/help/latest/module/ExternalProject.html
Comment: @usr1234567: Of course. That's what I'm using.
Comment: @Tsyvarev: Because CMake will still try to "install" the project. I want it to not try that.
Comment: "I was thinking I might hackishly do this by forcing an empty install command, but I'm sure there's a better, more elegant way." - Empty `INSTALL_COMMAND` option is a true way for disable install step. Why do you think of it as a "hackish"?
Comment: ExternalProject's documentation says that "Passing an empty string as the ```INSTALL_COMMAND``` makes the install step do nothing". So there are unlikely to be other ways.
Here is the accepted answer: As mentioned by @Tsyvarev, to skip the install of an external project, you can simply pass the argument ```INSTALL_COMMAND ""```; this is definitely the recommended approach.
For example:
```ExternalProject_Add(Foo
GIT_REPOSITORY "git://github.com/Foo/Foo"
GIT_TAG "123456"
SOURCE_DIR ${CMAKE_BINARY_DIR}/Foo
BINARY_DIR ${CMAKE_BINARY_DIR}/Foo-build
CMAKE_CACHE_ARGS
-DFOO_ENABLE_BAR:BOOL=1
INSTALL_COMMAND ""
)
```
Then you would configure the dependent project with ```-DFoo_DIR:PATH=${CMAKE_BINARY_DIR}/Foo``` so that ```find_package(Foo REQUIRED)``` works as expected. For this to work, the assumption is that the external project provides a working config file for the build tree.
To learn more about config files, see Correct way to use third-party libraries in cmake project
|
Title: SwiftUI doesn't use auto layout, how to create a unique size interface for all devices?
Tags: ios;swiftui;autolayout;widget
Question: I haven't used SwiftUI before and am trying to create a medium widget, but I can't recreate a copy of this widget from another program (the YouTube Music medium widget). My four cells have different edge margins on different screens, and I don't know how to fix these margins because SwiftUI doesn't have Auto Layout. I posted the code of my widget below; if somebody knows what I did wrong, please correct me.
Screenshots of my widget and of the widget I want to recreate:
Code of my widget:
```import WidgetKit
import SwiftUI
import Intents
import Foundation
struct Provider: IntentTimelineProvider {
func placeholder(in context: Context) -> SimpleEntry {...}
func getSnapshot(for configuration: ConfigurationIntent, in context: Context, completion: @escaping (SimpleEntry) -> ()) {...}
func getTimeline(for configuration: ConfigurationIntent, in context: Context, completion: @escaping (Timeline<Entry>) -> ()) {...}
}
struct SimpleEntry: TimelineEntry {...}
struct WidgetTestEntryView : View {
var entry: Provider.Entry
var deeplinkURLFirst: URL {
URL(string: "\(WIDGET_DEEP_LINK)0")!
}
let iconSize: CGFloat = 75.0
var widgetLabel = "Favourites"
let mainColor = Color(red: 0.218, green: 0.215, blue: 0.25)
var body: some View {
VStack(spacing: 0) {
GeometryReader { geo in
HStack(spacing: geo.size.width * 0.4){
Text(widgetLabel).foregroundColor(.white).font(.system(size: geo.size.width * 0.045, weight: .semibold, design: .default)).offset(y: 2)
Image("Label2").resizable().frame(width: geo.size.width * 0.15, height: 15, alignment: .center)
}.frame(maxWidth: .infinity, maxHeight: geo.size.height * 0.7).background(Color.black).offset(y: -5)
}
GeometryReader { geo in
HStack(spacing: geo.size.width * 0.1 / 7) {
Link(destination: deeplinkURLFirst) {
ZStack {
RoundedRectangle(cornerRadius: 10).foregroundColor(mainColor).frame(width: iconSize, height: iconSize)
Image(base64String:"")?.resizable().frame(width: iconSize, height: iconSize)
.cornerRadius(10)
.background(mainColor).cornerRadius(10)
}.frame(width: iconSize, height: iconSize)
}
Link(destination: deeplinkURLFirst) {
ZStack {
RoundedRectangle(cornerRadius: 10).foregroundColor(mainColor).frame(width: iconSize, height: iconSize)
Image(base64String: "")?.resizable().frame(width: iconSize, height: iconSize)
.cornerRadius(10)
.background(mainColor).cornerRadius(10)
}.frame(width: iconSize, height: iconSize)
}
Link(destination: deeplinkURLFirst) {
ZStack {
RoundedRectangle(cornerRadius: 10).foregroundColor(mainColor).frame(width: iconSize, height: iconSize)
Image(base64String: "")?.resizable().frame(width: iconSize, height: iconSize)
.cornerRadius(10)
.background(mainColor).cornerRadius(10)
}.frame(width: iconSize, height: iconSize)
}
Link(destination: deeplinkURLFirst) {
ZStack {
RoundedRectangle(cornerRadius: 10).foregroundColor(mainColor).frame(width: iconSize, height: iconSize)
Image(base64String: "")?.resizable().frame(width: iconSize, height: iconSize)
.cornerRadius(10)
.background(mainColor).cornerRadius(10)
}.frame(width: iconSize, height: iconSize)
}
}.frame(width: .infinity, height: geo.size.height * 0.9, alignment: .leading).background(Color(red: 0.118, green: 0.118, blue: 0.15)).border(Color.white, width: 0).position(x: geo.size.width * 0.5, y: 20)
}
}.frame(maxWidth: .infinity, maxHeight: .infinity).background(Color(red: 0.118, green: 0.118, blue: 0.15)).onAppear {
print("all good")
}
}
}
@main
struct WidgetTest: Widget {
let kind: String = WIDGET_PROJECT_NAME
var body: some WidgetConfiguration {
IntentConfiguration(kind: kind, intent: ConfigurationIntent.self, provider: Provider()) { entry in
WidgetTestEntryView(entry: entry)
}
.configurationDisplayName("Favourites")
.description("Fast access to favoutires cagetory.")
.supportedFamilies([.systemMedium])
}
}
struct WidgetTest_Previews: PreviewProvider {
static var previews: some View {
WidgetTestEntryView(entry: SimpleEntry(date: Date(), configuration: ConfigurationIntent()))
.previewContext(WidgetPreviewContext(family: .systemMedium))
}
}
extension Image {
init?(base64String: String) {
guard let data = Data(base64Encoded: base64String, options: .ignoreUnknownCharacters) else { return nil }
guard let uiImage = UIImage(data: data) else { return nil }
self = Image(uiImage: uiImage)
}
}
```
Comment: A suggestion? Change the title of your question. I was going to comment that SwiftUI doesn't use AutoLayout, but I saw that you already know that. It's probably a language gap, but as is, your question reads like you want to use AutoLayout in SwiftUI, not that you are having an issue related to the padding of views in an `HStack`. Good luck!
Here is the accepted answer: SwiftUI does the auto layout for you; it is different from UIKit in that you are supposed to support all screens at once, which is why the spacing changes, but you can set "rules":
```//Prevents the repeating of code
struct ImageView: View {
var deeplinkURLFirst: URL
let mainColor: Color
//Add another parameter for the image info; I couldn't reproduce that
var body: some View {
Link(destination: deeplinkURLFirst) {
ZStack {
RoundedRectangle(cornerRadius: 10)
.foregroundColor(mainColor)
.overlay(
//Keep image within rectangle bounds
//The systemName stuff is just to replicate an actual image fill in with your image code
Image(systemName: "square")
.resizable())
}
}
//Rule
//Keep the images square. You could set a frame using your iconSize
//for all, but the size of an iPhone 7 or SE is not the same
//as a Pro Max, so it is best to set a ratio.
//If you fix the size, padding will have to give way to adjust for larger/smaller screens.
.aspectRatio(1, contentMode: .fit)
}
}
struct WidgetTestEntryView : View {
//var entry: Provider.Entry //I am just working with the View itself not a widget
var deeplinkURLFirst: URL {
URL(string: "\("WIDGET_DEEP_LINK")0")!
}
let iconSize: CGFloat = 75.0
var widgetLabel = "Favourites"
let mainColor = Color(red: 0.218, green: 0.215, blue: 0.25)
let setSpacing: CGFloat = 4
var body: some View {
//Having multiple GeometryReaders just adds to the confusion; look at the View as a whole vs pieces
//Less is more with SwiftUI; it is meant to support multiple screens
//Set simple rules
GeometryReader { geo in
VStack(spacing: 0) {
//Top portion
HStack(spacing: geo.size.width * 0.4){
Text(widgetLabel).foregroundColor(.white).font(.system(size: geo.size.width * 0.045, weight: .semibold, design: .default))
//The systemName stuff is just to replicate an actual image fill in with your image code
Image(systemName: "square").resizable().foregroundColor(.white)
.frame(width: geo.size.width * 0.15, height: 15, alignment: .center)
}
//Using too many of these will end up causing conflicts
//SwiftUI does a lot of the work for you
.frame(maxWidth: .infinity, maxHeight: geo.size.height * (1/3))
//Rule:
//This will set the space between the boxes
HStack(spacing: setSpacing)
{
//Add another parameter for the image info; I couldn't reproduce that without data
ImageView(deeplinkURLFirst: deeplinkURLFirst, mainColor: mainColor)
ImageView(deeplinkURLFirst: deeplinkURLFirst, mainColor: mainColor)
ImageView(deeplinkURLFirst: deeplinkURLFirst, mainColor: mainColor)
ImageView(deeplinkURLFirst: deeplinkURLFirst, mainColor: mainColor)
}
//Rule:
//Keep the edge of the boxes from the edge of the screen/HStack
//It is just a minimum, so this will give if the space requires it, to maintain ratio and spacing between boxes
.padding(setSpacing)
//This might need adjusting but the % of the top + the % of the bottom should == 1
.frame(width: geo.size.width, height: geo.size.height * (2/3), alignment: .center)
//This color needs to be adjusted to the right Color
.background(Color(UIColor.darkGray))
}
}
.background(Color(red: 0.118, green: 0.118, blue: 0.15))
//Just to simulate the widget size without creating a widget; you shouldn't need it in your actual code
.frame(maxWidth: 350, maxHeight: 150)
.cornerRadius(20)
.onAppear {
print("all good")
}
}
}
```
|
Title: Sqlalchemy enum migration update fails saying does not exist
Tags: python;postgresql;enums;sqlalchemy;flask-sqlalchemy
Question: I have following sqlalchemy model:
```class Cart(db.Model):
__tablename__ = 'carts'
#...
cart_status = db.Column(db.Enum('confirmed', 'canceled', name='cart_statuses'))
```
Which generates following migration script:
```"""empty message
Revision ID: c7cbe7d1d686
Revises: 56e9612a77ee
Create Date: 2017-06-21 08:52:00.987769
"""
from alembic import op
import sqlalchemy as sa
# revision identifiers, used by Alembic.
revision = 'c7cbe7d1d686'
down_revision = '56e9612a77ee'
branch_labels = None
depends_on = None
def upgrade():
# ### commands auto generated by Alembic - please adjust! ###
op.add_column('carts', sa.Column('cart_status', sa.Enum('confirmed', 'canceled', name='cart_statuses'), nullable=True))
# ### end Alembic commands ###
def downgrade():
# ### commands auto generated by Alembic - please adjust! ###
op.drop_column('carts', 'cart_status')
# ### end Alembic commands ###
```
When I try to upgrade, I get following error:
```(ecom_bot) root@logicandthoughts:~/ecom/ecombot# python manage.py db upgrade
INFO [alembic.runtime.migration] Context impl PostgresqlImpl.
INFO [alembic.runtime.migration] Will assume transactional DDL.
INFO [alembic.runtime.migration] Running upgrade 56e9612a77ee -> c7cbe7d1d686, empty message
Traceback (most recent call last):
File "manage.py", line 37, in <module>
manager.run()
File "/root/ecom/ecom_bot/local/lib/python2.7/site-packages/flask_script/__init__.py", line 412, in run
result = self.handle(sys.argv[0], sys.argv[1:])
File "/root/ecom/ecom_bot/local/lib/python2.7/site-packages/flask_script/__init__.py", line 383, in handle
res = handle(*args, **config)
File "/root/ecom/ecom_bot/local/lib/python2.7/site-packages/flask_script/commands.py", line 216, in __call__
return self.run(*args, **kwargs)
File "/root/ecom/ecom_bot/local/lib/python2.7/site-packages/flask_migrate/__init__.py", line 247, in upgrade
command.upgrade(config, revision, sql=sql, tag=tag)
File "/root/ecom/ecom_bot/local/lib/python2.7/site-packages/alembic/command.py", line 254, in upgrade
script.run_env()
File "/root/ecom/ecom_bot/local/lib/python2.7/site-packages/alembic/script/base.py", line 421, in run_env
util.load_python_file(self.dir, 'env.py')
File "/root/ecom/ecom_bot/local/lib/python2.7/site-packages/alembic/util/pyfiles.py", line 93, in load_python_file
module = load_module_py(module_id, path)
File "/root/ecom/ecom_bot/local/lib/python2.7/site-packages/alembic/util/compat.py", line 75, in load_module_py
mod = imp.load_source(module_id, path, fp)
File "migrations/env.py", line 87, in <module>
run_migrations_online()
File "migrations/env.py", line 80, in run_migrations_online
context.run_migrations()
File "<string>", line 8, in run_migrations
File "/root/ecom/ecom_bot/local/lib/python2.7/site-packages/alembic/runtime/environment.py", line 817, in run_migrations
self.get_context().run_migrations(**kw)
File "/root/ecom/ecom_bot/local/lib/python2.7/site-packages/alembic/runtime/migration.py", line 329, in run_migrations
step.migration_fn(**kw)
File "/root/ecom/ecombot/migrations/versions/c7cbe7d1d686_.py", line 21, in upgrade
op.add_column('carts', sa.Column('cart_status', sa.Enum('confirmed', 'canceled', name='cart_statuses'), nullable=True))
File "<string>", line 8, in add_column
File "<string>", line 3, in add_column
File "/root/ecom/ecom_bot/local/lib/python2.7/site-packages/alembic/operations/ops.py", line 1551, in add_column
return operations.invoke(op)
File "/root/ecom/ecom_bot/local/lib/python2.7/site-packages/alembic/operations/base.py", line 318, in invoke
return fn(self, operation)
File "/root/ecom/ecom_bot/local/lib/python2.7/site-packages/alembic/operations/toimpl.py", line 123, in add_column
schema=schema
File "/root/ecom/ecom_bot/local/lib/python2.7/site-packages/alembic/ddl/impl.py", line 172, in add_column
self._exec(base.AddColumn(table_name, column, schema=schema))
File "/root/ecom/ecom_bot/local/lib/python2.7/site-packages/alembic/ddl/impl.py", line 118, in _exec
return conn.execute(construct, *multiparams, **params)
File "/root/ecom/ecom_bot/local/lib/python2.7/site-packages/sqlalchemy/engine/base.py", line 945, in execute
return meth(self, multiparams, params)
File "/root/ecom/ecom_bot/local/lib/python2.7/site-packages/sqlalchemy/sql/ddl.py", line 68, in _execute_on_connection
return connection._execute_ddl(self, multiparams, params)
File "/root/ecom/ecom_bot/local/lib/python2.7/site-packages/sqlalchemy/engine/base.py", line 1002, in _execute_ddl
compiled
File "/root/ecom/ecom_bot/local/lib/python2.7/site-packages/sqlalchemy/engine/base.py", line 1189, in _execute_context
context)
File "/root/ecom/ecom_bot/local/lib/python2.7/site-packages/sqlalchemy/engine/base.py", line 1402, in _handle_dbapi_exception
exc_info
File "/root/ecom/ecom_bot/local/lib/python2.7/site-packages/sqlalchemy/util/compat.py", line 203, in raise_from_cause
reraise(type(exception), exception, tb=exc_tb, cause=cause)
File "/root/ecom/ecom_bot/local/lib/python2.7/site-packages/sqlalchemy/engine/base.py", line 1182, in _execute_context
context)
File "/root/ecom/ecom_bot/local/lib/python2.7/site-packages/sqlalchemy/engine/default.py", line 470, in do_execute
cursor.execute(statement, parameters)
sqlalchemy.exc.ProgrammingError: (psycopg2.ProgrammingError) type "cart_statuses" does not exist
LINE 1: ALTER TABLE carts ADD COLUMN cart_status cart_statuses
^
[SQL: 'ALTER TABLE carts ADD COLUMN cart_status cart_statuses']
```
Here is the accepted answer: I had to update it to the following:
```"""empty message
Revision ID: 51aa3bff68d6
Revises: c7cbe7d1d686
Create Date: 2017-06-21 09:02:55.252361
"""
from alembic import op
import sqlalchemy as sa
from sqlalchemy.dialects import postgresql
# revision identifiers, used by Alembic.
revision = '51aa3bff68d6'
down_revision = 'c7cbe7d1d686'
branch_labels = None
depends_on = None
def upgrade():
cart_status = postgresql.ENUM('user_unconfirmed', 'user_confirmed', 'client_unconfirmed', 'client_confirmed', name='cart_status')
cart_status.create(op.get_bind())
# ### commands auto generated by Alembic - please adjust! ###
op.add_column('carts', sa.Column('cart_status', sa.Enum('user_unconfirmed', 'user_confirmed', 'client_unconfirmed', 'client_confirmed', name='cart_status'), nullable=True))
# ### end Alembic commands ###
def downgrade():
# ### commands auto generated by Alembic - please adjust! ###
op.drop_column('carts', 'cart_status')
# ### end Alembic commands ###
cart_status = postgresql.ENUM('user_unconfirmed', 'user_confirmed', 'client_unconfirmed', 'client_confirmed', name='cart_status')
cart_status.drop(op.get_bind())
```
Comment for this answer: Any idea why this error would appear suddenly? I ran one migration with an enum field and it worked just fine. When I went to run another migration that simply renamed the enum and the column name, it gave me this error.
However, following the instructions in this answer did fix the migration. So yay for good answers!
|
Title: problems with validation files Spring mvc 3
Tags: validation;tomcat;spring-mvc
Question: I'm having some problems with the validation files in my Spring 3 project.
I have a very basic validation project for tests; this is the bean:
```public class User {
@NotEmpty(message="no blank name")
@Size(min=2, max=20)
private String name="";
@NotEmpty(message="no blank email")
@Email
private String email="";
```
......getters and setters......
The function within the controller that accepts the request from the form page and performs the necessary validation is:
```@RequestMapping(value="/displayUser",method=RequestMethod.POST)
public String displayUser(@Valid User user, Model model,BindingResult result){
if(result.hasErrors()){
return "form";
}
userList.add(user);
model.addAttribute("user",user);
return "redirect:displayUser";
}
```
But I don't think the code is the cause of the problem: as soon as I start the server and run the project (which has always been working, since I test other Spring things in there) I get the following exceptions:
```org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'org.springframework.validation.beanvalidation.LocalValidatorFactoryBean#0': Invocation of init method failed; nested exception is java.lang.NoClassDefFoundError
org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.initializeBean(AbstractAutowireCapableBeanFactory.java:1401)
org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:512)
org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:450)
org.springframework.beans.factory.support.AbstractBeanFactory$1.getObject(AbstractBeanFactory.java:290)
org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:222)
org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:287)
org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:189)
org.springframework.beans.factory.support.DefaultListableBeanFactory.preInstantiateSingletons(DefaultListableBeanFactory.java:557)
org.springframework.context.support.AbstractApplicationContext.finishBeanFactoryInitialization(AbstractApplicationContext.java:842)
org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:416)
org.springframework.web.servlet.FrameworkServlet.createWebApplicationContext(FrameworkServlet.java:443)
org.springframework.web.servlet.FrameworkServlet.createWebApplicationContext(FrameworkServlet.java:459)
org.springframework.web.servlet.FrameworkServlet.initWebApplicationContext(FrameworkServlet.java:340)
org.springframework.web.servlet.FrameworkServlet.initServletBean(FrameworkServlet.java:307)
org.springframework.web.servlet.HttpServletBean.init(HttpServletBean.java:127)
javax.servlet.GenericServlet.init(GenericServlet.java:212)
org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102)
org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:298)
org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:857)
org.apache.coyote.http11.Http11Protocol$Http11ConnectionHandler.process(Http11Protocol.java:588)
org.apache.tomcat.util.net.JIoEndpoint$Worker.run(JIoEndpoint.java:489)
java.lang.Thread.run(Thread.java:613)
```
Do you have any idea where the problem could be?
I'm trying to figure it out, but nothing so far.
P.S.: I use Tomcat 6 and I have just downloaded:
```hibernate-validator-4.0.2.FINAL.jar
```
and
```validation-api-1.0.0.GA.jar
```
Here is another answer: It looks to me like the class ```org.springframework.validation.beanvalidation.LocalValidatorFactoryBean``` cannot be resolved on your classpath. You need to add the JAR file ```spring-context.jar``` to your classpath.
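If you end up using Maven (as the comments suggest), a sketch of the relevant dependencies could look like this; the versions are illustrative assumptions for a Spring 3 / Tomcat 6 setup:
```<!-- Hypothetical pom.xml fragment; align versions with your project -->
<dependency>
    <groupId>org.springframework</groupId>
    <artifactId>spring-context</artifactId>
    <version>3.0.5.RELEASE</version>
</dependency>
<dependency>
    <groupId>org.hibernate</groupId>
    <artifactId>hibernate-validator</artifactId>
    <version>4.0.2.GA</version>
</dependency>
```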
Comment for this answer: It looks like this time it's a different class though that cannot be found right? In the end you just got to make sure you straighten out the dependency issues. What build tool (e.g. Ant/Maven) are you using to create your application?
Comment for this answer: There's certainly a JAR missing in the classpath. It might just be a transitive dependency. I think it would get easier with Maven or Gradle if you are familiar with these tools already. What JARs do you have in the directory `WEB-INF/lib` inside of your WAR file atm?
Comment for this answer: Hhmm, that is weird. Spring 3 should support JDK 5 and above. So you got your issue fully resolved?
Comment for this answer: Hi Benjamin, added the jar to classpath, refreshed the project, restarted the server but the problem is still there :java.lang.NoClassDefFoundError
org.hibernate.validator.HibernateValidator.createGenericConfiguration(HibernateValidator.java:41)
javax.validation.Validation$GenericBootstrapImpl.configure(Validation.java:269)
org.springframework.validation.beanvalidation.LocalValidatorFactoryB
Comment for this answer: Hi Benjamin, I'm not using any; I have a pure project in Eclipse Galileo with all the necessary (I think) jars in the classpath and lib folder. I'm very new to Spring, I just switched from the Stripes framework. Basically the error is still the same; Tomcat's error report page gives me these 2 root-cause errors, the above mentioned and this one, ..... googling does not really give good hints, and it seems like all files and jars are there
|
Title: child node is treated as grand child node when trying to structure
Tags: python;django;django-templates;django-views;django-mptt
Question: I am using the ```django-mptt``` library for categories. I can show the list of categories in the template, but I want to indent it properly so the user can tell which is a main category, which is a sub-category, and so on. The way I tried to structure it is:
```{% recursetree nodes %}
<li class="node">
<a href="/category/{{ node.get_absolute_url }}"
class="{% if not node.is_leaf_node and not node.is_root_node and node.is_child_node %} child_parent {% elif node.is_leaf_node and not node.is_root_node %}leaf_node{% endif %}">
{{ node.name }}
{% if node.is_root_node %}
<span class="badge">{{ node.get_count_children }}</span>
{% endif %}
</a>
{% if not node.is_leaf_node %}
<ul class="children">
<li>{{ children }}</li>
</ul>
{% endif %}
</li>
{% endrecursetree %}
```
This yields the following category design.
Here, Dressing Table is a child of Bedroom Items, like Bed and Almirah, not a child of Bed. How can I fix this? I know the problem is here:
```<a href="/category/{{ node.get_absolute_url }}"
class="{% if not node.is_leaf_node and not node.is_root_node and node.is_child_node %} child_parent {% elif node.is_leaf_node and not node.is_root_node %}leaf_node{% endif %}">
```
but I could not fix this issue.
Note the Dressing Table in the screenshot.
Comment: Do you mean `Dressing Table` is a `root` node?
Comment: Then the render is correct already. `dressing table` is a child of the bedroom node. What is your expected structure?
Comment: OK. I got the situation now. He needs to determine the leaf node. But that leaf node is the 2nd level from the root node. So he needs `Dressing Table` to be a `cyan` color.
Comment: He needs a customized `tag` that make use method `get_children` or pass `get_children` that return `boolean` in the `context`.
Comment: @Serenity I am now following your question, but I get this issue. If you know how to solve it, that will speed me up in reaching your question. https://github.com/django-mptt/django-mptt/issues/614
Comment: No, `dressing table` is a child of bedroom items (to quote)
Comment: It's not me, but I'm guessing it's supposed to be rendered like `Bed` and `Almirah`; again, I quote from the line under the image.
Comment: I have updated my question with the screenshot on how it should look like.
Comment: @Sarit exactly. I want Dressing Table to be a cyan color with the same indentation level as Bed and Almirah, because it's a child node (a child node and also a leaf node, because it has no children).
Here is the accepted answer: According to your updated question,
```Dinning Set, Kitchen Rack, and Kitchen Setup(Modular Kitchen)``` are supposed to be ```cyan``` since they are second-level,
if my understanding is correct.
Here is my hacked solution. If anybody finds a better one, please raise it.
Add an extra method to the ```Model``` instance
Add ```nodes``` to the ```context``` (this is optional if you are using Django 2.0 like me)
Use the instance method in the template
```models.py```
```from django.db import models
from mptt.models import MPTTModel, TreeForeignKey
class Genre(MPTTModel):
name = models.CharField(max_length=50, unique=True)
parent = TreeForeignKey('self', null=True, blank=True, related_name='children', db_index=True,
on_delete=models.CASCADE)
class MPTTMeta:
order_insertion_by = ['name']
def __str__(self):
return f'{self.name}'
def is_second_node(self):
return self.get_ancestors().count() == 1
```
```views.py```
```from django.views.generic import ListView
from genres.models import Genre
class GenreListView(ListView):
model = Genre
template_name = 'genre_list.html'
def get_context_data(self, *, object_list=None, **kwargs):
"""Get the context for this view."""
queryset = object_list if object_list is not None else self.object_list
page_size = self.get_paginate_by(queryset)
context_object_name = self.get_context_object_name(queryset)
if page_size:
paginator, page, queryset, is_paginated = self.paginate_queryset(queryset, page_size)
context = {
'paginator': paginator,
'page_obj': page,
'is_paginated': is_paginated,
'object_list': queryset
}
else:
context = {
'paginator': None,
'page_obj': None,
'is_paginated': False,
'object_list': queryset
}
if context_object_name is not None:
context[context_object_name] = queryset
context.update(kwargs)
context['nodes'] = context.get('object_list')
return super().get_context_data(**context)
```
```genre_list.html```
```<!DOCTYPE html>
{% load mptt_tags %}
<html lang="en">
<head>
<meta charset="UTF-8">
<title>Genre ListView</title>
</head>
<style>
.root {color: purple}
.child{color: cyan}
.leaf {color: gray}
</style>
<body>
{% recursetree nodes %}
<div class="
{% if node.is_root_node %}
root
{% elif node.is_child_node and not node.is_leaf_node or node.is_second_node%}
child
{% elif node.is_leaf_node and not node.is_root_node%}
leaf
{%endif%}">
{{node.name}}
{{node.is_second_node}}
</div>
{% if not node.is_leaf_node%}
<ul>{{children}}</ul>
{% endif %}
{% endrecursetree %}
</body>
</html>
```
Comment for this answer: This is what I was struggling with. Thanks for your effort and knowledge.
|
Title: how to intercept innerHTML changes in javascript?
Tags: javascript;addeventlistener;interception
Question: I need to intercept any changes in the content of a cell inside my webpage.
The following code shows me that addEventListener does not work.
```function modifyText() {
alert("!");
}
var el=document.getElementById("mycell");
el.innerHTML="a"
el.addEventListener("change", modifyText, false);
// After next instruction I expect an alert message but it does not appear...
el.innerHTML="Z";
```
The code is just a toy example. In my real case the changes in the page (and therefore in the cell, too) are made by a webapp that I have NO control over.
Comment: What browser type are you testing this in? Remember IE uses the non-standard .attachEvent() method.
Comment: How are the changes happening in the first place? When you say "a web app" that implies to me a server-side application which builds and returns the page content. If that's the case then the change wouldn't be detectable by the JavaScript code because the "change" took place on the server before the scope of the JavaScript was even applicable. From the perspective of the JavaScript on the page, there was no change.
Here is the accepted answer: You can't listen to a DOM element's ```change``` that way. The ```change``` event is mostly for ```input```s.
There are some other, newer DOM 3 events that would help you with this.
Here are some:
DOMCharacterDataModified //Draft
DOMSubtreeModified
Comment for this answer: Unfortunately all Mutation Events are deprecated! Is there any other magical event like DOMCharacterDataModified to listen to innerHTML value changes?
Here is another answer: There is a modern way to catch innerHTML changes:
https://developer.mozilla.org/en-US/docs/Web/API/MutationObserver/observe
Example:
```// identify an element to observe
const elementToObserve = window.document.getElementById('y-range').children[0];
// create a new instance of 'MutationObserver' named 'observer',
// passing it a callback function
const observer = new MutationObserver(function(mutationsList, observer) {
console.log(mutationsList);
});
// call 'observe' on that MutationObserver instance,
// passing it the element to observe, and the options object
observer.observe(elementToObserve, {characterData: false, childList: true, attributes: false});
```
childList mutation fires on innerHTML change.
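Tying this back to the question's own snippet (the ```mycell``` id and the ```alert("!")``` come from the question; the choice of observer options is mine), a minimal sketch would be:
```// Watch the cell from the question and react to innerHTML changes
const el = document.getElementById("mycell");
const obs = new MutationObserver(function() { alert("!"); });
obs.observe(el, { childList: true, characterData: true, subtree: true });
el.innerHTML = "Z"; // the observer callback now fires
// obs.disconnect(); // stop watching when no longer needed
```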
Comment for this answer: Best answer, works even for iframe children
Here is another answer: Does this help: Most efficient method of detecting/monitoring DOM changes?
It seems like there aren't any 100% cross browser solutions, and one of the workarounds is to poll the elements of interest to see if their innerHTML.length changes!
|
Title: Historical Data in MySql
Tags: mysql
Question: Good day. I have a table of electrical test results at 3-, 6-, and 12-month intervals. The client has decided he would like to keep trends on some of the data, like current leakage over 5 years (drop-down box or similar), to see if items are deteriorating. Is it possible to have 5 years of data in the same field, or do I need to create historic tables with a trigger? Thanks, G Styles
Comment: I have a similar project where I use a stored procedure that returns aggregated data with min, max and average values for a given time interval, e.g. 1 hour or 2 days etc. By doing so I don't need another table.
Comment: with field you actually mean field or table ?
Here is another answer: "5 years of data" does not mean much by itself. What is important is the size of the data, which will vary depending on what you save for each interval.
My suggestion is to keep the data as is in the table, and if you run into performance problems because of the size of the data, take a look at MySQL Partitioning.
Here is another answer: ```
Is it possible to have 5yrs of data in the same field
```
Do you mean in the same table? Then yes, of course. For example, you could modify your data access layer to always perform inserts and never updates, and to always select the most recent record as the "current" record. (So you'd need time stamps on the records to order them, of course. Maybe even a view to make the selection easier.) This would (should) be entirely transparent to the rest of the code. Just remember that the table would be perpetually growing, so performance over time would be a consideration to make, depending on how often this data actually "changes" and how many records are being written.
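A minimal sketch of that insert-only pattern (the table and column names are made up for illustration):
```-- Hypothetical insert-only results table; every change is a new row
CREATE TABLE test_results (
    id          INT AUTO_INCREMENT PRIMARY KEY,
    item_id     INT NOT NULL,
    leakage_ma  DECIMAL(6,2) NOT NULL,
    recorded_at TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP,
    INDEX (item_id, recorded_at)
);

-- The "current" value per item is simply the most recent row
SELECT r.*
FROM test_results r
JOIN (SELECT item_id, MAX(recorded_at) AS latest
      FROM test_results
      GROUP BY item_id) m
  ON m.item_id = r.item_id AND m.latest = r.recorded_at;
```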
```
do I need to create historic tables with a trigger
```
A history or audit table is another perfectly valid approach. Especially if performance over time is a potential concern, because the table can be indexed/optimized differently to accommodate more writes than reads, for example. A trigger can certainly be used to maintain this, as could another simple change to the data access layer that's transparent to the rest of the code. Either approach is fine, it's mostly a matter of how you organize and maintain your code and which one would be more obvious when supporting the code at a later time.
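And a sketch of the trigger-maintained history table (again, all names are illustrative and build on the hypothetical table above):
```-- Hypothetical audit table filled by a trigger on every update
CREATE TABLE test_results_history (
    item_id     INT NOT NULL,
    leakage_ma  DECIMAL(6,2) NOT NULL,
    recorded_at DATETIME NOT NULL,
    archived_at TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP
);

DELIMITER //
CREATE TRIGGER trg_results_history
BEFORE UPDATE ON test_results
FOR EACH ROW
BEGIN
    -- copy the old values before they are overwritten
    INSERT INTO test_results_history (item_id, leakage_ma, recorded_at)
    VALUES (OLD.item_id, OLD.leakage_ma, OLD.recorded_at);
END//
DELIMITER ;
```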
Comment for this answer: Thank you to all who have answered. I didn't mean the same table, I meant the same field, ideally with a drop box or such, so that the record would have e.g. 0.11, 0.11, 0.12, 0.13 etc. I suspect that this is not possible; I am not a MySQL expert by any means. The DB is on a shared web server, so I do not have access to some of the functions. Thanks, G Styles
Comment for this answer: I use Fabrik to access the db in the website and under certain circumstances in the form view of the data drop boxes,radio buttons,check boxes etc are available to me. I take your point that data should be unique and will use an historic table for the changes I spoke about. No doubt I shall ask for help later on with the trigger. Thanks to all,G Styles.
Comment for this answer: @user1731603: It doesn't make any sense to store data "in the same field." Any given field in any given record contains one value. You can't store "records" in "fields". And I'm not sure what "a drop box" has to do with MySQL. MySQL is just a database for storing data, it's not a UI.
Comment for this answer: @user1731603: If you're going to be relying on a specific framework to manage the database for you then that's going to be *very* useful information in future questions :) By itself, MySQL is perfectly capable of storing data in tables. What some framework is capable of, that's another story...
|
Title: How to recognize user before and after logging in Symfony 2?
Tags: php;session;authentication;symfony
Question: I'm using Symfony 2.1, a very simple login form based on the documentation (http://symfony.com/doc/current/book/security.html#using-a-traditional-login-form) and a custom authentication success handler.
An anonymous user can do some actions, which are stored in the database with the user's session ID. Now the user logs into the system, and I want to update the saved actions with the user's ID so that the logged-in user can continue their work. Unfortunately, in the success handler I already have an updated session ID, and I don't know which records in the actions table belong to the user (since they are stored with the old session ID, which I can't access [or can I?]).
What is the best practice to handle this kind of situation? Should actions be saved in the database with a token stored in a cookie instead of the session ID, or is there a built-in mechanism and I'm trying to reinvent the wheel, or maybe I'm asking the wrong question and therefore can't find an answer?
Comment: Jonathan Wage recently wrote about a very similar use case on his blog: http://jwage.com/post/54943645180/tracking-new-member-origination-with-symfony2
Here is the accepted answer: The default mechanism of generating a new session ID on an access-level change is best practice. You could write your own authentication that does something with the new and old session IDs, but unless you really know what you are doing, security and authentication code is best left alone.
The best method would be, as you suggest, to save a token in the database and in a cookie and track your users with that. Don't forget to clean up the used tokens in the database and the cookies if you no longer need them.
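A framework-agnostic sketch of that token idea (all names are made up; in Symfony you would set the cookie in a listener and run the update in your authentication success handler):
```<?php
// Hypothetical tracking token that survives the session-id regeneration on login
if (!isset($_COOKIE['tracking_token'])) {
    $token = md5(uniqid(mt_rand(), true)); // PHP 5-era randomness; prefer random_bytes() on PHP 7+
    setcookie('tracking_token', $token, time() + 86400, '/');
} else {
    $token = $_COOKIE['tracking_token'];
}
// Anonymous actions are stored keyed by $token; after login, claim them:
// UPDATE actions SET user_id = :userId WHERE tracking_token = :token
```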
|
Title: Apache Beam Java 2.26.0: BigQueryIO 'No rows present in the request'
Tags: google-bigquery;google-cloud-dataflow;apache-beam
Question: Since the Beam ```2.26.0``` update we ran into errors in our Java SDK streaming data pipelines. We have been investigating the issue for quite some time now but are unable to track down the root cause. When downgrading to ```2.25.0``` the pipeline works as expected.
Our pipelines are responsible for ingestion, i.e., consume from Pub/Sub and ingest into BigQuery. Specifically, we use the ```PubSubIO``` source and the ```BigQueryIO``` sink (streaming mode). When running the pipeline, we encounter the following error:
```{
"code" : 400,
"errors" : [ {
"domain" : "global",
"message" : "No rows present in the request.",
"reason" : "invalid"
} ],
"message" : "No rows present in the request.",
"status" : "INVALID_ARGUMENT"
}
```
Our initial guess was that the pipeline's logic was somehow bugged, causing the ```BigQueryIO``` sink to fail. After investigation, we concluded that the ```PCollection``` feeding the sink does contain correct data.
Earlier today I was looking in the changelog and noticed that the ```BigQueryIO``` sink received numerous updates. I was specifically worried about the following changes:
BigQuery’s DATETIME type now maps to Beam logical type org.apache.beam.sdk.schemas.logicaltypes.SqlTypes.DATETIME
Java BigQuery streaming inserts now have timeouts enabled by default. Pass ```--HTTPWriteTimeout=0``` to revert to the old behavior
With respect to the first update, I made sure to disable all ```DATETIME``` fields in the resulting ```TableRow``` objects. Even in this scenario, the error persists.
For the second change, I'm unsure how to pass the ```--HTTPWriteTimeout=0``` flag to the pipeline. How is this best achieved?
Any other suggestions as to the root cause of this issue?
Thanks in advance!
Comment: The BigQueryOptions is defined in https://github.com/apache/beam/blob/master/sdks/java/io/google-cloud-platform/src/main/java/org/apache/beam/sdk/io/gcp/bigquery/BigQueryOptions.java and can be used like https://beam.apache.org/releases/javadoc/2.28.0/org/apache/beam/sdk/options/PipelineOptions.html. Does setting this flag help?
Comment: This is for the Java SDK, right? I'm guessing so based on the changes you called out, but it might be good to specify at the beginning of the question.
Comment: I did! Initially ignored the issue and assumed it would be fixed in the upcoming version. Issue persists in `2.28.0`.
Comment: Indeed, the Java SDK. I will update the post accordingly.
Comment: Thanks for the responses, gents. I outlined my findings in the answer below.
Comment: Did you try using `2.28` - the latest version of beam?
Here is the accepted answer: We have finally been able to fix this issue and rest assured it has been a hell of a ride. We basically debugged the entire BigQueryIO connector and came to the following conclusions:
The ```TableRow``` objects that are being forwarded to BigQuery used to contain enum values. Because these are not serializable, an empty payload was forwarded to BigQuery. In my opinion, this error should be made more explicit (why was this suddenly changed anyway?).
This issue was solved by adding the ```@Value``` annotation (```com.google.api.client.util.Value```) to each enum entry; see the sketch below.
The same ```TableRow``` objects also contained values of type ```byte[]```, injected into a BigQuery column of the ```bytes``` type. While this used to work without explicitly computing a base64 encoding, it now yields errors.
This issue was solved by computing the base64 ourselves (this setup is also discussed in the following post).
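Two minimal sketches of those fixes as I understand them (the enum name and the helper are illustrative, not from our actual pipeline):
```import com.google.api.client.util.Value;
import java.util.Base64;

public class TableRowFixes {
    // Fix 1: annotate each constant so the Google HTTP client serializes it
    public enum CartStatus {
        @Value("CONFIRMED") CONFIRMED,
        @Value("CANCELED") CANCELED
    }

    // Fix 2: base64-encode byte[] values ourselves before putting them in a TableRow
    static String encodeBytes(byte[] raw) {
        return Base64.getEncoder().encodeToString(raw);
    }
}
```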
Comment for this answer: Are there changes to Beam that could have made this easier to debug, or possibly the workaround unneeded entirely?
Comment for this answer: Hi Robert, thanks for the reply. The thing that made this particularly hard to debug, was the fact that we did not get an error regarding the `TableRow` not being serializable. I feel like this error should throw an exception that is visible to the user. I'm unsure whether this is part of Beam or an implementation detail of the executor though.
Here is another answer: ```--HTTPWriteTimeout``` is a pipeline option. You can set it the same way you set the runner, etc. (typically on the command line).
|
Title: What is the difference between using a boolean array and a class object with a list in a BaseAdapter class?
Tags: android;listview;arraylist;boolean;android-viewholder
Question: I am using a check-box with a list view by extending the BaseAdapter class as "cust_listadapter" (a custom adapter). When I use a boolean array to save the selection of the check-boxes, the code runs fine, but when I use objects (beans) with an ArrayList, the code does not run correctly: whenever I select one check-box, all check-boxes appear selected.
Below, I have shown my code.
``` public class MainActivity extends Activity {
ListView l1;
cust_listadapter custom_adapter;
List<Integer> l_items;
List<State_Refresh> check_state;
State_Refresh ob_state;
Boolean b_check[]=new Boolean[50];
@Override
protected void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
setContentView(R.layout.activity_main);
l_items=new ArrayList<Integer>();
ob_state=new State_Refresh();
check_state=new ArrayList<State_Refresh>();
for(int i=1;i<=50;i++)
{
l_items.add(i);
b_check[i-1]=false;
ob_state.setState(false);
ob_state.setButtonValue("OFF");
check_state.add(ob_state);
}
l1=(ListView) findViewById(R.id.list1);
custom_adapter=new cust_listadapter();
l1.setAdapter(custom_adapter);
l1.setOnItemClickListener(new OnItemClickListener() {
@Override
public void onItemClick(AdapterView<?> arg0, View arg1, int arg2,
long arg3) {
// TODO Auto-generated method stub
Toast.makeText(getApplicationContext(), "huluhulu "+arg2, 0).show();
}
});
}
@Override
public boolean onCreateOptionsMenu(Menu menu) {
// Inflate the menu; this adds items to the action bar if it is present.
getMenuInflater().inflate(R.menu.main, menu);
return true;
}
class cust_listadapter extends BaseAdapter
{
@Override
public int getCount() {
// TODO Auto-generated method stub
return l_items.size();
}
@Override
public Object getItem(int arg0) {
// TODO Auto-generated method stub
return arg0;
}
@Override
public long getItemId(int arg0) {
// TODO Auto-generated method stub
return arg0;
}
@Override
public View getView(final int position, View convertView, ViewGroup parent) {
final View_Holder viewHolder;
if(convertView==null)
{
convertView=getLayoutInflater().inflate(R.layout.list_items,null);
viewHolder=new View_Holder();
viewHolder.ch1=(CheckBox) convertView.findViewById(R.id.check_box1);
viewHolder.tv1=(TextView)
convertView.findViewById(R.id.textview1);
viewHolder.ch1.setFocusable(false);
viewHolder.ch1.setFocusableInTouchMode(false);
convertView.setTag(viewHolder);
viewHolder.ch1.setTag(check_state.get(position).getState());
}
else
{
viewHolder=(View_Holder) convertView.getTag();
viewHolder.ch1.setChecked(check_state.get(position).getState());
//getting value from boolean array
// viewHolder.ch1.setChecked(b_check[position]);
}
viewHolder.ch1.setOnClickListener(new OnClickListener() {
@Override
public void onClick(View v) {
// TODO Auto-generated method stub
if(viewHolder.ch1.isChecked())
{
check_state.get(position).setState(true);
b_check[position]=true;
}
else
{
b_check[position]=false;
check_state.get(position).setState(false);
}
}
});
return convertView;
}
}
class View_Holder
{
CheckBox ch1;
TextView tv1;
}
class State_Refresh
{
Boolean check_value;
public void setState(Boolean check_value)
{
this.check_value=check_value;
}
public Boolean getState()
{
return check_value;
}
}
}
```
Here is another answer: Set the position as tag for every line, read it in the listener:
```public View getView(final int position, View convertView, ViewGroup parent) {
final View_Holder viewHolder;
if(convertView==null)
{
convertView=getLayoutInflater().inflate(R.layout.list_items,null);
viewHolder=new View_Holder();
viewHolder.ch1=(CheckBox) convertView.findViewById(R.id.check_box1);
viewHolder.tv1=(TextView)
convertView.findViewById(R.id.textview1);
viewHolder.ch1.setFocusable(false);
viewHolder.ch1.setFocusableInTouchMode(false);
convertView.setTag(viewHolder);
viewHolder.ch1.setOnClickListener(new OnClickListener() {
@Override
public void onClick(View v) {
// TODO Auto-generated method stub
int pos = (Integer)v.getTag();
check_state.get(pos).setState(((CheckBox)v).isChecked());
b_check[pos]=((CheckBox)v).isChecked();
}
});
}
else
{
viewHolder=(View_Holder) convertView.getTag();
}
viewHolder.ch1.setTag(position);
viewHolder.ch1.setChecked(check_state.get(position).getState());
return convertView;
}
```
Btw, please do not encode the variable type in the variable name; b_check[] should be check[].
Comment for this answer: Thanks for answering my question. I applied your code in my project, but I still face the same problem.
|
Title: Job 20:7 How appropriate is the Ancient Hebrew word translated as "refuse" or "dung" in Job 20:7?
Tags: hebrew;job
Question: ```
Job 20:7 New American Standard Bible (NASB)
7 He perishes forever
like his refuse; Those who have seen him will say, ‘Where is he?’
Job 20:7 English Standard Version (ESV)
7 he will perish forever like his own dung;
those who have seen him will say, ‘Where is he?’
Job 20:7 New King James Version (NKJV)
7 Yet he will perish forever like his own refuse; Those who have seen
him will say, ‘Where is he?’
Job 20:7 King James Version (KJV)
7 Yet he shall perish for ever like his own dung: they which have
seen him shall say, Where is he?
```
I was just curious as to whether the ancient Hebrew word rendered as
"dung" or "refuse" in Job 20:7 was considered appropriate when the ancient Hebrew Bible was written. Therefore, I searched the internet for the "Hebrew OT: Westminster Leningrad Codex" text of the Job 20:7 verse.
Was the corresponding ancient Hebrew word appropriate in everyday speech during the time period of the ancient Israelites?
How appropriate is the ancient Hebrew word rendered as "dung" or "refuse" in Job 20:7? (LOL, is it like saying the word "s***", which is bad language nowadays?)
Here is another answer: The Hebrew word in question is גֵּ֫לֶל (gelel, a variation of galal) and occurs in just four places, namely, Job 20:7, Eze 4:12, 15, Zeph 1:17.
In all cases the meaning is clear, it means "dung" or excrement as per Strong's Lexicon and Brown-Driver-Briggs; [גֵּל] noun [masculine] dung.
Further, all the above references except Zeph 1:17 refer specifically to human excrement. The figure in Job 20:7 is simply that excrement vanishes (is actually consumed by various bugs) over time and that is what Job is saying about the godless person - they vanish and never return.
Comment for this answer: And in all four cases it's used negatively, not like fertiliser or guano.
Comment for this answer: @curiousdannii Therefore, are you saying that in terms of level of inappropriateness, saying גֵּ֫לֶל (gelel, a variation of galal) is like saying "s***" which is bad language nowadays?
Comment for this answer: The problem is the ritual uncleanness which carries the overtones of some negativity. Then as now, people had to wash after coming in contact with excrement of any kind. The figure in Job 20:7 is simply that excrement vanishes (is actually consumed by various bugs) over time and that is what Job is saying about the godless person - they vanish and never return.
|
Title: Getting error while sending email using Python
Tags: python;email
Question: When I try sending an email from Python I get an error message. By the way, I enabled the "less secure apps" setting and IMAP on the Gmail account. Here is the code below:
```import smtplib
sender_email="[email protected]"
receveir_email="[email protected]"
password=input("please enter your password")
message="this email from python "
server=smtplib.SMTP_SSL('smptp.gmail.com',587)
server.ehlo
server.starttls()
server.login(sender_email,password)
print("login success")
server.sendmail(sender_email,receveir_email,message)
print("email has been sent to",receveir_email)
server.close()
```
My error message:
```Traceback (most recent call last):
File "C:/Users/Administrator/PycharmProjects/codebook/Training0718.py", line 23, in <module>
server=smtplib.SMTP_SSL('smptp.gmail.com',465)
File "C:\Program Files\Python38\lib\smtplib.py", line 1034, in __init__
SMTP.__init__(self, host, port, local_hostname, timeout,
File "C:\Program Files\Python38\lib\smtplib.py", line 253, in __init__
(code, msg) = self.connect(host, port)
File "C:\Program Files\Python38\lib\smtplib.py", line 339, in connect
self.sock = self._get_socket(host, port, self.timeout)
File "C:\Program Files\Python38\lib\smtplib.py", line 1040, in _get_socket
new_socket = socket.create_connection((host, port), timeout,
File "C:\Program Files\Python38\lib\socket.py", line 787, in create_connection
for res in getaddrinfo(host, port, 0, SOCK_STREAM):
File "C:\Program Files\Python38\lib\socket.py", line 918, in getaddrinfo
for res in _socket.getaddrinfo(host, port, family, type, proto, flags):
socket.gaierror: [Errno 11001] getaddrinfo failed
```
Comment: You have a typo in the SMTP host.
Here is another answer: I was able to make it work with the following; note that this will only succeed if you have created and are using an app password:
```import smtplib
password = input('please enter your password')
to = '[email protected]'
_from = '[email protected]'
message = 'hello'
try:
server = smtplib.SMTP_SSL('smtp.gmail.com', 465)
server.ehlo()
server.login(_from, password)
server.sendmail(_from, to, message)
server.close()
except Exception as err:
print(err)
```
|
Title: Storage Linking Issue in Laravel 8 on localhost
Tags: php;laravel;upload;localhost;laravel-8
Question:
Laravel Version: 8.35.1
PHP Version: 8.0.0
Description:
I'm uploading an image with Laravel using this code:
```$product_image = $request->file('product_image');
$product_image_extension = $product_image->extension();
$product_image_name = time() . '.' . $product_image_extension;
$product_image->storeAs('/media/product_images', $product_image_name);
$model->product_image = $product_image_name;
```
It works fine, the file is uploaded to storage/app/media/product_images/.
Then, I run the command
```
php artisan storage:link
```
to create the symlink in public.
The command executes like this:
```Local NTFS volumes are required to complete the operation.
The [E:\Complete Programming Bootcamp\laravel Work\ecom-project\cyber_shopping\public\storage] link
has been connected to [E:\Complete Programming Bootcamp\laravel Work\ecom-
project\cyber_shopping\storage\app/public].
The links have been created.
```
I am using this code to display the image:
```{{asset('storage/media/product_images/' . $list->product_image)}}
```
But the image is not displayed on the frontend.
Also, the storage folder is not created in the public folder.
Please help me.
Thanks
Comment: Try to access that image directly from the browser; is the image accessible from that URL?
Comment: Try this: `rm -rf public/storage`, then run `php artisan storage:link`.
Comment: You have to store the files under `storage/app/public/media/product_images/` to make them publicly accessibles through `asset('storage/media/product_images/' . $list->product_image)`. So change `$product_image->storeAs('/media/product_images', $product_image_name);` to ` $product_image->storeAs('public/media/product_images', $product_image_name);`
Comment: Yes, I tried this, but a "page not found" error occurs. When I copy the storage/app/public/[media (this folder)] and paste it into the public/storage (created by myself) folder, the images are displayed. But this process is not dynamic. I need the image, when uploaded, to also be stored in the public/storage folder and displayed.
Comment: I tried this, but it does not work for me.
Comment: Everything works fine. The issue is: 1) images are stored in the storage/app/public/media folder but not in the public/storage folder; 2) there is no folder in public with the name storage after executing `php artisan storage:link`. When I copy all images from storage/app/public/media and paste them into public/storage (created by myself)/media/, then all images display. This process is static; I want it to happen dynamically. I tried many approaches but nothing works.
Here is the accepted answer: According to https://github.com/photoncms/cms/issues/8, you are trying to create a symlink on a FAT32 drive, on which it does not work. Try to test it on an NTFS drive.
Comment for this answer: Thanks, this problem was solved by running the command `php artisan storage:link` on an NTFS drive. Now everything works right.
Here is another answer: Step 1: Store the image
```$path = '';
if( $request->has('product_image') ) {
    // store on the "public" disk so the file lands in storage/app/public/...
    $path = $request->file('product_image')->store('media/product_images', 'public');
}
$model->product_image = $path;
```
Step 2: Check the stored file path
The file will be stored in the path:
————————————————————————————————
```storage/app/public/media/product_images/
```
Step 3: Link storage into the public folder
Run the storage link command (and remove the storage link folder from public if it already exists):
```php artisan storage:link
```
Step 4: Create a global function to access images. In the main Controller.php file, create a global storage-image-getting function like this:
```public static function getUrl($path)
{
$url = "";
if( !empty($path) && Storage::disk('public')->exists($path) )
$url = Storage::url($path);
return $url;
}
```
Step 5: Use the function in the view to display the image
```<img src="{{ getUrl($list->product_image) }}" />
```
Comment for this answer: what is the image src path?
Comment for this answer: Thanks, brother, for helping me. But the image is still not found.
Comment for this answer: {{asset('storage/media/product_images/' . $list->product_image)}} => admin/storage/media/product_images/shirt.png
|
Title: Grunt concat/uglify
Tags: javascript;angularjs;gruntjs;concat
Question: I'm getting started with Grunt, Yeoman and Bower, and here are some simple questions I have:
Why do Yeoman generators run the grunt concat task before uglifying (since uglify knows how to concat)?
Why would someone use concat together with cssmin and uglify in a project?
Is there any Grunt plugin for AngularJS to convert this:
```angular.module('MyApp')
.controller('searchResultsCtrl', function($scope, $filter, $rootScope, $stateParams) {...});
```
into this:
```angular.module('MyApp')
.controller('searchResultsCtrl', ['$scope', '$filter', '$rootScope', '$stateParams', function($scope, $filter, $rootScope, $stateParams) {...}]);
```
Thanks
Comment: Thank you, 3rd is solved :)
Comment: Check ngmin https://github.com/btford/ngmin
Here is another answer: Concat is used for joining multiple files into one. It is used where you want all your JavaScript files or stylesheets joined into one file, to decrease the number of requests made by the browser.
Uglify minifies JavaScript, generally by removing redundant spaces and newlines, deleting comments and renaming variables to something shorter (cssmin plays the same role for CSS). The focus is to minimize the size of a file as much as possible.
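A minimal Gruntfile sketch of the two tasks chained together (all file paths are illustrative assumptions):
```// Hypothetical Gruntfile.js fragment; adjust paths to your project
module.exports = function (grunt) {
  grunt.initConfig({
    concat: {
      dist: { src: ['src/**/*.js'], dest: 'dist/app.js' }
    },
    uglify: {
      dist: { files: { 'dist/app.min.js': ['dist/app.js'] } }
    }
  });
  grunt.loadNpmTasks('grunt-contrib-concat');
  grunt.loadNpmTasks('grunt-contrib-uglify');
  grunt.registerTask('build', ['concat', 'uglify']);
};
```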
Your problem with the Angular bindings can be solved by the ngmin project. Take a look at it (the link is in the comments above).
|
Title: Spring RestController: custom date format for java.time.LocalDate does not work
Tags: java;spring;spring-boot;jackson
Question: I would like to change the default date format for the JSON serializer in my Spring Boot application. It seems that I have really missed something, because none of the methods described here work for me: 5 ways to customize Spring MVC JSON/XML output
Environment:
Java 11
Spring boot: 2.1.6.RELEASE
My relevant dependencies:
```<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-data-jpa</artifactId>
</dependency>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-web</artifactId>
</dependency>
<dependency>
<groupId>io.springfox</groupId>
<artifactId>springfox-swagger2</artifactId>
</dependency>
<dependency>
<groupId>io.springfox</groupId>
<artifactId>springfox-swagger-ui</artifactId>
</dependency>
<dependency>
<groupId>javax.validation</groupId>
<artifactId>validation-api</artifactId>
</dependency>
<dependency>
<groupId>com.fasterxml.jackson.datatype</groupId>
<artifactId>jackson-datatype-jsr310</artifactId>
<version>2.10.0</version>
</dependency>
```
POJO:
```@Data
@NoArgsConstructor
public class User {
...
@ApiModelProperty(example = "2019/01/01", notes = "date of birth as YYYY/MM/DD")
@Past(message = "Are you really from the future?")
private LocalDate dateOfBirth;
}
```
Spring configuration:
```@Configuration
public class OutputConfiguration {
private static final String PATTERN = "yyyy/MM/dd";
@Bean
@Primary
public ObjectMapper objectMapper(Jackson2ObjectMapperBuilder builder) {
ObjectMapper objectMapper = builder.createXmlMapper(false).build();
objectMapper.configure(SerializationFeature.WRITE_DATES_AS_TIMESTAMPS, false);
objectMapper.setDateFormat(new SimpleDateFormat(PATTERN));
return objectMapper;
}
}
```
If I add ```@JsonFormat(pattern="yyyy/MM/dd")``` into the POJO before the ```LocalDate``` variable, then everything works fine. But because I have a lot of ```LocalDate``` variables, I would like to apply this configuration at the application level.
This is the error I get:
```org.springframework.http.converter.HttpMessageNotReadableException: JSON parse error: Cannot deserialize value of type `java.time.LocalDate` from String "2019/05/29": Failed to deserialize java.time.LocalDate: (java.time.format.DateTimeParseException) Text '2019/05/29' could not be parsed at index 4; nested exception is com.fasterxml.jackson.databind.exc.InvalidFormatException: Cannot deserialize value of type `java.time.LocalDate` from String "2019/05/29": Failed to deserialize java.time.LocalDate: (java.time.format.DateTimeParseException) Text '2019/05/29' could not be parsed at index 4
at [Source: (PushbackInputStream); line: 2, column: 18] (through reference chain: com.remal.gombi.component.domain.User["dateOfBirth"])
```
What am I doing wrong?
UPDATE
If I add the following configuration, then the GET REST request returns a properly formatted LocalDate (```"dateOfBirth": "29/05/2019"```), BUT the POST method still expects the ISO date format (```"dateOfBirth": "2019-01-01"```). It is so strange.
```@Bean
public Jackson2ObjectMapperBuilder customJacksonMapper() {
return new Jackson2ObjectMapperBuilder()
.indentOutput(true)
.serializers(new LocalDateSerializer(DateTimeFormatter.ofPattern(DATE_PATTERN)));
}
```
Comment: According to [this tutorial](https://www.baeldung.com/spring-boot-formatting-json-dates), the @JsonFormat should be enough.
Comment: Have you tried with `spring.jackson.date-format=yyyy/MM/dd` in `application.properties`? This and other `Jackson Properties` can be found [here](https://docs.spring.io/spring-boot/docs/current/reference/html/common-application-properties.html)
Comment: You can also write you own application level Seriliazer and add it using `@JsonSerialize`
|
Title: Is the law of conservation of energy still valid?
Tags: general-relativity;energy;cosmology;energy-conservation
Question: Is the law of conservation of energy still valid or have there been experiments showing that energy could be created or lost?
Comment: http://motls.blogspot.de/2010/08/why-and-how-energy-is-not-conserved-in.html
Comment: Related: https://physics.stackexchange.com/q/2838/2451 , https://physics.stackexchange.com/q/10309/2451 and links therein.
Comment: There is no global energy conservation in cosmology, if that is what you mean. Locally energy is still conserved ($\nabla_\mu T^{\mu\nu}=0$).
Here is another answer: Energy has been shown to be conserved in all circumstances where it is currently possible to test it experimentally. It is also conserved according to theory in any system with time translation invariance. This is the case in all known physics including general relativity.
Some people have tried to argue that energy is not conserved in general relativity, or that conservation is approximate, trivial or meaningless. This is not the case. A variety of fallacious arguments are used to support non-conservation, e.g. some theorists say that general relativity does not have time-translation invariance because the gravitational field is not invariant. The obvious solution to this is to include the gravitational field as a dynamical field with its own time-translation invariance. Despite this, such incorrect arguments have even made their way into textbooks written by well known cosmologists. This is not something where you should rely on the word of authority. Check the maths and the logic yourself.
This subject has been discussed on Physics Stack Exchange several times before, so I won't expand on this answer. Suffice it to say that energy is conserved in all established physical laws, including general relativity. It is not approximate, or trivial, or true only in special cases.
Comment for this answer: As I said, it has been discussed here before. Here are three other answers and two papers with more maths https://physics.stackexchange.com/questions/2597/energy-conservation-in-general-relativity/3409#3409
https://physics.stackexchange.com/questions/41662/why-does-no-physical-energy-momentum-tensor-exist-for-the-gravitational-field/68332#68332
https://physics.stackexchange.com/questions/259759/conservation-of-energy-vs-expansion-of-space/260845#260845
http://www.prespacetime.com/index.php/pst/article/view/81
https://pdfs.semanticscholar.org/36d2/4be2c4a8f1c40a6bcf0321a80ce1585be471.pdf
Comment for this answer: no math deduction?
Here is another answer: Energy conservation becomes vacuous or invalid in the general theory of relativity and especially in cosmology. See http://motls.blogspot.com/2010/08/why-and-how-energy-is-not-conserved-in.html
Why and what does it imply? First of all, Noether's theorem makes the energy conservation law equivalent to the time-translational symmetry. In general backgrounds in GR, the time-translational symmetry is broken (especially in cosmology), so the corresponding energy conservation law is broken, too, despite the fact that the energy conservation law (and the corresponding time-translational symmetry) is an unassailable principle in all of pre-general-relativistic physics.
One example of a possible subtlety we have to be careful about: $\nabla_\mu T^{\mu\nu}=0$ holds in GR but because it contains the covariant derivative $\nabla$, this law can't be brought to the equivalent integral form. The extra Christoffel symbol terms explicitly measure how much the energy conservation law is violated at the given point. There's no way to redefine $T_{\mu\nu}$ so that the conservation law would hold with partial derivatives $\partial_\mu$ but the energy would still retain a coordinate-independent value that actually constrains the final state in any way.
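Written out, the covariant statement contains exactly those extra connection terms: $\nabla_\mu T^{\mu\nu} = \partial_\mu T^{\mu\nu} + \Gamma^\mu_{\mu\lambda} T^{\lambda\nu} + \Gamma^\nu_{\mu\lambda} T^{\mu\lambda} = 0$, so the would-be conserved current $\partial_\mu T^{\mu\nu}$ picks up source terms proportional to the Christoffel symbols.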
If one views the background as variable and appreciates that the underlying laws as being time-translational-invariant, it doesn't help because the time-translational symmetry is a subgroup of the diffeomorphism group which is a local (gauge) symmetry in GR, and all physical states must therefore be invariant under it. The invariance is the same thing as saying that the generator – the energy itself – identically vanishes. So we may declare that there's a conserved energy in GR but it's zero.
We may see the same point if we try to associate energy to gravitational waves. In general spacetimes, we will fail to find a good formula. It's not hard to see why. The total stress-energy tensor comes from the variation of the action with respect to the metric tensor. The variation of the "matter-field" part of the action gives us the matter part of the energy/momentum density. However, the variation of the gravitational part, the Einstein-Hilbert action, gives us an additional term, the Einstein curvature tensor. Of course, the sum of both vanishes – this condition is nothing else than Einstein's equations – because the metric tensor is a dynamical variable in GR and the action has to be stationary under variations of all dynamical fields.
We may also try to invent other definitions of the total energy in general spacetimes. They will either explicitly refuse to be conserved; or they will be identically zero; or they will depend on the chosen spacetime coordinates (in the latter case, it will actually be the case that the whole "beef" of the energy will be just an artifact of the choice of coordinates and there will be no "meaningful piece" that would actually depend on the matter distribution). There's no way to define "energy" in general (cosmological) situations that would be nonzero, coordinate-choice-independent, and conserved at the same moment.
For asymptotically flat or other asymptotically time-translationally-invariant spacetimes, we may again define the total energy, the ADM mass, but it is not possible to exactly say "where it is located" and the cleanest way to determine the ADM mass is from the asymptotic conditions of the spacetime.
Cosmology
In cosmology, the most explicit example of the text above is the FRW uniform and isotropic cosmology. In that case, the total energy stored in dust which has $p=0$, vanishing pressure, is conserved. However, the total energy stored in radiation is decreasing as $1/a$ where $a$ are the linear dimensions of the Universe simply because each photon (or particle of radiation) sees its wavelength grow as $a$ and energy goes like $1/\lambda$ i.e. $1/a$.
There are other states of matter I could discuss such as cosmic strings and cosmic domain walls which obey different power laws. But the most interesting example I will mention is the cosmological constant. It's an energy density of the vacuum. Because the cosmological constant is "constant", this energy density is always and everywhere the same. So because the density is constant and the volume of spacetime grows as $a^3$ in our spacetime dimension, the total energy stored in the Universe grows as $a^3$, too.
Cosmic inflation is driven by a "temporary cosmological constant" so the total energy of the Universe grows with the volume of the Universe, too. In Alan Guth's words, inflation (or the Universe) is the ultimate free lunch. Inflation explains why the mass/energy of the visible Universe is so much hugely larger than the mass scales of particle physics.
For different mixtures of matter obeying different equations of state (roughly speaking, with different ratios of pressure and energy density), one will see the total energy increase or decrease or be constant. Generally, the total energy of the Universe will tend to increase as the Universe expands if the Universe is filled with matter of increasingly negative pressure; the total energy will decrease if it is filled with matter of increasingly positive pressure.
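All of these power laws follow from a single rule: for matter with equation of state $p = w\rho$, the energy density dilutes as $\rho \propto a^{-3(1+w)}$, so the total energy scales as $E \propto \rho\, a^3 \propto a^{-3w}$: constant for dust ($w=0$), $1/a$ for radiation ($w=1/3$), and $a^3$ for a cosmological constant ($w=-1$).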
Comment for this answer: Shouldn't the total energy of the Universe be infinite ?
Comment for this answer: Thanks, JCL. ... Dear Richart, I realize that. Still, I believe that I also have the right to quote my own text about the very same topic. ;-) @jjcale: the total energy of the visible Universe (whose current radius is 46 billion light years) is finite. Whether or not the total energy of the whole Universe is finite depends on whether the Universe (its volume) is finite. We don't know. We just know that if finite, it is very large - the curvature radius of the whole Universe is at least hundreds of billions of light years, way greater than the radius of the visible Universe.
Comment for this answer: @jjcale see http://physics.stackexchange.com/q/24017/
Here is another answer: It is still valid.
Only two hypothetical exceptions exist:
1) The quantum uncertainty principle. Energy can be uncertain if time is certain, and vice versa. So virtual particles can violate the energy conservation law for small amounts of time. These violations average out at normal scales.
2) The general relativity model. The Universe is not the same at all times. Since energy conservation is a consequence of time uniformity, it is possible that it is violated on cosmological scales.
|
Title: Serialize Javascript array as PHP string for Laravel cache
Tags: javascript;php;laravel;serialization;redis
Question: I need to save a PHP array as serialized data suitable for Laravel's cache system. My code so far using node-php-serialization is:
```var serialize=require("php-serialization").serialize;
var sernew = serialize(result, "string");
```
This produces:
```"s:11:"bob,dave,mark";"
```
But this doesn't work in Laravel's cache. I need:
```"s:27:\"[\"bob\",\"dave\",\"mark\"]\";"
```
Can this package do this? Does anyone know of a way it can be formatted as above?
Many thanks.
Additional details:
The array to be inserted comes from the cache, is converted to Javascript array, altered, then converted back to the cache string:
```rediscache.get(key, function (err, result) {
result = unserialize(result);
result = JSON.parse(result);
for(var i = result.length - 1; i >= 0; i--) {
if(result[i] === username) {
result.splice(i, 1);
}
}
var sernew = serialize(result, "string");
});
```
Here is the accepted answer: The result you want - ```"s:27:\"[\"bob\",\"dave\",\"mark\"]\";"``` - is a JSON string ```["bob","dave","mark"]```.
What you insert back to serialize is just a string. In this exact case, doing this will work:
```var sernew = serialize(JSON.stringify(("bob,dave,mark").split(",")), "string");
```
But it might not work in all cases if you're not just having a simple array.
Can you show the full path of what you are doing with the string? Something might be done incorrectly.
Comment for this answer: I've added some additional details. I'll look into converting the array back to JSON...
Comment for this answer: The result that you get after doing the splicing is an array. You should be turning that array back into a JSON string with JSON.stringify() as I've mentioned and only then serializing: `var sernew = serialize(JSON.stringify(result), "string");`
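Putting the answer and the comments together, a sketch of the corrected callback (hedged: `serialize`/`unserialize` are assumed to come from the php-serialization package used in the question, and the write-back call is hypothetical):
```rediscache.get(key, function (err, result) {
    result = unserialize(result);   // PHP-serialized string -> JSON string
    result = JSON.parse(result);    // JSON string -> JS array
    for (var i = result.length - 1; i >= 0; i--) {
        if (result[i] === username) {
            result.splice(i, 1);
        }
    }
    // turn the array back into a JSON string *before* PHP-serializing it
    var sernew = serialize(JSON.stringify(result), "string");
    rediscache.set(key, sernew);    // hypothetical write-back
});
```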
|
Title: Grouping 2D numpy array in average
Tags: python;numpy
Question: I am trying to group a numpy array into a smaller size by taking the average of the elements, such as taking the average of each 5x5 sub-array in a 100x100 array to create a 20x20 array. As I have huge data to manipulate, what is an efficient way to do that?
Comment: Similar to [this](https://stackoverflow.com/questions/18645013/windowed-maximum-in-numpy/18645174#18645174) answer as well.
Here is the accepted answer: I have tried this with a smaller array, so test it with yours:
```import numpy as np
nbig = 100
nsmall = 20
big = np.arange(nbig * nbig).reshape([nbig, nbig]) # 100x100
small = big.reshape([nsmall, nbig//nsmall, nsmall, nbig//nsmall]).mean(3).mean(1)
```
An example with 6x6 -> 3x3:
```nbig = 6
nsmall = 3
big = np.arange(36).reshape([6,6])
array([[ 0, 1, 2, 3, 4, 5],
[ 6, 7, 8, 9, 10, 11],
[12, 13, 14, 15, 16, 17],
[18, 19, 20, 21, 22, 23],
[24, 25, 26, 27, 28, 29],
[30, 31, 32, 33, 34, 35]])
small = big.reshape([nsmall, nbig//nsmall, nsmall, nbig//nsmall]).mean(3).mean(1)
array([[ 3.5, 5.5, 7.5],
[ 15.5, 17.5, 19.5],
[ 27.5, 29.5, 31.5]])
```
Comment for this answer: Here is a generalization to N-dimensional arrays https://stackoverflow.com/a/73078468/3753826
Here is another answer: Note that eumiro's approach does not work for masked arrays as ```.mean(3).mean(1)``` assumes that each mean along axis 3 was computed from the same number of values. If there are masked elements in your array, this assumption does not hold any more. In that case, you have to keep track of the number of values used to compute ```.mean(3)``` and replace ```.mean(1)``` by a weighted mean. The weights are the normalized number of values used to compute ```.mean(3)```.
Here is an example:
```import numpy as np
def gridbox_mean_masked(data, Nbig, Nsmall):
# Reshape data
rshp = data.reshape([Nsmall, Nbig//Nsmall, Nsmall, Nbig//Nsmall])
# Compute mean along axis 3 and remember the number of values each mean
# was computed from
mean3 = rshp.mean(3)
count3 = rshp.count(3)
# Compute weighted mean along axis 1
mean1 = (count3*mean3).sum(1)/count3.sum(1)
return mean1
# Define test data
big = np.ma.array([[1, 1, 2],
[1, 1, 1],
[1, 1, 1]])
big.mask = [[0, 0, 0],
[0, 0, 1],
[0, 0, 0]]
Nbig = 3
Nsmall = 1
# Compute gridbox mean
print(gridbox_mean_masked(big, Nbig, Nsmall))
```
Here is another answer: Average a 2D array over subarrays of size NxN:
```import numpy as np
height, width = data.shape
data = np.average(np.split(np.average(np.split(data, width // N, axis=1), axis=-1), height // N, axis=1), axis=-1)
```
Comment for this answer: Nice one! Just a clarification that average and split are numpy functions.
Here is another answer: This is pretty straightforward, although I feel like it could be faster:
```from __future__ import division
import numpy as np
Norig = 100
Ndown = 20
step = Norig//Ndown
assert step == Norig/Ndown # ensure Ndown is an integer factor of Norig
x = np.arange(Norig*Norig).reshape((Norig,Norig)) #for testing
y = np.empty((Ndown,Ndown)) # for testing
for yr,xr in enumerate(np.arange(0,Norig,step)):
for yc,xc in enumerate(np.arange(0,Norig,step)):
y[yr,yc] = np.mean(x[xr:xr+step,xc:xc+step])
```
You might also find scipy.signal.decimate interesting. It applies a more sophisticated low-pass filter than simple averaging before downsampling the data, although you'd have to decimate one axis, then the other.
|
Title: msbuild itemgroup items
Tags: batch-file;msbuild;msbuild-itemgroup
Question: I have an msbuild file which executes a batch file.
Msbuild file:
```<PropertyGroup>
<ItemAString>Green;Red;Blue</ItemAString>
<ItemBString>Uno;Due;Tre</ItemBString>
<ItemCString>Song;Movie;Picture</ItemCString>
</PropertyGroup>
<ItemGroup>
<ItemsA Include="$(ItemAString.Split(';'))" />
<ItemsB Include="$(ItemBString.Split(';'))" />
<ItemsC Include="$(ItemCString.Split(';'))" />
</ItemGroup>
<Target Name = "CallBatch">
<!-- THIS DOES NOT WORK -->
<Exec Command="mybatch.bat %(ItemsA.Identity) %(ItemsB.Identity) %(ItemsC.Identity)" />
</Target>
```
Batch file is very simple:
```echo Params = [%1] - [%2] - [%3]
```
I want to get next output:
```Params = Green - Uno - Song
Params = Red - Due - Movie
Params = Blue - Tre - Picture
```
How to achieve this?
Comment: Search around for 'cross-product', answer should be on here somewhere. Example: http://stackoverflow.com/questions/3893467/msbuild-batching-on-three-independent-variables/3893905#3893905 or http://stackoverflow.com/questions/37186867/msbuild-merge-item-groups/37210038#37210038 or ...
Comment: Not sure if thats correct, but you are first accessing variables with `$` and then with `%`. I am not into msbuild but that jumps to my eyes.
Here is the accepted answer: I found solution:
```<Project DefaultTarget="DoTheMagic" xmlns="http://schemas.microsoft.com/developer/msbuild/2003" ToolsVersion="12.0">
<PropertyGroup Condition=" '$(TFP)'=='' ">
<TFP>$(MSBuildToolsPath)\Microsoft.Build.Tasks.v4.0.dll</TFP>
<TFP Condition=" !Exists('$(TFP)')">$(MSBuildFrameworkToolsPath)\Microsoft.Build.Tasks.v4.0.dll</TFP>
<TFP Condition=" !Exists('$(TFP)')">$(windir)\Microsoft.NET\Framework\v4.0.30319\Microsoft.Build.Tasks.v4.0.dll</TFP>
</PropertyGroup>
<UsingTask
TaskName="Bukake"
TaskFactory="CodeTaskFactory"
AssemblyFile="$(TFP)" >
<ParameterGroup>
<ItemsA Required="True" ParameterType="System.String"/>
<ItemsB Required="True" ParameterType="System.String"/>
<ItemsC Required="True" ParameterType="System.String"/>
<Result ParameterType="Microsoft.Build.Framework.ITaskItem[]" Output="True"/>
</ParameterGroup>
<Task>
<Code Type="Fragment" Language="cs">
<![CDATA[
string[] itemsA = ItemsA.Split(new char[] {';'}, StringSplitOptions.RemoveEmptyEntries);
string[] itemsB = ItemsB.Split(new char[] {';'}, StringSplitOptions.RemoveEmptyEntries);
string[] itemsC = ItemsC.Split(new char[] { ';' }, StringSplitOptions.RemoveEmptyEntries);
List<TaskItem> items = new List<TaskItem>();
for (int index = 0; index < itemsA.Length; index++)
{
TaskItem item = new TaskItem();
item.ItemSpec = "item";
item.SetMetadata("itemA", itemsA[index]);
item.SetMetadata("itemB", itemsB[index]);
item.SetMetadata("itemC", itemsC[index]);
items.Add(item);
}
Result = items.ToArray();
]]>
</Code>
</Task>
</UsingTask>
<PropertyGroup>
<ItemAString>Green;Red;Blue</ItemAString>
<ItemBString>Uno;Due;Tre</ItemBString>
<ItemCString>Song;Movie;Picture</ItemCString>
</PropertyGroup>
<Target Name = "CallBatch">
<Message Text="$(TFP)" />
<Bukake ItemsA="$(ItemAString)"
ItemsB="$(ItemBString)"
ItemsC="$(ItemCString)">
<Output TaskParameter="Result" ItemName="Dundonja" />
</Bukake>
<ItemGroup>
<PreparedItems Include="@(Dundonja)"/>
</ItemGroup>
<!-- <Message Text="Dundonja: %(Dundonja.Identity) %(Dundonja.itemA) %(Dundonja.itemB) %(Dundonja.itemC)"/> -->
<Exec Command="mybatch.bat Dundonja %(Dundonja.Identity) %(Dundonja.itemA) %(Dundonja.itemB) %(Dundonja.itemC)"/>
</Target>
```
|
Title: Using Linq to object, how can I get one field's value based on another in the same list
Tags: c#;linq;linq-to-objects
Question: I have a class that is defined as following:
```public class MyClass
{
public int ID { get; set; }
public string Name { get; set; }
}
```
Then I have a logic where I build a list of that class:
```MyList = new List<MyClass>();
foreach(MyClassData myClass in Service.MyClassList)
{
MyList.Add(new MyClass{ID = myClass.ID, Name = myClass.Name});
}
```
Now, I need to get an 'ID' based on the 'Name'.
I'm trying the following:
```int id01 = MyList.Where(x => x.FileName.Equals("File01")).Single(x => x.FileID)
```
Doing that I'm getting an error message:
```
Cannot implicitly convert type 'int' to 'bool'
```
How would I do that using LINQ?
Comment: int id01 = MyList.Where(x => x.FileName.Equals("File01")).Select(x => x.ID).FirstOrDefault(); Do not know where you are getting FileID (should be just ID)
Here is the accepted answer: The ```Single``` method's signature is:
```public static TSource Single<TSource>(
    this IEnumerable<TSource> source,
    Func<TSource, bool> predicate
)
```
It gets a predicate that should return a boolean but you are returning an integer. It is used for filtering and not projecting.
What you are looking for is:
```int id01 = MyList.Single(x => x.FileName.Equals("File01")).FileID;
```
Or better use ```SingleOrDefault```:
```int? id01 = MyList.SingleOrDefault(x => x.FileName.Equals("File01"))?.FileID;
```
Also see if you want the ```SingleOrDefault``` or the ```FirstOrDefault```
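For reference, a sketch of the projection-based alternative from the question's comment, written against the property names actually declared on ```MyClass```:
```int id01 = MyList
    .Where(x => x.Name == "File01")  // filter first
    .Select(x => x.ID)               // then project to the int you want
    .FirstOrDefault();
```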
|
Title: How to generate an ear file from a maven-archetype-webapp artifact?
Tags: java;maven-2;web-applications;war;ear
Question: I currently have a project built with maven-archetype-webapp artifact. The default packaging for this project is war.
Is it possible for me to insert the maven-ear-plugin into this webapp's pom.xml to generate an ear file that contains this project's war? I tried that, but the war file doesn't get embedded in the generated ear file. It has everything except the war file.
I read many Maven-related articles, and perhaps I could use the maven-archetype-j2ee-simple artifact. However, I'm reluctant to use it for 2 reasons:
First, this artifact handles EJBs and all the extra features that I don't use. It makes my project look bloated.
Second, it seems like it requires me to install the web module into the repository first before I can create the ear file. Is this the preferred way to create an ear file?
How do I create an ear file that contains the war file using maven-ear-plugin from my webapp's pom.xml? If this way is not possible, what's the preferred way?
I'm sorry if my questions sound a little novice, I realized I have whole lot more to learn about Maven.
Thanks much.
Here is the accepted answer: The maven ear plugin assumes that any WAR is a separate project, so you need two projects, plus a parent project. It's a function of how maven does packaging. Each project produces one artifact. There are ways around this, but Maven will punish you for your sins. This question goes into some detail on this topic.
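A minimal sketch of that three-project layout (all module and artifact names here are made up): the parent aggregates the WAR and EAR modules, and the EAR module declares the WAR as a dependency so the maven-ear-plugin can package it.
```<!-- parent pom.xml -->
<packaging>pom</packaging>
<modules>
    <module>myapp-war</module>
    <module>myapp-ear</module>
</modules>

<!-- myapp-ear/pom.xml -->
<packaging>ear</packaging>
<dependencies>
    <dependency>
        <groupId>com.example</groupId>
        <artifactId>myapp-war</artifactId>
        <version>1.0.0</version>
        <type>war</type>
    </dependency>
</dependencies>
```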
|
Title: BinarySearch how to find the value in the array between the two neighbors?
Tags: c#;arrays;algorithm;find;binary-search
Question: I have a sorted array double.
The goal is to find the index in the array whose value is <= the search value.
For example, the array contains the numbers ```{0, 5, 12, 34, 100}``` with index range [0 .. 4].
Searching for value=25, I want to get index=2 (the value falls in the range between 12 and 34).
I do not understand how a binary search would work in this case.
``` public class MyComparer : IComparer<double>
{
public int Compare(double x, double y)
{
//<-------- ???
}
}
public double[] spline_x;
MyComparer cmpc = new MyComparer();
int i=Array.BinarySearch(spline_x, x, cmpc);
```
Here is the accepted answer: When binary search does not find item in array, it returns a negative number which is the bitwise complement of the index of the first element that is larger than value. Here is the way you can use it to find range:
```double[] spline_x = { 0D, 5D, 12D, 34D, 100D };
int i = Array.BinarySearch(spline_x, 25);
if (i >= 0)
{
// your number is in array
}
else
{
int indexOfNearest = ~i;
if (indexOfNearest == spline_x.Length)
{
// number is greater that last item
}
else if (indexOfNearest == 0)
{
// number is less than first item
}
else
{
// number is between (indexOfNearest - 1) and indexOfNearest
}
}
```
Comment for this answer: @Mixer keep in mind, that after decrementing *i* you can get `-1` of index of last item of array
Comment for this answer: Thank you. I simplified it a bit: `int i = Array.BinarySearch(spline_x, x); if (i >= 0) { ... }`
Here is another answer: Not familiar with C#, but a naive binary search does the trick, finding the last number <= N, which is the boundary you described in the question.
```int find_last(int num, const vector<int>&v, size_t begin, size_t end) {
if (begin >= end) {
return -1;
}
size_t mid = (begin + end) / 2;
if (v[mid] > num) {
// [mid, end) is bigger than num, the boundary is in [begin, mid)
return find_last(num, v, begin, mid);
}
// v[mid] is qualified as <= N, search [mid+1, end) for
// approaching a better boundary if exists.
  int index = find_last(num, v, mid+1, end);
  return (index == -1 ? (int)mid : index);
}
```
|
Title: How can I make a span stretch the available space between two other spans?
Tags: css;width;html
Question: I want to create an element that consists of three spans within a div. The three spans shall use the whole width provided by the div. The left and right span have a fixed width and the centre one should use the whole available space between them. I've been trying many different things (float, overflow, etc.) but I haven't found an answer yet and I'm running out of ideas...
The code is rather simple:
```<div class="row">
<span class="rowLeft">LEFT</span>
<span class="rowCentre">CENTER</span>
<span class="rowRight">RIGHT</span>
</div>
```
using the following CSS:
```.row {
display: block;
height: 62px;
border: 1px dotted black;
}
.rowLeft {
float: left;
width: 40px;
height: 60px;
border: 1px solid red;
}
.rowCentre {
float: left;
height: 60px;
border: 1px dashed blue;
}
.rowRight {
float: right;
width: 60px;
height: 60px;
border: 1px solid green;
}
```
I've created a jsFiddle for this: http://jsfiddle.net/ezAdf/
Question: Starting from here, how can I make the centre span stretch the available space between left and right span?
Comment: Search for fluid layout with fixed sidebars
Comment: Thanks, that is at least a good starting point.
Here is the accepted answer: Kinda like @j08691's solution, with some changes. (Works in 4 browsers though.)
I removed the floats, added ```display: table-cell``` to the spans and ```display: table``` plus ```width: 100%``` to ```.row```.
Working fiddle here.
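For completeness, a sketch of the CSS after those changes (floats dropped, table display added; heights kept from the question):
```.row {
    display: table;
    width: 100%;
    height: 62px;
    border: 1px dotted black;
}
.rowLeft, .rowCentre, .rowRight {
    display: table-cell;
    height: 60px;
}
.rowLeft   { width: 40px; border: 1px solid red; }
.rowCentre { border: 1px dashed blue; } /* takes the remaining width */
.rowRight  { width: 60px; border: 1px solid green; }
```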
Here is another answer: You can use the ```display:table-cell``` CSS property on each span, and then set the width on the center span to 100%.
jsFiddle example
```.row {
display: block;
height: 100px;
border: 1px dotted black;
}
.rowLeft {
width: 40px;
height: 60px;
border: 1px solid red;
display:table-cell;
}
.rowCentre {
height: 60px;
border: 1px dashed blue;
display:table-cell;
width:100%;
}
.rowRight {
width: 60px;
height: 60px;
border: 1px solid green;
display:table-cell;
}
```
Comment for this answer: Thanks, but this solution breaks the definition of the cell width. Try to set e.g. the width of .rowRight to 100px and you'll see that the width does not change.
Comment for this answer: Thanks for your answer, but your fiddle clearly shows that the _width_ of the cells is still not correct.
Here is another answer: Just make the following changes to your CSS file:
```.row {
display: block;
height: 62px;
border: 1px dotted black;
position: relative;
}
.rowLeft {
width: 40px;
height: 60px;
border: 1px solid red;
float: left;
}
.rowCentre{
position: absolute;
left: 40px;
right: 60px;
height: 60px;
border: 1px dashed blue;
float: left;
}
.rowRight{
width: 60px;
height: 60px;
border: 1px solid green;
float: right;
}
```
Comment for this answer: Thanks, but this doesn't really help me because of the absolute position of the centre span. I want to create these elements dynamically, so the positions are not known upfront.
|
Title: SQL First transaction of each day
Tags: sql;date;ms-access;greatest-n-per-group
Question: I have a table that contains data from a number of transactions, and I have been trying to obtain the earliest record per day, per client, adjusting other solutions I have seen on this website (such as this one), but they have not worked for me.
The table transactions is
Time Id Client Price Quantity
1/2/2013 09:33:20 AM 1 Albert 100.00 5,300
1/2/2013 10:34:20 AM 2 Albert 100.90 4,800
1/2/2013 10:34:20 AM 3 Lewis 80.00 25,987
1/2/2013 11:35:23 AM 4 Benson 251.00 700
1/2/2013 14:36:20 AM 5 Albert 100.00 2,250
1/2/2013 15:31:12 AM 6 Albert 99.50 1,340
1/3/2013 09:33:20 AM 7 Benson 250.00 900
1/3/2013 15:13:12 AM 8 Benson 250.00 800
1/3/2013 16:03:55 AM 9 Lewis 80.00 18,890
1/4/2013 09:01:01 AM 10 Albert 101.00 1,190
1/4/2013 09:01:01 AM 11 Albert 100.99 98,890
1/4/2013 09:01:01 AM 12 Lewis 80.98 6,890
1/4/2013 10:51:00 AM 13 Benson 279.18 190
1/4/2013 10:51:00 AM 14 Albert 99.36 78,053
...
The Id is unique, and is also sorted chronologically by definition. The Time is not unique, meaning there could be 2 transactions that happen exactly at the same time.
The SQL query would need to pull out the first transaction each client did, per day, together with the price and the quantity, something like:
Date Client Price Quantity
1/2/2013 Albert 100.00 5,300
1/2/2013 Benson 251.00 700
1/2/2013 Lewis 80.00 25,987
1/3/2013 Benson 250.00 900
1/3/2013 Lewis 80.00 18,890
1/4/2013 Albert 101.00 1,190
1/4/2013 Lewis 80.98 6,890
1/4/2013 Benson 279.18 190
Can anyone help me on how to do it in SQL?
Comment: The database is in MS Access, but I'm likely going to do the query in MySQL
Comment: What database are you using?
Here is the accepted answer: You don't specify the database. So here is a general approach. The idea will work in most databases, but some of the functions are different.
```select cast(t.time as date) as "date", t.*
from transactions t
where not exists (select 1
from transactions t2
where cast(t2.time as date) = cast(t.time as date) and
t2.client = t.client and
t2.id < t.id
);
```
The expression for getting a date from a time varies. In some databases this might be ```date(time)``` (MySQL) or ```trunc(time)``` (Oracle) or something else.
EDIT:
In Access, this would be:
```select CDATE(t.time) as [date], t.*
from transactions t
where not exists (select 1
from transactions t2
where CDATE(t2.time) = CDATE(t.time) and
t2.client = t.client and
t2.id < t.id
);
```
In MySQL:
```select date(t.time) as "date", t.*
from transactions t
where not exists (select 1
from transactions t2
where date(t2.time) = date(t.time) and
t2.client = t.client and
t2.id < t.id
);
```
Comment for this answer: Thanks, for some reason the CDATE did not work in MS Access, but in MySQL it went fine..
Comment for this answer: This doesn't partition by client as well, though.
Here is another answer: In SQL Server, something like this should work:
```select cast(Time as date), Client, Price, Quantity
from (
select *, row_number()
over (partition by Client, cast(Time as Date) order by Id) [rn]
from transactions
) x where x.rn = 1
```
Here's a sqlfiddle: http://sqlfiddle.com/#!6/0725d/1
|
Title: Firefox OS: How can I turn on the flash of camera?
Tags: firefox-os;flashlight
Question: I would like to keep the camera flash LED on.
Is there any specific API?
Comment: This question appears to be off-topic because it shows no prior research nor minimal understanding of the problem being solved
Here is another answer: This is only possible using the mozCameras API which you can use in Firefox OS 2.0 and higher.
|
Title: Amazon availability zones
Tags: amazon-web-services;amazon-s3;amazon-ec2;high-availability
Question: I'm fairly new to Amazon services and wondering what some of the best practices are for clustering/load balancing?
I have a load balancer in my colo (NJ) which may potentially be upgraded to Netscaler.
The application we're hosting on Amazon is nothing crazy and we don't expect too much traffic. We're looking at 2 Linux instances that would run a Node JS application with a MongoDB replica set. From what I understand, Amazon will evenly divide the traffic amongst the zones. The end-user's location has no effect on where they'll be directed (i.e. if I have a server on the west coast and one on the east coast, a user on the east coast could be directed to either east or west).
```If I wanted to direct users traffic based on location, a global DNS solution would make more sense?
```
One server would be the master db and the other would be slave with data replicating to each other.
```Anybody have any experience with this and how is the network performance?
```
A question about EC2/S3
```EC2 Instances and S3 buckets can only communicate if they are in the same region, correct?
```
Here is the accepted answer: The load balancer only works within one region. If you want to balance traffic between different regions you will need to look at latency based routing in Route 53. Keep in mind that availability zone and region have different meanings within EC2
MongoDB replica set is a flexible master/slave configuration. If the primary instance fails, a secondary, based on configured priority can automatically become primary. Network within a region is fast, you will have some latency if you use multiple regions.
EC2 instances can access an S3 bucket in any region; you won't pay for outgoing bandwidth if both are in the same region.
Comment for this answer: what he is saying is true.
|
Title: Need If Expression to compare time and date in Jmeter
Tags: jmeter
Question: I have a requirement where I need to compare two dates and times.
I am testing a few REST APIs where the dates change dynamically after 8:00 AM UTC; basically my application provides data for one day:
If the UTC time is not yet 8 AM, my application will display data for the past day:
"GET data for Assets During 2021-11-28T8:00:00 to 2021-11-29T8:00:00"
But when the UTC time is past 8 AM on the current date, the API date will dynamically change to the next day:
"GET data for Assets During 2021-11-29T8:00:00 to 2021-11-30T8:00:00"
I am planning to run a soak test for 48 hr. If anyone can provide a statement to compare, I can use the timeshift function to go to one set of APIs when the time is still not 8 AM (${__timeshift(YYYY-MM-dd,-P1D))T08:00:00) and the other (${__timeshift(YYYY-MM-dd,P1D))) after 8 AM.
Thanks,
Here is the accepted answer: I think you will need to go for __groovy() function and Calendar class, something like:
will execute if current UTC hour is later than 8
```${__groovy(Calendar.getInstance(TimeZone.getTimeZone('UTC')).get(Calendar.HOUR_OF_DAY) > 8,)}
```
will execute if current UTC hour is earlier than 8
```${__groovy(Calendar.getInstance(TimeZone.getTimeZone('UTC')).get(Calendar.HOUR_OF_DAY) < 8,)}
```
should do the trick for you.
More information: 6 Tips for JMeter If Controller Usage
|
Title: How to translate SQL queries to cypher in the optimal way?
Tags: postgresql;neo4j;cypher
Question: I am new to neo4j, using version 3.0. I have a huge transactional dataset that I converted to a graph model. I need to translate the SQL query below into Cypher.
```create table calc_base as
select a.ticket_id ticket_id, b.product_id, b.product_desc,
a.promotion_flag promo_flag,
sum(quantity) sum_units,
sum(sales) sum_sales
from fact a
inner join dimproduct b on a.product_id = b.product_id
where store_id in (select store_id from dimstore)
and b.product_id in (select product_id from fact group by 1 order by count(distinct ticket_id) desc limit 5000)
group by 1,2,3,4;
```
Here is my ER diagram and corresponding graph model. My relationships for this query are:
```MATCH (a:PRODUCT)
MATCH (b:FACT {PRODUCT_ID: a.PRODUCT_ID})
CREATE (b)-[:HAS_PRODUCT]->(a);
MATCH (a:STORE)
MATCH (b:FACT {STORE_ID: a.STORE_ID})
CREATE (b)-[:HAS_STORE]->(a);
```
My cypher translation for this query is :
```PROFILE
MATCH (b:PRODUCT)
MATCH (a:FACT)
MATCH (c:STORE)
CREATE (d:CALC_BASE {TICKET_ID: a.TICKET_ID, PRODUCT_ID: a.PRODUCT_ID, PRODUCT_DESC: b.PRODUCT_DESC,
PROMO_FLAG: a.PROMOTION_FLAG, KPI_UNITS: SUM(a.QUANTITY_ABS), KPI_SALES: SUM(a.SALES_ABS) })
Q = (MATCH (e:FACT)
WITH count(PRODUCT_ID) AS PRO_ID_NUM , COUNT(DISTINCT TICKET_ID) AS TICKET_ID_NUM
ORDER BY TICKET_ID_NUM DESC)
WHERE b.PRODUCT_ID = Q
ORDER BY TICKET_ID, PRODUCT_ID, PRODUCT_DESC, PROMO_FLAG
```
My main problem is defining ```group by``` and subqueries in Cypher.
How can I write this query into cypher in an optimal way?
Comment: Welcome to both Stack Overflow and graph databases! :-) To harness the power of graph databases, you have to map your connections to edges (or _relationships_ in Neo4j's terms). So if `MYTABLE` and `TMP_P` are connected on the `p_id` attribute, you should add a `(:MyTable)-[:REL]->(:TmpP)` relationship. (Note that by convention, node labels are CamelCase and relationships are UPPERCASE.) There excellent guides for RDBMS developers: see this [white paper](https://neo4j.com/resources/rdbms-developer-graph-white-paper/), and this [post](https://neo4j.com/developer/guide-sql-to-cypher/).
Here is the accepted answer: For one, there is no GROUP BY in Cypher, as the grouping columns are implicitly the non-aggregation columns in each row.
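As a tiny illustration using the labels from this question: the non-aggregated column below automatically becomes the grouping key, with no explicit GROUP BY clause.
```// SQL: SELECT product_id, COUNT(DISTINCT ticket_id) FROM fact GROUP BY product_id
MATCH (f:FACT)
RETURN f.PRODUCT_ID AS productID, COUNT(DISTINCT f.TICKET_ID) AS tickets
```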
I'm assuming you have constraints and indexes set up? You'll need these set up correctly for performant queries.
A major red flag I'm seeing is that there are no relationships at all in these queries and likely in your entire data model. Graph databases are made to model relationships between things, and these tend to replace the concept of foreign keys in relational dbs. I'll speak more on better ways to model your data at the end.
That said, I'll take a stab at translating this with your current data model.
My approach is to go from the inside out. First let's get collections for allowed store_id and b.product_id values.
```// first collect allowed STORE_IDs
MATCH (s:STORE)
WITH COLLECT(s.STORE_ID) as STORE_IDs
MATCH (e:FACT)
// now get PRODUCT_IDs with the most associated TICKET_IDs
WITH STORE_IDs, e.PRODUCT_ID, COUNT(DISTINCT e.TICKET_ID) as TICKET_ID_CNT
ORDER BY TICKET_ID_CNT DESC
LIMIT 5000
WITH STORE_IDs, COLLECT(e.PRODUCT_ID) as PRODUCT_IDs
// we now have 1 row with both collections, and will do membership checking with them later
// next get only PRODUCT nodes with PRODUCT_ID in the collection of allowed PRODUCT_IDs
MATCH (b:PRODUCT)
WHERE b.PRODUCT_ID in PRODUCT_IDs
WITH b, STORE_IDs
// now get FACT nodes with STORE_ID in the collection of allowed STORE_IDs
// and associated with PRODUCT nodes by PRODUCT_ID
MATCH (a:FACT)
WHERE a.STORE_ID in STORE_IDs
AND a.PRODUCT_ID = b.PRODUCT_ID
WITH a, b
// grouping is implicit, the non-aggregation columns are the grouping key
WITH a.TICKET_ID as TICKET_ID, b.PRODUCT_ID as PRODUCT_ID, b.PRODUCT_DESC as PRODUCT_DESC, a.PROMOTION_FLAG as PROMOTION_FLAG, SUM(a.QUANTITY) as SUM_UNITS, SUM(a.SALES) as SUM_SALES
CREATE (:CALC_BASE {TICKET_ID:TICKET_ID, PRODUCT_ID:PRODUCT_ID, PRODUCT_DESC:PRODUCT_DESC, PROMO_FLAG:PROMOTION_FLAG, SUM_UNITS:SUM_UNITS, SUM_SALES:SUM_SALES})
```
That should get you what you want.
And now back to the major problem with all this...you're using a graph db for non-graph data and queries. You're using foreign keys and attempting to join nodes rather than modeling these as relationships. You're also using abbreviated names, which makes it hard to figure out the meaning of your data and how it's supposed to relate to each other.
My advice to you is to rethink your data model, especially on how your data connects together. Look for where you're using foreign key joining, and instead think about how to replace that with relationships between your nodes, complete with the nature of those relationships.
Data modeled in a more graph-oriented way with relationships lends itself to more graph-oriented and performant queries, as well as a data model that is easier to understand and communicate to others.
EDIT
Now that you have relationships between different types of nodes, we can simplify the query a bit.
The approach will be similar, we will still go from the inside out rather than some inner subquery (though with Neo4j 3.1, pattern comprehension can be used like an inner query in various cases).
```// first get products with the most tickets (top 5k)
MATCH (f:FACT)
WITH f.PRODUCT_ID as productID, COUNT(DISTINCT f.TICKET_ID) as ticketIDCnt
ORDER BY ticketIDCnt DESC
LIMIT 5000
MATCH (p:PRODUCT)
WHERE p.PRODUCT_ID = productID
WITH p
// with those products, get related facts (graph equivalent of a join)
MATCH (p)<-[:HAS_PRODUCT]-(f:FACT)
// ensure the fact has a related store.
// if ALL facts have a related store, you don't need this WHERE clause
WHERE (f)-[:HAS_STORE]->(:STORE)
WITH f.TICKET_ID as TICKET_ID, p.PRODUCT_ID as PRODUCT_ID, p.PRODUCT_DESC as PRODUCT_DESC, f.PROMOTION_FLAG as PROMOTION_FLAG, SUM(f.QUANTITY) as SUM_UNITS, SUM(f.SALES) as SUM_SALES
CREATE (:CALC_BASE {TICKET_ID:TICKET_ID, PRODUCT_ID:PRODUCT_ID, PRODUCT_DESC:PRODUCT_DESC, PROMO_FLAG:PROMOTION_FLAG, SUM_UNITS:SUM_UNITS, SUM_SALES:SUM_SALES})
```
Again, you'll want to make sure there are indexes and unique constraints where appropriate in your data model to speed up your matches.
There are still several areas where you might want to think about modifying your data model (where it makes sense, of course). There is a concept of ticket IDs, but no :Ticket nodes. You have created :CALC_BASE nodes, but have not related them to to :Products or tickets. In general, it's useful to see where you're still using the concept of foreign keys, and seeing if it would be better to model these as relationships to other nodes.
And again on GROUP BY, this is handled for you in Cypher. Your rows are made up of non-aggregation columns, and aggregation columns. The non-aggregation columns are automatically used by Cypher as the grouping key (the equivalent of grouping by those columns). Since SUM_UNITS and SUM_SALES are the result of SUM() operations, which are aggregation functions, all the other columns are automatically used as the grouping key.
Comment for this answer: Thanks @InverseFalcon for the great answer! I have updated my question with more details and better naming convention .
Comment for this answer: Your approach is working, thanks. But now, knowing the relationships between FACT, STORE and PRODUCT nodes (on the condition FACT.product_id = PRODUCT.product_id and FACT.store_id = STORE.store_id), we don't need to translate all the SQL conditions into Cypher, like `inner join dimproduct b on a.product_id = b.product_id`, `where store_id in (select store_id from dimstore)` and `b.product_id in (select product_id from fact)`. Am I right? If yes, then after removing the above join and conditions from the Cypher, how do I perform `group by`?
Comment for this answer: Good to see relationships in the graph. Is the problem still about subqueries and group by? Because the approach remains pretty much the same: don't try to match the structure of the sql query with inner queries. Instead, work from the inside out. Construct the collections you'll use to check membership later, and use WITH to connect different parts of your queries, passing that data along for use later. As for GROUP BY, that's implicit, based on the non-aggregation columns, so it's pretty much handled for you.
Comment for this answer: Added a query based upon your data model changes.
|
Title: Server-side word automation
Tags: c#;word-automation
Question: I am looking for alternatives to using openxml for a server-side word automation project. Does anyone know any other ways that have features to let me manipulate word bookmarks and tables?
Comment: @JohnBaum you should have mentioned this in the question itself..
Comment: What are you trying to automate and why does openxml not fit your needs, without those details it will be difficult to provide other options.
Comment: Now knowing what you want to do see this question it is basically the same thing you want to do. http://stackoverflow.com/questions/3308299/replace-bookmark-text-in-word-file-using-open-xml-sdk
Comment: I am trying to delete all the text between a BookmarkStart and BookmarkEnd. OpenXML doesn't let me do this as far as I know.
Comment: That method does not work for me. The next sibling after my bookmark is not the only element that is part of the actual bookmark in the doc.
Here is the accepted answer: I am currently developing a server-side Word automation project for my company and I am using DocX, a very simple and straightforward API to work with. The approach I am using: whenever I need to work with XML directly, this API has a property named "xml" in the Paragraph class which gives you access to the underlying XML directly so that I can work with it. The best part is it's not breaking the XML and not corrupting the resulting document. Hope this helps!
Example code using DocX..
``` XNamespace ns = "http://schemas.openxmlformats.org/wordprocessingml/2006/main";
using(DocX doc = DocX.Load(@"c:\temp\yourdoc.docx"))
{
foreach( Paragraph para in doc.Paragraphs )
{
if(para.Xml.ToString().Contains("w:Bookmark"))
{
if(para.Xml.Element(ns + "BookmarkStart").Attribute("Name").Value == "yourbookmarkname")
{
// you got to your bookmark, if you want to change the text..then
para.Xml.Elements(ns + "t").FirstOrDefault().SetValue("Text to replace..");
}
}
}
}
```
Alternative API exclusively to work with bookmarks is .. http://simpleooxml.codeplex.com/
Example on how to delete text from bookmarkstart to bookmarkend using this API..
``` MemoryStream stream = DocumentReader.Copy(string.Format("{0}\\template.docx", TestContext.TestDeploymentDir));
WordprocessingDocument doc = WordprocessingDocument.Open(stream, true);
MainDocumentPart mainPart = doc.MainDocumentPart;
DocumentWriter writer = new DocumentWriter(mainPart);
//Simply Clears all text between bookmarkstart and end
writer.PasteText("", "YourBookMarkName");
//Save to the memory stream, and then to a file
writer.Save();
DocumentWriter.StreamToFile(string.Format("{0}\\templatetest.docx", GetOutputFolder()), stream);
```
Loading the word document into different API's from memory stream.
```//Loading a document file into memorystream using SimpleOOXML API
MemoryStream stream = DocumentReader.Copy(@"c\template.docx");
//Opening it from the memory stream as OpenXML document
WordprocessingDocument doc = WordprocessingDocument.Open(stream, true);
//Opening it as DocX document for working with DocX Api
DocX document = DocX.Load(stream);
```
Comment for this answer: not out of the box.. but you can get hold of it from the underlying xml directly..depending on your requirement..
Comment for this answer: thats what PasteText method is doing.. writer.PasteText("write some text", "yourBookMarkName"); .. as simple as that..
Comment for this answer: You can use it to erase all the lines spanning that bookmark using PasteText method, and its easy to use OpenXml to instert a table isn't it?
Comment for this answer: yes..I agree. that is a potential problem unless you work with memory streams or save and open the same file within the program separately for achieving different functionality..
Comment for this answer: Both the api's mentioned above support loading directly from a stream, so i think its easy to work with them by passing the document as a memory stream between methods..
Comment for this answer: DocumentReader is class from SimpleOOXML APi, did you reference it properly in your solution?
Comment for this answer: DocumentReader is a helper class in SimpleOOXML api to help loading the document, you don't need it with DocX API, you can just load the document like - DocX.Load(@"c:\yourdoc.docx");
Comment for this answer: Oh.. But that's just a normal stream object, you can just use FileStream from System.IO to do that, like `FileStream stream = new FileStream(@"c:\yourdoc.docx", FileMode.Open);`
Comment for this answer: Yes.. did you also try SimpleOOXMl, as it is more specific to working with bookmarks, did you try using the PasteText Method in it...its working exactly how you are intending to do with your case..
Comment for this answer: Try commenting out these two lines in the CleanNodes() method: `var start = _doc.Xml.Descendants(ns + "bookmarkStart").ToList();` and `var end = _doc.Xml.Descendants(ns + "bookmarkEnd").ToList();`
Comment for this answer: I leave the office by 5 (GMT) and that's when u r free :( I guess the code is just working fine and is a bit noddy and most straight forward to what u r looking for. Just refactor it to your needs. All the best..Cheers :)
Comment for this answer: am i able to access the bookmarks in the word doc using docX?
Comment for this answer: can you give an example on how i can use the xml to get at the bookmarks
Comment for this answer: Does this api support finding a bookmark and writing some text directly following it? THats another one of the things i need to accomplish
Comment for this answer: ok, my bookmark is a selection of 3 lines. Based on some criteria i have to either erase all the bookmark selection of lines or enter a new line after the bookmark and write a table there. Which of these two tasks does parse text accomplish?
Comment for this answer: well i was trying to avoid having to use two different api's and have one single solution.
Comment for this answer: if i use two seperate apis, i have difficulty saving as openxml opens the file and docx also opens the file but both cant save to it
Comment for this answer: i actually cant get access to the memory stream and docwriter for the docx api. Is there something i have to iport?
Comment for this answer: Thanks for updating the answer. One thing that i still cant get to work is the documentreader. It cant find that reference and i dont know how to get visual studio to recognize it?
Comment for this answer: I want to use just the docx api. How can i get access to the document reader/writer?
Comment for this answer: i tried this approach just now to replace the bookmark text and it didnt appear to do anything. Did it work in your test?
Comment for this answer: yes i am using the simpleooxml and the pastetext method. It doesnt seem to find the bookmark when i enter the bookmarkname. It says "Sequence contains no elements"
Comment for this answer: let us [continue this discussion in chat](http://chat.stackoverflow.com/rooms/9234/discussion-between-john-baum-and-flowerking)
Comment for this answer: the bookmark deletion works now. The inserting a table at a specific point however does not.
Comment for this answer: let us [continue this discussion in chat](http://chat.stackoverflow.com/rooms/9328/discussion-between-john-baum-and-flowerking)
|
Title: Android gallery does not show camera images in 4.4?
Tags: android;android-intent;camera;gallery
Question: I have an activity which takes a picture into my AppFolder using the camera, but the pictures are not visible from the Android gallery. I'm almost sure that in versions below 4.4 it worked fine.
``` Intent intent = new Intent("android.media.action.IMAGE_CAPTURE");
imageFileName = ...
File imageFileFolder = new File(Environment
.getExternalStorageDirectory(), "/AppFolder/");
imageFileFolder.mkdir();
intent = new Intent(
android.provider.MediaStore.ACTION_IMAGE_CAPTURE);
intent.putExtra(
MediaStore.EXTRA_OUTPUT,
Uri.parse("file:///sdcard/AppFolder/"
+ imageFileName));
photoTimeStamp = System.currentTimeMillis();
startActivityForResult(intent, Constants.TAKE_PICTURE);
```
|
Title: Coroutine acts different when called normally VS. through event handler
Tags: c#;unity3d;admob
Question: The following coroutine, when run on the press of a UI button, will run normally and show "3...2...1...0" and then disappear. However, when run as a part of the event handler delegate HandleRewardBasedVideoClosed (part of AdMob) it will show "3...1..." and then disappear. For the life of me I cannot figure out why it would act differently when called in these two different manners.
```public IEnumerator Countdown() {
timeLeft = 3;
while (timeLeft >= 0) {
countdownText.text = timeLeft.ToString();
yield return new WaitForSecondsRealtime(1.0f);
timeLeft--;
}
if (timeLeft < 0) {
countdownText.gameObject.SetActive (false);
}
}
public void HandleRewardBasedVideoClosed(object sender, EventArgs args){
MonoBehaviour.print("HandleRewardBasedVideoClosed event received");
if (reward) {
StartCoroutine ( gm.Countdown ());
} else
SceneManager.LoadScene (SceneManager.GetActiveScene ().name);
}
```
Comment: Does your print statement fire twice?
Comment: Shouldn't timeLeft be a local variable in the Coroutine scope? Maybe Countdown is being called twice at the same cycle, and as the variable is not local, it's being decremented by both routines.
Comment: @MatrixTai I hadn't realized but yes it does seem like time is passing twice as fast and so it only count 3 and 1. Why would this be the case?
Comment: The print statement does not fire twice. WaitForSeconds doesn't work because Time.timeScale is set to 0 at the time this is hit
Comment: May I know if your time passed two time faster? Like just using 1.5s to count 3s when in Admob
Comment: I havent use admob before, so I want you to make another try first. Use `WaitForSeconds` instead of `WaitForSecondsRealtime`. See if the problem fixed.
Here is the accepted answer: I have a suspicion about this one. Is it that the count down is in fact doing "3.2 .. 2.1 .. 0.SetActive(false)" so quickly that you're not seeing it running the coroutine twice? If so the following code will resolve that particular problem (if that's the case):
```private bool isCountingDown = false;
public IEnumerator Countdown()
{
    if ( isCountingDown ) yield break; // a plain 'return' is not allowed in an iterator method
isCountingDown = true;
for ( int timeLeft = 3; timeLeft >= 0; timeLeft-- )
{
countdownText.text = timeLeft.ToString();
yield return new WaitForSecondsRealtime(1.0f);
}
countdownText.gameObject.SetActive (false);
isCountingDown = false;
}
public void HandleRewardBasedVideoClosed(object sender, EventArgs args)
{
MonoBehaviour.print("HandleRewardBasedVideoClosed event received");
if (reward) {
// This might be a redundant check, as we're doing this in the
// Coroutine, but we may as well not go through the process of
// calling it in the first place.
if (! isCountingDown ) StartCoroutine ( gm.Countdown ());
} else
SceneManager.LoadScene (SceneManager.GetActiveScene ().name);
}
```
Comment for this answer: I like the idea of using a for loop instead of while to try to ensure it doesn't skip. I'll try this when I get home. I'm still curious why it would act different when run through a button instead of the delegate
Comment for this answer: You can improve this by making `timeLeft` a local variable instead of a member variable. It seems unnecessary and dangerous to not have it this way (which is likely also one of the root causes of his problem).
Comment for this answer: You've got an event handler in place. Your ad manager might call the event twice, for "whatever" reason. In addition, when you press the button, are you calling the coroutine, and then the ad manager ALSO calls the routine? Did you clean up the code after yourself and your tests? Without further information, it's hard to tell, but these are possible reasons. An event or method doesn't get called twice for no reason.
|
Title: How can I call $digest when using this?
Tags: angularjs
Question: Now I write code without injecting $scope:
```app.controller('MyCtrl', function(){
// ..
this.list = [];
// what ever work with a list of
this.$digest(); // is not a function, why?
});
```
Before, I could do:
```app.controller('MyCtrl', function($scope){
// ..
$scope.list = [];
// what ever work with a list of
$scope.$digest();
});
```
But I need ```this```, so how can it work?
Comment: @Claies is that a canonical answer? Do you consider this question a duplicate?
Comment: just because you use `this` doesn't mean you can't inject `$scope`.
Comment: here is an answer I wrote a while back to try to explain the concept: http://stackoverflow.com/a/30644370/2495283
Comment: **storing values *directly*** on `$scope` is considered bad practice by **some**, however using `$scope` for what it is intended for is never a problem.
Comment: I would say yes, the answer should be a match to this question.
Comment: it isn't. `$digest` is a function of `$scope`, and therefore, if you need to call `$digest`, you need to inject `$scope`. `this` points to the controller; using `this` (ControllerAs) instead of `$scope` creates an alias of the controller on `$scope`, but doesn't expressly give the controller access to `$scope` methods. `$scope` is a provider that does a lot, and when people say "dont' use `$scope`", they probably should be saying "don't use `$scope` when it isn't necessary". It is *necessary* if you need `$digest`.
Comment: as a side point, there are very few reasons to ever need `$digest`, and if you are so against `$scope` that you are arguing against it here, it's surprising that you aren't trying to find a way not to use `$digest`.
Comment: here is another more in depth analysis of the same topic, maybe this can explain it better (differently, at least). http://stackoverflow.com/a/14168699/2495283
Comment: $scope considered bad practise
Comment: how use $digest in this?
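Pulling the comments together, a minimal sketch: keep the controllerAs-style ```this```, but inject ```$scope``` solely for ```$digest``` (the async source here is a made-up stand-in for any non-Angular callback):
```app.controller('ListController', function($scope) {
    var vm = this;
    vm.list = [];
    // someAsyncSource is hypothetical; any callback outside Angular's digest works the same way
    someAsyncSource.onData(function(items) {
        vm.list = items;
        $scope.$digest(); // $digest is a $scope method, so $scope must be injected
    });
});
```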
|
Title: maven-clean-plugin is not removing all given directories
Tags: java;maven;maven-3;maven-plugin;maven-clean-plugin
Question: In my project ```.pom``` I set up the ```maven-clean-plugin``` like so:
```<plugin>
<artifactId>maven-clean-plugin</artifactId>
<version>2.6.1</version>
<configuration>
<filesets>
<fileset>
<directory>src/main/webapp/bower_components</directory>
</fileset>
<fileset>
<directory>node_modules</directory>
</fileset>
<fileset>
<directory>node</directory>
</fileset>
</filesets>
</configuration>
</plugin>
```
The plugin's purpose is to remove the directories which are created by ```frontend-maven-plugin``` (that plugin itself works OK).
Problem
Now the issue is that, for no apparent reason, one of the above folders is never removed, and it is always the one "in the middle". I added 3 ```filesets``` and the 2nd one is always the one not removed, see logs:
```[INFO] Deleting /src/main/webapp/bower_components (includes = [], excludes = [])
[INFO] Deleting /node_modules (includes = [.bindings], excludes = [])
[INFO] Deleting /node (includes = [], excludes = [])
```
If I change folders' order:
```[INFO] Deleting /src/main/webapp/bower_components (includes = [], excludes = [])
[INFO] Deleting /node (includes = [.bindings], excludes = [])
[INFO] Deleting /node_modules (includes = [], excludes = [])
```
Also, the 2nd entry always contains this part: ```includes = [.bindings]```, which I believe results in the folder not being deleted.
Why is that happening, and how to fix it?
EDIT, debug log
```mvn -X clean``` result, I think this is where it breaks:
http://pastebin.com/ep1i2Bjk
After parsing the ```pom.xml``` it reads the configuration with this parameter. However, I did not put it there.
Comment: run `mvn` command with `-X` option to see full logs.
Comment: @FranMontero see update
Comment: @khmarbaise because I want to remove the directory, not it's content. Why one folder is deleted OK and another is not?
Comment: @khmarbaise You do not need to specify the content of the folder. Obviously, what is inside should be deleted.
Comment: You have not defined what to include nor what's excluded...See the [example in the documentation](http://maven.apache.org/plugins/maven-clean-plugin/examples/delete_additional_files.html). BTW: Just try `mvn -D clean.verbose=true package` Apart from that the output shows all configuration parts inclusive those which are defaults for the plugin.
Comment: Sorry...you wan't to remove a directory without removing it's content ? I assume i misunderstand a thing but from my point of view it does not make sense to try to delete a directory without removing it's content.
Here is the accepted answer: OK, I have found the issue! At first it looked like a bug in the plugin (one is already reported against ```maven-clean-plugin 2.6.1```), but it turned out to be a misconfiguration on my side. Read on for the resolution.
Situation
You have a parent project and a child project, both have to use the plugin.
Now;
in parent project you specify some filesets.
in child project you specify some filesets.
What I think is expected:
The plugin reads both the parent's and the child's filesets and cleans them all.
What happens:
The plugin OVERRIDES the parent's filesets with the child's filesets, in order. HOWEVER, the parent's extra configuration for each fileset (e.g. ```includes```) is not cleared!
My situation:
Parent .pom:
```<filesets>
<fileset>
<directory>test-output</directory>
</fileset>
<fileset>
<directory>${basedir}/</directory>
<includes>
<include>.bindings</include>
</includes>
</fileset>
</filesets>
```
Child's .pom:
```<filesets>
<fileset>
<directory>node</directory>
</fileset>
<fileset>
<directory>node_modules</directory>
</fileset>
<fileset>
<directory>src/main/webapp/bower_components</directory>
</fileset>
</filesets>
```
With the above in place, the config read by the plugin is:
```<filesets>
<fileset>
<directory>node</directory>
</fileset>
<fileset>
<directory>node_modules</directory>
<includes>
<include>.bindings</include>
</includes>
</fileset>
<fileset>
<directory>src/main/webapp/bower_components</directory>
</fileset>
</filesets>
```
instead of a config with 5 filesets.
Bug report https://issues.apache.org/jira/browse/MCLEAN-64
RESOLVED
```
For configuration there are some magic attributes to instruct how xml elements should be combined. In this case add combine.children="append" to the filesets tag. More details about this can be found on the POM Reference page. http://maven.apache.org/pom.html
```
Worth mentioning: it is enough to set ```combine.children="append"``` in the child project's ```.pom```, as shown below.
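A sketch of the fixed child configuration (directories taken from the example above):
```<filesets combine.children="append">
    <fileset>
        <directory>node</directory>
    </fileset>
    <fileset>
        <directory>node_modules</directory>
    </fileset>
    <fileset>
        <directory>src/main/webapp/bower_components</directory>
    </fileset>
</filesets>
```
With this attribute the parent's two filesets and the child's three are appended into the expected five, instead of being merged by position.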
Comment for this answer: On the same note, I had an empty plugin definition on the parent. I deleted the `maven-clean-plugin` from the parent and my child clean worked.
|
Title: PHP on IIS Displaying php code instead of page
Tags: php;iis
Question: I installed PHP 5.3 on IIS (Windows Server 2008), followed the instructions on
http://php.net/manual/en/install.windows.iis7.php
When I open PHP pages now I get the PHP code instead of the processed page. Why does this happen, and how do I fix it?
Here is the accepted answer: If you are seeing your PHP source code in the browser, it is likely that you are using PHP short tags in your code ```( <? instead of <?php )``` and you don't have the short_open_tag directive turned on in php.ini.
The other possibility is that your handler mapping is not correct.
Configure IIS to Handle PHP Requests
For IIS to host PHP applications, you must add a handler mapping that tells IIS to pass all PHP-specific requests to the PHP application framework by using the FastCGI protocol.
Configure IIS to handle PHP requests by using IIS Manager
Open IIS Manager. At the server level, double-click Handler Mappings.
In the Actions pane, click Add Module Mapping.... In the Add Module Mapping dialog box, specify the configuration settings as follows:
Request path: *.php
Module: FastCgiModule
Executable: "C:[Path to your PHP installation]\php-cgi.exe"
Name: PHP via FastCGI
Click OK.
In the Add Module Mapping confirmation dialog box that asks if you want to create a FastCGI application for this executable, click Yes.
Test that the handler mapping works correctly by creating a phpinfo.php file in the C:\inetpub\wwwroot folder that contains the following code:
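A minimal phpinfo.php contains just the standard call:
```<?php phpinfo(); ?>
```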
Open a browser and navigate to http://dns-or-ip-to-server/phpinfo.php. If everything was set up correctly, you will see the standard PHP information page.
NOTE: If you do not see FastCgiModule in the Modules: list, the module is either not registered or not enabled. To check whether the FastCGI module is registered, open the IIS configuration file located at %windir%\System32\inetsrv\config\applicationHost.config and check that it is declared in the ```<globalModules>``` section.
In the same file, also check that the FastCGI module is added to the ```<modules>``` section.
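The entries to look for would presumably be along these lines (the iisfcgi.dll path is the usual default and may differ on your system):
```<globalModules>
    <add name="FastCgiModule" image="%windir%\System32\inetsrv\iisfcgi.dll" />
</globalModules>

<modules>
    <add name="FastCgiModule" />
</modules>
```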
Comment for this answer: looks like that! checking now
Comment for this answer: that was that! wasted a lot of time to figure what is the problem, thanks!
|
Title: Storing Amazon S3 signed URLs in the database
Tags: python;amazon-web-services;amazon-s3
Question: My application generates a couple of signed URLs, so when the page is loaded, it takes a lot of time.
I want to generate those signed URLs only once and store them in the database, so I can retrieve those URLs from the database to show to the user.
Is this acceptable at all times or will there be cases where the signed URLs need to be generated freshly?
Comment: Not microseconds, as it is for multiple URLs and also makes API calls to S3 for connection objects etc. As of now, it takes around 20 seconds for 3 URLs.
Comment: Ok, I will try to improve that. But can you please answer my question - whether they need to be generated freshly?
Comment: And I am signing like this
distro = boto.cloudfront.distribution.Distribution(connection=my_connection....
my_signed_url = my_distro.create_signed_url(url, settings.KEYPAIR_ID, .....
Comment: Let us [continue this discussion in chat](http://chat.stackoverflow.com/rooms/131858/discussion-between-user2349115-and-michael-sqlbot).
Comment: Define "a lot of time." This shouldn't take should more than a millisecond or two. Please advise. There are a couple of problems with storing signed URLs. Are your signed URLs V2 (`&Signature=` base64) or V4 (`&X-Amz-Signature=` hexadecimal)?
Comment: Signing URLs is all done locally. There's no interaction with the S3 service when signing a URL, but there may be some overhead when instantiating the object doing the signing. Show some code? Perhaps you're not reusing objects where you could, or something along those lines?
Comment: You didn't answer my question about signature version... but based on what you've just added, the answer is entirely different. These are not S3 signed URLs, @user2349115. You are signing CloudFront URLs, and that's completely different than S3. The algorithm is different and the rules are different, even if you're using S3 behind CloudFront. Consider editing the question to reflect what you are actually doing.
|
Title: Ancestor-or-self issue at top level
Tags: xpath
Question: With the following XML, the top level is returning the nodes of all levels. There are no ancestors for the top level, so why am I getting its children?
```XML
<?xml version="1.0" encoding="ISO-8859-1"?>
<WBSs>
<WBS GUID="2">
<Name>work</Name>
<WBSs>
<WBS GUID="1">
<Name>Wall</Name>
<ParentWBS>2</ParentWBS>
</WBS>
<WBS GUID="2">
<Name>South Wall</Name>
<ParentWBS>2</ParentWBS>
</WBS>
<WBS GUID="3">
<Name>North Wall</Name>
<ParentWBS>2</ParentWBS>
</WBS>
</WBSs>
</WBS>
</WBSs>
```
XPATH
Note: apply-templates is invoked on .//WBS
```<xsl:variable name="wbsCode" select=".//ancestor-or-self::WBS/@GUID[1]"/>
```
Note: I have an XSLT instruction immediately following the XPath expression to stringify the nodes and join them with '.'.
Result
2.1.2.3
2.1
2.2
2.3
Desired result
2
2.1
2.2
2.3
XSLT
```<xsl:variable name="WBS_ELEMENT_TABLE">
<xsl:apply-templates select=".//WBS" mode="I_WBS_ELEMENT">
<xsl:with-param name="ProjectId" select="$ProjectId"/>
</xsl:apply-templates>
</xsl:variable>
<xsl:template match="WBS" mode="I_WBS_ELEMENT">
<xsl:param name="ProjectId"/>
<xsl:variable name="wbsCode" select=".//ancestor-or-sel181a:6c1c:918f:f3a4:2e8c:61d6:1a08:6b16WBS/@GUID[1]"/>
<xsl:variable name="temp2" select="string-join(($wbsCode), '.')"/>
<WBS_ELEMENT>
<xsl:value-of select="$temp2"/>
</WBS_ELEMENT>
</xsl:template>
```
Comment: Instead of writing *Apply template is on...* and *I have an XSLT instruction immediately following...*, please just include real code in an [mcve] that illustrates your problem. Thanks.
Comment: Here's the relevant xslt code... sorry about that.
Comment: XSLT added above... sorry about that, it started out as an XPATH problem.
Here is the accepted answer: ```//``` means ```/descendant-or-self::node()/```, so ```//ancestor::*``` means ```./descendant-or-self::node()/ancestor::*``` which finds all ancestors of all descendants, i.e. everything.
Get out of that habit of using ```//``` without thinking about what it means!
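In this case, since the template already matches each ```WBS```, a sketch of the fix is to drop the leading ```.//``` and select relative to the current node:
```<xsl:variable name="wbsCode" select="ancestor-or-self::WBS/@GUID"/>
```
The top-level ```WBS``` then contributes only its own ```@GUID``` (2), while each nested one yields its chain (e.g. 2.1) after the ```string-join```.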
|
Title: encrypted home partition .Private folder missing
Tags: encryption
Question: I have Ubuntu installed on a Dell laptop. It was in sleep mode (lid closed) with a low battery charge. When I opened it, the login screen appeared and the laptop turned off (no power left).
After I plugged it in and powered it on, I could not log in... looking in forums I found some similar cases, so I disabled auto-mount in ```fstab```, restarted the OS, logged in from a terminal (CTRL+F1) and ran
```e2fsck -yc /dev/sda2
```
75/0/0 errors found and "fixed".
Now, when I mount the sda2 partition (```sudo mount /dev/sda2 /home```) I see Access-Your-Private-Data.desktop and README.txt as expected, but the .Private shortcut that points to /home/.ecryptfs/username/.Private is broken... there is no /home/.ecryptfs folder...
```df -h``` shows that only 7 GB of 800 GB is free... so my files are still there!
And the lost+found folder is full of files and folders named #somenumber.
```ecryptfs-recover-private``` says:
```find: ‘/run/user/112/gvfs’: Permission denied
```
and finds nothing...
Any suggestions?
EDITED:
Thanks guys for the quick response. I created another user using only the sda1 partition, with auto-mount still disabled in ```/etc/fstab```:
```UUID=cf40cc55-18f3-4a20-a5ab-901522d6d383 / ext4 errors=remount-ro 0 1
#UUID=6ef2647e-1847-4e0e-9a2a-c95ff78a89e4 /home ext4 defaults 0 2
#/swapfile none swap sw 0 0
```
...now I can log in with the new user.
What is weird
I have faced this problem before, and recovery worked well using the ```ecryptfs-recover-private``` command as mentioned by Seppo Erviälä here, but now I see that the ```.Private``` shortcut points to nothing...
I logged in with the new user and mounted the partition (just clicking the taskbar shortcut). Now sda2 is mounted in ```/media/bocapio/6ef2647e-1847-4e0e-9a2a-c95ff78a89e4```
Listing the content of this folder:
```
ls -lah
```
```drwxr-xr-x 8 root root 4,0K Abr 27 18:53 .
drwxr-x---+ 3 root root 4,0K Abr 28 18:24 ..
drwx------ 164 root root 16K Set 3 2014 lost+found
dr-x------ 2 lucas lucas 4,0K Abr 25 18:00 lucas
```
but there is no .Private folder!
```sudo find /media -type d -name .Private``` returns nothing!
```sudo find /media -type d -name .ecryptfs``` returns nothing!
I'm afraid the .Private folder was moved to the lost+found folder...
Sorry for the poor initial explanation...
|
Title: In Rust, how do you explicitly tie the lifetimes of two objects together, without referencing each other?
Tags: opengl;reference;rust;lifetime
Question: The specific case where I ran into this was in using OpenGL, writing ```struct```s for a ```VertexBuffer``` and ```VertexArray```. Each struct is, in essence, a single ```GLuint``` that refers to the OpenGL object. In the simplest case, a ```VertexArray``` has exactly one ```VertexBuffer``` associated with it.
The problem is that the ```VertexArray``` cannot live longer than its associated ```VertexBuffer```. Rust doesn't know this though, since the reference that the ```VertexArray``` holds is internal to OpenGL, so it will have no problem calling the destructor on the ```VertexBuffer``` while an existing ```VertexArray``` references it.
The current solution I have is to put a reference in manually, which goes unused:
```struct VertexArray<'a> {
id: GLuint,
#[warn(dead_code)]
vbo: &'a VertexBuffer
}
```
In more complex cases, the reference might turn out to be necessary, but it feels inelegant and wasteful. A VAO with multiple VBOs could be implemented with an array/vector of references. Being able to change the associated buffers after the VAO has been created might also add this requirement.
Is there any way to achieve this same behaviour without the reference? Or, since the compiler can recognize that the reference is never used and give a warning, will it be optimized out?
Here is the accepted answer: This is one of the primary use cases for the ```PhantomData``` type, as demonstrated there in an example.
Applied to this case, you’ll end up with something like this:
```use std::marker::PhantomData;
struct VertexArray<'a> {
id: GLuint,
vbo_lifetime: PhantomData<&'a VertexBuffer>,
}
```
And instantiation will be something like this:
``` fn make<'a>(&'a self) -> VertexArray<'a> {
VertexArray {
id: …,
vbo_lifetime: PhantomData,
}
}
```
(This is eliding the generic type, allowing it to be inferred; you could also write ```PhantomData::<&'a VertexBuffer>```.)
|
Title: jQuery - find a class exist and return true/false
Tags: javascript;jquery;find
Question: I want to check if a class exists inside a child of an ```li```, and if the class exists, return ```true``` or something defined.
HTML
```<li class="show-more">
<a> if found selected then add to this</a>
<ul>
<li><a> sub-menu</a></li>
<li><a> sub-menu</a></li>
<li><a class="selected"> sub-menu</a></li>
<li><a> sub-menu</a></li>
</ul>
</li>
```
Here is my code, but it won't return true or false; it returns the URL of the anchor:
```var active_sub_menu = $('li.show-more ul li').find('selected');
alert(active_sub_menu);
if(typeof active_sub_menu == 'defined'){
$('li.show-more > a').addClass('selected');
}
```
Please don't suggest using CSS; I need JavaScript.
Comment: `find('selected')` and as you can see `'selected'` !== `'.selected'`
Here is the accepted answer: Why not simply go for ```.hasClass()``` or ```length```:
```if ( $("selector").length ) { /*EXISTS (greater than 0) */ }
```
or
```if ( $("selector")[0] ) { /*EXISTS (not undefined) */ }
```
or
```if ( $("selector").hasClass("someClass") ) { /*EXISTS (has class) */ }
```
```$('.show-more').each(function () {
  if ( $(this).find("li a.selected").length ) {
console.log('FOUND');
$(this).children("a").addClass('selected'); // Make main anchor also selected
} else {
console.log('NOT FOUND');
}
});```
```.selected {
background: red;
}```
```<ul>
<li class="show-more">
<a> if found .selected then add also to this anchor</a>
<ul>
<li><a> sub-menu</a></li>
<li><a> sub-menu</a></li>
<li><a class="selected"> sub-menu</a></li>
<li><a> sub-menu</a></li>
</ul>
</li>
</ul>
<script src="//code.jquery.com/jquery-3.1.0.js"></script>```
Comment for this answer: Please check the updated question; there is an anchor tag just after the list tag, and that is where I want to add the selected class if it is found in its child `ul li a`.
Here is another answer: Use ```length``` to test whether the selector returns elements:
```if( $('li.show-more ul li').find('.selected').length){
$('li.show-more > a').addClass('selected');
}
```
Comment for this answer: it is not working, I tried `alert($('li.show-more ul li').find('qa-nav-main-selected').length);` and it return `0` but there is `selected` class in a child
Here is another answer: ```if($('li.show-more ul li a').hasClass('selected')) {
$('li.show-more > a').addClass('selected');
}
```
I assume that's what you intended with ```typeof```?
Another option:
```var active_sub_menu = $('li.show-more ul li').find('.selected');
if(active_sub_menu.length) {
$('li.show-more > a').addClass('selected');
}
```
Comment for this answer: `active_sub_menu.length` also returning `0`??
|
Title: Error when deploying Spring Boot application in Google AppEngine + Non Zero Exit 2
Tags: spring-boot;google-app-engine;google-cloud-platform
Question: I am getting the below error when deploying my test Maven Spring Boot REST app to GAE -
[ERROR] Failed to execute goal com.google.cloud.tools:appengine-maven-plugin:2.0.0-rc5:deploy (default-cli) on project PublicLibrary: App Engine application deployment failed: com.google.cloud.tools.appengine.operations.cloudsdk.process.ProcessHandlerException: com.google.cloud.tools.appengine.AppEngineException: Non zero exit: 2 -> [Help 1]
My app.yaml looks like this:
```runtime: java
env: flex
runtime_config:
jdk: openjdk8
env_variables:
SPRING_PROFILES_ACTIVE: "gcp"
handlers:
- url: /.*
script: this field is required, but ignored
manual_scaling:
instances: 1
```
pom.xml
```<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
<modelVersion>4.0.0</modelVersion>
<parent>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-parent</artifactId>
<version>2.1.3.RELEASE</version>
<relativePath/> <!-- lookup parent from repository -->
</parent>
<groupId>com.publiclibrary</groupId>
<artifactId>PublicLibrary</artifactId>
<version>0.0.1-SNAPSHOT</version>
<name>PublicLibrary</name>
<description>Demo project for Spring data relationship</description>
<properties>
<java.version>1.8</java.version>
</properties>
<profiles>
<profile>
<id>cloud-gcp</id>
<dependencies>
<dependency>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-gcp-starter</artifactId>
<version>1.0.0.RELEASE</version>
</dependency>
</dependencies>
<build>
<plugins>
<plugin>
<groupId>com.google.cloud.tools</groupId>
<artifactId>appengine-maven-plugin</artifactId>
<version>2.0.0-rc5</version>
<configuration>
<deploy.projectId>libraryapi-03312019</deploy.projectId>
<deploy.version>1.0</deploy.version>
</configuration>
</plugin>
</plugins>
</build>
</profile>
</profiles>
<dependencies>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-actuator</artifactId>
</dependency>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-data-jpa</artifactId>
</dependency>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-data-rest</artifactId>
</dependency>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-validation</artifactId>
</dependency>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-web</artifactId>
</dependency>
<dependency>
<groupId>com.google.guava</groupId>
<artifactId>guava</artifactId>
<version>25.0-jre</version>
</dependency>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-devtools</artifactId>
<scope>runtime</scope>
</dependency>
<dependency>
<groupId>com.h2database</groupId>
<artifactId>h2</artifactId>
<scope>runtime</scope>
</dependency>
<dependency>
<groupId>org.projectlombok</groupId>
<artifactId>lombok</artifactId>
<optional>true</optional>
</dependency>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-test</artifactId>
<scope>test</scope>
<exclusions>
<exclusion>
<groupId>org.assertj</groupId>
<artifactId>assertj-core</artifactId>
</exclusion>
</exclusions>
</dependency>
</dependencies>
<build>
<plugins>
<plugin>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-maven-plugin</artifactId>
</plugin>
</plugins>
</build>
</project>
```
I would appreciate any help.
Comment: Can you try to deploy your application using [maven](https://cloud.google.com/appengine/docs/flexible/java/using-maven) instead of an app.yaml? It will be much easier; the command should be like: mvn appengine:deploy
Here is another answer: Thanks. I followed the sample app and added the differences to my pom.xml. That alone didn't fix the issue. Additionally, I added legacy health checks to app.yaml, which allowed the app to deploy (though it took quite some time):
```health_check:
 enable_health_check: True
 check_interval_sec: 5
 timeout_sec: 4
 unhealthy_threshold: 2
 healthy_threshold: 2
```
But after deployment, when I try to access the app at https://<project-id>.appspot.com, I get the below error -
Error: Server Error
The server encountered a temporary error and could not complete your request.
Please try again in 30 seconds.
UPDATE
I added the resources section to app.yaml, since Java on GAE is known to consume a lot of memory, as identified in this article:
```resources:
cpu: 2
memory_gb: 2.3
disk_size_gb: 10
volumes:
- name: ramdisk1
volume_type: tmpfs
size_gb: 0.5
```
Here is another answer: Check the official Spring Boot sample of Google Cloud Platform to see what you might be doing differently.
For example, I see that in the <properties> section it requires <maven.compiler.source> and <maven.compiler.target>:
``` <properties>
<java.version>1.8</java.version>
<maven.compiler.source>${java.version}</maven.compiler.source> <!-- REQUIRED -->
<maven.compiler.target>${java.version}</maven.compiler.target> <!-- REQUIRED -->
<project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
<appengine.maven.plugin>1.3.2</appengine.maven.plugin>
</properties>
```
Note that you are using version 2.0.0-rc5 of the appengine-maven-plugin, so if you add the property as above, keep it in sync with the version used in the plugin section:
``` <plugin>
<groupId>com.google.cloud.tools</groupId>
<artifactId>appengine-maven-plugin</artifactId>
<version>${appengine.maven.plugin}</version>
</plugin>
```
|
Title: Why doesn't rev(factor) work as a way to reverse the wt argument of dplyr181a:6c1c:918f:f3a4:2e8c:61d6:1a08:6b16top_n()?
Tags: r;dplyr;tidyverse;top-n
Question: Goal: return the row with thing1=F and thing2=MeFirst
Why does this not work?
```tibble(
row = 1:10,
thing1 = c(rep("F",5),rep("L",5)),
thing2 = c(rep("MeSecond",4),rep("MeFirst",2),rep("MeSecond",4))
) %>%
mutate(
thing1 = factor(thing1, levels = c("F", "L")),
thing2 = factor(thing2, levels = c("MeFirst", "MeSecond"))
) %>%
top_n(
.,
n = 1,
wt = rev(thing1)
) %>%
top_n(
.,
n = 1,
wt = rev(thing2)
)
```
Above returns rows 2:5.
I know this does work:
```tibble(
row = 1:10,
thing1 = c(rep("F",5),rep("L",5)),
thing2 = c(rep("MeSecond",4),rep("MeFirst",2),rep("MeSecond",4))
) %>%
mutate(
thing1 = factor(thing1, levels = c("F", "L")),
thing2 = factor(thing2, levels = c("MeFirst", "MeSecond"))
) %>%
top_n(
.,
n = -1,
wt = thing1
) %>%
top_n(
.,
n = -1,
wt = thing2
)
```
But the question is, why doesn't rev(thing2) work?
Here is another answer: The short answer is that you want to reverse the levels of the factor, not the factor itself: ```rev()``` reverses the order of the elements in the vector (so the weights no longer line up with the rows) while leaving the level order used for ranking untouched, whereas ```forcats::fct_rev()``` reverses the levels:
```library(dplyr)
library(forcats)
tibble(row = 1:10,
thing1 = c(rep("F", 5), rep("L", 5)),
thing2 = c(rep("MeSecond", 4), rep("MeFirst", 2), rep("MeSecond", 4))) %>%
mutate(thing1 = factor(thing1, levels = c("F", "L")),
thing2 = factor(thing2, levels = c("MeFirst", "MeSecond"))) %>%
top_n(n = 1, wt = fct_rev(thing1)) %>%
top_n(n = 1, wt = fct_rev(thing2))
# A tibble: 1 x 3
row thing1 thing2
<int> <fct> <fct>
1 5 F MeFirst
```
|
Title: ionic framework FCM plugin
Tags: ionic-framework;firebase;notifications;firebase-cloud-messaging
Question: So far I can successfully send notifications to all the apps via the Firebase console. The problem is that I cannot send a notification to a specific device. This is my code in the run function:
```.run(function($ionicPlatform) {
$ionicPlatform.ready(function() {
FCMPlugin.getToken(
function(token){
alert("fire base token :) :"+token);
},
function(err){
console.log('error retrieving token: ' + err);
}
)
});
```
I get the token, but when I use Firebase to send to that specific device it says the token is invalid. What part is wrong? Am I misunderstanding something? My project is the Ionic framework tabs sample code.
Comment: Are you sending the message from the Firebase Console?
Comment: It's hard to tell with the given details. Can you try and validate the token?
Comment: See the steps [here](https://firebase.google.com/docs/cloud-messaging/server#auth). Specifically, *If you want to confirm the validity of a registration token, you can do so by replacing "ABC" with the registration token*. (No worries. I'll try to help you out as much as I can).
Comment: Sorry for the late answer. Yes, I am trying to send to a specific device via the Firebase Console.
Comment: How can I validate the token? It seems that the token is right in terms of the format. (I am really new to this notification process)
Here is another answer: Use the code below:
```.run(function ($ionicPlatform, $rootScope, $state) {
$ionicPlatform.ready(function() {
if(window.cordova) {
FCMPlugin.onNotification(
function(data){
if(data.wasTapped){
// $state.go('message', {id:data.pageId});
// $location.path('app/userList');
console.log('onNotification tapped true');
} else {
console.log("xxxxxxxxxxxx"+data.message);
}
},
function(msg){
// alert("asdf"+msg)
console.log('onNotification callback successfully registered: ' + msg);
FCMPlugin.getToken(
function(token){
//alert(token);
window.localStorage.setItem("deviceId",token)
},
function(err){
console.log('error retrieving token: ' + err);
}
)
},
function(err){
// alert("fjkjg"+err)
console.log('Error registering onNotification callback: ' + err);
}
);
}
});
})
```
|
Title: Yii2 Rules customization
Tags: yii2
Question: I have a rule like this
```public function rules()
{
return [
[ ['movie_id'], 'required' ],
];
}
```
In my form, if I don't select a movie I get 'movie cannot be blank'. How can I change that sentence to a custom one?
``` public function attributeLabels()
{
return [
'id' => Yii::t( 'app', 'Movie' ),
];
}
```
I think I can change it by changing the value in attributeLabels; is there any other way to change it?
Here is the accepted answer: Try this:
```public function rules()
{
return [
[ [ 'movie_id'], 'required', 'message' => 'your-custom-message' ],
];
}
```
|
Title: Hold until pods are created java-client Kubernetes
Tags: scala;kubernetes
Question: I create a custom object using the Kubernetes API java-client in the beforeAll method of my integration tests. After the custom object is created, the pods get created too. However, it only works when I add a Thread.sleep of a few seconds; without it, the object is created and then all the tests execute immediately. I also defined a watch on custom object statuses, but that does not help either.
Is there any other way (besides Thread.sleep) to wait until the pods are created?
My code for the custom object creation:
```def createWatchCustomObjectsCalls() = {
client.getHttpClient.setReadTimeout(0, TimeUnit.SECONDS)
val watchCalls: Watch[V1Namespace] = Watch.createWatch(client,
apiInstance.listNamespacedCustomObjectCall(crdGroup, crdVersion, crdNamespace, crdPlural, "true", null, null, true,null, null),
new TypeToken[Watch.Response[V1Namespace]]{}.getType)
watchCalls
}
override def beforeAll(): Unit = {
val creationResourcePath = Source.getClass.getResource("/" + httpServerScriptName).getPath
val serverStartupProcessBuilder = Seq("sh", creationResourcePath, "&") #> Console.out
serverStartupProcessBuilder.run()
val body = convertYamlToJson()
val sparkAppCreation = apiInstance.createNamespacedCustomObject(crdGroup, crdVersion, crdNamespace, crdPlural, body,"true")
println(sparkAppCreation)
}
```
Here is another answer: You can synchronously check in a while loop whether the pods have been created:
```// poll until the expected pods exist
// (sketch: getCoreV1Api() is assumed to return a configured CoreV1Api)
var podsCreated = false
while (!podsCreated) {
  val currentPodList = getCoreV1Api()
    .listPodForAllNamespaces(null /* _continue */,
        null /* fieldSelector */,
        null /* includeUninitialized */,
        null /* labelSelector */,
        null /* limit */,
        "false" /* pretty */,
        null /* resourceVersion */,
        null /* timeoutSeconds */,
        false /* watch */)
    .getItems()
  // check items from currentPodList here
  podsCreated = !currentPodList.isEmpty() // replace with your own readiness condition
  if (!podsCreated) Thread.sleep(1000)
}
```
|
Title: WPF MouseDown on Datagrid => Get Column and Row
Tags: wpf;events;datagrid
Question: How can I recognize where I clicked on a ```DataGrid```?
```<DataGrid x:Name="TheGrid" SelectionMode="Single" SelectionUnit="Cell" MouseDown="CellClick"/>
```
```private void CellClick(object sender, MouseButtonEventArgs e)
{
foreach (DataGridCellInfo cell in TheGrid.SelectedCells)
{
MessageBox.Show(TheGrid.Items.IndexOf(cell.Item).ToString());
}
}
```
many thanks
Comment: Sorry, I don't understand how?
Comment: I edited my question. But this only works on MouseUp; on MouseDown I get the "last" cell... That's not the clean solution I'd like. Is it possible to have a trigger that passes the row?
Here is the accepted answer: You could handle the ```SelectedCellsChanged``` event like this:
```private void TheGrid_SelectedCellsChanged(object sender, SelectedCellsChangedEventArgs e)
{
if (TheGrid.SelectedCells.Count > 0)
{
DataGridCellInfo dgci = TheGrid.SelectedCells[0];
int columnIndex = dgci.Column.DisplayIndex;
DataGridRow row = TheGrid.ItemContainerGenerator.ContainerFromItem(dgci.Item) as DataGridRow;
int rowIndex = row.GetIndex();
MessageBox.Show($"Row {rowIndex} Column {columnIndex}");
}
}
```
Here is another answer: A cell gets selected on MouseUp. To get the cell before this event occurs, you will have to listen for MouseDown on the DataGrid and check which element is under the mouse with VisualTreeHelper.HitTest.
Check out this answer; a rough sketch follows.
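A sketch of that approach (the handler wiring is assumed, not part of the original answer):
```private void TheGrid_MouseDown(object sender, MouseButtonEventArgs e)
{
    // Find the visual directly under the mouse
    var hit = VisualTreeHelper.HitTest(TheGrid, e.GetPosition(TheGrid));
    if (hit == null) return;

    // Walk up the visual tree to the containing cell, then to the row
    DependencyObject current = hit.VisualHit;
    while (current != null && !(current is DataGridCell))
        current = VisualTreeHelper.GetParent(current);
    var cell = current as DataGridCell;
    if (cell == null) return;

    while (current != null && !(current is DataGridRow))
        current = VisualTreeHelper.GetParent(current);
    var row = current as DataGridRow;
    if (row == null) return;

    MessageBox.Show($"Row {row.GetIndex()} Column {cell.Column.DisplayIndex}");
}
```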
|
Title: Java 8+ Unpacking Primitives From Singleton Objects Using Lambda or Stream? Structured Binding in Java?
Tags: java;lambda;java-stream;structured-bindings
Question: Effectively I need to take a subset of data out of a function return, but I don't actually need the returned object. So, is there a way to simply take what I need with a Stream or a Lambda-like syntax? And yes, I understand you CAN use stream().map() to put values into an array, and then decompose the array into named variables, but that's even more verbose for functionally equivalent code.
In C++, this would be called Structured Binding, but I seem to be missing the Java term for it when I search.
Procedural code I seem to be forced to write:
```HandMadeTuple<Integer, Integer, ...> result = methodICurrentlyCall(input);
int thing1 = result.getFirst();
int thing2 = result.getSecond();
...
//thingX's get used for various purposes. Result is never used again.
```
Declarative code I'd like to be able to write:
```int thing1, thing2;
(methodICurrentlyCall(input)) -> (MyType mt) { thing1 = mt.getFirst(); thing2 = mt.getSecond(); };
```
Or
```int thing1, thing2;
Stream.of(methodICurrentlyCall(input)).forEach((mt) -> { thing1 = mt.getFirst(); thing2 = mt.getSecond(); });
```
Comment: If I get your question right, it’s not really about streams, but about [deconstruction](https://cr.openjdk.java.net/~briangoetz/amber/datum.html#records-and-pattern-matching), a language construct currently not supported by Java but planned for the future.
Comment: There’s the fundamental principle that Java does not allow to create references to local variables that could persist longer than the variable itself. You can reference (effectively) final variables because that can be implemented by copying the value, without sacrificing consistency. But you can’t use a lambda expression to assign to local variables of the outer scope. The closest thing you can do, is to use the variables right in the same lambda expression, e.g. `map.for((key,value) -> /*use key and value*/);`. While this example is `void`, similar methods could have a (single) result.
Comment: @AnsarOzden stream().map() doesn't do this either. All it does is output a single value. I could collect the values in a List, but that's worse than useless. I understand "Local variables used in lambda must be final or effectively final," but I have a use case where that constraint is detrimental to producing more succinct, readable code. Is there a bridge, or is this just an unfortunate gap in the Java language?
Comment: @Holger I added some clarification. I believe you're correct that is ONE solution, but I don't necessarily limit the form of the solution to that. Why can't I just give the Complex/Object result of a method call to an in-line function to unpack the Primitives?
Comment: In short - NO, Java doesn't support destructuring yet. But Kotlin (JVM language fully interoperable with Java) does have this feature, so you can rewrite some of your classes in Kotlin and import them in your Java code. But if you plan to use deconstructing across whole application, there is no convenient way of doing it with Java.
Comment: Your approach is discouraged in Java. Local variables used in lambda must be final or effectively final. That means the variables thing1 and thing2 can't be changed in lambda. It's better to use map feature of streams.
Here is the accepted answer: There's a really awkward hack: arrays.
```final int[] thing1 = {0}, thing2 = {0};
Stream.of(methodICurrentlyCall(input))
.forEach(mt -> {
thing1[0] = mt.getFirst();
thing2[0] = mt.getSecond();
});
```
The array reference is final, therefore you can use it in lambdas.
I don't recommend this! It is neither shorter nor easier to understand.
(Another option is using ```AtomicReference<T>``` or ```AtomicInteger```/```AtomicLong``` instead of the array, but there's a minimal overhead for guaranteeing atomic updates; see the sketch below.)
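For illustration, the same hack with ```AtomicInteger``` (a sketch equivalent to the array version above):
```import java.util.concurrent.atomic.AtomicInteger;
import java.util.stream.Stream;

final AtomicInteger thing1 = new AtomicInteger(), thing2 = new AtomicInteger();
Stream.of(methodICurrentlyCall(input))
      .forEach(mt -> {
          thing1.set(mt.getFirst());   // the reference is final, only the content mutates
          thing2.set(mt.getSecond());
      });
// read the values back with thing1.get() and thing2.get()
```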
If you are looking for pattern matching or destructuring support for Java, this could be implemented in future versions of Java, see this related question on SO: Java object destructuring (even more details at https://cr.openjdk.java.net/~briangoetz/amber/serialization.html)
Comment for this answer: I just wanted to write a similar comment, `var mt = call(); int a = mt.a, b = mt.b;` is the baseline we have the compare with. Is the alternative simpler? Nah. Does it save us from dealing with the object, we're not interested in? Nah, we're now dealing with even more objects, we're actually not interested in.
Comment for this answer: @patrickjp93 then don't use Java for functional programming. I don't go around complaining that Haskell makes side-effectful programming so difficult and "should clean up its act". Different languages were built with different paradigms in mind.
Comment for this answer: I still don't see why `int a,b; do(call()).extract(result -> { a=result.a; b=result.b; })` would be more desirable than `var result=call(); int a=result.a, b=result.b;` – the latter being shorter to write and easier to understand; the former being convoluted and inherently side-effectful with temporary dependencies (you cannot know when the lambda body will be executed, maybe it is only evaluated lazily?)
Comment for this answer: "with a _temporal_ dependency" (not temporary) is what I meant in my previous comment :)
Comment for this answer: While the AtomicInteger answer ANNOYS me, it's tolerable. I can write in a comment to the tune of "Deficiency in Java requiring functional programming quality to lambdas, normal primitives not allowed (as of J8)."
If anyone from Oracle, or another influencer in the Java world sees this, fix your language and let people write reasonable code, for God's sake.
|
Title: compute only diagonal of product of matrices in Matlab
Tags: matlab;matrix;matrix-multiplication
Question: Is there an efficient way in Matlab to compute only the diagonal of a product of 3 (or more) matrices? Specifically I want
```diag(A'*B*A)
```
When A and B are both very large it can take a long time. If there are only two matrices
```diag(B*A)
```
then I can quickly do it this way:
```sum(B.*A',2)
```
So right now I calculate the diagonal in the case with 3 matrices like this:
```C = B*A;
ans = sum(A'.*C',2);
```
which helps a lot, but the first operation (C = B*A) still takes a long time. The whole thing must be repeated a number of times as well, resulting in weeks to run my code. For example, B is about 15k x 15k and A is about 32k x 15k. And nothing is sparse.
Comment: Yes, I do mean A is 15k x 32k. Thanks for finding that mistake.
Comment: You mean A is about 15k x 32k, right? ;-)
Comment: It looks like you can't avoid doing the first of the full multiplications (`C` in your code), as all of the elements in `C` are needed in order to do the second multiplication
Comment: With matrices that large and not sparse.... I think you're SOL.
Here is the accepted answer: First of all, welcome! To be honest, this seems to be difficult. A small change at least slightly increases the speed. The key identity is ```diag(A'*C) == sum(A.*C)'```: the i-th diagonal entry of ```A'*C``` is the dot product of the i-th columns of ```A``` and ```C```, so the full product never has to be formed:
```N = 5000;
A = rand(N,N*2);
B = rand(N,N);
t = cputime;
diag(A'*B*A);
disp(['Elapsed cputime ' num2str(cputime-t)]);
t=cputime;
C = B*A;
sum(A'.*C',2);
disp(['Elapsed cputime ' num2str(cputime-t)]);
% slightly better...
t=cputime;
C = B*A;
sum(A.*C)';
disp(['Elapsed cputime ' num2str(cputime-t)]);
% slightly better than slightly better...
t=cputime;
sum(A.*(B*A))';
disp(['Elapsed cputime ' num2str(cputime-t)]);
```
Results:
```Elapsed cputime 82.2593
Elapsed cputime 28.6106
Elapsed cputime 25.8338
Elapsed cputime 25.7714
```
|
Title: Django AttributeError object has no attribute 'upper'
Tags: django;django-models;django-rest-framework
Question: I am implementing an API for an image labelling app.
I have customised a create() method for Tagging. A Tagging contains a tag, which is an object containing a name and a language. To make tags easier to compare to each other, I have to save all tags either in upper or lower case.
serializers.py
``` def create(self, validated_data):
"""Create and return a new tagging"""
user = None
request = self.context.get("request")
if request and hasattr(request, "user"):
user = request.user
score = 0
tag_data = validated_data.pop('tag', None)
if tag_data:
tag = Tag.objects.get_or_create(**tag_data)[0]
validated_data['tag'] = tag
tagging = Tagging(
user=user,
gameround=validated_data.get("gameround"),
resource=validated_data.get("resource"),
tag=validated_data.get("tag"),
created=datetime.now(),
score=score,
origin=""
)
tagging.save()
return tagging
```
I have tried both this
```tag=validated_data.get("tag").upper()
```
and this:
```tag=validated_data.get("tag").lower()
```
and I get an AttributeError 'Tag' object has no attribute 'upper' or 'Tag' object has no attribute 'lower'.
I want for this tag in the tagging object:
```{
"gameround_id": 65,
"resource_id": 11601,
"tag": {
"name": "TagToTestNow",
"language": "de"
}
}
```
to be saved something like "TAGTOTESTNOW" or "tagtotestnow" to make Tag comparison easier.
How can I achieve this?
Comment: You have already popped the tag with ```tag_data = validated_data.pop('tag', None)```; when you then do ```validated_data.get('tag')``` it returns ```None```.
Here is the accepted answer: It's because you already created the tag, so it's not a dict anymore. You can do ```tag=tag``` directly in your ```Tagging``` object creation.
If you want all your tags to have the same case, you have to slightly modify the way you create your Tag in the first place, as follows:
```tag = Tag.objects.get_or_create(
language=tag_data['language'],
name=tag_data['name'].upper()
)[0]
```
Comment for this answer: Thank you so much! This worked.
|
Title: What converts vbscript to machine code?
Tags: interpreted-language
Question: Compiled languages like C# and Java have just-in-time compilers that convert them (from byte code) into machine code (0s and 1s). How does an interpreted language like VBScript get converted into machine code? Is it done by the operating system?
Comment: It's done by the VBScript runtime (vbscript.dll), which parses and interprets the code on the fly as it runs and calls its own internal functions such as MsgBox() (which are already machine code). So basically it's like compilation, but during the program's run.
Comment: user1227804, I like your response. Can you please post it as an answer?
Here is the accepted answer: They don't necessarily get converted to machine code (and often don't).
The interpreter for that program runs the appropriate actions according to what the program requires.
Some interpreters might generate machine code (using JIT compilers), others might stick to plain interpretation of the script.
Comment for this answer: Not the OS. The interpreter (wscript.exe for example) reads the source script line by line and calls functions to do what the script wants to be done (this is way overly simplified, but it's the idea). The interpreter and the functions it calls are compiled code, like any other compiled program.
Comment for this answer: No, it doesn't have to. All the code is already in the interpreter itself, it doesn't need to produce additional machine code (though it can).
Comment for this answer: How does the processor get instructions? (All work is ultimately done by the processor) Is it that the OS interprets the code and send the appropriate instructions to the processor?
Comment for this answer: Ahh..so interpreter is the one that generates the machine code.
Comment for this answer: I think the fairer analogy is that CPU executing machine code is analogous to an interpretor reading regular code. CPU parses each instruction using circuits and sets various state to which other circuits respond to; while an interpretor parses the whole code into an internal object literal notation that it can actually understand without doing text manipulation on it, then it maps the requests to perform stuff to the actual native dispatchers or its own bytecode that has already been compiled by the interpretor's compiler or that will also get interpreted on demand.
Here is another answer: Referring to this VB Script wikipedia article,
When VBScript is executed in a browser, ```vbscript.dll``` is used to interpret it.
When a VBScript file is executed from the command line or a batch file, ```cscript.exe``` is used to interpret it.
When VBScript is used by the Windows OS itself for various purposes, like showing error message boxes or yellow notification messages in the right corner of the task bar, it is interpreted using ```wscript.exe```.
Here is another answer: I know this is old, but given that I can't comment (rep), I want to add a clarifying answer:
An interpreter is used to interpret the script (be it VBScript, javascript, python, or any other script) into individual instructions. These instructions can be in the form of machine code or an intermediate representation (that the OS or other program can use). Some interpreters are made for something closer to assembly language and the source code is more or less executed directly.
Most modern scripting languages (eg, Python, Perl, Ruby) are interpreted to an intermediate representation, or to an intermediate representation and into compiled (aka machine, aka object) code. The important distinction (vs compiled languages) is that an interpreter isn't taking an entire body of code and translating its meaning to machine code, it's taking each line at a time and interpreting its meaning as a standalone unit.
Think of this as the difference between translating an entire essay from English to Russian (compiled code) vs taking each sentence in the essay and translating it directly (interpreted code). You may get a similar effect, but the result won't be identical. More importantly, translating an entire essay as a total body of work takes a lot more effort than doing one sentence at a time as a standalone unit, but the whole translation will be much easier for Russian speakers to read than the rather clunky sentence-by-sentence version. Hence the tradeoff between compiling code vs interpreting code.
Source: https://en.wikipedia.org/wiki/Interpreter_(computing), experience
Here is another answer: This is the answer I was looking for. Like a JavaScript engine, there is a VBScript engine that converts human-readable code to machine code. This VBScript engine is analogous to the JIT compiler in the CLR and the JVM, except that it converts directly from human-readable code to machine code, as opposed to C#, which has an intermediate byte code.
Comment for this answer: The wikipedia link that you have mentioned in your post has no mention of VBScript or a VBScript engine. I believe you meant to refer to the [VBScript wikipedia](https://en.wikipedia.org/wiki/VBScript) article.
|
Title: How do I create a usable certificate-store from several files
Tags: ssl;openssl;keytool;certificate-store
Question: We have a process to request a signed cert from a CA and we get back 3 files:
cert.cer, cert.key, and cert.p12
I now need to build a valid/usable cert store from those files. I have copies of the CA & intermediate certs locally on my server. So I'm trying to import everything using keytool, but I end up with a store full of about 100 certs plus the cert for my server. When I try to use them I'm getting an error that the server cert is not valid unless the signing certs are also in the store. Basically there's no chain, even though the server cert says it was issued by the intermediate cert in the store. I use the following commands to import my certs and CA trusts:
```keytool -v -importkeystore -srckeystore "cacerts.p12" -srcstorepass "$CA_PASS" -srcstoretype "pkcs12" -destkeystore "$KEYSTORE_NAME" -deststorepass "$STORE_PW" -deststoretype "jks";
keytool -importkeystore -v -srckeystore "$CERT_NAME.p12" -srcstorepass "$STORE_PW" -srcstoretype "pkcs12" -destkeystore "$KEYSTORE_NAME" -deststorepass "$STORE_PW" -deststoretype "jks";
```
I'm not sure what step I'm missing. This is an Ubuntu 20.04 server.
Here is another answer: ```
How do I create a usable certificate-store ..
```
Usable is the keyword here - what are you trying to use the keystore for? (usually - SSL, client authentication or WS-Security)
```
getting an error that the server cert is not valid unless the signing certs are also in the store
```
There are different files for different purposes:
cert.cer - a public key with a CA-signed certificate
cert.key - a private key
cert.p12 - a keystore; it may contain the private key, may contain the public key with its certificate, and usually contains both (private key, public key, certificate). So - better validate what the p12 keystore really contains.
The PKCS#12 keystore can usually be used as it is; there is often no need to import it into a separate JKS. However, that depends on the software.
BTW - maybe you could get KeyStore Explorer, an open-source gem that gives you a great overview when you don't understand the details or CLI options.
```
Basically there's no chain even though I the server cert says it was issued by the intermediate cert in the store
```
Depends on the usage, but the best practice is to have the CA root or its intermediate certificates imported in the truststore.
To import a CA reply with keytool, you simply import the CA reply (the issued certificate) using the same alias name as its private key entry; a sketch follows. I'm not sure if you can create a whole certificate chain this way; you may have a look at the mentioned KeyStore Explorer to be sure.
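A minimal sketch, assuming the private key entry already exists under the alias ```server``` and using placeholder file names and passwords:
```# import the root and intermediate CA certs into the store
keytool -importcert -trustcacerts -alias rootca -file ca-root.cer -keystore store.jks -storepass changeit
keytool -importcert -trustcacerts -alias interca -file ca-intermediate.cer -keystore store.jks -storepass changeit

# import the CA reply under the SAME alias as the private key entry
keytool -importcert -alias server -file cert.cer -keystore store.jks -storepass changeit
```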
Comment for this answer: It's ultimately to be a jks keystore to secure a WebSphere Application Server instance. I do have keystore-explorer and have looked at the various files the .cer file seems to be fine as it has a 3-level chain (ca, inter & server certs) but the .p12 (which is what I convert into the jks store) says "Path does not chain with any of the trust anchors"
|
Title: SLComposeViewControllerResultDone returned when no connection
Tags: ios;facebook;ios7
Question: When using the SLComposeViewController for posting to Facebook, ```SLComposeViewControllerResultDone``` is returned even when the device has no network connection. Is there any way around this, so I only display a 'success' message after it has actually posted successfully?
```[_shareComposerSheet setCompletionHandler:^(SLComposeViewControllerResult result)
{
switch (result)
{
case SLComposeViewControllerResultCancelled:
break;
case SLComposeViewControllerResultDone:
{
// Show success message
}
break;
default:
break;
}
}];
```
Here is the accepted answer: Import Apple's Reachability files and check with the following method:
```- (BOOL)connected
{
Reachability *reachability = [Reachability reachabilityForInternetConnection];
NetworkStatus networkStatus = [reachability currentReachabilityStatus];
return networkStatus != NotReachable;
}
[_shareComposerSheet setCompletionHandler:^(SLComposeViewControllerResult result)
{
switch (result)
{
case SLComposeViewControllerResultCancelled:
break;
case SLComposeViewControllerResultDone:
{
if (![self connected]) {
// Show internet is not available message
}
else {
// Show success message
}
}
break;
default:
break;
}
}];
```
Here is another answer: From Apple's docs:
```
The handler to call when the user is done composing a post.
```
Discussion
```
The handler has a single parameter that indicates whether the user
finished or cancelled composing the post. This block is not guaranteed
to be called on any particular thread.
```
From the above it is clear that the completion handler will be called once you are done with your composition, but it does not report network availability or posting failures. You have to check the connection before composing.
Note also that only these two results are possible with this handler:
```typedef NS_ENUM(NSInteger,
SLComposeViewControllerResult) {
SLComposeViewControllerResultCancelled,
SLComposeViewControllerResultDone
};
```
That is, your composer view controller is either post or cancel.
|
Title: unix less - problem with Enter key on numeric keypad
Tags: unix;less
Question: When I use the unix command 'less' I am constantly frustrated that using the Enter key on the numeric keypad doesn't work but instead types 'ESCOM'.
Is there a way to fix that?
Here is another answer: This is because the number pad's Enter key sends a different code than the main Enter key. You could globally re-map the key to send the normal Return/Enter code, but understand that it will affect how the key works in all other programs too.
Here are some tips on how to remap the key in X-Windows using xmodmap, or in the console/terminal using loadkeys. You probably want to map keycode 104 ("KP_Enter") to the "Return" command.
Comment for this answer: The articles contain too much unrelated information. A simple excerpt would be great!
Here is another answer: Use ```lesskey```.
```~ $ vi .lesskey
#command
\OM forw-line
~ $ lesskey
```
Here ```\OM``` is the escape sequence the keypad Enter key sends (ESC O M), and ```forw-line``` is the ```less``` action to scroll forward one line. If this sequence doesn't match the one your keyboard sends, try using the escaped octal value.
You can also use another action like ```invalid``` (beeps) or ```noaction```.
Comment for this answer: The commands above should be enough, provided you know `lesskey` or read the linked `man` page.
Comment for this answer: I am still lost. How do I find out the octal value? How can I make this change permanent? What does `\OM` mean? What does `forw-line` mean? Why the comment at the top, has it any relevance?
|
Title: How do I delete a downstream message via the HTTP connection server API?
Tags: google-cloud-messaging
Question: I'm sending downstream messages via GCM's HTTP connection server API. The notification payload includes a tag that I'd like to use to delete the delivered notification on the registered Android/iOS device. Is this possible?
Here is another answer: Of course yes.
All you have to do is to handle the deleting procedure client side.
e.g. on Android, in ```onMessageReceived```:
Detect if the received notification contains the delete tag.
Delete the desired notification with ```notificationManager.cancel(NOTIFICATION_ID)```, or all of them with ```notificationManager.cancelAll()```; a sketch follows.
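A minimal sketch on Android (the ```delete_tag``` data key and ```NOTIFICATION_ID``` constant are assumptions you would define yourself, not part of the GCM API):
```// inside a GcmListenerService subclass
@Override
public void onMessageReceived(String from, Bundle data) {
    // "delete_tag" is a custom key sent in the message's data payload
    if (data.containsKey("delete_tag")) {
        NotificationManager nm =
                (NotificationManager) getSystemService(Context.NOTIFICATION_SERVICE);
        nm.cancel(NOTIFICATION_ID);   // or nm.cancelAll() to clear everything
    }
}
```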
Comment for this answer: Notifications are handled by the platform (Android in this case). Once the GCM message is received by the device, the processing of the notification is totally up to the client side; GCM (the service) plays no further role. Ali is right, you need to handle this client-side.
Comment for this answer: @Yaniv yes, you may use FLAG_AUTO_CANCLE
Comment for this answer: Of course this is not related to HTTP or XMPP; it is just an automatic cancel, which you have to do client-side. As far as I know there is no way to remove a delivered message server-side; you can only set a `collapse_key` to replace not-yet-delivered messages.
Comment for this answer: Is it at all possible to do so via the server API rather than require this handling in the clients?
Comment for this answer: For instance, it is possible that the root cause behind a notification might no longer hold true, thereby invalidating the notification itself and requiring it to be deleted. And all this might happen before the client is even opened on the device. I would rather not send data payload since that would require additional processing on the device...
Comment for this answer: I don't believe this is something that can be done with GCM's HTTP or XMPP connection server API.
|
Title: acts-as-taggable-on use kind of scope (tagger?) and context
Tags: ruby-on-rails;acts-as-taggable-on
Question: My Application uses acts-as-taggable-on to tag ```Content```. The content ```belongs_to``` a ```Company```.
Now all created tags should belong to the company that created them and not be visible to other companies.
How can I solve this with acts-as-taggable-on? As the context I currently use ```:content```. Is the company the Tagger (owner) in my case?
Thanks for any clarification.
Here is the accepted answer: I solved it by declaring ```Company``` as the tagger.
```class Company < ApplicationRecord
acts_as_tagger
end
```
Controller action looks like the following:
```def create
if params[:content].has_key?(:content_tag_list)
content_tag_list = params[:content][:content_tag_list]
params[:content].delete(:content_tag_list)
end
@content = Content.new(content_params)
if @content.save
if content_tag_list
@content.company.tag(@content, with: content_tag_list, on: :content_tags)
end
redirect_back fallback_location: your_path
else
render :new
end
end
```
|
Title: Searching an Array with the user input using javascript
Tags: javascript;html;ajax
Question: I want to search for a value in an array.
The array is predefined with some values.
There is an HTML text box where the user enters a value.
If the value entered by the user is in the array, it should display "Value found", or else "not found". It should use AJAX.
My code below is not working.
```<html>
<head>
<script src="//ajax.googleapis.com/ajax/libs/jquery/2.1.3/jquery.min.js"></script>
</head>
<body>
<p>Enter a value: <input type="text" id="carInput" onchange="textChanged()"></p>
<p id="onListText"></p>
<script>
var cars = ["abc", "def", "ghi","jkl"];
var textChanged = function(){
var inputText = $("#carInput").text();
if($.inArray(inputText, cars) !== -1){
$("#onListText").text("Value Found");
return;
}
}
</script>
</body>
</html>
```
Comment: @akxlr maybe he checks a local cache first and queries the DB only if the value is not found?
Comment: What does this have to do with AJAX?
Comment: $("#carInput").text(); should be $("#carInput").val()
Here is another answer: If you want to trigger the event only on lost focus, just replace text() with val():
```var inputText = $("#carInput").val(); // replace text() with val()
```
Using onkeyup (or oninput, as suggested by @mohamedrias) rather than the onchange event is more suitable in your case;
onchange needs a blur to fire the event.
```<p>Enter a value: <input type="text" id="carInput" onkeyup="textChanged()"></p>
```
Comment for this answer: Even better, OP can bind just `oninput` event
Here is another answer: ```$("#carInput").text();``` should be ```$("#carInput").val()```
```text()``` gets all the text inside the selected element.
```val()``` gets the value of a form control.
```var cars = ["abc", "def", "ghi", "jkl"];
var textChanged = function() {
var inputText = $("#carInput").val();
if ($.inArray(inputText, cars) !== -1) {
$("#onListText").text("Value Found");
return;
} else {
$("#onListText").text("Value not Found");
}
}```
```<script src="https://ajax.googleapis.com/ajax/libs/jquery/1.11.1/jquery.min.js"></script>
<p>Enter a value:
<input type="text" id="carInput" onchange="textChanged()">
</p>
<p id="onListText"></p>```
Here is another answer: Try ```var inputText = $("#carInput").val();``` instead of ```var inputText = $("#carInput").text();```
And use the ```oninput``` event.
```<html>
<head>
<script src="//ajax.googleapis.com/ajax/libs/jquery/2.1.3/jquery.min.js"></script>
</head>
<body>
<p>Enter a value: <input type="text" id="carInput" oninput="textChanged()"></p>
<p id="onListText"></p>
<script>
var cars = ["abc", "def", "ghi","jkl"];
var textChanged = function(){
var inputText = $("#carInput").val();
if($.inArray(inputText, cars) !== -1){
$("#onListText").text("Value Found");
return;
}
}
</script>
</body>```
Here is another answer: ``` var inputText = $("#carInput").text();
```
This should be
``` var inputText = $("#carInput").val();
```
So your full code will be:
``` var cars = ["abc", "def", "ghi","jkl"];
var textChanged = function(){
var inputText = $("#carInput").val();
console.log(inputText);
if($.inArray(inputText, cars) !== -1){
$("#onListText").text("Value Found");
return;
} else {
$("#onListText").text("Value not Found");
}
}```
```
<script src="//ajax.googleapis.com/ajax/libs/jquery/2.1.3/jquery.min.js"></script>
<p>Enter a value: <input type="text" id="carInput" oninput="textChanged()"></p>
<p id="onListText"></p>```
|
Title: How do I execute an Oracle stored procedure from SQL Server with a linked server?
Tags: sql-server;oracle;oracle-sqldeveloper
Question: Good morning. I would like to execute an Oracle stored procedure from SQL Server through linked servers. Thanks in advance.
The Oracle stored procedure:
```create or replace procedure HORAS_SIGERI
(c1 out sys_refcursor, CODIGO_PROY VARCHAR , FECHA_INICIO_VAL DATE,FECHA_FIN_VAL DATE)
as
begin
open c1 for SELECT
rh.id_colaborador,
rh.nombre_colaborador,
rh.id_unidad_organizativa,
rh.id_proyecto,
rh.descripcion_proyecto,
rh.tipo_actividad,
rh.fecha,
rh.horas_imputadas
FROM
view_horas_sigeri rh
WHERE rh.id_unidad_organizativa =CODIGO_PROY AND rh.fecha BETWEEN fecha_inicio_val AND fecha_fin_val
order by 1,7;
end;
```
I have seen examples of running Oracle queries in SQL Server with a linked server, but not of running a stored procedure and passing parameters to it. Here is an example of an Oracle query run from SQL Server:
SQL
```SELECT * FROM OPENQUERY(DB_SIGERI, 'SELECT * FROM JIRADBUSER.PROJECT')
```
But I want to execute this SQL Server stored procedure, which wraps the Oracle stored procedure, using OPENQUERY and the linked server, and it does not work.
```CREATE PROCEDURE SP_HORAS_SIGERI2
@NPROYECTO varchar(50),
@FINICIO DATE,
@FFIN DATE
AS
BEGIN
DECLARE
@TSQL varchar(8000)
SELECT @TSQL = 'SELECT * FROM OPENQUERY(BD_SIGERI,''EXEC HORAS_SIGERI c1,''''' + @NPROYECTO + ''''','''''+@FINICIO+''''','''''+@FFIN+''''' '')'
EXEC (@TSQL)
END
```
|
Title: Eclipse not opening Websphere Liberty console
Tags: eclipse;websphere-liberty
Question: I have a Websphere Liberty server installed in another computer that we use as a test server.
My Eclipse IDE is connected to it; I can upload projects/code. I also used the Liberty console in Eclipse to track errors. But now Eclipse is not showing the console output: I have the 'Console' tab (view), but nothing is showing in it.
Eclipse is still connected to the Liberty server, as I see it in the Servers view, and I am able to upload code.
I have restarted the Liberty server and my Eclipse many times.
Any idea how I can connect to the console again?
Thanks
SJRM
Here is another answer: I had noticed the console not displaying issue as well. For me it was an intermittent timing issue. It was fixed as of January of this year. Have you tried out the latest Beta version of the Eclipse tools? It fixed the problem for me.
Comment for this answer: Please use the comment section to ask questions before answering.
Comment for this answer: I suspect you'll find it works consistently with the latest Beta.
Comment for this answer: Same as you, it seems to be intermittent. The console was showing again yesterday... today it disappeared again... just after restarting the Liberty server.
|
Title: protobuf-net - list supported types
Tags: c#;protobuf-net
Question: I'm working on a custom ProtoBufFormatter (: MediaTypeFormatter) that is capable of registering its own types on the fly to the RuntimeTypeModel used to serialize/deserialize.
To reduce the need for try{}catch{} blocks, it would be great to filter out already-supported types before adding them to the RuntimeTypeModel. The readme only offers a "vague" list of types that are supported by default, and the method Model.GetTypes() only returns the types that were manually added to the current model.
Readme: https://github.com/mgravell/protobuf-net
I'm using protobuf-net 2.4.0
So I'm wondering if there is any easy way to check if a type is already supported by the current RuntimeTypeModel?
Currently I'm using something like this to prefilter types:
``` private bool IsSimpleType(Type type)
{
return
type.IsPrimitive ||
_additionalSimpleTypes.Contains(type) ||
Convert.GetTypeCode(type) != TypeCode.Object ||
(type.IsGenericType && type.GetGenericTypeDefinition() == typeof(Nullable<>) && IsSimpleType(type.GetGenericArguments()[0]));
}
private Type[] _additionalSimpleTypes = new Type[]
{
typeof(Enum),
typeof(String),
typeof(String[]),
typeof(Decimal),
typeof(DateTime),
typeof(DateTimeOffset),
typeof(TimeSpan),
typeof(Guid),
typeof(Uri),
typeof(Byte),
typeof(Byte[]),
typeof(Char),
typeof(Boolean),
typeof(Object),
typeof(Version)
};
private Type[] _listTypes = new Type[]
{
typeof(Enum),
typeof(IEnumerable<>),
typeof(List<>),
typeof(IList<>)
};
```
Comment: That would be great. Any other idea how to test if a type is already supported without try'n'error?
Comment: I don't think such a method exists today; we could add one, of course...
Here is the accepted answer: Try:
``` ProtoBuf.Meta.RuntimeTypeModel.Default.CanSerialize(Type type)
```
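For example, the prefilter from the question could then be reduced to something like this (a minimal sketch; ```IsSupported``` is a hypothetical helper, and ```model``` is whichever ```RuntimeTypeModel``` the formatter uses):
```private bool IsSupported(ProtoBuf.Meta.RuntimeTypeModel model, Type type)
{
    // CanSerialize covers built-in primitives, registered contract types,
    // and supported list types, so no try/catch probing is needed.
    return model.CanSerialize(type);
}
```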
|
Title: MongoDB, aggregate number of items in $group
Tags: mongodb;mongodb-query;aggregation-framework
Question: I'm doing some aggregation to get statistics on how many products there are per category.
After some piping I'm down to this:
```[
{
topCategory: "Computer",
mainCategory: "Stationary"
},
{
topCategory: "Computer",
mainCategory: "Laptop"
},
{
topCategory: "Computer",
mainCategory: "Stationary"
},
{
topCategory: "Computer",
mainCategory: "Stationary"
},
]
```
Wanted output:
```[
{
name: "Computer",
count: 4,
mainCategories: [
{
name: "Laptop",
count: 2
},
{
name: "Stationary",
count: 2
}
]
}
]
```
My query so far:
```let query = mongoose.model('Product').aggregate([
{
$match: {
project: mongoose.Types.ObjectId(req.params.projectId)
}
},
{ $project: {
_id: 0,
topCategory: '$category.top',
mainCategory: '$category.main',
}
},
]);
```
I know I need to use ```$group``` combined with ```$sum```. I tried different ways but can't get it to work.
Comment: use group `{
$group : {
_id : "$topCategory",
"count" : { $sum : 1 },
mainCategories: { $push: { name: "$mainCategory"} }
}
}`
Here is the accepted answer: In this case, first group by ```mainCategory``` and then group by ```topCategory```, as below:
```db.collection.aggregate({
$group: {
_id: "$mainCategory",
"count": {
$sum: 1
},
"data": {
"$push": "$$ROOT"
}
}
}, {
"$unwind": "$data"
}, {
"$group": {
"_id": "$data.topCategory",
"count": {
"$sum": 1
},
"mainCategories": {
"$addToSet": {
"name": "$_id",
"count": "$count"
}
}
}
}).pretty()
```
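Hooked into the mongoose pipeline from the question, the same two ```$group``` stages would be appended after the ```$project``` stage (a sketch, restating the answer above):
```// appended to the aggregate([ ... ]) array after the $project stage:
{ $group: { _id: "$mainCategory", count: { $sum: 1 }, data: { $push: "$$ROOT" } } },
{ $unwind: "$data" },
{ $group: {
    _id: "$data.topCategory",
    count: { $sum: 1 },
    mainCategories: { $addToSet: { name: "$_id", count: "$count" } }
} }
```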
|
Title: Getting NullPointerException while parsing JSON
Tags: android;json;nullpointerexception
Question: I am trying to parse JSON from a web service but I get a NullPointerException. If I hit the URL in a browser I get results, but in code it returns null.
Here is my URL: http://www.expensekeeper.in/webservice/FetchBillCall.php?userid=7
Here is my code:
```@Override
protected void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
setContentView(R.layout.activity_main);
new LongOperation().execute("");
}
private class LongOperation extends AsyncTask<String, Void, String> {
@Override
protected String doInBackground(String... params) {
String url = "http://www.expensekeeper.in/webservice/FetchBillCall.php?userid=7";
JSONParser jsonParser = new JSONParser();
JSONObject jsonObject = jsonParser.getJSONFromUrl(url);
try {
//JSONArray jsonArray = jsonObject.getJSONArray("");
dueDate = jsonObject.getString("duedate");
description = jsonObject.getString("description");
status = jsonObject.getString("status");
remarks = jsonObject.getString("remarks");
} catch (JSONException e) {
e.printStackTrace();
}
return "Executed";
}
@Override
protected void onPostExecute(String result) {
Toast.makeText(MainActivity.this, dueDate+description+status+remarks, Toast.LENGTH_LONG).show();
}
}
```
Here is my jsonParser class:-
```public JSONObject getJSONFromUrl(String str) {
HttpURLConnection conn = null;
StringBuilder jsonResults = new StringBuilder();
try {
URL url = new URL(str);
conn = (HttpURLConnection) url.openConnection();
InputStreamReader in = new InputStreamReader(conn.getInputStream());
// Load the results into a StringBuilder
int read;
char[] buff = new char[1024];
while ((read = in.read(buff)) != -1) {
jsonResults.append(buff, 0, read);
}
json = jsonResults.toString();
} catch (MalformedURLException e) {
Log.e(LOG_TAG, "Error processing Places API URL", e);
} catch (IOException e) {
Log.e(LOG_TAG, "Error connecting to Places API", e);
} finally {
if (conn != null) {
conn.disconnect();
}
}
try {
jObj = new JSONObject(json);
} catch (JSONException e) {
Log.e("JSON Parser", "Error parsing data " + e.toString());
}
return jObj;
}
```
And here is my stacktrace:-
```04-10 17:15:16.425: E/AndroidRuntime(1388): Caused by: java.lang.NullPointerException
04-10 17:15:16.425: E/AndroidRuntime(1388): at com.example.jsonsample.MainActivity$LongOperation.doInBackground(MainActivity.java:40)
04-10 17:15:16.425: E/AndroidRuntime(1388): at com.example.jsonsample.MainActivity$LongOperation.doInBackground(MainActivity.java:1)
```
I don't understand what's going wrong in my code. Why am I getting a null value from the server?
Please give me any reference.
Thanks in advance.
Comment: Where does `(MainActivity.java:40)` point to?
Comment: on `dueDate = jsonObject.getString("duedate");`.
Here is the accepted answer: Edit:
Change the type returned by ```getJSONFromUrl(String str)``` from ```JSONObject``` to ```JSONArray```,
and then create the JSONArray directly like this:
```JSONArray array = new JSONArray(json);``` // json is your JSON string
Return that array from the ```getJSONFromUrl(String str)``` method.
Your service returns a JSONArray
Based on the above response, your service returns a JSONArray, not a JSONObject, so it will also throw a JSONException when you try to convert the JSONArray to a JSONObject:
``` JSONObject jsonObject = jsonParser.getJSONFromUrl(url);
```
So this will return null,
and since your jsonObject is ```null``` and you are trying to access it here:
```dueDate = jsonObject.getString("duedate");
```
you will get a ```NullPointerException```.
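A minimal sketch of the adjusted parser tail (reusing the ```json``` string from the question; the method's return type changes to ```JSONArray```):
```// at the end of getJSONFromUrl, after building the json string:
JSONArray jArr = null;
try {
    jArr = new JSONArray(json); // the service returns a top-level array
} catch (JSONException e) {
    Log.e("JSON Parser", "Error parsing data " + e.toString());
}
return jArr;
```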
Comment for this answer: you can directly create a JSONArray from the string, like this `JSONArray array = new JSONArray(jsonstring);`, and return the JSONArray from the JSON parser
Comment for this answer: can you show me how to read an array that has no name... because my array doesn't have any name... have you hit this URL?
|
Title: remove element of prop in jointJS
Tags: javascript;jointjs
Question: This is how I add an attribute in jointJS, or remove it if ```content``` is empty, using ```removeAttr```:
```if (content) graph.getCell(cellId).attr('info/content', content);
else graph.getCell(cellId).removeAttr('info/content');
```
But for custom fields I want to use ```prop()``` (http://jointjs.com/api#joint.dia.Element:prop). But there is no ```removeProp``` function to remove a field if the variable is empty:
```if (content) graph.getCell(cellId).prop('data/info', content);
else graph.getCell(cellId).prop('data/info', '');
```
So how do I remove an element of ```prop```?
|
Title: Symfony - How to remove a specific element from my array from an entity
Tags: php;symfony;doctrine;entity
Question: I have a Packages entity (a manyToMany with the TypeUser entity, which creates the package_des_users join table in the DB). My Packages entity therefore contains a collection of TypeUser (because a package can be assigned to several types of users).
I want to be able to delete one of the elements that are part of the TypeUser array of one of my packages.
My plan of action was:
Retrieve the package to process
Retrieve the name of the TypeUser to remove from the TypeUser array of the package.
Do a check: If the package contains only one typeUser, the package is deleted directly. If it contains more than one, just delete the chosen TypeUser.
For now, I am able to retrieve the package to be processed, the name of the TypeUser to delete, and the number of TypeUsers my package contains. If it contains only one, the package is deleted correctly. The problem remains for the other case.
Being a beginner in Symfony 3.4, I have a hard time understanding how to do it right.
My controller.php function :
``` /**
* Deletes a paquet entity.
*
* @Route("/{id}/{type}/delete", name="paquets_delete")
*/
public function deleteAction(Request $request, $id, $type)
{
$em = $this->getDoctrine()->getManager();
$unPaquet = $em->getRepository('PagesBundle:Paquet')->find($id);
$nbTypes = count($unPaquet->getTypeUser());
if($nbTypes == 1)
{
$em->remove($unPaquet);
}
else
{
$am = $this->getDoctrine()->getManager();
$leType = $am->getRepository('PagesBundle:TypeUser')
->findByTypeUtilisateur($type);
$em->$unPaquet->deleteTypeFromTypesUser($leType);
}
$em->flush();
return $this->redirectToRoute('paquets_index');
```
My entity (paquet) function :
```
/**
* @var \Doctrine\Common\Collections\Collection
* @ORM\ManyToMany(targetEntity="TypeUser")
* @ORM\JoinTable(name="Packages_des_TypesUser")
* @ORM\JoinColumn(nullable=false)
*/
private $typeUser;
public function __construct()
{
$this->typeUser = new ArrayCollection();
}
/**
* Get TypeUser
*
* @return Site\PagesBundle\Entity\TypeUser
*/
public function getTypeUser()
{
return $this->typeUser;
}
public function deleteTypeFromTypesUser(TypeUser $type)
{
$this->typeUser->removeElement($type);
}
```
I would therefore like to retrieve the corresponding TypeUser object (the result being unique) from the controller, so that I can pass it as a parameter to a function that removes it from the TypeUser collection of my package, and then have that change persisted to the database.
Thanks for your help !
Comment: Yes, that's what I wish if there is only one type left. It works. But if there are several, I want it to just remove the typeUser passed as a parameter.
Comment: Yeah, it's good. Now, I've a new error :
Catchable Fatal Error: Argument 1 passed to Site\PagesBundle\Entity\Paquet::deleteTypeFromTypesUser() must be an instance of Site\PagesBundle\Entity\TypeUser, array given, called in /usr/local/apache/preprod_8080/htdocs/alatech/src/Site/PagesBundle/Controller/PaquetController.php on line 146 and defined
I have to write the type of my variable at the line : $unPaquet->deleteTypeFromTypesUser($leType);
Comment: Your code is deleting the package itself if theres 1 type.
Comment: Then change this: $em->$unPaquet->deleteTypeFromTypesUser($leType); to $unPaquet->deleteTypeFromTypesUser($leType);
Comment: Check my answer.
Here is the accepted answer: You have to change it on the class, you are calling the $unPaquet property of the entity manager, and then calling to a function that does not exist. Also you forgot to check if there is more than one or you would be deleting from an empty array.
``` /**
* Deletes a paquet entity.
*
* @Route("/{id}/{type}/delete", name="paquets_delete")
*/
public function deleteAction(Request $request, $id, $type)
{
$em = $this->getDoctrine()->getManager();
$unPaquet = $em->getRepository('PagesBundle:Paquet')->find($id);
$nbTypes = count($unPaquet->getTypeUser());
if($nbTypes == 1)
{
$em->remove($unPaquet);
}
else if ( $nbTypes > 1 )
{
$leType = $em->getRepository('PagesBundle:TypeUser')
->findByTypeUtilisateur($type);
$unPaquet->deleteTypeFromTypesUser($leType);
}
$em->flush();
return $this->redirectToRoute('paquets_index');
}
```
The "Catchable Fatal Error: Argument 1 passed to Site\PagesBundle\Entity\Paquet181a:6c1c:918f:f3a4:2e8c:61d6:1a08:6b16leteTypeFromTypesUser() must be an instance of Site\PagesBundle\Entity\TypeUser, array given " error is due to the find returning an array. You have to change that method. If you need help over there show the code please.
Here is another answer: I've updated my code :
``` /**
* Deletes a paquet entity.
*
* @Route("/{id}/{type}/delete", name="paquets_delete")
*/
public function deleteAction(Request $request, $id, $type)
{
$em = $this->getDoctrine()->getManager();
$unPaquet = $em->getRepository('PagesBundle:Paquet')->find($id);
$nbTypes = count($unPaquet->getTypeUser());
if($nbTypes == 1)
{
$em->remove($unPaquet);
}
else if($nbTypes > 1)
{
$am = $this->getDoctrine()->getManager();
        $leType = $am->getRepository('PagesBundle:TypeUser')
                     ->findByTypeUtilisateur($type);
$unPaquet->deleteTypeFromTypesUser($leType[0]);
}
    $em->flush();
    return $this->redirectToRoute('paquets_index');
}
```
It's okay now ! Thanks
Comment for this answer: You welcome. Accept my answer if you found it helpful please.
|
Title: Unable to run WebDriverAgent on the real device using Appium Desktop
Tags: ios;automation;appium
Question: I am unable to run WebDriverAgent on a real device using Appium Desktop. I have attached gists with the error logs.
I have tried some solutions from github/appium but they did not solve my issue.
server log
https://gist.github.com/guowang94/ec923da6a3b77aca06a902c450200238
appium.app error message
https://gist.github.com/guowang94/ef2c5bb5bcc617baa2af19418e9a8b8e
I expect the WebDriverAgent to run on the real device so that I can proceed to my next task, but I am unable to install and run WebDriverAgent on the real device.
Comment: Try to use appium 1.12.1 version. I am also unable to run my script on 1.13.0 version. After switching to 1.12.1 it works fine. Also follow [this guide](https://github.com/appium/appium-xcuitest-driver/blob/master/docs/real-device-config.md) to run appium test on real device.
Here is another answer: Your logs have both the problem:
```
An empty identity is not valid when signing a binary for the product type 'UI Testing Bundle'. (in target 'WebDriverAgentRunner')
```
which indicates an incorrect Xcode configuration with regard to signing the WebDriverAgentRunner-Runner.app
and the solution:
```
Unable to launch WebDriverAgent because of xcodebuild failure: xcodebuild failed with code 65 .... Make sure you follow the tutorial at https://github.com/appium/appium-xcuitest-driver/blob/master/docs/real-device-config.md
```
If the official guide is not clear you can follow steps from the Configuring Appium Environment for iOS Real Device article.
You can also consider Appium Studio which makes the process of setting up Appium testing on real iOS devices much easier.
|
Title: Call procedure returns nothing but it returns when executed in MySQL 5.5
Tags: php;mysql;stored-procedures
Question: I am working on a stored procedure for my project but I have a problem, shown in the pictures below.
When I call the procedure from MySQL code, it shows nothing.
It returns the result I want when I execute it from the phpMyAdmin interface.
And here is my procedure
```DELIMITER //
CREATE PROCEDURE baocaodoanhthunam(IN nam integer)
BEGIN
DECLARE thang integer;
DECLARE countcb integer;
DECLARE doanhthu decimal;
SET @thang = 1;
create temporary table test_1 (thangg int, socb int, doanhthuu decimal);
while @thang <= 12
do
SELECT COUNT( * ), SUM(result.tonggiave) into @countcb, @doanhthu
FROM (
SELECT cb.MACHUYENBAY, SUM( ctdv.GIAVE ) AS tonggiave
FROM chuyenbay cb, dondatve ddv, chitietdatve ctdv, ve v
WHERE YEAR( cb.THOIGIANBAY ) = nam
AND MONTH( cb.THOIGIANBAY ) = @thang
AND ctdv.MAVE = v.MAVE
AND v.MACHUYENBAY = cb.MACHUYENBAY
AND ctdv.MADONDATVE = ddv.MADONDATVE
AND ddv.DATHANHTOAN =1
GROUP BY cb.MACHUYENBAY
) AS result;
insert into test_1 values (@thang, @countcb, @doanhthu);
set @thang = @thang + 1;
end while;
select t.thangg as 'Tháng', t.socb as 'Số chuyến bay', t.doanhthuu as 'Doanh thu' from test_1 t;
END //
DELIMITER ;
```
So my question is: does MySQL 5.5 support CREATE TEMPORARY TABLE or not?
If it does, then why do I get this problem?
Thanks for your help! I really appreciate it.
Comment: you are mixing `socb` and `countcb`, is that correct?
Comment: I declare countcb variable to select count(*) into it, then insert it into the temporary table. The socb is just the name of the column in that temporary table.
|
Title: Windows USB mass storage - Garmin Alpha 200i mounted as a 'Device' but not a 'Drive'
Tags: python;windows;usb;garmin
Question: We have a python program that scans the mounted drive letters (or volumes, for Linux) for a certain file that indicates a Garmin handheld GPS. But, the Garmin Alpha 200i is mounted by windows as a 'Device' and not as a 'Drive', and therefore it has no drive letter and you can't get to it from Windows batch or Powershell in the standard C:/Folder notation.
How can we go about accessing the files on the 'Device' from python (or batch or PowerShell)?
It's definitely a mass storage device and has a directory structure - just not sure how to get to it programmatically:
Thinking that this is a Windows or python question, rather than a Garmin question. This is the first Garmin handheld GPS model we've encountered that mounts as a 'Device' instead of a 'Drive'.
The Garmin manual says that the handheld should be recognized as one or two removable drives, but, that is not the case. Earlier GPS models do mount up as two drives - one for the handheld's internal storage and another for its memory card if any.
Comment: I guess it works in MTP mode: https://stackoverflow.com/questions/11161747/how-to-access-an-mtp-usb-device-with-python
Here is the accepted answer: Modern devices use Media Transfer Protocol (MTP) instead of USB Mass storage.
This protocol is intentionally limited, however, and cannot provide drive letters.
You can try one of the Python wrappers for LibMTP.
Comment for this answer: Thanks - I never noticed that my phone behaves the same way in that it doesn't mount as a drive letter. Looking at available options for libmtp and its python wrappers on Windows, (pymtp is apparently python2-only; mtpy is apparently linux-only) it doesn't seem like there are any turn-key out-of-the-box solutions, except for MPTDrive.com which is not free. It's interesting that there's no out-of-the-box solution for this by now - or maybe I'm missing something more obvious?
Comment for this answer: Yup - MTP it is, thanks for pointing it out. I was not familiar with MTP before. I've been exploring the best way to talk with it, and the mtpmount project is what I'm going with for now, mainly since it has easy turnkey installation. Currently I'm trying to figure out if the unmount process is strictly required (from the point of view of mtpmount and dokany) since we work in an environment where things are just usually hot-unplugged immediately after the file transfer. Anyway thanks for the solution to this particular question!
|
Title: XML Extension not found even though php-xml extension is installed
Tags: php;pecl
Question: ```sudo pecl install trader```
shows an error like this: ```XML Extension not found```.
```[whitebear@jp2 ~]$ sudo pecl install trader
Warning: Invalid argument supplied for foreach() in Command.php on line 259
Warning: Invalid argument supplied for foreach() in /usr/share/pear/PEAR/Command.php on line 259
Warning: Invalid argument supplied for foreach() in Command.php on line 259
Warning: Invalid argument supplied for foreach() in /usr/share/pear/PEAR/Command.php on line 259
Warning: Invalid argument supplied for foreach() in Command.php on line 259
Warning: Invalid argument supplied for foreach() in /usr/share/pear/PEAR/Command.php on line 259
Warning: Invalid argument supplied for foreach() in Command.php on line 259
Warning: Invalid argument supplied for foreach() in /usr/share/pear/PEAR/Command.php on line 259
Warning: Invalid argument supplied for foreach() in Command.php on line 259
Warning: Invalid argument supplied for foreach() in /usr/share/pear/PEAR/Command.php on line 259
Warning: Invalid argument supplied for foreach() in Command.php on line 259
Warning: Invalid argument supplied for foreach() in /usr/share/pear/PEAR/Command.php on line 259
Warning: Invalid argument supplied for foreach() in Command.php on line 259
Warning: Invalid argument supplied for foreach() in /usr/share/pear/PEAR/Command.php on line 259
Warning: Invalid argument supplied for foreach() in Command.php on line 259
Warning: Invalid argument supplied for foreach() in /usr/share/pear/PEAR/Command.php on line 259
Warning: Invalid argument supplied for foreach() in Command.php on line 259
Warning: Invalid argument supplied for foreach() in /usr/share/pear/PEAR/Command.php on line 259
Warning: Invalid argument supplied for foreach() in Command.php on line 259
Warning: Invalid argument supplied for foreach() in /usr/share/pear/PEAR/Command.php on line 259
Warning: Invalid argument supplied for foreach() in Command.php on line 259
Warning: Invalid argument supplied for foreach() in /usr/share/pear/PEAR/Command.php on line 259
Notice: Undefined index: honorsbaseinstall in Role.php on line 180
Notice: Undefined index: honorsbaseinstall in Role.php on line 180
Notice: Undefined index: honorsbaseinstall in Role.php on line 180
Notice: Undefined index: honorsbaseinstall in Role.php on line 180
Notice: Undefined index: honorsbaseinstall in Role.php on line 180
Notice: Undefined index: honorsbaseinstall in Role.php on line 180
Notice: Undefined index: honorsbaseinstall in Role.php on line 180
Notice: Undefined index: honorsbaseinstall in Role.php on line 180
Notice: Undefined index: honorsbaseinstall in Role.php on line 180
Notice: Undefined index: installable in Role.php on line 145
Notice: Undefined index: installable in Role.php on line 145
Notice: Undefined index: installable in Role.php on line 145
Notice: Undefined index: installable in Role.php on line 145
Notice: Undefined index: installable in Role.php on line 145
Notice: Undefined index: installable in Role.php on line 145
Notice: Undefined index: installable in Role.php on line 145
Notice: Undefined index: installable in Role.php on line 145
Notice: Undefined index: installable in Role.php on line 145
Notice: Undefined index: phpfile in Role.php on line 212
Notice: Undefined index: phpfile in Role.php on line 212
Notice: Undefined index: phpfile in Role.php on line 212
Notice: Undefined index: phpfile in Role.php on line 212
Notice: Undefined index: phpfile in Role.php on line 212
Notice: Undefined index: phpfile in Role.php on line 212
Notice: Undefined index: phpfile in Role.php on line 212
Notice: Undefined index: phpfile in Role.php on line 212
Notice: Undefined index: phpfile in Role.php on line 212
Notice: Undefined index: config_vars in Role.php on line 49
Notice: Undefined index: config_vars in Role.php on line 49
Notice: Undefined index: config_vars in Role.php on line 49
Notice: Undefined index: config_vars in Role.php on line 49
Notice: Undefined index: config_vars in Role.php on line 49
Notice: Undefined index: config_vars in Role.php on line 49
Notice: Undefined index: config_vars in Role.php on line 49
Notice: Undefined index: config_vars in Role.php on line 49
Notice: Undefined index: config_vars in Role.php on line 49
Warning: Invalid argument supplied for foreach() in PEAR/Command.php on line 259
Warning: Invalid argument supplied for foreach() in /usr/share/pear/PEAR/Command.php on line 259
Warning: Invalid argument supplied for foreach() in PEAR/Command.php on line 259
Warning: Invalid argument supplied for foreach() in /usr/share/pear/PEAR/Command.php on line 259
Warning: Invalid argument supplied for foreach() in PEAR/Command.php on line 259
Warning: Invalid argument supplied for foreach() in /usr/share/pear/PEAR/Command.php on line 259
Warning: Invalid argument supplied for foreach() in PEAR/Command.php on line 259
Warning: Invalid argument supplied for foreach() in /usr/share/pear/PEAR/Command.php on line 259
Warning: Invalid argument supplied for foreach() in PEAR/Command.php on line 259
Warning: Invalid argument supplied for foreach() in /usr/share/pear/PEAR/Command.php on line 259
Warning: Invalid argument supplied for foreach() in PEAR/Command.php on line 259
Warning: Invalid argument supplied for foreach() in /usr/share/pear/PEAR/Command.php on line 259
Warning: Invalid argument supplied for foreach() in PEAR/Command.php on line 259
Warning: Invalid argument supplied for foreach() in /usr/share/pear/PEAR/Command.php on line 259
Warning: Invalid argument supplied for foreach() in PEAR/Command.php on line 259
Warning: Invalid argument supplied for foreach() in /usr/share/pear/PEAR/Command.php on line 259
Warning: Invalid argument supplied for foreach() in PEAR/Command.php on line 259
Warning: Invalid argument supplied for foreach() in /usr/share/pear/PEAR/Command.php on line 259
Warning: Invalid argument supplied for foreach() in PEAR/Command.php on line 259
Warning: Invalid argument supplied for foreach() in /usr/share/pear/PEAR/Command.php on line 259
Warning: Invalid argument supplied for foreach() in PEAR/Command.php on line 259
Warning: Invalid argument supplied for foreach() in /usr/share/pear/PEAR/Command.php on line 259
XML Extension not found
```
I googled around and found that this error is related to the XML or DOM module.
However, both dom and xml are installed in my PHP.
Has anyone bumped into the same error?
```[whitebear@jp2 ~]$ php -m
[PHP Modules]
bz2
calendar
Core
ctype
curl
date
dom
ereg
exif
fileinfo
filter
ftp
gd
gettext
hash
iconv
json
libxml
mbstring
mcrypt
mhash
mysql
mysqli
mysqlnd
openssl
pcntl
pcre
PDO
pdo_mysql
pdo_sqlite
Phar
readline
Reflection
session
SimpleXML
soap
sockets
SPL
sqlite3
standard
tidy
tokenizer
wddx
xdebug
xml
xmlreader
xmlwriter
xsl
Zend OPcache
zip
zlib
[Zend Modules]
Xdebug
Zend OPcache
```
Comment: thanks that helps me.
Comment: [This](http://serverfault.com/questions/589877/pecl-command-produces-long-list-of-errors) may help.
|
Title: Is there a command line argument parsing library that supports parsing Java-like system arguments (-Dproperty=value)?
Tags: c#;command-line-arguments
Question: Basically I would like to specify a number of different arguments, and ideally have a Dictionary populated. Something along the lines of:
``` c:\> MyApp.exe -Dproperty1=value1 -Dproperty2=value2
```
Would translate to something like:
``` Dictionary<string,string>() { { "property1", "value1" }, { "property2", "value2" } }
```
I have looked at a number of solutions, and none of them seem to support this type of syntax. Before I create my own, or modify someone else's, I thought I would ask.
Here is another answer: You need to write your own parsing algorithm. Anyway, it's not that hard using ```Linq```, for example:
```args.Skip(1) // skip the program name
.Select(x => {
var parts = x.Split('=');
return new KeyValuePair<string,string>(
parts[0].Replace("-D",""),
parts[1]);
}).ToDictionary(x => x.Key,x => x.Value);
```
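If other argument styles (filenames, flags) must coexist, one sketch is to filter for the ```-D``` prefix first (as also suggested in the comments below):
```var props = args
    .Where(a => a.StartsWith("-D"))
    .Select(a => a.Substring(2).Split(new[] { '=' }, 2)) // drop "-D", split on first '='
    .Where(p => p.Length == 2)
    .ToDictionary(p => p[0], p => p[1]);
```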
Comment for this answer: @travis you can add `.Where(x => x.StartsWith("-D"))` before the Select. that will only give you arguments that starts with -D
Comment for this answer: Ya, that would work well if these were the only type of argument that I would need to provide. The app will have other though, filename, logfilename, whatever. I was hoping for a solution that allows me to create a few attribute classes, and let something else deal with the parsing, validation and help generation.
|
Title: Cloning: what's the fastest alternative to JSON.parse(JSON.stringify(x))?
Tags: javascript
Question: What's the fastest alternative to
```JSON.parse(JSON.stringify(x))
```
There must be a nicer/built-in way to perform a deep clone on objects/arrays, but I haven't found it yet.
Any ideas?
Comment: @jishi you mean [`Node`](http://www.w3.org/TR/domcore/#interface-node) has a [`.cloneNode`](http://www.w3.org/TR/domcore/#dom-node-clonenode) method
Comment: JSON.parse(JSON.stringify(x)) is NOT a valid way to clone objects. If you have a method as an Object's member, this method will not be copied. Just try to run this code `JSON.parse(JSON.stringify({ a: '1', foo: (a) => a*a }))`. And you will see that the **foo** method is dropped.
Comment: JSON serialization and parsing is painfully slow, so any native methods of creating your objects is probably faster. As a side note, DOM-objects do have a native clone() method.
Comment: possible duplicate of [What is the most efficient way to clone a JavaScript object?](http://stackoverflow.com/questions/122102/what-is-the-most-efficient-way-to-clone-a-javascript-object)
Comment: @Raynos: ah, yes, my bad. To much jQuery.
Comment: This question is confusing - are you talking about general JavaScript objects (there is no such thing as arrays) - or JSON which is a subset of JavaScript objects?
Here is the accepted answer: No, there is no built-in way to deep clone objects.
And deep cloning is a difficult, edge-case-ridden thing to deal with.
Let's assume that a method ```deepClone(a)``` should return a "deep clone" of a.
Now a "deep clone" is an object with the same [[Prototype]] and having all the own properties cloned over.
For each property that is cloned over, if it has own properties that can be cloned over, then do so, recursively.
Of course we're keeping the metadata attached to properties, like [[Writable]] and [[Enumerable]], intact. And we will just return the thing if it's not an object.
```var deepClone = function (obj) {
try {
var names = Object.getOwnPropertyNames(obj);
} catch (e) {
if (e.message.indexOf("not an object") > -1) {
// is not object
return obj;
}
}
var proto = Object.getPrototypeOf(obj);
var clone = Object.create(proto);
names.forEach(function (name) {
var pd = Object.getOwnPropertyDescriptor(obj, name);
if (pd.value) {
pd.value = deepClone(pd.value);
}
Object.defineProperty(clone, name, pd);
});
return clone;
};
```
This will fail for a lot of edge cases.
Live Example
As you can see, you can't deep clone objects generally without breaking their special properties (like ```.length``` in an array). To fix that you have to treat ```Array``` separately, and then treat every special object separately.
What do you expect to happen when you do ```deepClone(document.getElementById("foobar"))``` ?
As an aside, shallow clones are easy.
```Object.getOwnPropertyDescriptors = function (obj) {
var ret = {};
Object.getOwnPropertyNames(obj).forEach(function (name) {
ret[name] = Object.getOwnPropertyDescriptor(obj, name);
});
return ret;
};
var shallowClone = function (obj) {
return Object.create(
Object.getPrototypeOf(obj),
Object.getOwnPropertyDescriptors(obj)
);
};
```
Comment for this answer: Edge cases being cyclic references and host object references?
Comment for this answer: @katspaugh and special objects like `Array` and it's `.length` magic, I think `Date` also has magic in there somewhere. And Host objects, yes. I don't know what's an easy fix for cyclic references.
Comment for this answer: @AndyE Why you lie. Date has magic [`console.log(Object.create(Date.prototype));
console.log(new Date());`](http://jsfiddle.net/BcNjU/5/)
Comment for this answer: @Raynos: `Date` doesn't have any magic. We've been over this :-p
Comment for this answer: @Raynos: that's like saying String has magic - `console.log((new String()).hasOwnProperty("length"), Object.create(String.prototype).hasOwnProperty("length"))` - the "magic" is clearly done in the constructor.
Here is another answer: I was actually comparing it against ```angular.copy```
You can run the JSperf test here:
https://jsperf.com/angular-copy-vs-json-parse-string
I'm comparing:
```myCopy = angular.copy(MyObject);
```
vs
```myCopy = JSON.parse(JSON.stringify(MyObject));
```
This is the fastest of all the tests I could run on all my computers.
Comment for this answer: With both of you named Alex, I was very confused and thought you were talking to yourself.
Comment for this answer: JSON parse will remove undefined or anonymous empty functions
var orig = {a: "A", b: undefined};
var assigned = Object.assign({}, orig); //{a: "A", b: undefined}
var jsoned = JSON.parse(JSON.stringify(orig)) // {a: "A"}
https://medium.com/@pmzubar/why-json-parse-json-stringify-is-a-bad-practice-to-clone-an-object-in-javascript-b28ac5e36521
Here is another answer: The 2022 solution for this is to use structuredClone
See : https://developer.mozilla.org/en-US/docs/Web/API/structuredClone
```structuredClone(x)
```
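For instance (a minimal sketch; unlike the JSON round-trip, ```structuredClone``` also handles Dates, Maps, and cyclic references, though not functions):
```const original = { when: new Date(), nested: { list: [1, 2, 3] } };
original.self = original;               // cyclic reference
const copy = structuredClone(original); // deep clone, cycle preserved
console.log(copy.self === copy);        // true
console.log(copy.when instanceof Date); // true
```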
Here is another answer: Cyclic references are not really an issue. I mean they are but that's just a matter of proper record keeping. Anyway quick answer for this one. Check this:
https://github.com/greatfoundry/json-fu
In my mad scientist lab of crazy javascript hackery I've been putting the basic implementation to use in serializing the entirety of the javascript context, including the entire DOM from Chromium, sending it over a websocket to Node and reserializing it successfully. The only problematic cyclic issue is navigator.mimeTypes and navigator.plugins referencing one another to infinity, but that is easily solved.
```(function(mimeTypes, plugins){
delete navigator.mimeTypes;
delete navigator.plugins;
var theENTIREwindowANDdom = jsonfu.serialize(window);
WebsocketForStealingEverything.send(theENTIREwindowANDdom);
navigator.mimeTypes = mimeTypes;
navigator.plugins = plugins;
})(navigator.mimeTypes, navigator.plugins);
```
JSONFu uses the tactic of creating Sigils which represent more complex data types. Like a MoreSigil which say that the item is abbreviated and there's X levels deeper which can be requested. It's important to understand that if you're serializing EVERYTHING then it's obviously more complicated to revive it back to its original state. I've been experimenting with various things to see what's possible, what's reasonable, and ultimately what's ideal. For me the goal is a bit more auspicious than most needs in that I'm trying to get as close to merging two disparate and simultaneous javascript contexts into a reasonable approximation of a single context. Or to determine what the best compromise is in terms of exposing the desired capabilities while not causing performance issues. When you start looking to have revivers for functions then you cross the land from data serialization into remote procedure calling.
A neat hacky function I cooked up along the way classifies all the properties on an object you pass to it into specific categories. The purpose for creating it was to be able to pass a window object in Chrome and have it spit out the properties organized by what's required to serialize and then revive them in a remote context. Also to accomplish this without any sort of preset cheatsheet lists, like a completely dumb checker that makes the determinations by prodding the passed value with a stick. This was only designed and ever checked in Chrome and is very much not production code, but it's a cool specimen.
```// categorizeEverything takes any object and will sort its properties into high level categories
// based on it's profile in terms of what it can in JavaScript land. It accomplishes this task with a bafflingly
// small amount of actual code by being extraordinarily uncareful, forcing errors, and generally just
// throwing caution to the wind. But it does a really good job (in the one browser I made it for, Chrome,
// and mostly works in webkit, and could work in Firefox with a modicum of effort)
//
// This will work on any object but its primarily useful for sorting the shitstorm that
// is the webkit global context into something sane.
function categorizeEverything(container){
var types = {
// DOMPrototypes are functions that get angry when you dare call them because IDL is dumb.
// There's a few DOM protos that actually have useful constructors and there currently is no check.
// They all end up under Class which isn't a bad place for them depending on your goals.
// [Audio, Image, Option] are the only actual HTML DOM prototypes that sneak by.
DOMPrototypes: {},
// Plain object isn't callable, Object is its [[proto]]
PlainObjects: {},
// Classes have a constructor
Classes: {},
// Methods don't have a "prototype" property and their [[proto]] is named "Empty"
Methods: {},
// Natives also have "Empty" as their [[proto]]. This list has the big boys:
// the various Error constructors, Object, Array, Function, Date, Number, String, etc.
Natives: {},
// Primitives are instances of String, Number, and Boolean plus bonus friends null, undefined, NaN, Infinity
Primitives: {}
};
var str = ({}).toString;
function __class__(obj){ return str.call(obj).slice(8,-1); }
Object.getOwnPropertyNames(container).forEach(function(prop){
var XX = container[prop],
xClass = __class__(XX);
// dumping the various references to window up front and also undefineds for laziness
if(xClass == "Undefined" || xClass == "global") return;
// Easy way to rustle out primitives right off the bat,
// forcing errors for fun and profit.
try {
Object.keys(XX);
} catch(e) {
if(e.type == "obj_ctor_property_non_object")
return types.Primitives[prop] = XX;
}
// I'm making a LOT flagrant assumptions here but process of elimination is key.
var isCtor = "prototype" in XX;
var proto = Object.getPrototypeOf(XX);
// All Natives also fit the Class category, but they have a special place in our heart.
if(isCtor && proto.name == "Empty" ||
XX.name == "ArrayBuffer" ||
XX.name == "DataView" ||
"BYTES_PER_ELEMENT" in XX) {
return types.Natives[prop] = XX;
}
if(xClass == "Function"){
try {
// Calling every single function in the global context without a care in the world?
// There's no way this can end badly.
// TODO: do this nonsense in an iframe or something
XX();
} catch(e){
// Magical functions which you can never call. That's useful.
if(e.message == "Illegal constructor"){
return types.DOMPrototypes[prop] = XX;
}
}
// By process of elimination only regular functions can still be hanging out
if(!isCtor) {
return types.Methods[prop] = XX;
}
}
// Only left with full fledged objects now. Invokability (constructor) splits this group in half
return (isCtor ? types.Classes : types.PlainObjects)[prop] = XX;
// JSON, Math, document, and other stuff gets classified as plain objects
// but they all seem correct going by what their actual profiles and functionality
});
return types;
};
```
Comment for this answer: @benvie Would this be applicable to [my question here](http://stackoverflow.com/questions/35952712/how-to-make-cloned-dom-element-json-serializable)?
Comment for this answer: Almost as terrifying as it is off-topic.
|
Title: How do I create a website Custom Audience from a custom event?
Tags: facebook-graph-api;facebook-php-sdk;facebook-marketing-api
Question: How can I add a custom audience based on events, like a click anywhere on a website page?
I am planning to use
```fbq('track', 'Purchase', {
content_name: 'Really Fast Running Shoes',
content_category: 'Apparel & Accessories > Shoes',
content_ids: ['1234'],
content_type: 'product',
value: 199.50,
currency: 'USD'
});
```
from
https://developers.facebook.com/docs/ads-for-websites/tag-api/.
I have 2 points of confusion.
Is the audience connected to the ad account or to the custom audience?
Secondly, since the pixel ID is connected to the ad account, my confusion is: when I track events on the page using fbq, how will they be connected to a specific custom audience?
Am I thinking about this the right way? Any suggestions?
Here is another answer: Accounts can have many Audiences. One type of Audience is Website Custom Audiences (WCA), which are based on pixel activity. Website Custom Audiences are groups of people that have taken actions resulting in pixel fires that match a certain rule, such as "made a Purchase".
So when a pixel fires, Facebook checks for all the WCA (rules) on that pixel. If any of the rules match the data in the pixel, the person is added to the WCA that corresponds to the rule. A single pixel fire could add a person to more than one audience if multiple WCA rules matched the pixel event.
You can create a WCA based on your pixel by going to ```https://business.facebook.com/ads/manager/pixel/facebook_pixel/?act=[Account ID]``` and clicking the "Create Audience" button.
|
Title: How to handle Firebase error messages for versions 17.0.1 and 16.0.1
Tags: java;android;firebase;google-cloud-firestore;build.gradle
Question: I am currently working with Cloud Firestore as well as Cloud Storage and I keep getting these error messages right after adding the necessary dependencies to my app:
```Failed to resolve: firebase-firestore-15.0.0
Failed to resolve: firebase-storage-15.0.0
Failed to resolve: firebase-auth-15.0.0
```
I am pretty sure I have to fix the code lines but I don't know which part I have to edit:
Comment: @Dee Remove `:15.0.0` from the end of each dependency
Here is the accepted answer: You are getting the following errors:
```
Failed to resolve: firebase-firestore-15.0.0
Failed to resolve: firebase-storage-15.0.0
Failed to resolve: firebase-auth-15.0.0
```
Because you are using wrong dependencies in your code. To solve this, please change the following lines of code:
```implementation 'com.google.firebase:firebase-firestore:16.0.1:15.0.0'
implementation 'com.google.firebase:firebase-storage:16.0.1:15.0.0'
implementation 'com.google.firebase:firebase-auth:16.0.1:15.0.0'
```
to
```implementation 'com.google.firebase:firebase-firestore:18.0.0'
implementation 'com.google.firebase:firebase-storage:16.0.5'
implementation 'com.google.firebase:firebase-auth:16.1.0'
```
Because versions such as ```16.0.1:15.0.0``` do not exist.
Please also add the following dependency which is now mandatory:
```implementation 'com.google.firebase:firebase-core:16.0.6'
```
```
Your app gradle file now has to explicitly list ```com.google.firebase:firebase-core``` as a dependency for Firebase services to work as expected.
```
In your top level ```build.gradle``` file please be sure to have the latest version of Google Service plugin:
```classpath 'com.google.gms:google-services:4.2.0'
```
Comment for this answer: In my opinion it's best to add those dependency manually not using tools.
Comment for this answer: Yes, that worked! But why does Firebase keep adding these flawed dependencies in Android? I did it several times now and it keeps happening.
|
Title: Do multiple calls to LoggerFactory.CreateLogger<...> create a new file?
Tags: asp.net
Question: In each asp.net core page, I make the following call:
```new MyClass(LoggerFactory.CreateLogger<...>);
```
I am wondering whether that is the correct way, or whether I should keep a singleton logger to pass around.
Here is the accepted answer: Yes. This is a correct usage.
The instance of the Logger that is created is independent of the output Sinks (file/console/debug/etc). Whatever sinks you have configured in your Startup will be used by the Logger instances that are created.
For Reference: https://docs.microsoft.com/en-us/aspnet/core/fundamentals/logging/?view=aspnetcore-2.1
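For illustration, a minimal sketch (hypothetical type names; assumes the standard Microsoft.Extensions.Logging setup):
```// Two independent logger facades, one shared set of providers:
var a = LoggerFactory.CreateLogger<PageA>();
var b = LoggerFactory.CreateLogger<PageB>();
// Neither call opens a file; both write through the providers
// (console, file, debug, ...) configured once at startup.
a.LogInformation("from PageA");
b.LogInformation("from PageB");
```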
|
Title: How do I check whether a given variable value is of type string
Tags: string;types;common-lisp;sbcl
Question: Essentially I'd say that you'll have to use ```(typep var 'string-type)```, but there is no such type as string as far as I know.
Determining a type via type-of results in
```(type-of "rowrowrowyourboat")
> (SIMPLE-ARRAY CHARACTER (17))
```
which is hardly a type I could look for in a generic way, as looking for just ```SIMPLE-ARRAY``` wouldn't do any good:
```(typep "rowrowrowyourboat" 'simple-array)
> t
(typep (make-array 1) 'simple-array)
> t
```
And using the intuitive hack of dynamically determining the type of an example string doesn't do any good either, as they will not be of the same length (most of the time):
```(typep "rowrowrowyourboat" (type-of "string"))
> nil
```
So I wonder what is the canonical way to check whether a given variable is of type string?
Here is the accepted answer: Most types have a predicate in CL, and even though a string is a sequence of chars, there exists a function, stringp, that does exactly what you want.
```(stringp "getlydownthestream") ; ==> T
```
It says in the documentation that that would be the same as writing
```(typep "ifyouseeacrocodile" 'string) ; ==> T
```
Here is another answer: Your question has several misconceptions. Usually in Lisp variables don't have a type.
Repeat: Variables in Lisp have no types.
You can ask if some Lisp value is of some type or what type it has. Lisp objects have types attached.
Common Lisp has no type 'string'? Why don't you look into the documentation? It is easy.
Common Lisp HyperSpec: http://www.lispworks.com/documentation/HyperSpec/Front/index.htm
Symbol Index: http://www.lispworks.com/documentation/HyperSpec/Front/X_Symbol.htm
Symbol Index for S: http://www.lispworks.com/documentation/HyperSpec/Front/X_Alph_S.htm
STRING: http://www.lispworks.com/documentation/HyperSpec/Body/a_string.htm#string
System Class STRING: http://www.lispworks.com/documentation/HyperSpec/Body/t_string.htm
So Common Lisp has a type ```STRING```.
The Strings Dictionary also lists other string types: ```BASE-STRING```, ```SIMPLE-STRING```, ```SIMPLE-BASE-STRING```.
I'm using LispWorks, so the returned types look a bit different:
```CL-USER 20 > (type-of "foo")
SIMPLE-BASE-STRING
CL-USER 21 > (typep "foo" 'string)
T
CL-USER 22 > (stringp "foo")
T
CL-USER 23 > (subtypep 'simple-base-string 'string)
T
T
CL-USER 24 > (let ((var "foo")) (typep var 'string))
T
```
Secret: variables in Common Lisp can be typed, but that is another story.
Comment for this answer: i meant (of course) the value of the variable - will correct that
|
Title: Is it possible to add a Scale Set to an existing VNET
Tags: azure;azure-vm-scale-set
Question: I have already created a VNet and subnet and added other resources to this VNet. Now I need to add a scale set to this subnet. Is this possible?
When I create a VM scale set from the Azure Portal, it does not ask for a VNet; it creates its own VNet/subnet.
Here is another answer: Assuming that you have all the resources needed for the scale set configuration, below are the steps to create a virtual machine scale set.
```#IP configuration
$ipName = "IP configuration name"
#create the IP configuration
$ipConfig = New-AzureRmVmssIpConfig -Name $ipName -LoadBalancerBackendAddressPoolsId $null -SubnetId $vnet.Subnets[0].Id
$vmssConfig = "Scale set configuration name"
#create the configuration for the scale set
$vmss = New-AzureRmVmssConfig -Location $locName -SkuCapacity 3 -SkuName "Standard_A0" -UpgradePolicyMode "manual"
#Add the network interface configuration to the scale set configuration
Add-AzureRmVmssNetworkInterfaceConfiguration -VirtualMachineScaleSet $vmss -Name $vmssConfig -Primary $true -IPConfiguration $ipConfig
$vmssName = "scale set name"
#create the scale set
New-AzureRmVmss -ResourceGroupName $rgName -Name $vmssName -VirtualMachineScaleSet $vmss
```
Refer to https://azure.microsoft.com/en-us/documentation/articles/virtual-machine-scale-sets-windows-create/ for step-by-step guidance.
Notice that you are using the latest version of Azure Powershell otherwise you will miss some Powershell cmdlets.
Hope this helps.
Comment for this answer: Can I add this to Traffic Manager? I am thinking of 3 VMSS in 3 different subnets, attached to my Traffic Manager for HA?
|
Title: How to create different sections in an Angular template (HTML file) conditionally
Tags: html;angularjs;angularjs-ng-repeat
Question: I am constructing ```three``` different tables as follows.
```
There is a lot of repetitive code in it.
```
The only difference in each table is the ```studentPermissions[]``` array, which contains three objects, one per table. So under each table I am just changing the index: ```studentPermissions[0], studentPermissions[1] and studentPermissions[2].```
Is there any better way to write the same code in fewer lines?
```<div class="module-container">
<div>
<table>
<thead>
<tr >
<td>Student</td>
<td>{{studentPermissions[0].SubjectName}}</td>
</tr>
<tr>
<th></th>
<th ng-repeat="permission in permissions"><div><span>{{permission.Name}}</span></div></th>
</tr>
</thead>
<tbody>
<tr ng-repeat="student in studentPermissions[0].Students">
<td>{{student.FirstName}} {{student.LastName}} <small class="" style="color: #999999;">({{student.Role}})</small></td>
<td ng-repeat="permission in student.Permissions">
<input ng-model="permission.Checked" type="checkbox">
</td>
</tr>
</tbody>
</table>
</div>
<div>
<table>
<thead>
<tr >
<td>Student</td>
<td>{{studentPermissions[1].SubjectName}}</td>
</tr>
<tr>
<th></th>
<th ng-repeat="permission in permissions"><div><span>{{permission.Name}}</span></div></th>
</tr>
</thead>
<tbody>
<tr ng-repeat="student in studentPermissions[1].Students">
<td>{{student.FirstName}} {{student.LastName}} <small class="" style="color: #999999;">({{student.Role}})</small></td>
<td ng-repeat="permission in student.Permissions">
<input ng-model="permission.Checked" type="checkbox">
</td>
</tr>
</tbody>
</table>
</div>
<div>
<table>
<thead>
<tr >
<td>Student</td>
<td>{{studentPermissions[2].SubjectName}}</td>
</tr>
<tr>
<th></th>
<th ng-repeat="permission in permissions"><div><span>{{permission.Name}}</span></div></th>
</tr>
</thead>
<tbody>
<tr ng-repeat="student in studentPermissions[2].Students">
<td>{{student.FirstName}} {{student.LastName}} <small class="" style="color: #999999;">({{student.Role}})</small></td>
<td ng-repeat="permission in student.Permissions">
<input ng-model="permission.Checked" type="checkbox">
</td>
</tr>
</tbody>
</table>
</div>
</div>
```
I was not sure about the question title, so please feel free to update it to make it more relevant to the question contents. Thanks.
Also, I have to show the data for each customer in a separate table; this is a requirement.
Comment: @ kaveman, yes i am giving a try right away thanks
Comment: can you do an outer `ng-repeat` on `studentPermissions`?
Comment: can you provide a fiddle or plnkr link ??
Here is the accepted answer: Why not just use another ng-repeat?
```<div class="module-container">
<div ng-repeat="studentPermission in studentPermissions">
<table>
<thead>
<tr >
<td>Student</td>
<td>{{studentPermission.SubjectName}}</td>
</tr>
<tr>
<th></th>
<th ng-repeat="permission in permissions"><div><span>{{permission.Name}}</span></div></th>
</tr>
</thead>
<tbody>
<tr ng-repeat="student in studentPermission.Students">
<td>{{student.FirstName}} {{student.LastName}} <small class="" style="color: #999999;">({{student.Role}})</small></td>
<td ng-repeat="permission in student.Permissions">
<input ng-model="permission.Checked" type="checkbox">
</td>
</tr>
</tbody>
</table>
</div>
</div>
```
Comment for this answer: yes i guess you're absolutely right. lemme give a try before marking it answer. thanks
Comment for this answer: absolutely cool. works fine and code is optimized as well thanks
|
Title: Primality Check
Tags: java;time-complexity;primes
Question: Every prime number is of the form 6k+1 or 6k-1. In order to check whether a number is prime or not, we can use the algorithm below. I have seen programs written based on these algorithms.
```public boolean isPrime(int n)
{
if (n <= 1) return false;
if (n <= 3) return true;
if (n%2 == 0 || n%3 == 0) return false;
for (int i=5; i*i<=n; i=i+6)
if (n%i == 0 || n%(i+2) == 0)
return false;
return true;
}
```
But I don't understand what would have been the problem if we had written code in the following manner:
```public boolean isPrime(int number){
boolean primeFlag = false;
if(number == 0 || number ==1){
return primeFlag;
}
if(number == 2 || number == 3){
primeFlag = true;
}
if((number+1)%6 == 0){
primeFlag = true;
}
if((number-1)%6 == 0){
primeFlag = true;
}
return primeFlag;
}
```
By this we could reduce the time complexity to O(1), compared to O(sqrt(n)). Please let me know if I am heading in the wrong direction.
Comment: No. 2 and 3 are exceptions to your stated rule; neither are of the stated form.. In effect you are looking at a 2,3 wheel factorisation. Some internet research will help.
Here is the accepted answer: It is correct to say that every prime (except for 2 and 3) has a remainder of 1 or 5 when divided by 6 (see this page for a deeper explanation). However, the converse is not true. Not every number that has a remainder of 1 or 5 when divided by 6 is a prime.
Take 35 for example. It has a remainder of 5 when divided by 6, however it is not a prime (35 = 5 x 7).
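In other words, the 6k±1 form is necessary but not sufficient. A quick check (a sketch, assuming the first ```isPrime``` from the question is in scope):
```System.out.println(35 % 6);      // 5, so 35 is of the form 6k - 1
System.out.println(isPrime(35)); // false: the i = 5 trial division catches 35 = 5 * 7
```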
Comment for this answer: Thanks a lot. That explains everything.
|
Title: avoiding busy wait with boost::asio poll
Tags: c++;boost-asio;busy-waiting
Question: I'm writing a service on Linux that uses boost::asio::io_service with io_service::poll in a while loop. This is a busy-wait loop, i.e. it wastes CPU cycles.
```void Application::run()
{
    try
    {
        std::cout << "TcpServer starting..\n";
        _TcpServer.reset( new TcpServer(_io_service, boost::ref(_spState)) );
        // _io_service.run();
        while( ! _spState->QuitSignalled() )
        {
            _io_service.poll();
        }
        std::cerr << "quit signalled, TcpServer stopping.. :/" << std::endl;
    }
    catch(std::exception & e)
    {
        std::cout << e.what() << "\n";
    }
}
```
I use poll instead of run to check whether another thread in the service has signalled service shutdown.
Is there a way of achieving this without using poll in a busy-wait loop?
The service uses async IO on a single thread, and other threads do the data processing.
I've added a sleep between iterations of the loop, which seems to reduce the wasted CPU time, but I was hoping there might be a more efficient way:
```void Application::run()
{
    using boost::this_thread::sleep_for;
    static const boost::chrono::milliseconds napMsecs( 50 );
    try
    {
        std::cout << "TcpServer starting..\n";
        _TcpServer.reset( new TcpServer(_io_service, boost::ref(_spState)) );
        // _io_service.run();
        while( ! _spState->QuitSignalled() )
        {
            _io_service.poll();
            boost::this_thread::sleep_for( napMsecs );
        }
        std::cerr << "quit signalled, TcpServer stopping.. :/" << std::endl;
    }
    catch(std::exception & e)
    {
        std::cout << e.what() << "\n";
    }
}
```
Here is the accepted answer: I'd say, simply take advantage of the fact that ```boost::asio::io_service``` is fully threadsafe by default, and do
```iosvc.run();
```
And signal service shutdown on "another thread in the service" like you would:
```iosvc.stop();
```
Remember to ```iosvc.reset()``` before you call ```{run,poll}[_one]``` again, as per the documentation.
Of course you can also use other means to signal the actual logical workers to end, but that is completely independent of Boost Asio.
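For illustration, here is a minimal C++98-compatible sketch of that pattern (the two-second sleep stands in for your real shutdown logic; the names are illustrative, not from the question):
```#include <boost/asio.hpp>
#include <boost/bind.hpp>
#include <boost/thread.hpp>
#include <boost/date_time/posix_time/posix_time.hpp>
#include <iostream>

// Runs on "another thread in the service" and signals shutdown.
void signal_shutdown(boost::asio::io_service* iosvc)
{
    boost::this_thread::sleep(boost::posix_time::seconds(2)); // stand-in for real work
    iosvc->stop(); // threadsafe: wakes up run() in the other thread
}

int main()
{
    boost::asio::io_service iosvc;
    boost::asio::io_service::work work(iosvc); // keeps run() from returning early

    boost::thread controller(boost::bind(&signal_shutdown, &iosvc));

    iosvc.run(); // blocks without busy-waiting until stop() is called
    controller.join();
    std::cout << "io_service stopped\n";
    return 0;
}
```
This avoids the busy loop entirely: ```run()``` sleeps inside the OS demultiplexer until there is work to do or ```stop()``` is called.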
Comment for this answer: Actually, since `io_service` is threadsafe, you can just _call_ the method instead of posting it. I had a bit of a brainfart there
Comment for this answer: I'm restricted to using c++98 and boost 1_47, but I'll have a look into io_service.post to see if it can do the equivalent with a function object.
|
Title: I'm trying to search a string in multiple files and print them out in another file
Tags: python;regex;file
Question: I have multiple .txt files in a source folder whose path I have given in "src". I want to search for strings that look like "abcd.aiq" and print them to a file which I named "fi".
I have written the following code; it doesn't print anything inside the file, although it doesn't give any error.
```import glob
import re
import os
src = (C:\Auto_TEST\Testing\Automation")
file_array= glob.glob(os.path.join(src,".txt"))
fi= open("aiq_hits.txt","w")
for input_file in file_array:
fo=open(input_file,"r")
line=fo.readline()
for line in fo:
line=r.strip()
x= re.findall('\S*.aiq\S*',line)
line= fo.readline()
for item in x:
fi.write("%s\n" %item)
fo.close()
fi.close()
```
Comment: Your String is broken inside your `src` tuple => "`src = (C:\Auto_TEST\Testing\Automation")`" ***Note:** Technically it is not a tuple if you only have 1 item and it is not followed by a comma... `foo = ('bar',)`*
Comment: @Finwood: I think it's supposed to be "`line=line.strip()`" See OP's other; very-similar question: [Stack Overflow: I m searching for a specific string in multiple “.txt” files using python](http://stackoverflow.com/questions/29470178/i-m-searching-for-a-specific-string-in-multiple-txt-files-using-python).
Comment: Why `line = fo.readline()` in code-lines 9 and 16? And what is `r` in `line = r.strip()`? And why are you closing `fi` inside your loop?
Comment: Src tuple? It is the path of the folder where files to be checked are kept
Here is another answer: I suppose this is what you are trying to do:
```import glob
import re
import os.path
src = 'C:/Auto_TEST/Testing/Automation'
file_array = glob.glob(os.path.join(src,'*.txt'))
with open("aiq_hits.txt","w") as out_file:
for input_filename in file_array:
with open(input_filename) as in_file:
for line in in_file:
match = re.findall(r'\S*.aiq\S*', line)
for item in match:
out_file.write("%s\n" %item)
```
Let me quickly describe the changes I've made:
Opening files directly is not always a good idea: if the script crashes, the opened ```file``` object isn't closed again, which can lead to data loss.
Since PEP 343 Python has had the ```with``` statement, which is generally agreed to be the better way of handling files.
Calling ```f.readline()``` multiple times results in the script skipping those lines, because ```for line in f:``` already reads lines on its own (see the snippet below).
Finally, after every matching item you found, you had been closing both the input file and the output file, so further reading or writing wasn't possible anymore.
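To see the ```readline()``` pitfall concretely, here is a small illustrative snippet (Python 3; the file name is hypothetical):
```# Illustrative only: mixing readline() with iteration silently skips lines.
with open("sample.txt") as f:   # hypothetical input file
    first = f.readline()        # consumes line 1
    for line in f:              # the loop continues from line 2
        f.readline()            # throws away the line after each iterated one
        print(line.strip())     # so only every other line gets processed
```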
Edit: If you need to tweak your regex, this might be a useful resource.
|
Title: Using C# dynamic typing in Unity 5.3.1f
Tags: c#;unity3d;ironpython;dynamic-typing
Question: I have written code for my game that needs to run a function from my Python code. I am using IronPython for my project.
However, when I try to use C# dynamic typing to call a function in the code below, it compiles but I get the following errors:
```
" Assets/Scripts/WordSearchAlgorithm.cs(37,29): error CS1502:
The best overloaded method match for
System.Runtime.CompilerServices.CallSite,object>>.Create(System.Runtime.CompilerServices.CallSiteBinder)'
has some invalid arguments " "
Assets/Scripts/WordSearchAlgorithm.cs(37,29): error CS1503: Argument
'#1' cannot convert 'object' expression to type
'System.Runtime.CompilerServices.CallSiteBinder' " "
Assets/Scripts/WordSearchAlgorithm.cs(37,61): error CS0234: The type
or namespace name 'RuntimeBinder' does not exist in the namespace
`Microsoft.CSharp'. Are you missing an assembly reference? "
Assets/Scripts/WordSearchAlgorithm.cs(37,61): error CS1502: The best
overloaded method match for 'System.Runtime.CompilerServices.CallSite>.Create(System.Runtime.CompilerServices.CallSiteBinder)' has some invalid arguments
```
I think Mono doesn't support this. Could you please suggest a solution?
```static public void StartSearchAlgorithm()
{
List < string > myList = new List < string > ()
{
"fxie",
"amlo",
"ewbx",
"astu"
};
var ironPythonRuntime = Python.CreateRuntime();
try
{
//Load the Iron Python file/script into the memory
//Should be resolve at runtime
dynamic loadIPython = ironPythonRuntime.UseFile("C:/py.py");
//Invoke the method and print the result
loadIPython.BoggleWords(myList, loadIPython.MakeTrie("C:/words.txt")); // here is my problem to calling function from python that unity logError
// Debug.Log(string.Format("dd", loadIPython.BoggleWords(myList, loadIPython.MakeTrie("C:/words.txt"))));
}
catch (FileNotFoundException ex)
{}
}
```
Here is another answer: Unity uses the Mono 2.0 version of .NET, which is similar to .NET 3.5. ```dynamic``` was introduced in .NET 4.0, so Unity will probably fail to compile it.
There is the option to change Mono 2.0 sub to Mono 2.0 full in the Player Settings, but I don't know if that supports ```dynamic```. At least you can try.
Comment for this answer: Then it is not supported in Mono 2.0. Also see this answer: http://answers.unity3d.com/questions/686244/using-c-dynamic-typing-with-unity-434f1.html
Comment for this answer: Can you find an IronPython library that uses .NET 3.5 maybe?
Comment for this answer: IronPython 2.7.5 seems to have Net35 dll's as well. Try to reference those in your project and see if it doesn't use `dynamic`.
Comment for this answer: I am trying to change Mono 2.0 sub to Mono 2.0 full in the Player Settings, but the error still exists
Comment for this answer: Thanks Marnix for your answer. I had seen this topic before, but as in the code above, I want to call my Python function, which does not work without C# dynamic typing. Maybe I should change my code
Comment for this answer: I had used IronPython 2.6 from C# 4.0, but Unity had errors with its libraries
|
Title: Is Math.sqrt considered an elementary operation in terms of efficiency?
Tags: java
Question: I need to know whether Math.sqrt (Java) is an elementary operation (constant running time, O(1)) within an algorithm, so that I can determine the computational cost of a method. Thanks in advance.
Here is another answer: A Stack Overflow post (in English) says these are native implementations and there is no performance improvement to be had by implementing the function in pure Java to match it.
https://stackoverflow.com/questions/13263948/fast-sqrt-in-java-at-the-expense-of-accuracy
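As a rough, illustrative micro-benchmark (not rigorous; JIT warm-up and other effects are ignored), you can check that the cost of ```Math.sqrt``` does not grow with the magnitude of its argument, which is what constant time means in practice:
```public class SqrtTiming {
    public static void main(String[] args) {
        double sink = 0; // accumulate results so the JIT cannot remove the loop
        for (double x : new double[] {1e3, 1e9, 1e15}) {
            long start = System.nanoTime();
            for (int i = 0; i < 10000000; i++) {
                sink += Math.sqrt(x + i);
            }
            long elapsedMs = (System.nanoTime() - start) / 1000000;
            System.out.println("x = " + x + " -> " + elapsedMs + " ms");
        }
        System.out.println(sink); // timings stay roughly flat across magnitudes
    }
}
```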
Comment for this answer: The post contains a link to the implementation, so you can analyse its computational cost or the speed of the algorithm. https://hg.openjdk.java.net/jdk8/jdk8/jdk/file/687fd7c7986d/src/share/classes/java/lang/Math.java
|
Title: Error:(18) error: resource style/Widget.Design.Snackbar (aka com.jlcsoftware.callrecorder:style/Widget.Design.Snackbar) not found
Tags: java
Question: I am facing this problem when I create a new project; this error shows in the message view. Please help me or make a suggestion.
```C:\Users\User\.gradle\caches\transforms-1\files-1.1\design-27.0.2.aar\25b2c273a4f27c6741aa60f11c3823f0\res\layout\design_layout_snackbar.xml
Error:(18) error: resource style/Widget.Design.Snackbar (aka com.jlcsoftware.callrecorder:style/Widget.Design.Snackbar) not found.
C:\Users\User\.gradle\caches\transforms-1\files-1.1\design-27.0.2.aar\25b2c273a4f27c6741aa60f11c3823f0\res\layout-sw600dp-v13\design_layout_snackbar.xml
```
|
Title: Why do I get C2248 (inaccessible member) with a protected static member?
Tags: c++;inheritance
Question: Let's say I have:
```#include <Windows.h>
#include <iostream>
#include <vector>
std::vector<int> Base::m_intList;
class Base
{
public:
Base();
protected:
static std::vector<int> m_intList;
};
class Derived : Base
{
public:
Derived();
protected:
bool fWhatever;
};
class MoreDerived : Derived
{
public:
MoreDerived();
private:
HRESULT DoStuff();
};
Base::Base()
{
}
Derived::Derived()
{
}
MoreDerived::MoreDerived()
{
}
HRESULT MoreDerived::DoStuff()
{
    for (auto it = m_intList.begin(); it != m_intList.end(); it++)
    {
        std::cout << *it;
    }
}
```
When I try to compile this, I get "m_intList: cannot access inaccessible member declared in class 'MoreDerived'".
Question: Why can't I access a protected static member in the derived class's DoStuff function?
Comment: `Derived` inherits from `Base` by `private`. Is it your intent?
Comment: `std::vector<int> Base::m_intList;` is an error because `Base` is not declared. Please post the exact code that gave you the error you are asking about
Here is the accepted answer: ```class Derived : Base``` means ```class Derived : private Base```. The behaviour of private inheritance is:
```protected``` members of the base class become ```private``` members of the derived class.
```private``` members of the base class have no access as members of the derived class.
So ```m_intList``` is:
```protected``` in ```Base```
```private``` in ```Derived```
no access in ```MoreDerived```
hence your error.
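A minimal sketch of the fix, assuming the private inheritance was unintended: inherit publicly and the member stays ```protected``` all the way down the hierarchy (illustrative, trimmed-down version of the question's code):
```#include <iostream>
#include <vector>

class Base
{
protected:
    static std::vector<int> m_intList;
};

std::vector<int> Base::m_intList;

class Derived : public Base {};      // public: m_intList stays protected here
class MoreDerived : public Derived   // ...and is therefore still accessible here
{
public:
    void DoStuff()
    {
        m_intList.push_back(42);
        for (auto it = m_intList.begin(); it != m_intList.end(); ++it)
            std::cout << *it << '\n';
    }
};

int main()
{
    MoreDerived().DoStuff();
    return 0;
}
```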
|
Title: Cupertino navigation bar middle title not showing
Tags: flutter
Question: Hello, I am trying to give a title to the app bar via CupertinoNavigationBar's middle widget, but the text is not showing up. I tried to change the middle title text color; it's still not showing up. What am I missing here?
```static buildAppBar(
{bool isIOS,
Function onPressedShoppingSearch,
String heroTag,
String middleTitle}) {
return (isIOS == true)
? CupertinoNavigationBar(
middle: (middleTitle != null)
? Text(
middleTitle,
style: TextStyle(color: Colors.grey, fontSize: 18),
)
: Text(''),
transitionBetweenRoutes: false,
automaticallyImplyMiddle: true,
automaticallyImplyLeading: true,
heroTag: heroTag,
backgroundColor: Colors.white,
trailing: Row(
mainAxisAlignment: MainAxisAlignment.end,
children: [
GestureDetector(
child: Icon(
FrinoIcons.f_search_classic,
color: Colors.red,
size: 23,
),
onTap: onPressedShoppingSearch,
),
const SizedBox(width: 6),
const Icon(
FrinoIcons.f_cart,
color: Colors.red,
size: 23,
),
],
),
)
```
Here is the accepted answer: I've just had a similar problem and found that the ```middle``` widget wasn't being displayed because my ```trailing``` widget was deemed (somehow) to be too large for that section of the navigation bar.
There are a few people that seem to be having similar issues due to ```leading``` and ```trailing``` widgets requiring a smaller size: https://github.com/flutter/flutter/issues/36689
Here's the code that I used in order to get it working (both old & new).
ORIGINAL CODE (Which wasn't working):
```@override
Widget build(BuildContext context) {
return CupertinoPageScaffold(
navigationBar: CupertinoNavigationBar(
leading: Container(),
middle: Text(
'Filters'.tr,
),
trailing: _closeButton(),
),
backgroundColor: Colors.white,
child: SafeArea(
child: Stack(
children: [_closeButton()],
),
),
);
}
Widget _closeButton() {
return Align(
alignment: Alignment.topRight,
child: Padding(
padding: EdgeInsets.fromLTRB(0, 0, 0, 0),
child: Container(
height: 32,
width: 32,
child: TextButton(
onPressed: () => _closeView(),
child: Image(
color: Colors.black,
image: AssetImage(SignalAsset.imagePath('icon_cross')),
)),
),
),
);
}
```
NEW CODE (Which works):
```@override
Widget build(BuildContext context) {
return CupertinoPageScaffold(
navigationBar: CupertinoNavigationBar(
leading: Container(),
middle: Text(
'Filters'.tr,
),
trailing: GestureDetector(
child: Image(
color: Colors.black,
image: AssetImage(SignalAsset.imagePath('icon_cross')),
),
onTap: () => _closeView(),
),
),
backgroundColor: Colors.white,
child: SafeArea(
child: Stack(
children: [_closeButton()],
),
),
);
}
```
|
Title: Get sum of paired time differences
Tags: sql;sqlite
Question: I have an SQLite table of card-controlled entry door usages.
I need the sum of the even-odd time intervals, so for example the expected sum for card 02 is (12:03 - 08:07) + (16:03 - 14:52) = 03:56 + 01:21 = 05:07
People can move freely in daytime, so there could be many entries for a card.
Test table
The table looks like this:
```card_id | time
----------------------------
03 | 07:55
01 | 08:02
02 | 08:07
03 | 11:56
02 | 12:03
03 | 12:23
02 | 14:52
03 | 15:56
01 | 15:58
02 | 16:03
```
Comment: Well if the first one is missing you are a bit stuffed because there's no way to tell! The second one, default to now, to the end of the day, skip?
Comment: Yeah just thinking, normally I'd do something like this with a CTE but SQLite doesn't have that.
Comment: I can't do it without. :(. Would be significantly easier if Date and Time were one DateTime Column though
Comment: Thanks @Tony, null is good when there is no matching pair for an entry, but I don't know how to write a query to solve these problems. I can get the time difference between two entries, but this only works when there is only two entries for a card, and there could be any of them, because people can freely move in the daytime.
Comment: Then it seems that I have to solve it in Java, but i am eager to know if it could be somehow done in sqlite (actually I can't do it with a CTE either).
Comment: It could be datetime, or even just time! Actually this is just a memory table I am creating from the data imported from csv files, and they contain only one day per file. Editing the question :)
Here is the accepted answer: The query is not hard, but time interval handling is!
```SELECT card_id,
TIME(1721059.5+SUM(JULIANDAY(time)*((SELECT COUNT() FROM t AS _ WHERE card_id=t.card_id AND time<t.time)%2*2-1)))
FROM t
GROUP BY card_id;
```
How it works:
```((SELECT COUNT() FROM t AS _ WHERE card_id=t.card_id AND time<t.time)%2*2-1)``` counts all records before current one and returns ```-1``` or ```1```.
```JULIANDAY(time)``` convert time string in a numeric quantity. Product of former two results will return desired calculation (in days).
```TIME(1721059.5+...)``` will return a properly formatted time string. ```1721059.5``` is required because of ```JULIANDAY``` definition and SQLite date/time functions being only valid for dates after ```0000-01-01 00:00:00```.
EDIT: Looking at @CL's answer, I see that using ```JULIANDAY('00:00')``` is more elegant than ```1721059.5```. However, I keep the constant as it should perform better than a function call.
Comment for this answer: Yes. I didn't care about it as it was not a requirement.
Comment for this answer: If I understand correctly, this will always result in a negative interval when there is no matching pair for an entry (which is good, because it is very easy to filter those cards out of the results). Am I right?
Comment for this answer: It's good for me, because I must filter out them. Nice answer, thank you!
Here is another answer: I don't have an SQLite available to try this, but something like this ought to work:
```CREATE TEMP TABLE card_times AS
SELECT time FROM entries
WHERE card_id = 2;
SELECT SUM(strftime("%s", '2000-01-01 ' || x.time || ':00')
- strftime("%s", '2000-01-01 ' || y.time || ':00'))
FROM card_times AS x INNER JOIN card_times AS y
ON x.ROWID = y.ROWID + 1
WHERE (x.ROWID % 2) = 0
```
Get a table of times for the desired card. Join it with itself using row id's to match the right pairs. Filter out the odd pairs. Sum the results.
Note this will fail if the clock wraps to the next day. Taking care of that case in SQLite would be ugly. You really need full date-times in the original data.
This gives the answer in seconds. I let you work out how to get it in the units you need.
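For instance, SQLite's own date/time functions can format a number of seconds as a time string (illustrative query):
```-- 18420 seconds -> '05:07:00'
SELECT time(18420, 'unixepoch');
```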
Comment for this answer: Empty result set, and I can't figure out why.
I have uploaded the test table: http://cdn.virtuosoft.eu/stackoverflow/19393071/entries.sqlite
Comment for this answer: OK, you are right, the problem was that the temporary table already existed (and it needs `card_id = '02'` instead of `2`, so it was empty). It returns 18420, which equals 5:07. The downside is that it only calculates one card. Thanks, vote went.
Here is another answer: It would be possible to compute which times are even and odd dynamically, but it's easier if we put this information into the table itself:
```ALTER TABLE MyTable ADD odd;
UPDATE MyTable
SET odd = (SELECT COUNT(*) % 2
FROM MyTable AS c
WHERE c.card_id = MyTable.card_id
AND c.time < MyTable.time);
```
Then it is possible to simply sum the times for each card, multiplying the value by ```-1``` for even times.
(Because the result is a time difference, adding ```julianday('00:00')``` is required to get back to an absolute time value.)
```SELECT card_id,
(SELECT TIME(SUM(julianday(time) * CASE odd WHEN 0 THEN -1 ELSE 1 END)
+ julianday('00:00'))
FROM MyTable
WHERE card_id = cards.card_id
) AS total_time
FROM (SELECT DISTINCT card_id
FROM MyTable) AS cards;
03|07:34:00
01|07:56:00
02|05:07:00
```
Comment for this answer: Thank you very much for your answer, but I have accepted LS_dev's because it was simpler.
|
Title: Need assistance with a query that looks at 1 table but might need a subquery or join
Tags: sql;join;subquery
Question: I have table with structure as this:
```AssignID, type, PosID
1a, e, a
1b, et, a
2a, e, b
2b, et, b
```
I want my result to look like this:
```AssignID, type, PosID, NewColumn
1a, e, a, 1b
2a, e, b, 2b
```
To give a little extra detail: my data resembles the first data set. Basically PosID will be duplicated, once with an 'e' record and once with an 'et' record. 'et' is the child of the 'e', so I would like to eliminate the extra line and display my result as one line per 'PosID', creating a new column with the 'AssignID' of the 'et' record. Hope it makes sense.
Comment: You meant to say `type` will have a duplicate?
Here is another answer: Does this do what you want?
```select AssignID, type, PosID, replace(AssignID, 'a', 'b') as new_column
from t
where type = 'e';
```
Here is another answer: The following query would generate your expected output:
```SELECT
MIN(AssignID) AS AssignID,
MIN(type) AS type,
PosID,
MAX(AssignID) AS ExtraColumn
FROM yourTable
GROUP BY
PosID;
```
But, it is not clear whether the above logic would work with all your actual data. You should elaborate more on why ```e``` appears as the retained ```type```, rather than ```et```.
Demo
Comment for this answer: I don't understand your requirements then. If you want to get here, then you should edit your question and explain the logic better.
Comment for this answer: Thank you for the reply Tim, not quite what I am looking for. I've revised my original post; I think it explains things better now. Let me know your thoughts. Thanks
|
Title: how to use Pageable Interface to avoid printing range 1-9999
Tags: java;swing;netbeans;printing
Question: I am trying to print a ```JPanel``` called ```print_p```. It contains a table and some labels.
The problem is that the print dialog shows a page range of 1-9999.
How can I fix this issue?
```private void printCard(){
PrinterJob printjob = PrinterJob.getPrinterJob();
printjob.setJobName(" Test Report ");
printjob.setPrintable (new Printable() {
@Override
public int print(Graphics pg, PageFormat pf, int pageNum){
pf.setOrientation(PageFormat.LANDSCAPE);
if (pageNum > 0){
return Printable.NO_SUCH_PAGE;
}
Graphics2D g2 = (Graphics2D) pg;
g2.translate(pf.getImageableX(), pf.getImageableY());
g2.translate(0f, 0f);
print_p.paint(g2);
return Printable.PAGE_EXISTS;
}
});
if (printjob.printDialog() == false)
return;
try {
printjob.print();
} catch (PrinterException ex) {
System.out.println("NO PAGE FOUND."+ex);
}
}
```
Comment: Using `Printable`, you can't; it's not an error. Until the `Printable` is printed, there is no way to determine the number of pages that might be printed. Take a look at the [`java.awt.print.Book`](http://docs.oracle.com/javase/7/docs/api/java/awt/print/Book.html) API instead...
Here is the accepted answer: When using ```Printable```, this is expected behaviour, as the dialog has no idea of how many pages might be printed, as nothing about the print job has been processed.
You need to use the ```Pageable``` interface. This allows you to collect a series of ```Printable```s, each of which represents a single page within the virtual book.
For a ready made implementation, you can take a look at ```java.awt.print.Book```
Updated with example
```import java.awt.FontMetrics;
import java.awt.Graphics;
import java.awt.Graphics2D;
import java.awt.print.Book;
import java.awt.print.PageFormat;
import java.awt.print.Printable;
import java.awt.print.PrinterException;
import java.awt.print.PrinterJob;
import java.util.logging.Level;
import java.util.logging.Logger;
public class PrintTest {
public static void main(String[] args) {
PrinterJob pj = PrinterJob.getPrinterJob();
pj.setJobName("Book 'em Danno");
PageFormat pf = pj.defaultPage();
pf.setOrientation(PageFormat.LANDSCAPE);
Book book = new Book();
for (int index = 0; index < 10; index++) {
book.append(new Page(index + 1), pf);
}
pj.setPageable(book);
if (pj.printDialog()) {
try {
pj.print();
} catch (PrinterException ex) {
ex.printStackTrace();
}
}
}
public static class Page implements Printable {
private int page;
public Page(int page) {
this.page = page;
}
@Override
public int print(Graphics graphics, PageFormat pf, int pageIndex) throws PrinterException {
Graphics2D g2 = (Graphics2D) graphics;
g2.translate(pf.getImageableX(), pf.getImageableY());
g2.translate(0f, 0f);
FontMetrics fm = g2.getFontMetrics();
String text = Integer.toString(page);
double y = (pf.getImageableHeight() - fm.getHeight()) + fm.getAscent();
double x = (pf.getImageableWidth() - fm.stringWidth(text)) / 2d;
g2.drawString(text, (float)x, (float)y);
System.out.println(pageIndex);
return PAGE_EXISTS;
}
}
}
```
Comment for this answer: I got the point, but I have been trying to insert the Pageable interface into my code.
I added
Book bk = new Book();
bk.append(this, pageFormat);
printJob.setPageable(bk);
but the dialog wasn't run.
Comment for this answer: It works well now, but the tables had some columns out of the page. Can I solve this without resizing the JPanel's contents?
Comment for this answer: @HabibAl-Ghoul Are you still having problems? I've updated the answer with a runnable example which seems to work just fine for me...
Here is another answer: You can do this with ```Printable``` as well; just use
```import javax.print.attribute.HashPrintRequestAttributeSet;
import javax.print.attribute.PrintRequestAttributeSet;
import javax.print.attribute.standard.PageRanges;

PrintRequestAttributeSet attribs = new HashPrintRequestAttributeSet();
attribs.add(new PageRanges(firstPageIndex + 1, lastPageIndex + 1));
```
and when calling the print dialog:
```pj.printDialog(attribs)
```
|