Rubens Silva · Jun 2, 2017

By default, Base64Decode and Base64Encode are functions used to decode and encode datatypes, or better said... STRINGS.

Since you want to encode a stream, the decoder must understand that it should continue from the last chunk position instead of assuming a new string; otherwise you'll get a corrupted result.

Here's how the XML Writer outputs encoded binary data:

/// <method>WriteBase64</method> encodes the specified binary bytes as base64 and writes out the resulting text.
/// This method is used to write element content.
/// Argument:
/// - <var>binary</var> The binary data to output. Type of %Binary or %BinaryStream.
Method WriteBase64(binary) As %Status
{
  If '..InRootElement Quit $$$ERROR($$$XMLNotInRootElement)

  If ..OutputDestination'="device" {
    Set io=$io
    Use ..OutputFilename:(/NOXY)
  }

  If ..InTag Write ">" Set ..InTag=0

  If $isObject(binary) {
    Do binary.Rewind()
    Set len=12000
    While 'binary.AtEnd {
      Write $system.Encryption.Base64Encode(binary.Read(.len),'..Base64LineBreaks)
    }
  } Else {
    Write $system.Encryption.Base64Encode(binary,'..Base64LineBreaks)
  }

  If ..OutputDestination'="device" {
    Use io
  }

  Set ..IndentNext=0

  Quit $$$OK
}
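For the reverse direction, here's a minimal decoding sketch (an assumption-laden example, not library code): it presumes the encoded input contains no line breaks and reads chunks whose size is divisible by 4, so every chunk holds only whole base64 groups.

ClassMethod DecodeBase64Stream(encoded As %Stream.Object, decoded As %Stream.Object) As %Status
{
  Do encoded.Rewind()
  While 'encoded.AtEnd {
    // 16000 is divisible by 4, so each chunk decodes independently
    Set len=16000
    Do decoded.Write($system.Encryption.Base64Decode(encoded.Read(.len)))
  }
  Quit $$$OK
}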
Rubens Silva · Jun 2, 2017

Try generating an error with code 5001: $$$ERROR($$$GeneralError, "your custom message").
This will give you "ERROR #5001: your custom message"; fetch the text with GetErrorText and $replace the "ERROR #5001: " part with "".

Note that if you have multiple errors inside a single status you'll need to fetch and $replace them individually.
That's because I don't think you can generate custom error codes. I would delegate the errors to your application layer instead.
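A minimal sketch of the idea:

set sc = $$$ERROR($$$GeneralError, "your custom message")
set text = $system.Status.GetErrorText(sc)
// Strip the generic prefix so only the custom message remains
write $replace(text, "ERROR #5001: ", "")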

Rubens Silva · Jun 7, 2017

2010 confirmed. Anyone with earlier versions?
W $System.Version.GetMajor()
2010
 

Rubens Silva · Jun 14, 2017

Use #server if you want to wait for a response, but be warned that JavaScript is single-threaded, and using #server with a left-hand side (LHS) variable will block the current thread.

If you don't specify an LHS you can use #call instead, which tells the CSP Gateway to execute the request asynchronously.

More details here: http://docs.intersystems.com/latest/csp/docbook/DocBook.UI.Page.cls?KEY…


If you need something closer to a callback, then you must do your callback on the server using &js< /* your javascript code here */ >. This way the server returns "runtime" JavaScript that executes the remaining operations on the client side.
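A minimal sketch of that pattern, using a hypothetical method on a CSP page class:

ClassMethod SaveOrder(id As %String)
{
  // ... server-side work here ...
  // JavaScript sent back to the browser and executed when the hyperevent returns
  &js<alert('Order saved.');>
}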
 

Rubens Silva · Jun 20, 2017

cls/My/Deep/Class.cls

I don't think subdirectories should be applied to routines: routines were never meant to have submodules or subpackages, and dots can be part of their names. Besides, if you need that kind of complexity, you'd rather keep things organized with classes than with routines.
 

I DO NOT recommend using src unless you want to mix back-end and front-end code, or you want to keep the server code in a separate repository.
Here is a scaffolding based on React for better understanding.
my-app-project/
    package.json
    server/
        cls/
        mac/
        int/
        csp/  <- this is our build path; nothing should be added manually because everything is handled by the bundler
    scripts/
        test.js
        build.js
        dev.js
    config/
        webpack.config.dev.js
        webpack.config.prod.js
        webpack.config.test.js
    src/
        components/
            Header/
                index.js
                Header.js
                HeaderSearchBar.js
            Footer/
                index.js
                Footer.js
                FooterCopyright.js
            AppContainer/
                index.js
                AppContainer.js
        containers/
            App.js
        tests/
            components/
                Header/
                    Header.js
                    HeaderSearchBar.js
                Footer/
                    Footer.js
                    FooterCopyright.js
                AppContainer/
                    AppContainer.js
You can use folders to separate client and server code inside the same project. You can even structure your project using a monorepo approach if you want to keep multiple application modules together.
Now, since React will be using webpack's hot module replacement along with webpack-dev-middleware, which builds everything in memory, your Caché server only needs to follow SPA conventions and provide consumable data.
There's a catch though: whenever the developer builds a new production version (using webpack.config.prod), it's mandatory to delete the older bundle and import the project back into Caché, to keep the server source in sync with the project.
     

Rubens Silva · Jun 20, 2017

http://docs.intersystems.com/latest/csp/docbook/DocBook.UI.Page.cls?KEY…
Check the "Configuring an ECP Application Server" section. However, I advise you to read everything.
Remote databases should be created on the client instance (A). They represent the connection to the local database on the server instance (B).
If you follow the steps from the guide, you'll notice when creating a remote database that the wizard asks you for a server; that's where you should select the ECP server you just defined.
Now create a local database on the client instance if you need to store something on it (like, let's say... code). Finally, create a namespace; this is where you define whether to use the local or the remote database (for GLOBALS AND/OR ROUTINES).
CAUTION: If you configure globals to use the remote database, any changes you make on the client instance will be reflected on its provider counterpart. The same applies to routines.
EDIT: Answering your question: no, as far as I know, with this method you can't have more than one remote database for globals or routines. But I think you can work around it by defining another namespace, backed by another remote database that links to a local database on instance C, and mapping what you want.
But you'll be responsible for dealing with name conflicts.

Rubens Silva · Jun 21, 2017

Is the file using a BOM? If so, you can check the header for the following signature: EF BB BF


This can be described as: $c(239, 187, 191)
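A minimal sketch of that check (the file path is hypothetical):

set file = ##class(%Stream.FileBinary).%New()
do file.LinkToFile("C:\temp\input.txt")
// The first three bytes of a UTF-8 file written with a BOM are always EF BB BF
if file.Read(3) = $c(239, 187, 191) write "UTF-8 BOM detected", !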

Now keep in mind that most editors have abandoned the BOM in favor of detection heuristics over two- and three-byte sequences, used as a fallback. Fallback, because many editors assume you're already working with UTF-8: they won't handle some charsets well, and they won't output BOM characters unless you explicitly tell them to use the desired charset.
 

You can try checking it against the US-ASCII table, which spans code points 0 to 127; however, that still wouldn't be 100% conclusive about the stream containing UTF-8 characters.

Rubens Silva · Jul 3, 2017

Although Caché does have a %DataTypes layer for SQL, the database engine itself is purely based on globals, which are loosely typed.

Thus, what I can tell you is:
Local variables (in memory) are limited by default to around 32 kilobytes; with long strings enabled, that limit rises to roughly 3.6 megabytes.

So, straight from the documentation:
 

Caché supports two maximum string length options:

  • The traditional maximum string length of 32,767 characters.

  • Long Strings maximum string length of 3,641,144 characters.

Globals can go way beyond that, since they're persisted.
As for how huge a number can get: I suppose you're talking about floating-point precision.
This might help you.
And this explains how Caché manages variables.
Caché also gives you the possibility to redefine the memory allocation size for the current process.
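A quick way to inspect both limits from a terminal, as a sketch:

// Maximum local string length: 32767, or 3641144 with long strings enabled
write $system.SYS.MaxLocalLength(), !
// Memory allocation for the current process, in kilobytes (settable via $zstorage)
write $zstorage, !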

Rubens Silva · Jul 3, 2017
set key = ""
set var1 = "code2"
for  {
    set key = $order(^myglobal(key), 1, list)
    quit:key=""
    if $listget(list, 1) = var2 kill ^myglobal(key)
}

Use $order to iterate over global subscripts.

Rubens Silva · Jul 4, 2017

Instead of creating multiple repositories, can't you just create a single repository and keep all the projects inside it?
This is what a monorepository approach does.
Big companies like Google opted for this approach because they found it too hard to manage issues across multiple repositories, since one would have to clone the affected repository along with a whole tree of dependencies just to make it testable. And that's only one use case.
Some few examples:
https://eng.uber.com/ios-monorepo/
https://medium.com/@pejvan/monorepos-85e608d43b57

https://blog.ghaering.de/post/monorepo-march/
Now you must consider whether that's an option for your company: using a monorepo actually seems to be a trend, and trends can lead you into traps.

Rubens Silva · Jul 12, 2017

"The following example uses relative dot syntax (..) to refer to a method of the current object."
If so, it's pretty misleading. But I accept your answer and will mark it as well.

Rubens Silva · Jul 18, 2017

If you can't release memory by adjusting your own source code, then what you need to do is expand its size using $zstorage or the configuration.
Also, is that a recursive call?

Rubens Silva · Jul 28, 2017

Since no one has answered that yet, maybe there isn't a known way of doing it.
If that's really the case, I suggest you create your own procedure.
This could work as follows:
 

1 - Routine B watches a global generated by routine A, using a scheduled task.

2 - Process A triggers an exception.

3 - The exception handler in routine A executes a subroutine or method that uses $stack to capture execution data (see the sketch after this list).

4 - Routine A stores the data into a global and quits abnormally.

5 - Routine B watches for new entries in the global and marks them as processed.
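A minimal sketch of step 3, assuming a hypothetical ^ErrorLog global:

CaptureStack
  set id = $increment(^ErrorLog)
  // $stack(-1) returns the current number of stack levels
  for level = 1:1:$stack(-1) {
    set ^ErrorLog(id, level, "place") = $stack(level, "PLACE")
    set ^ErrorLog(id, level, "source") = $stack(level, "MCODE")
  }
  quit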

Rubens Silva · Jul 31, 2017

Such a roundtrip for something that should simply be configurable.
I don't think you should depend on INetInfo, since you normally know where you're hosting your service. Make the address configurable and you'll prevent headaches; after that, you just need to concatenate it with your CSP file path.
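For example, a sketch using a hypothetical configuration global:

// Set once, during installation or configuration
set ^AppConfig("PublicHost") = "https://myapp.example.com"
// Then build URLs from it instead of asking INetInfo
set url = $get(^AppConfig("PublicHost"))_"/csp/myapp/page.csp"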

Rubens Silva · Aug 3, 2017

I finally got it to work as I desired. Here's the source code; take a look at the output routine.
This will:

  • Limit the usage of ! to one per write.

  • Prevent an initial write !, since I don't want to skip any line at the beginning.

  • Display compiler messages correctly.

  • Prevent writing new lines for empty buffers.
Include portutils
Class Port.SourceControl.ExtendedHooks [ Abstract ]
{

ClassMethod Call(
  sourceControl As %Studio.Extension.Base,
  hookName As %String = "",
  parameters... As %String) As %Status [ Internal, ProcedureBlock = 0 ]
{
  new sc, content, implementer, alreadyRedirected, currentMnemonic, childSC, expectingContent, firstLine, errorText

  set sc = $$$OK
  set childSC = $$$OK
  set implementer = ##class(Port.Configuration).GetExtendedHooksImplementer()
  set alreadyRedirected = ##class(%Device).ReDirectIO()
  set expectingContent = 0
  set firstLine = 0

  if '##class(%Dictionary.CompiledMethod).%ExistsId(implementer_"||"_hookName) return sc
  set content = ##class(%Stream.GlobalBinary).%New()

  if implementer '= "" {
    write !, "Port: "_$$$FormatMsg("Port Log Messages", $$$RunningCustomHook, hookName, implementer)

    try {
      set currentMnemonic = "^"_##class(%Device).GetMnemonicRoutine()
      use $io::("^"_$zname)
      do ##class(%Device).ReDirectIO(1)
      set sc = $classmethod(implementer, hookName, sourceControl, parameters...)
    } catch ex {
      set content = ""
      set sc = ex.AsStatus()
    }
  }

  if alreadyRedirected {
    do ##class(%Device).ReDirectIO(1)
    use $io::(currentMnemonic)
  }

  if $isobject(content) {
    do content.OutputToDevice()
  }

  write !

  if $$$ISOK(sc) {
    write "Port: "_$$$FormatMsg("Port Log Messages", $$$HookReturnedOK, hookName)
  } else {
    set errorText = $System.Status.GetOneStatusText(sc)
    write "Port: "_$$$FormatMsg("Port Log Messages", $$$HookReturnedError, hookName, errorText)
    set childSC = sc
    set sc = $$$PERROR($$$FailedWhileRunningExtendedHook, hookName)
    set sc = $$$EMBEDSC(sc, childSC)
  }
  return sc

rchr(c)
  quit
rstr(sz,to)
  quit
wchr(s)
  do output($char(s))
  quit
wff()
  do output($char(12))
  quit
wnl()
  if firstLine = 0 set firstLine = 1
  else  set firstLine = -1
  do output($char(13,10))
  quit
wstr(s)
  do output(s)
  quit
wtab(s)
  do output($char(9))
  quit
output(s)
  // Skip writing the first !; we leave it to our own write.
  if firstLine = 1 quit
  // A remaining write ! always arrives as a standalone buffer, so we can check for equality.
  if s = $c(13,10) {
    // However, we can only write it if the next buffer has actual content,
    // so we defer it to the next call, where we can assert that.
    set expectingContent = 1
  // This catches writes with an embedded CRLF (like the compiler ones).
  } elseif $extract(s, 1, 2) = $c(13,10) {
    set expectingContent = 1
    do output($replace(s, $c(13,10), ""))
    set expectingContent = 0
    quit
  } elseif $length(s) > 0 {
    // After deferring, we can finally write the CRLF and the content, as long as it's not empty.
    if expectingContent = 1 {
      set expectingContent = 0
      do content.WriteLine()
      do content.Write($$$FormatText("Port (%1): ", hookName))
    }
    // Writes without ! must stay on the same line.
    do content.Write(s)
  }

  quit
}

}
 

Rubens Silva · Aug 3, 2017

Since you said it's arbitrary, where did you get that +3? Is that a constant or variable value?
Maybe if you update your line to:
ZBREAK *LINE:"T"::"ZWRITE LINE(I+C)"

where C holds your other arbitrary offset (also provided as a variable).
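For instance (assuming C is defined in the process before the watchpoint fires):

set C = 3
ZBREAK *LINE:"T"::"ZWRITE LINE(I+C)"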

Rubens Silva · Aug 4, 2017

%request.Content will provide you the raw string (or stream) contained in your request payload.
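A minimal sketch that handles both cases:

if $isobject(%request.Content) {
    // Stream: read it in chunks
    set body = ""
    while '%request.Content.AtEnd {
        set body = body_%request.Content.Read(4096)
    }
} else {
    // Plain string
    set body = %request.Content
}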

Rubens Silva · Aug 7, 2017

I only return the result directly when I'm absolutely sure that the method cannot throw any error. Otherwise I follow this rule:
1 - Obligatory arguments first.

2 - The result second.

3 - Parameters with initial values third.

4 - Rest parameters last.
(obligatoryParamA, obligatoryParamB, obligatoryParamC, result, optionalA, optionalB, rest...)
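In ObjectScript terms, that ordering might look like this hypothetical signature, where Output marks the by-reference result:

ClassMethod FetchItems(className As %String, Output result As %ListOfObjects, max As %Integer = 100, filters... As %String) As %Status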

Rubens Silva · Aug 8, 2017

That's quite a topic for complex discussions.

  • Do you use an issue tracking / collaboration system? If so which one. Any you would recommend or immediately dismiss based on personal experience?

I use Github plus repository issues.

  • How do you keep track of large code bases? Thousands of folders named backup1, backup2, ..., SVN, git?

Git.

  • Do you have a development server to which you commit and test features, or do you rather run a local copy of Caché and implement features locally first, then push to the server?

Locally implemented and tested first, then pushed to the server.

  • Bonus question: How do you handle legacy code (and I mean the "uses lots of $ZUs" kind of legacy code)? Leave it untouched and try to implement new features elsewhere? Rewrite the entire thing?

It depends: the more complex the code is, the more I consider creating modern API wrappers instead of rewriting it.

Rubens Silva · Aug 8, 2017

NOTE:
Experienced programmers try to keep away from GOTO because it can greatly break the consistency of the code's control flow. This is a basic concept of structured programming.
So, before you think about using it, try to achieve the same effect with subroutines and methods instead.
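For example, a hypothetical jump could be replaced like this:

Main
  // Where you might be tempted to "goto Fail", call a subroutine and quit instead
  if 'valid {
    do Fail()
    quit
  }
  write "processing...", !
  quit
Fail()
  write "invalid input", !
  quit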

Rubens Silva · Aug 20, 2017

Hello Coty.
I noticed that you starred my github repository and I thank you for that. :)


Back to your question: I think you're detecting changes in an unusual way, since you say you can't trigger an action when modifying static files. Just so you know, as long as you're working with Studio's SourceControl API, you should be able to do whatever you want whenever an item is modified; you're even free to decide how to restrict the implementation, regardless of the type of item you're updating.
Look at this part to understand how it's done.

About your use case: we're actually testing Port with this development format. We have one code base (our development server) and multiple namespaces simulating different customer configurations with mock data (not mock, actually: their test data).
Even though this model works, our analysis shows it can get pretty frustrating for users coming from distributed version control, because they notice multiple developers interacting with their "repository". Still, it's a step ahead of not versioning at all.
However, the team is expected to migrate all their source to projects, since Port nags the user about trying to save the default project and even detects items already owned by other projects. This forces the whole team to prioritize organizing their code base.

Rubens Silva · Aug 29, 2017

Hello Benjamin,

This is how I bound a class to some git CLI commands in earlier versions of my project.
The main tip here is to use $ZF along with ">" to redirect the command's output to a file. This way you can make Caché aware of what happened when the command was executed.


You'll notice that you can even create a custom query for operations like log, diff, etc.
If you don't want to deal with the logic behind the outputs, you can simply use RunCommandViaZF from %Net.Remote.Utility.
But remember that older versions don't have this method.
Also, when reading the command strings you'll notice a lot of {PLACEHOLDERS}. You don't need to implement them for this to work; just rewrite the commands to use static parameters instead.

They are:
{VCS} - The absolute path to the git executable.

{SLASH} - Resolves to \ or /, depending on the OS.

{Pn} - Where n is a sequential number; these are the parameters needed to call the command.
And finally, here's a sample:
do $zf(-1, """C:\Program Files (x86)\Git\bin\git.exe"" --work-tree=""C:\Projects\Test"" --git-dir=""C:\Projects\Test\.git"" add cls > ""outputfilepath"" 2> ""errorfilepath""")
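After the call returns, a minimal sketch for reading the redirected output back into Caché (the file name is hypothetical):

set file = ##class(%Stream.FileCharacter).%New()
do file.LinkToFile("outputfilepath")
while 'file.AtEnd {
    write file.ReadLine(), !
}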

Rubens Silva · Aug 29, 2017

Hello Athanassios,

I think you're on the right track. But that should be only the beginning.


To capture "writes" from Caché you need to redirect your IO and write the data back to the device you binded your Python interpreter. This is your writer device. If you need help about how to configure your writer device, check the Job Command Example.
Remember that it's your responsibility how you'll implement a buffer and how much it can store before writing it back to the implementer. I do recommend using a stream here.

Now I don't know if simply by calling the binded method you're able to see it's output before returning. If it doesn't, you problably need to open a TCP connection from the implementer's side too and listen to the writer device using a separated thread.

Rubens Silva · Sep 5, 2017

Hello Kevin,

From my experience you cannot use Studio's output window to ask the user for input, and I think that's because the output window is not really a terminal device.
I also tried reading input from this device, without success. I hope I'm wrong and you can find an answer (which would answer mine as well).

Rubens Silva · Sep 19, 2017

Whoa! This is even better than what I suggested. Thank you whoever you are!