
Schwab is no longer providing price file downloads that some Axys users have relied on for decades.

When I set out to write this, I had some trouble deciding on the title.  At first, I considered “Schwab Hamstrings Pricing for Advent Users”, but that’s inflammatory and not entirely accurate, so I couldn’t do that.

I could just as easily have titled this blog “Stubborn Axys Users Refuse to Embrace Benefits of ACD Interface” or “Axys Users Slow to Hire Consultants to Address Schwab Point-to-Point Interface Changes”, but in truth Schwab is discontinuing its support for Axys in the data it provides directly to customers via the Schwab download, so I had to go with “Schwab Discontinuing Support of Axys Point-to-Point Interface – Again?”  Besides, picking one of those other titles would have made me write a blog with a different message.  I’d like to clarify in advance that this issue only impacts the Schwab point-to-point interface and has no effect on those who receive their Charles Schwab data from Advent’s ACD interface.

I need to apologize to those who have read my blog regularly in the past.  First, I am sorry I haven’t posted anything in a while.  Additionally, I must apologize that this blog may not seem particularly newsworthy for some.  You may even be thinking, “Didn’t this happen eons ago?”  The answer is yes and no.

About twenty years ago, there was some drama about the point-to-point interface that Charles Schwab provided to its customers.  Advent, being Advent, may have been perceived as attempting to screw Charles Schwab and its customers to make more money.  Schwab, being Schwab, sued Advent, and the court granted a preliminary injunction – to paraphrase, the judge told Advent, “You can’t do that.”

According to what I can dig up now, the firms quit wasting each other’s time and money nearly seven months after that preliminary injunction, coming to a compromise that allowed Schwab to continue to provide their point-to-point data for a period of time.  In my recollection of the events, it seemed much more drawn out.  Fast forward twenty years, and now everyone who is still relying on this particular set of data directly from Schwab’s download is back to square one.

Back then, the underlying issue was that Advent didn’t want Schwab to continue providing the data without going through ACD, and Schwab wanted to continue providing the data to satisfy their existing customers, who had grown dependent on getting that data via the point-to-point interface.  From the perspective of those Axys customers, it is easy to understand their position then and now…  it is pretty much free, and it works.  Why would we want to change that?

Somehow, for more years than I would have thought possible, these holdouts – who either saw no reason to fix something that wasn’t broken or were too cheap to move to ACD – continued to do what they had been doing for decades.  I never thought this would have gone on as long as it did.  Alas, as they say, all good things must come to an end.  That is apparently what is happening now.

Schwab is in the process of stopping production of the files that feed Dataport for this subset of Axys users.  Last month, they stopped producing the price (CSMMDDYY.pri) files; they are also planning to stop producing other key files, such as transactions, sometime in 2023.  The sudden absence of the price file no doubt caused some difficulties for those still dependent on it.  As a result, a couple of firms reached out to me.

After a brief discussion with the first firm, I agreed to automate the creation of the missing price file.  According to my customers, both Advent and Schwab were unwilling to assist them with the issue.  Advent’s not planning to make changes to their interface to take in the new Schwab files, and Schwab’s not planning to help clients transform the files into something that can be ingested directly into Dataport.

It sounded way too easy for someone with my experience, and I thought it would only take a “few” minutes.  Somewhat embarrassingly, I spent a few hours creating the automation necessary to do the translation.  However, in a subsequent implementation for another customer, I was able to have a meet-and-greet call and a follow-up call to implement and test the solution in their environment very quickly.  All of it was accomplished in a couple of hours, on the very same day the prospective customer contacted me, leading me to believe that future implementations may be performed in a matter of minutes.

Those dealing with what is currently limited to a pricing issue have a handful of choices, none of which are fun to deal with when you need yesterday’s prices now:

  1. The most obvious choice: consider implementing the Schwab Interface via ACD.  It might be worth it.  I am not kidding.  I have plenty of clients that use ACD.
  2. Use a third-party pricing service like IDC/ICE or Telemet.
  3. Key the prices in manually.  I am not recommending this, but it is certainly an option.
  4. Utilize automation to recreate the missing price file (CSMMDDYY.pri) from the security file (CRSYYYYMMDD.SEC) now provided by Schwab.  This isn’t very difficult, and it is what I have done for those who have asked me to resolve the issue for them; a sketch follows this list.
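
For those wondering what the fourth option actually looks like, here is a bare-bones sketch written in Visual Basic, the same language as the other source code on this blog.  To be clear, this is not the translator I built for my clients: the folder locations, the column positions in the security file, and the exact layout of the price records are all placeholders you would need to adjust after comparing your own CRS security file to an old CS price file.

Imports System.Collections.Generic
Imports System.IO

Module PriceFileSketch

  Sub Main()

    ' A rough sketch of option 4: read the Schwab security file and
    ' write an Axys-style price file. The paths and field positions
    ' below are assumptions; verify them against your own files.

    Dim SecurityFile As String = "f:\axys3\csw\CRS20221130.SEC" 'hypothetical location
    Dim PriceFile As String = "f:\axys3\csw\CS113022.pri"       'hypothetical location

    Dim output As New List(Of String)

    For Each line As String In File.ReadAllLines(SecurityFile)

      Dim fields() As String = line.Split(","c)
      If fields.Length < 3 Then Continue For 'skip header or short records

      Dim symbol As String = fields(0).Trim() 'placeholder column
      Dim price As String = fields(2).Trim()  'placeholder column

      ' Format each price record. Match this line against a .pri file
      ' produced before the feed was discontinued.
      output.Add(symbol + " " + price)

    Next

    File.WriteAllLines(PriceFile, output)

  End Sub

End Module

The heavy lifting is not the code; it is confirming the field mapping and making sure the output matches what Dataport expects, which is exactly where the few hours I mentioned above went.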

Addressing the pricing issue alone is a stop-gap solution at best.  The larger issue down the road is translating the transaction files, which will need to be done in 2023.  At my clients’ request, I have agreed to look into doing this for them as well, and I will most likely do it.  With my experience building Axys interfaces and doing the requisite transaction mapping, et cetera, it probably won’t be that big a deal, but it will certainly be more complicated and time-consuming than the Price File Translator was to create.

As always, if this issue is something your firm needs assistance with, please feel free to contact me directly.



About the Author: Kevin Shea is the Founder and Principal Consultant of Quartare; Quartare provides a wide variety of technology solutions to investment advisors nationwide.

For details, please visit Quartare.com, contact Kevin Shea via phone at 617-720-3400 x202 or e-mail at kshea@quartare.com.


I like to think that the greater purpose of much of the work I do with investment professionals is to simply show them what is possible. If you understand what can be done and how, I trust that you will have no problem coming up with many things that you want to do. Process automation, workflow improvements, integration, custom apps, and enhanced reporting are all easier to imagine and implement in your own environment when you have the benefit of the insights and perspective of relevant and significant user experience.

Like many before it, this article was written with that idea in mind. There is a very practical component (the source code below) which a small number of users may be able to take advantage of; the larger audience of managers, investment operations and technology professionals will hopefully come away with the idea of what can be done and roughly how it can be done, without necessarily planning to do this particular thing. More likely, readers will file away the knowledge of this article and revisit it later.

Most readers of this blog will not read it on the day I post it, or any time soon. Some day in the future, when the information in this particular blog becomes relevant to them, they will run a search and eventually find this article.

Audit Trail Usage

The Audit Trail (didpost.aud) file, a critical component of Advent APX and Axys, empowers users to review any transactions posted to portfolios in the system.  It can also facilitate small- and large-scale removal of transactions that are posted in error.  Among some Advent users, proactive management of the audit file is almost nonexistent until they hit the wall: either the file grows to an unmanageable size, or they find it useful and perhaps even necessary to create subsets of the file as part of an audit.

One approach to proactively managing the file in Axys is to have a routine process whereby the file is copied quarterly or monthly and the current file is deleted.  In this scenario, the next transactions posted create a new Audit Trail file.  And if you need to reference older transactions, you can temporarily copy the current file to another file and restore the file you need to access as didpost.aud.  When you are done accessing the old data, you revert things by copying the original Audit Trail back to didpost.aud.  Though the process is a bit clunky, somewhat risky, and manual, there are probably many firms using this approach.  You could easily automate the processes I have described to create a more elegant solution, as sketched below.
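
To give you an idea of how little code that automation requires, here is a minimal Visual Basic sketch of the quarterly rotation described above.  The f:\axys3\aud location matches the environment referenced later in this article, but the archive folder and file-naming convention are assumptions you would adjust to suit your firm.

Imports System.IO

Module RotateAuditTrail

  Sub Main()

    ' A sketch of the quarterly Audit Trail rotation described above.
    ' Folder and file names are assumptions; adjust to your install.

    Dim AuditFile As String = "f:\axys3\aud\didpost.aud"
    Dim ArchiveFolder As String = "f:\axys3\aud\archive\" 'hypothetical archive folder

    Directory.CreateDirectory(ArchiveFolder)

    ' Stamp the archive with the year and quarter, e.g. didpost_2019Q2.aud.
    Dim quarter As Integer = (Date.Now.Month - 1) \ 3 + 1
    Dim stamp As String = Date.Now.Year.ToString() + "Q" + quarter.ToString()

    File.Copy(AuditFile, ArchiveFolder + "didpost_" + stamp + ".aud")

    ' Removing the current file lets Axys start a fresh Audit Trail
    ' the next time a blotter is posted.
    File.Delete(AuditFile)

  End Sub

End Module

You would still want to confirm that no one is posting blotters before running something like this, just as the utility later in this article does.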

In the past, I created some automation to routinely export the audit trail to a CSV file and create separate audit files from those exported CSVs for pre-selected periods.  The code to do this is not complex, so I doubt I am the only one who has implemented this approach.  It worked fine for many years, but at some point it stopped working.  I am not sure exactly when, but I found that exporting the audit trail file in Axys 3.x via IMEX no longer worked reliably.  Any attempt to export the Audit Trail would abend, leaving you with only part of the exported file you needed.  Initially, I wasn’t sure whether the issue was limited to one of my customers, but after reviewing a few sites I work with, I found that the ability to export large audit trail files using IMEX no longer worked across the board.  The issue is an Axys issue and not necessarily an APX issue, but I haven’t tried to export the audit trail in APX recently.

This failure to export Audit Trails posed quite a problem.  Though the Audit Trail could be reviewed, and copy functions within the review windows worked, the size of the files made any kind of manual selection process onerous and prone to error.  Besides, what would you do if you did select the data?  I guess you could copy it into an empty trade blotter and save the blotter to a different file, but the trade blotter isn’t an Audit Trail file.

My assumption was that the integrity of the Audit Trail file was intact, but the function to export a larger file was broken.  Usually, I frown upon working with Advent files directly.  I prefer to work with exported files and consider that a best practice to avoid issues that can occur when the format of Advent files ultimately changes in later versions.  In this particular instance, I was forced to read and write AUD files directly since the export mechanism wasn’t working properly, but I was happy with the end result and thought the source code might be useful to others.

Source Code to Fix Axys Audit Trail Export Issues

The code below reads the Audit Trail (didpost.aud) file in its native format, creates a time-stamped backup of the original file in a backup folder within the f:\axys3\aud folder, and creates an annual audit file for each year alongside the original.  As some of you consider using this code, you may be concerned.  After all, the integrity of the Audit Trail is not really something you want to mess around with lightly.  You may even be thinking, “Just because you can doesn’t mean you should.”  Nevertheless, I created, carefully tested, and successfully implemented the code to fix the audit trail for some Axys users who realized they had this issue.

Imports System.IO

Module Module1

  Sub Main()

    ' This subroutine was created to address a problem in Axys 3.x.
     
    ' The routine serves two purposes. First of all, it addresses
    ' a problem where the Audit Trail can no longer be exported in
    ' its entirety.  The secondary purpose of the routine is building
    ' annual Audit Trail files for record-keeping purposes.
   
    ' This process is intended to be run annually.  When run, it creates
    ' a backup of the existing Audit Trail and then creates separate
    ' Audit Trail files for each of the prior years.  However, given
    ' the critical nature of the Audit Trail, I recommend that you
    ' make sure you have additional backups of the Audit Trail to
    ' ensure that you can return to the original if necessary.  When
    ' the process is complete, a separate .aud file exists for each
    ' year, including the current one; to leave the active Audit Trail
    ' with only the current year's records, replace didpost.aud with
    ' the current-year file.  Please note this code should be used
    ' carefully by users that completely understand its purpose.

    ' written in Microsoft Visual Studio's Visual Basic by
    ' Kevin Shea (aka AdventGuru) & updated 06/27/2019

    ' Disclaimer: This routine works fine for the specific instance
    ' it was created for, but could need additional modifications
    ' for different circumstances.

    Dim idate As String
    Dim d As Date
    Dim i As Long = 0
    Dim c As Integer
    Dim x As Integer
    Dim bufferpointer As Long = 0
    Dim ByteCount As Long = 0
    Dim CurrentYear As String = "0000"
    Dim NewYear As String = "1111"
    Dim ByteStop As Long = 0
    Dim ByteStart As Long = 0
    Dim size As Long
    Dim posts As Integer
    Dim InitialPostDate As Date
    Dim semicolon As String = ";"
    Dim CommentHeader() As Integer = {254, 250, 251, 252, 89, 0, 48, 0, 59, 0} '10 bytes
    Dim FileHeader() As Integer = {97, 117, 100, 49, 46, 48, 49, 48} '8 bytes
    Dim AuditTrailFolder As String
    Dim AuditTrailBackupFolder As String
    Dim AuditTrailFile As String
    Dim time As DateTime = Date.Now
    Dim format As String = "M_d_yyyy_HHmm"

    AuditTrailFolder = "f:\axys3\aud\"             'trailing backslash required
    AuditTrailBackupFolder = "f:\axys3\aud\backup\" 'trailing backslash required
    AuditTrailFile = "didpost.aud"

    System.IO.Directory.CreateDirectory(AuditTrailBackupFolder)
    MsgBox("Please make sure no users are posting blotters before continuing ...")

    My.Computer.FileSystem.CopyFile(AuditTrailFolder + AuditTrailFile,
    AuditTrailBackupFolder + time.ToString(format) + ".aud")

    MsgBox("Audit Trail Backup complete. Click okay to continue and analyze file.")

    Dim bytes() As Byte
    bytes = File.ReadAllBytes(AuditTrailFolder + AuditTrailFile)
    size = My.Computer.FileSystem.GetFileInfo(AuditTrailFolder + AuditTrailFile).Length

    MsgBox("File is " + size.ToString + " bytes long.")

    'put the file header in the buffer
    Dim buffer(size) As Byte

    For i = 0 To 7

      buffer(i) = FileHeader(i)

    Next i

    bufferpointer = 8
    ByteCount = 0
    ByteStart = 8

    Do While ByteCount < size

      ByteCount = ByteCount + 1
      c = 0

      For x = 0 To 9

        If ByteCount + x = size Then Exit Do
        If bytes(ByteCount + x) = CommentHeader(x) Then c = c + 1

      Next x

      If c = 10 Then

        If Strings.Chr(bytes(ByteCount + 84)) = semicolon And Strings.Chr(bytes(ByteCount + 32)) = "a" And Strings.Chr(bytes(ByteCount + 33)) = "t" Then

          idate = (Strings.Chr(bytes(ByteCount + 23)) + Strings.Chr(bytes(ByteCount + 24)) + "/" + Strings.Chr(bytes(ByteCount + 26)) + Strings.Chr(bytes(ByteCount + 27)) + "/" + Strings.Chr(bytes(ByteCount + 29)) + Strings.Chr(bytes(ByteCount + 30)))

        End If

        d = Convert.ToDateTime(idate)
        posts = posts + 1
 
        'if the post date falls in a new year, save the buffer as a distinct audit trail file and reset the buffer
        NewYear = d.ToString("yyyy")

        If posts > 1 And (NewYear <> CurrentYear) Then

          'a change in the current year to a new year triggers audit trail creation for the current year
          ByteStop = ByteCount - 1

          For i = ByteStart To ByteStop

            buffer(bufferpointer) = bytes(i)
            bufferpointer = bufferpointer + 1

          Next

          Dim s As New System.IO.FileStream(AuditTrailFolder + CurrentYear + ".aud", System.IO.FileMode.Append, System.IO.FileAccess.Write, System.IO.FileShare.ReadWrite)

          s.Write(buffer, 0, bufferpointer)
          s.Close()

          ByteStart = ByteCount
          CurrentYear = NewYear
          bufferpointer = 8

        End If

        If posts = 1 Then

          InitialPostDate = d
          CurrentYear = InitialPostDate.ToString("yyyy")

        End If

      End If

    Loop

    'We did everything but the current year.  Now let's do that too.
    For i = ByteStart To size - 1

      buffer(bufferpointer) = bytes(i)
      bufferpointer = bufferpointer + 1

    Next

    buffer(bufferpointer) = 0 'terminator byte; not included in the Write below

    Dim f As New System.IO.FileStream(AuditTrailFolder + CurrentYear + ".aud", System.IO.FileMode.Append, System.IO.FileAccess.Write, System.IO.FileShare.ReadWrite)

    f.Write(buffer, 0, bufferpointer)
    f.Close()

    MsgBox(posts.ToString + " posts analyzed.  The first post was " + InitialPostDate.ToString("MM/dd/yyyy") + ". The last post was " + d.ToString("MM/dd/yyyy") + ".")

    MsgBox("Annual Audit Trail file creation complete.")

  End Sub

End Module

Everyone has a different level of interest in the tech end of things. Some people want to understand it completely; others don’t want to know anything about it. I suspect that most of my readers fit somewhere in between those two extremes. On any given day, depending on what else is on your plate, you may lean towards one end or the other. In some cases, it simply makes more sense just to have someone like me take care of these things. If you have a problem with your Audit Trail that needs to be resolved, it may be one of those things.


About the Author: Kevin Shea is the Founder and Principal Consultant of Quartare; Quartare provides a wide variety of technology solutions to investment advisors nationwide.

For details, please visit Quartare.com, contact Kevin Shea via phone at 617-720-3400 x202 or e-mail at kshea@quartare.com.

With the end of Windows 7 support rapidly approaching, businesses that aren’t already on Windows 10 are quickly moving toward adopting it. Windows 7 support ends January 14, 2020. If you are familiar with my blog, you probably already know that even though I use Windows 10 on my primary PCs, I am not its biggest fan. Even so, I recognize the inevitability of Windows 10 for my clients. There simply isn’t another viable choice for financial services firms and other security- and compliance-sensitive businesses.

With that in mind, I have stopped howling at the moon for the time being, and if you share my sentiments about Windows 10, you probably should too. Accepting that the upgrade from Windows 7 to Windows 10 must be done, there is perhaps no time better than the summer to get it out of the way. The alternative, Windows 7 workstations in your office receiving no security updates from Microsoft come January, is much less appealing. So how do you make the best of this forced software reset?


You’ll need to take the opportunity to upgrade your systems in such a way that the disruptions associated with Windows 10’s shortcomings and fluid OS updates are as limited in duration as possible. To do so, you should place an emphasis on hardware that will support Windows 10 and other software well into the future. As someone who shares in, or possibly makes, the hardware decisions at your firm, you need to be sure your hardware is up to the task of running Windows 10 effectively.

Trust me. Spending money in the right places on hardware that runs Windows 10 will pay for itself. Like much of life, it isn’t necessarily how much you spend, but what you spend it on, that will determine how happy you are in the future.

IT is all about potential bottlenecks and eliminating any that you can. If you are knowledgeable enough, you can pick and choose what to focus on, or you can simplify things further by removing as many barriers to optimal performance as possible. When configuring workstations and servers, I prefer to do it in such a way that they never need to be upgraded over their useful life. My recommendations to help achieve that goal follow.

 

CPU
Your processor is the most obvious bottleneck for local processing of data, spreadsheets, file management, and other locally processed work such as Axys reporting. Picking a processor that performs poorly relative to a significantly more powerful processor available at a justifiable price point is a decision you will pay for as long as you own the PC. When evaluating CPUs, it is important to understand your primary applications and whether those apps require optimal single-thread performance or can utilize multi-core performance to its full potential. I frequently reference Passmark’s CPU benchmarks when choosing processors.

Armed with this information, you can make the right choice regarding what processor is best for you. Though I tend to lean towards Intel, AMD’s latest Ryzen processors are surprisingly powerful yet affordable. For example, when comparing the Passmark CPU Mark of AMD’s Ryzen 3700X and Intel’s i7-9700 @ 3.00GHz, AMD appears to deliver about 50% more processing power than Intel at the same price point. That is a huge difference and shouldn’t be ignored.

However, buying new workstations with Ryzen processors instead of Intel from traditional channels like Dell isn’t as easy. If you decide to go with Ryzen processors, you may need to make your purchases through a less traditional sales channel, and that may be an unpopular idea, since Dell is well-established as perhaps the most popular and reliable source of high-quality PC builds for businesses.

How fast is fast enough? That depends on the applications that are most important to you and your firm, but given what is available today and the overhead of Windows 10, I wouldn’t want a new PC that has a CPU Mark less than 10,000.  I’d prefer to see a new workstation configured with a CPU Mark somewhere between 12,500 and 20,000 depending on the type of heavy-lifting involved and the available budget.  Intel’s newer generation of processors (the i5-9500, i7-9700 and i9-9900) fill out this range.

 

Memory
It makes no sense to have less than 16GB of RAM on a new workstation running Windows 10 Pro. Though arguments can be made about whether you truly need that much, the current price of RAM makes this decision a no-brainer. For an extra $100, some may be tempted to go for 32GB of RAM in the name of future-proofing their configuration.

 

Motherboard
Your motherboard either empowers you to take advantage of the latest technology advances or hinders your ability to do so. Newer motherboards offer support for some of the fastest processors and have compatible expansion slots available for the latest NVMe storage devices, which you want in your new systems.

 

Storage Device aka Hard Drive
Last, but not least, one of the most important decisions you can make regarding your new system is ensuring that it supports the latest NVMe tech. The newest storage devices are cost-efficient game-changers that will make any system not running them seem soooo sloooow by comparison.

To understand the import of the latest NVMe technology, you should know that some of the fastest mechanical hard drives still in use today operate at roughly 1/5 of the speed of traditional SSDs that plug into a SATA port. In turn, the fastest traditional SSDs are roughly 1/7 the speed of today’s fastest NVMe devices. Multiply those two ratios (5 × 7) and an NVMe drive could potentially be 35x faster than the mechanical hard drive in your PC today.

If you are considering configuring a new PC with anything less powerful than one of the fastest NVMe drives available, please STOP. You can buy a 512GB NVMe drive for about $110, and that is more than enough storage space for most workstations.

 

Caveats
As always, there are caveats regarding what works with what, and how well.  There are different types of NVMe devices (4x, 2x, et cetera), and you must pair an NVMe storage device with a compatible motherboard to get optimal speeds.  It is also possible to double the performance of the fastest NVMe device by purchasing a motherboard capable of supporting dual NVMe drives in a RAID 0 configuration, but in that configuration a single NVMe failure would cause you to lose all of your data.

I generalize quite a bit here because this blog isn’t aiming to give every last detail related to NVMe drives, just the fact that you should have one.  If you want to learn more about the technical specifics of NVMe drives, this article from PC World should help.

 

Commitment to Dell
I still like to use Dell for many of my customers’ workstations and servers, but dealing with Dell is rarely a perfect customer experience. For example, Dell classifies hard drives into types 10 to 50, which I find both frustrating and potentially deceptive. I suspect the practice is intended to simplify decision-making for decision-makers, but frankly I want to know the specs of the drives I am putting in my systems, not the hard drive class Dell has assigned those drives.

If you are not familiar with Dell’s hard drive classes, you are likely to spend more time than you should searching the web for a description of them. I eventually found a link that did a better job describing Dell’s “classes”, but I really don’t need Dell to classify drives for me. I can do that myself.

Recently, I requested NVMe storage devices from Dell in the Optiplex and Precision lines, and received some Precision quotes with class 20 hard drives. If I had actually ordered those PCs, the class 20 drives would have hamstrung system performance. My advice regarding purchases from Dell or anyone else is to carefully review the specifications before placing your order. Making assumptions about what tech vendors are including in your systems is bound to leave you disappointed.

In summary, if you are moving to Windows 10 this year, make sure that your new PCs can support the fastest affordable 4x PCIe NVMe drives available, and don’t be cheap when it comes to processing power. A properly configured workstation should have a useful life of at least three years.  For most users, that configuration should include 16GB of RAM, a 500GB NVMe storage device paired with a compatible motherboard that facilitates maximum sequential-read speeds of 3,500MBps, and an Intel i5-9500 processor or better.  Spending a little extra money on the right equipment now should save you many hours of frustration over the life of your new equipment.

 


About the Author: Kevin Shea is the Founder and Principal Consultant of Quartare; Quartare provides a wide variety of technology solutions to investment advisors nationwide.

For details, please visit Quartare.com, contact Kevin Shea via phone at 617-720-3400 x202 or e-mail at kshea@quartare.com.


In previous articles, I have written about the ability to take an Advent Report Writer report and make changes to it in the underlying REPLANG code. Though the program is actually called Report Writer Pro, I won’t be referring to it by that name since the “Pro” part seems a bit much.

In this article, I will take you through the basics of this process and also share some code I wrote recently to deal with a troublesome Report Writer exception. Please note, there are many modifications that can easily be done within the Report Writer interface, but this article is not about that.

You should have a good reason for attempting to modify Report Writer reports outside of the app.  For me, one of two fundamental issues drives this decision: either the modification cannot be handled due to limitations of Report Writer, or it is simply much more expedient to hack and slash REPLANG than it is to work within the confines of the Report Writer environment.

Okay.  Let’s assume that, like me, you think you have a good reason to do this. There are a few things you should know before you modify a report created by Report Writer outside of Report Writer. First of all, any report created in Report Writer has an RPW extension, but the underlying format is REPLANG.  In most cases, any report with a REP extension was not created by Report Writer – except, of course, for reports like the ones we are talking about creating.

Second, once your report contains code not created by Report Writer, it can no longer be modified by Report Writer, so make a backup of the RPW file before making any changes, and save the file you modify as a REP file.  That way, if you later need to make a change to the report that Report Writer would be better suited to make, you can.  In that scenario, once you have updated the report in Report Writer, you can reapply your manual updates to the newer RPW file and then save it as a REP report again.  A sketch of that backup step follows.
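
If you would rather script that safety net than rely on memory, a few lines of Visual Basic will do it.  The f:\axys3\rep location matches the utility later in this article; the report name below is hypothetical.

Imports System.IO

Module BackupReport

  Sub Main()

    ' Keep a timestamped copy of the untouched RPW so future Report
    ' Writer changes remain possible, then create the REP copy that
    ' will be edited by hand. Folder and report names are assumptions.

    Dim ReportFolder As String = "f:\axys3\rep\"
    Dim ReportName As String = "myreport" 'hypothetical report name

    Dim stamp As String = Date.Now.ToString("yyyyMMdd_HHmm")

    ' Preserve the original RPW with a timestamp.
    File.Copy(ReportFolder + ReportName + ".rpw",
              ReportFolder + ReportName + "_" + stamp + ".rpw")

    ' Work on the REP copy from here on.
    File.Copy(ReportFolder + ReportName + ".rpw",
              ReportFolder + ReportName + ".rep", overwrite:=True)

  End Sub

End Module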

The reason that your new REP report cannot be modified in Report Writer is not only that the extension isn’t RPW, but more specifically that a checksum created by Report Writer when the report was last modified by the software will no longer match.  The logic behind why this is done is understandable.  Report Writer only works with certain predefined templates and does some really impressive things, but it is not designed to interpret code changes that were not created by Report Writer.

Lastly, if you haven’t waded into REPLANG code created by Report Writer, it will take some getting used to.  Because RPW files are created to be extensible, the REPLANG code is more abstract. In other words, the code generated by Report Writer appears far less direct with regard to its purpose than code that an individual might write for a singular and well-defined purpose.  If you are not familiar with REPLANG coding, I wouldn’t recommend trying to modify REPLANG reports created by Report Writer.

Some examples of the types of things I find myself doing when I modify Report Writer reports directly in REPLANG follow:

  1. Performing a calculation that I find difficult or impossible within the Report Writer interface.  If you have any experience using Report Writer to create user-defined formulas, this really isn’t that hard to imagine.
  2. Adding a piece of data that isn’t readily available from Report Writer.
  3. Saving time: I may know how to do something in REPLANG in a minute when doing it in Report Writer might take an unreasonably long time.

Dealing with Report Writer Exceptions

Recently, while working on some reporting extracts for a client, I attempted to take what appeared to be a relatively simple Report Writer report and change the format to a CSV file.  Doing so is a basic function of the product and only requires a few mouse clicks.  It is something I tend to do on a regular basis and it usually works well.  However, this time Report Writer went haywire.

Instead of taking the twenty-two-thousand-line report and producing another similarly sized report as I expected, it created a ridiculously long report (hundreds of thousands of lines of code) that Report Writer could not test or run.  Even a twenty-two-thousand-line report sounds big, and it is.  By comparison, the longest standard report (persave.rep), which updates performance files, is under four thousand lines. Most of the standard reports are significantly less than a thousand lines of code.

It is not the first time I have seen this problem, but this time I decided to do something more proactive to deal with the issue in the future.  The report I was attempting to change is fairly simple.  There are no summary records, so the output is just a table of detail records, and that helps to simplify the coding requirements of what I want to do.  To deal with this issue in the past, I have simply renamed the RPW to a REP file, modified the 10 to 20 lines specific to placing the output on the screen, and reformatted them to be CSV friendly.

There are alternative ways to do this. You could create the Excel file manually by exporting the report output to Excel once the report has been generated and saving it, or by writing a script to generate an Excel file, but in this particular instance I preferred to create a more flexible CSV report.  You could also do a standard search-and-replace within a text editor, but having seen this issue more than a few times, I wanted to create a utility I could use going forward.

So I quickly wrote the code in the section below to change the original report, which was sending output to the report view screen, so that it sends all of the output directly to a file.  I find it very easy to code things like this using Visual Basic (VB) or Visual Basic for Applications (VBA). Since most users also have access to VBA via Microsoft Office, it is a programming option you can find and use on almost any PC. As such, I frequently use VBA when I am doing spontaneous and simple coding tasks or projects that don’t require a more complex development environment. I am not a huge fan of verbose comments, but I thought they might be useful here.

Sub MAIN()

' This routine takes a REPORT WRITER report which sends output to the
' screen and creates a report that sends output to a file. The routine
' was created to address an exception where the normal function to make
' a change in a RPW file created an obscenely large Report Writer file
' that couldn't be tested or run.

' written in VBA by Kevin Shea (aka AdventGuru) & published 06/26/2019

' Disclaimer: This routine works fine for the specific instance it was
' created for, but could need additional modifications for different
' circumstances.

'Initialize variables
Dim InputFileHandler As Integer
Dim OutputFileHandler As Integer
Dim Filename$
Dim Record$
Dim IgnoreRecords As Boolean
Dim ReportLocation$
Dim OutputFolder$
Dim i As Integer 'loop counter used when scanning each record

ReportLocation$ = "f:\axys3\rep\"
OutputFolder$ = "f:\axys3\exp\"

' Axys users will probably need to change the two locations above to
' match their actual folder locations of Axys.

' APX users: depending on the APX server name, the preceding statements
' might need to change to ReportLocation$ = "\\apxappserver\APX$\rep\"
' and OutputFolder$ = "\\apxappserver\APX$\exp\"

InputFileHandler = FreeFile 'fetch an unused file handler
IgnoreRecords = False
Filename$ = InputBox("Report file name?")

Open ReportLocation$ + Filename$ For Input As #InputFileHandler

OutputFileHandler = FreeFile 'fetch an unused file handler

Open ReportLocation$ + Left$(Filename$, Len(Filename$) - 4) + ".out" For Output As #OutputFileHandler

Do While Not EOF(InputFileHandler)

  Line Input #InputFileHandler, Record$ 'read a record from the report

  If Left$(Record$, 4) = "head" Then IgnoreRecords = Not (IgnoreRecords)

  ' REPLANG marks the beginning of the header definition with the
  ' statement "head" and ends the header definition with the same
  ' statement. We don't want to write the header to our output file,
  ' so we are going to ignore all of the code between the two head
  ' lines of code.

  If IgnoreRecords = False Then

    If InStr(Record$, ".") > 0 Then

      ' The "." in REPLANG is the syntax usually associated with
      ' sending output to the screen most, but not all, "." lines
      ' end with "\n", which marks the end of the output line. By
      ' making the "." a requirement of this logic we will only
      ' process REPLANG statements that create output.

      For i = 1 To Len(Record$)

        'ignore any code lines that are less than two characters
        If i > 1 Then

          ' We may not need to be this specific, but I want to err
          ' on the side of caution. We will only process lines that
          ' start with a period. This prevents us from processing
          ' other lines that may have periods in the remarks or other
          ' areas of the REPLANG report.

          If Trim$(Left$(Record$, i - 1)) = "" And Mid$(Record$, i, 1) = "." Then

            'Somewhat self-explanatory: convert the REPLANG output
            'separators into CSV-friendly commas.
            Record$ = Replace(Record$, "~", ",")
            Record$ = Replace(Record$, "#,", ",#")
            Record$ = Replace(Record$, " ,", ",")
            Record$ = Replace(Record$, "?", "")

            Print #OutputFileHandler, Record$ 'write the record we modified

            Exit For
            'Exit logic early if we are done processing the record.

          End If

        End If

      Next i

    Else

      ' This bit of code writes all records that weren't modified,
      ' but performs some minor code insertion (outfile) to make the
      ' output go to a file. The proper place to insert the outfile
      ' statement is before any accounts are processed, which is
      ' immediately before the load cli statement. Writing to a file
      ' requires that we close (fclose) the file or the result will
      ' not get written properly. The right place to insert the
      ' fclose statement is when all of the processing has been
      ' completed, which is immediately after the next cli statement.

      If Left$(Record$, 8) = "load cli" Then Print #OutputFileHandler, "outfile " + OutputFolder$ + "outfile.csv n"
      Print #OutputFileHandler, Record$
      If Record$ = "next cli" Then Print #OutputFileHandler, "fclose"
 
    End If

  End If

Loop

Close #OutputFileHandler
Close #InputFileHandler

End Sub

As to why Report Writer erroneously attempts and fails to create such a long report, it likely has something to do with user-defined calculations and parsing of strings. However, for the purpose of this article we are not trying to fix that issue. It may just be a limitation of the Report Writer.


About the Author: Kevin Shea is the Founder and Principal Consultant of Quartare; Quartare provides a wide variety of technology solutions to investment advisors nationwide. For details, please visit Quartare.com, contact Kevin Shea via phone at 617-720-3400 x202 or e-mail at kshea@quartare.com.