TokenNum()

TokenNum()

Get the total number of tokens in a token environment

Syntax

      TokenNum( [<@cTokenEnvironment>] ) -> nNumberOfTokens

Arguments

<@cTokenEnvironment> a token environment

Returns

<nNumberOfTokens> number of tokens in the token environment

Description

The TokenNum() function retrieves the total number of tokens in a token environment. If the parameter <@cTokenEnvironment> is supplied (it must be passed by reference), the information from this token environment is used; otherwise, the global token environment is used.

Examples

      tokeninit( "a.b.c.d", ".", 1 )  // initialize global token environment
      ? TokenNum()  // --> 4
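
A local token environment (the optional by-reference parameter) can be queried in the same way; a short sketch, with cTE as an illustrative variable name:

      tokeninit( "a.b.c.d", ".", 1, @cTE )  // store environment in cTE only,
                                            // the global environment is untouched
      ? TokenNum( @cTE )  // --> 4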

Compliance

TokenNum() is a new function in Harbour’s CT3 library.

Platforms

All

Files

Source is token2.c, library is libct.

Seealso

TOKENINIT(), TOKENEXIT(), TOKENNEXT(), TOKENAT(), SAVETOKEN(), RESTTOKEN(), TOKENEND()

TokenInit()

TokenInit()

Initializes a token environment

Syntax

      TokenInit( [<[@]cString>], [<cTokenizer>], [<nSkipWidth>],
                 [<@cTokenEnvironment>] ) -> lState

Arguments

<[@]cString> is the string to be tokenized

<cTokenizer> is a list of characters separating the tokens in <cString>. Default: chr(0) + chr(9) + chr(10) + chr(13) + chr(26) + chr(32) + chr(32) + chr(138) + chr(141) + ", .;:!\?/\\<>()#&%+-*"

<nSkipWidth> specifies the maximum number of successive tokenizing characters that are combined as ONE token stop; e.g. specifying 1 can yield empty tokens. Default: 0, i.e. any number of successive tokenizing characters is combined as ONE token stop

<@cTokenEnvironment> is a token environment stored in a binary encoded string

Returns

<lState> success of the initialization

Description

The TokenInit() function initializes a token environment. A token environment holds the information about how a string is tokenized. This information is created by tokenizing the string <cString> (in the same way the TOKEN() function does) with the help of the <cTokenizer> and <nSkipWidth> parameters.

This token environment can be very useful when large strings have to be tokenized, since the tokenization has to take place only once, whereas the TOKEN() function must start the tokenizing process from scratch on every call.

Unlike CT3, this function provides two mechanisms for storing the resulting token environment: if a variable is passed by reference as the 4th parameter, the token environment is stored in this variable; otherwise, the global token environment is used. Do not modify the token environment string directly!

Additionally, a counter is stored in the token environment, so that the tokens can be obtained successively. This counter is initially set to 1. When the TokenInit() function is called without a string to tokenize, the counter of either the global environment or the environment given by reference in the 4th parameter is rewound to 1.

Additionally, unlike CT3, TokenInit() does not need the string <cString> to be passed by reference, since the string must be provided again in calls to TOKENNEXT().

Examples

  TokenInit( cString )             // tokenize the string <cString> with
                                   // default rules, store the token
                                   // environment globally and possibly
                                   // delete an old global token environment
  TokenInit( @cString )            // no difference in result, but possibly
                                   // faster, since the string does not
                                   // have to be copied
  TokenInit()                      // rewind counter of global TE to 1
  TokenInit( "1,2,3", ",", 1 )     // tokenize constant string, store in
                                   // global token environment
  TokenInit( cString, , 1, @cTE1 ) // tokenize cString and store token
                                   // environment in cTE1 only, without
                                   // overriding the global token environment
  TokenInit( cString, , 1, cTE1 )  // tokenize cString and store token
                                   // environment in the GLOBAL token
                                   // environment, since the 4th parameter
                                   // is not given by reference!
  TokenInit( ,,, @cTE1 )           // set counter in TE stored in cTE1 to 1

Compliance

TokenInit() is compatible with CT3’s TokenInit(), but there is an additional parameter featuring local token environments.

Platforms

All

Files

Source is token2.c, library is libct.

Seealso

TOKEN(), TOKENEXIT(), TOKENNEXT(), TOKENNUM(), TOKENAT(), SAVETOKEN(), RESTTOKEN(), TOKENEND()

TokenExit()

TokenExit()

Release global token environment

Syntax

      TokenExit() -> lStaticEnvironmentReleased

Returns

<lStaticEnvironmentReleased> .T., if global token environment is successfully released

Description

The TokenExit() function releases the memory associated with the global token environment. It should be called once for every TokenInit() call that uses the global token environment. Additionally, TokenExit() is implicitly called from CTEXIT() to free the memory at library shutdown.

Examples

      tokeninit( cString ) // initialize a token environment
      DO WHILE ! tokenend()
         ? tokennext( cString )  // get all tokens successively
      ENDDO
      ? tokennext( cString, 3 )  // get the 3rd token, counter 
                                 // will remain the same
      TokenExit()                // free the memory used for the 
                                 // global token environment

Compliance

TokenExit() is a new function in Harbour’s CT3 library.

Platforms

All

Files

Source is token2.c, library is libct.

Seealso

TOKENINIT(), TOKENNEXT(), TOKENNUM(), TOKENAT(), SAVETOKEN(), RESTTOKEN(), TOKENEND()

TokenEnd()

TokenEnd()

Check whether additional tokens are available with TOKENNEXT()

Syntax

      TokenEnd( [<@cTokenEnvironment>] ) -> lTokenEnd

Arguments

<@cTokenEnvironment> a token environment

Returns

<lTokenEnd> .T., if additional tokens are available

Description

The TokenEnd() function can be used to check whether the next call to TOKENNEXT() would return a new token. This cannot be decided with TOKENNEXT() alone, since an empty token cannot be distinguished from "no more tokens".

If the parameter <@cTokenEnvironment> is supplied (must be by reference), the information from this token environment is used, otherwise the global TE is used.

With a combination of TokenEnd() and TOKENNEXT(), all tokens in a string can be retrieved successively (see example).

Examples

      tokeninit( "a.b.c.d", ".", 1 )  // initialize global token environment
      DO WHILE ! TokenEnd()
         ? tokennext( "a.b.c.d" )     // get all tokens successively
      ENDDO

Compliance

TokenEnd() is compatible with CT3’s TokenEnd(), but there is an additional parameter featuring local token environments.

Platforms

All

Files

Source is token2.c, library is libct.

Seealso

TOKENINIT(), TOKENEXIT(), TOKENNEXT(), TOKENNUM(), TOKENAT(), SAVETOKEN(), RESTTOKEN()

TokenAt()

TokenAt()

Get start and end positions of tokens in a token environment

Syntax

      TOKENAT( [<lSeparatorPositionBehindToken>], [<nToken>],
               [<@cTokenEnvironment>] ) -> nPosition

Arguments

<lSeparatorPositionBehindToken> .T., if TOKENAT() should return the position of the separator character BEHIND the token. Default: .F., return start position of a token.

<nToken> a token number

<@cTokenEnvironment> a token environment

Returns

<nPosition> See description

Description

The TokenAt() function is used to retrieve the start and end positions of the tokens in a token environment. Note, however, that the position of the last character of a token is given by TokenAt( .T. ) - 1!

If the 2nd parameter <nToken> is given, TokenAt() returns the position of the <nToken>th token. Otherwise, the token pointed to by the TE counter, i.e. the token that will be retrieved NEXT by TOKENNEXT(), is used.

If the parameter <@cTokenEnvironment> is supplied (must be by reference), the information from this token environment is used, otherwise the global TE is used.

Tests

      tokeninit( cString ) // initialize a token environment
      DO WHILE ! tokenend()
         ? "From", tokenat(), "to", tokenat( .T. ) - 1
         ? tokennext( cString )  // get all tokens successively
      ENDDO
      ? tokennext( cString, 3 )  // get the 3rd token, counter
                                 // will remain the same
      tokenexit()                // free the memory used for the
                                 // global token environment

Compliance

TOKENAT() is compatible with CT3’s TOKENAT(), but there are two additional parameters featuring local token environments and optional access to tokens.

Platforms

All

Files

Source is token2.c, library is libct.

Seealso

TOKENINIT(), TOKENEXIT(), TOKENNEXT(), TOKENNUM(), SAVETOKEN(), RESTTOKEN(), TOKENEND()

SaveToken()

SaveToken()

Save the global token environment

Syntax

      SaveToken() -> cStaticTokenEnvironment

Returns

<cStaticTokenEnvironment> a binary string encoding the global token environment

Description

The SaveToken() function can be used to store the global token environment for future use or when two or more incremental tokenizers must be nested. Note, however, that the latter can now be solved with locally stored token environments.
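
Examples

A sketch of nesting two incremental tokenizers on the global token environment (the variable names cLines, cLine and cOuterTE are illustrative only):

      tokeninit( cLines, chr(10) )     // outer tokenizer: split text into lines
      DO WHILE ! tokenend()
         cLine := tokennext( cLines )
         cOuterTE := SaveToken()       // save outer environment before the
                                       // inner tokeninit() overwrites it
         tokeninit( cLine, "," )       // inner tokenizer reuses the global TE
         DO WHILE ! tokenend()
            ? tokennext( cLine )
         ENDDO
         RestToken( cOuterTE )         // restore outer environment
      ENDDO
      tokenexit()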

Compliance

SaveToken() is compatible with CT3’s SaveToken().

Platforms

All

Files

Source is token2.c, library is libct.

Seealso

TOKENINIT(), TOKENEXIT(), TOKENNEXT(), TOKENNUM(), TOKENAT(), RESTTOKEN(), TOKENEND()

RestToken()

RestToken()

Restore global token environment

Syntax

      RestToken( <cStaticTokenEnvironment> ) -> cOldStaticEnvironment

Arguments

<cStaticTokenEnvironment> a binary string encoding a token environment

Returns

<cOldStaticEnvironment> a string encoding the old global token environment

Description

The RestToken() function restores the global token environment from the one encoded in <cStaticTokenEnvironment>. This can either be the return value of SAVETOKEN() or the value stored in the 4th parameter of a TOKENINIT() call.
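
Examples

A minimal sketch of restoring a token environment that was stored locally by TOKENINIT() (cTE is an illustrative variable name):

      tokeninit( "1,2,3", ",", 1, @cTE )  // tokenize into local environment
      RestToken( cTE )                    // make it the global environment
      DO WHILE ! tokenend()
         ? tokennext( "1,2,3" )           // get all tokens successively
      ENDDO
      tokenexit()                         // release the global environment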

Compliance

RestToken() is compatible with CT3’s RestToken().

Platforms

All

Files

Source is token2.c, library is libct.

Seealso

TOKENINIT(), TOKENEXIT(), TOKENNEXT(), TOKENNUM(), TOKENAT(), SAVETOKEN(), TOKENEND()

Harbour All Functions – T

TabExpand
TabPack
Tan
TanH
TBrowseDB
TBrowseNew
TFileRead
THtml
Time
TimeValid
TNortonGuide
Token
TokenAt
TokenEnd
TokenExit
TokenInit
TokenLower
TokenNext
TokenNum
TokenSep
TokenUpper
Tone
TOs2
Transform
Trim
TRtf
TTroff
Type

String Functions

AddASCII
AfterAtNum
AllTrim
Asc
ASCIISum
ASCPos
At
AtAdjust
AtNum
AtRepl
AtToken
BeforAtNum
Chr
CharAdd
CharAnd
CharEven
CharHist
CharList
CharMirr
CharMix
CharNoList
CharNot
CharOdd
CharOne
CharOnly
CharOr
CharPix
CharRela
CharRelRep
CharRem
CharRepl
CharRLL
CharRLR
CharSHL
CharSHR
CharSList
CharSort
CharSub
CharSwap
CharWin
CharXOR
CountLeft
CountRight
Descend
Empty
hb_At
hb_RAt
hb_ValToStr
IsAlpha
IsDigit
IsLower
IsUpper
JustLeft
JustRight
Left
Len
Lower
LTrim
NumAt
NumToken
PadLeft
PadRight
PadC
PadL
PadR
POSALPHA
POSCHAR
POSDEL
POSDIFF
POSEQUAL
POSINS
POSLOWER
POSRANGE
POSREPL
POSUPPER
RangeRem
RangeRepl
RAt
RemAll
RemLeft
RemRight
ReplAll
Replicate
ReplLeft
ReplRight
RestToken
Right
RTrim
SaveToken
SetAtLike
Space
Str
StrDiff
StrFormat
StrSwap
StrTran
StrZero
SubStr
TabExpand
TabPack
Token
TokenAt
TokenEnd
TokenExit
TokenInit
TokenLower
TokenNext
TokenNum
TokenSep
TokenUpper
Transform
Trim
Upper
Val
ValPos
WordOne
WordOnly
WordRem
WordRepl
WordSwap
WordToChar


CT_TOKENNEXT

 TOKENNEXT()
 Provides an incremental tokenizer
------------------------------------------------------------------------------
 Syntax

     TOKENNEXT(<idTokenInitVar>) --> cToken

 Argument

     <idTokenInitVar>  Designates the name of the character string
     previously initialized using TOKENINIT().  This is the only way to
     access this string once it has been stored by the Virtual Memory Manager
     (VMM).

 Returns

     TOKENNEXT() returns the next token from the string variable designated
     by TOKENINIT().  If there are no more tokens, a null string is returned.

 Description

     In conjunction with TOKENINIT(), this function provides a
     speed-optimized variation of the normal TOKEN() function.  The
     increase in speed is achieved in two ways.

     TOKENINIT() exchanges all delimiting characters for the first delimiter
     in the list, so the entire delimiter list does not have to be searched
     every time.  The second advantage is that TOKENNEXT() does not always
     have to begin its search for the next token at the beginning of the
     string.

     The TOKENAT() function allows you to determine the position of
     TOKENNEXT().

     Although the address of the string that is processed and the internal
     counter have already been determined or initialized by TOKENINIT(),
     Clipper Tools still needs the variable name.  The function does work
     without this parameter, but only as long as the initialized character
     sequence has not been stored to the VMM.  You should pass the variable
     name in every case, because there is no explicit control over this!  If
     the variable can no longer be accessed, a runtime error occurs.

 Notes

     .  When you use TOKENINIT() or TOKENNEXT(), you cannot use the
        TOKENSEP() function.  The required information can be determined
        using TOKENAT() in conjunction with the original string (status
        before TOKENINIT()).

     .  To determine the delimiter position before the last token, set
        TOKENAT() to -1.  To determine the delimiter position after the last
        token, set TOKENAT() to .T..

 Example

     Break down a string:

     cDelim   :=  "!?.,-"
     cString   :=  "A.B-C,D!E??"

     TOKENINIT(@cString, cDelim)      // "A!B!C!D!E!!"

     DO WHILE .NOT. TOKENEND()
        cWord  :=  TOKENNEXT(cString)
        ? cWord
     ENDDO

See Also: RESTTOKEN() SAVETOKEN() TOKENINIT() TOKENAT()